Nov 21 20:06:33 crc systemd[1]: Starting Kubernetes Kubelet...
Nov 21 20:06:33 crc restorecon[4684]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 21 20:06:33 crc restorecon[4684]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Nov 21 20:06:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:33 crc 
restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 21 20:06:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 21 20:06:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 21 20:06:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 21 20:06:33 crc 
restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 21 
20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 21 20:06:33 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin
to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 
crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc 
restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 
21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc 
restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc 
restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc 
restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 20:06:34 crc 
restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 20:06:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 21 20:06:34 crc restorecon[4684]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 21 20:06:34 crc restorecon[4684]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Nov 21 20:06:35 crc kubenswrapper[4727]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 21 20:06:35 crc kubenswrapper[4727]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Nov 21 20:06:35 crc kubenswrapper[4727]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 21 20:06:35 crc kubenswrapper[4727]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 21 20:06:35 crc kubenswrapper[4727]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 21 20:06:35 crc kubenswrapper[4727]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.236042 4727 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.242488 4727 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.242712 4727 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.242722 4727 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.242732 4727 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.242741 4727 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.242752 4727 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.242762 4727 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.242772 4727 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.242813 4727 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.242823 
4727 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.242833 4727 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.242844 4727 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.242854 4727 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.242863 4727 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.242874 4727 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.242882 4727 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.242893 4727 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.242905 4727 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.242915 4727 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.242926 4727 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.242937 4727 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.242949 4727 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.242996 4727 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.243010 4727 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.243021 4727 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.243029 4727 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.243038 4727 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.243046 4727 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.243054 4727 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.243088 4727 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.243100 4727 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.243110 4727 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.243118 4727 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.243127 4727 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.243135 4727 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.243143 4727 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.243153 4727 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.243163 4727 feature_gate.go:330] unrecognized feature gate: SignatureStores
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.243173 4727 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.243183 4727 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.243194 4727 feature_gate.go:330] unrecognized feature gate: OVNObservability
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.243202 4727 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.243210 4727 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.243219 4727 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.243227 4727 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.243235 4727 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.243244 4727 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.243252 4727 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.243263 4727 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.243274 4727 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.243284 4727 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.243294 4727 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.243304 4727 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.243314 4727 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.243325 4727 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.243336 4727 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.243346 4727 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.243355 4727 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.243363 4727 feature_gate.go:330] unrecognized feature gate: Example
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.243371 4727 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.243380 4727 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.243390 4727 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.243400 4727 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.243410 4727 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.243419 4727 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.243432 4727 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.243443 4727 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.243452 4727 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.243461 4727 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.243470 4727 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.243478 4727 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.243676 4727 flags.go:64] FLAG: --address="0.0.0.0"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.243708 4727 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.243727 4727 flags.go:64] FLAG: --anonymous-auth="true"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.243741 4727 flags.go:64] FLAG: --application-metrics-count-limit="100"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.243754 4727 flags.go:64] FLAG: --authentication-token-webhook="false"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.243763 4727 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.243781 4727 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.243793 4727 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.243802 4727 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.243814 4727 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.243823 4727 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.243833 4727 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.243842 4727 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.243851 4727 flags.go:64] FLAG: --cgroup-root=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.243860 4727 flags.go:64] FLAG: --cgroups-per-qos="true"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.243869 4727 flags.go:64] FLAG: --client-ca-file=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.243878 4727 flags.go:64] FLAG: --cloud-config=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.243887 4727 flags.go:64] FLAG: --cloud-provider=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.243896 4727 flags.go:64] FLAG: --cluster-dns="[]"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.243908 4727 flags.go:64] FLAG: --cluster-domain=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.243916 4727 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.243925 4727 flags.go:64] FLAG: --config-dir=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.243934 4727 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.243944 4727 flags.go:64] FLAG: --container-log-max-files="5"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244013 4727 flags.go:64] FLAG: --container-log-max-size="10Mi"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244022 4727 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244032 4727 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244041 4727 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244051 4727 flags.go:64] FLAG: --contention-profiling="false"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244060 4727 flags.go:64] FLAG: --cpu-cfs-quota="true"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244070 4727 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244079 4727 flags.go:64] FLAG: --cpu-manager-policy="none"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244088 4727 flags.go:64] FLAG: --cpu-manager-policy-options=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244101 4727 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244110 4727 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244120 4727 flags.go:64] FLAG: --enable-debugging-handlers="true"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244129 4727 flags.go:64] FLAG: --enable-load-reader="false"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244138 4727 flags.go:64] FLAG: --enable-server="true"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244147 4727 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244159 4727 flags.go:64] FLAG: --event-burst="100"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244168 4727 flags.go:64] FLAG: --event-qps="50"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244177 4727 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244188 4727 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244197 4727 flags.go:64] FLAG: --eviction-hard=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244208 4727 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244217 4727 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244226 4727 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244236 4727 flags.go:64] FLAG: --eviction-soft=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244245 4727 flags.go:64] FLAG: --eviction-soft-grace-period=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244253 4727 flags.go:64] FLAG: --exit-on-lock-contention="false"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244263 4727 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244272 4727 flags.go:64] FLAG: --experimental-mounter-path=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244281 4727 flags.go:64] FLAG: --fail-cgroupv1="false"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244291 4727 flags.go:64] FLAG: --fail-swap-on="true"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244300 4727 flags.go:64] FLAG: --feature-gates=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244311 4727 flags.go:64] FLAG: --file-check-frequency="20s"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244321 4727 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244330 4727 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244340 4727 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244349 4727 flags.go:64] FLAG: --healthz-port="10248"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244358 4727 flags.go:64] FLAG: --help="false"
Nov 21 20:06:35
crc kubenswrapper[4727]: I1121 20:06:35.244368 4727 flags.go:64] FLAG: --hostname-override=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244378 4727 flags.go:64] FLAG: --housekeeping-interval="10s"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244387 4727 flags.go:64] FLAG: --http-check-frequency="20s"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244396 4727 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244406 4727 flags.go:64] FLAG: --image-credential-provider-config=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244415 4727 flags.go:64] FLAG: --image-gc-high-threshold="85"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244424 4727 flags.go:64] FLAG: --image-gc-low-threshold="80"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244433 4727 flags.go:64] FLAG: --image-service-endpoint=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244442 4727 flags.go:64] FLAG: --kernel-memcg-notification="false"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244451 4727 flags.go:64] FLAG: --kube-api-burst="100"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244460 4727 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244471 4727 flags.go:64] FLAG: --kube-api-qps="50"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244480 4727 flags.go:64] FLAG: --kube-reserved=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244489 4727 flags.go:64] FLAG: --kube-reserved-cgroup=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244498 4727 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244507 4727 flags.go:64] FLAG: --kubelet-cgroups=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244516 4727 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244527 4727 flags.go:64] FLAG: --lock-file=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244535 4727 flags.go:64] FLAG: --log-cadvisor-usage="false"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244545 4727 flags.go:64] FLAG: --log-flush-frequency="5s"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244554 4727 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244601 4727 flags.go:64] FLAG: --log-json-split-stream="false"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244611 4727 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244621 4727 flags.go:64] FLAG: --log-text-split-stream="false"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244631 4727 flags.go:64] FLAG: --logging-format="text"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244642 4727 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244662 4727 flags.go:64] FLAG: --make-iptables-util-chains="true"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244682 4727 flags.go:64] FLAG: --manifest-url=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244695 4727 flags.go:64] FLAG: --manifest-url-header=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244712 4727 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244724 4727 flags.go:64] FLAG: --max-open-files="1000000"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244738 4727 flags.go:64] FLAG: --max-pods="110"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244753 4727 flags.go:64] FLAG: --maximum-dead-containers="-1"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244765 4727 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244776 4727 flags.go:64] FLAG: --memory-manager-policy="None"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244787 4727 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244802 4727 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244815 4727 flags.go:64] FLAG: --node-ip="192.168.126.11"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244826 4727 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244856 4727 flags.go:64] FLAG: --node-status-max-images="50"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244868 4727 flags.go:64] FLAG: --node-status-update-frequency="10s"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244879 4727 flags.go:64] FLAG: --oom-score-adj="-999"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244892 4727 flags.go:64] FLAG: --pod-cidr=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244903 4727 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244922 4727 flags.go:64] FLAG: --pod-manifest-path=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244932 4727 flags.go:64] FLAG: --pod-max-pids="-1"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244944 4727 flags.go:64] FLAG: --pods-per-core="0"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.244998 4727 flags.go:64] FLAG: --port="10250"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.245013 4727 flags.go:64] FLAG: --protect-kernel-defaults="false"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.245024 4727 flags.go:64] FLAG: --provider-id=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.245035 4727 flags.go:64] FLAG: --qos-reserved=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.245046 4727 flags.go:64] FLAG: --read-only-port="10255"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.245058 4727 flags.go:64] FLAG: --register-node="true"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.245072 4727 flags.go:64] FLAG: --register-schedulable="true"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.245083 4727 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.245105 4727 flags.go:64] FLAG: --registry-burst="10"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.245117 4727 flags.go:64] FLAG: --registry-qps="5"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.245128 4727 flags.go:64] FLAG: --reserved-cpus=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.245140 4727 flags.go:64] FLAG: --reserved-memory=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.245154 4727 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.245166 4727 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.245177 4727 flags.go:64] FLAG: --rotate-certificates="false"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.245189 4727 flags.go:64] FLAG: --rotate-server-certificates="false"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.245201 4727 flags.go:64] FLAG: --runonce="false"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.245213 4727 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.245225 4727 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121
20:06:35.245237 4727 flags.go:64] FLAG: --seccomp-default="false"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.245247 4727 flags.go:64] FLAG: --serialize-image-pulls="true"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.245258 4727 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.245271 4727 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.245282 4727 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.245294 4727 flags.go:64] FLAG: --storage-driver-password="root"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.245306 4727 flags.go:64] FLAG: --storage-driver-secure="false"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.245318 4727 flags.go:64] FLAG: --storage-driver-table="stats"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.245329 4727 flags.go:64] FLAG: --storage-driver-user="root"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.245340 4727 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.245351 4727 flags.go:64] FLAG: --sync-frequency="1m0s"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.245363 4727 flags.go:64] FLAG: --system-cgroups=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.245374 4727 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.245393 4727 flags.go:64] FLAG: --system-reserved-cgroup=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.245404 4727 flags.go:64] FLAG: --tls-cert-file=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.245415 4727 flags.go:64] FLAG: --tls-cipher-suites="[]"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.245431 4727 flags.go:64] FLAG: --tls-min-version=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.245443 4727 flags.go:64] FLAG: --tls-private-key-file=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.245455 4727 flags.go:64] FLAG: --topology-manager-policy="none"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.245466 4727 flags.go:64] FLAG: --topology-manager-policy-options=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.245477 4727 flags.go:64] FLAG: --topology-manager-scope="container"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.245489 4727 flags.go:64] FLAG: --v="2"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.245504 4727 flags.go:64] FLAG: --version="false"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.245534 4727 flags.go:64] FLAG: --vmodule=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.245559 4727 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.245572 4727 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.245817 4727 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.245833 4727 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.245845 4727 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.245855 4727 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.245866 4727 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.245875 4727 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.245885 4727 feature_gate.go:330] unrecognized feature gate: Example
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.245895 4727 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.245905 4727 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.245914 4727 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.245924 4727 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.245934 4727 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.245944 4727 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.245953 4727 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.245997 4727 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.246006 4727 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.246016 4727 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.246025 4727 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.246035 4727 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.246045 4727 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.246054 4727 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.246065 4727 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.246075 4727 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.246085 4727 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.246095 4727 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.246105 4727 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.246115 4727 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.246124 4727 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.246134 4727 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.246143 4727 feature_gate.go:330] unrecognized feature gate: PinnedImages
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.246152 4727 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.246162 4727 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.246171 4727 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.246186 4727 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.246196 4727 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.246205 4727 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 21 20:06:35
crc kubenswrapper[4727]: W1121 20:06:35.246218 4727 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.246227 4727 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.246236 4727 feature_gate.go:330] unrecognized feature gate: SignatureStores
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.246246 4727 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.246260 4727 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.246271 4727 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.246283 4727 feature_gate.go:330] unrecognized feature gate: OVNObservability
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.246294 4727 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.246305 4727 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.246316 4727 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.246327 4727 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.246338 4727 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.246349 4727 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.246360 4727 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.246370 4727 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.246379 4727 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.246389 4727 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.246402 4727 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.246414 4727 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.246424 4727 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.246433 4727 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.246443 4727 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.246453 4727 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.246463 4727 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.246474 4727 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.246483 4727 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.246493 4727 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.246504 4727 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.246513 4727 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.246526 4727 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.246538 4727 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.246549 4727 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.246561 4727 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.246577 4727 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.246588 4727 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.246620 4727 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.261777 4727 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.261834 4727 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.261998 4727 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121
20:06:35.262015 4727 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.262025 4727 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.262034 4727 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.262043 4727 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.262052 4727 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.262060 4727 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.262068 4727 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.262077 4727 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.262087 4727 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.262095 4727 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.262104 4727 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.262112 4727 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.262120 4727 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.262127 4727 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.262136 4727 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 21 
20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.262144 4727 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.262152 4727 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.262160 4727 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.262167 4727 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.262175 4727 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.262184 4727 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.262191 4727 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.262199 4727 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.262206 4727 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.262215 4727 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.262226 4727 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.262236 4727 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.262246 4727 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.262260 4727 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.262275 4727 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.262289 4727 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.262300 4727 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.262311 4727 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.262325 4727 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.262336 4727 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.262345 4727 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.262354 4727 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.262364 4727 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.262373 4727 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.262386 4727 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.262399 4727 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.262409 4727 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.262419 4727 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.262431 4727 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.262441 4727 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.262451 4727 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.262460 4727 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.262470 4727 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.262479 4727 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.262489 4727 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.262499 4727 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.262509 4727 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.262517 4727 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.262525 4727 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.262533 4727 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.262541 4727 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.262550 4727 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.262557 4727 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.262565 4727 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.262573 4727 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.262581 4727 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.262589 4727 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.262596 4727 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.262604 4727 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 21 20:06:35 crc kubenswrapper[4727]: 
W1121 20:06:35.262612 4727 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.262619 4727 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.262627 4727 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.262635 4727 feature_gate.go:330] unrecognized feature gate: Example Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.262642 4727 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.262652 4727 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.262665 4727 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.262925 4727 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.262940 4727 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.262949 4727 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.262957 4727 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.262990 4727 feature_gate.go:330] 
unrecognized feature gate: InsightsConfigAPI Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.262999 4727 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.263007 4727 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.263014 4727 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.263022 4727 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.263031 4727 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.263039 4727 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.263046 4727 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.263054 4727 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.263062 4727 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.263069 4727 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.263077 4727 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.263085 4727 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.263092 4727 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.263103 4727 feature_gate.go:330] unrecognized feature 
gate: IngressControllerLBSubnetsAWS Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.263110 4727 feature_gate.go:330] unrecognized feature gate: Example Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.263118 4727 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.263126 4727 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.263134 4727 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.263141 4727 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.263149 4727 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.263159 4727 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.263169 4727 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.263179 4727 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.263187 4727 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.263195 4727 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.263204 4727 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.263212 4727 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.263220 4727 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.263227 4727 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.263239 4727 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.263248 4727 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.263256 4727 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.263265 4727 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.263272 4727 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.263283 4727 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. 
It will be removed in a future release. Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.263292 4727 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.263300 4727 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.263309 4727 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.263317 4727 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.263325 4727 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.263333 4727 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.263342 4727 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.263350 4727 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.263359 4727 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.263366 4727 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.263376 4727 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.263384 4727 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.263392 4727 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.263399 4727 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 21 20:06:35 crc kubenswrapper[4727]: 
W1121 20:06:35.263407 4727 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.263415 4727 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.263422 4727 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.263431 4727 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.263438 4727 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.263446 4727 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.263453 4727 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.263462 4727 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.263469 4727 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.263480 4727 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.263490 4727 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.263499 4727 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.263507 4727 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.263515 4727 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.263525 4727 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.263533 4727 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.263543 4727 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.263554 4727 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.263828 4727 server.go:940] "Client rotation is on, will bootstrap in background" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.275921 4727 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.276115 4727 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.279312 4727 server.go:997] "Starting client certificate rotation" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.279365 4727 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.279609 4727 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-10 04:37:53.024258581 +0000 UTC Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.279775 4727 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.313227 4727 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.315416 4727 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 21 20:06:35 crc kubenswrapper[4727]: E1121 20:06:35.316866 4727 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.179:6443: connect: connection refused" logger="UnhandledError" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.343091 4727 log.go:25] "Validated CRI v1 runtime API" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.387416 4727 log.go:25] "Validated CRI v1 image API" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.389505 4727 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.393323 4727 
fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-11-21-20-02-18-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.393526 4727 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.407638 4727 manager.go:217] Machine: {Timestamp:2025-11-21 20:06:35.405415602 +0000 UTC m=+0.591600666 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:b99fd01f-0947-456f-ae40-db84d60b2190 BootID:32d79a59-978c-42a8-bb0b-33b3c3206f66 Filesystems:[{Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 
Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:fa:b9:a3 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:fa:b9:a3 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:66:e6:1c Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:3f:25:25 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:ca:16:05 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:82:3d:58 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:d6:eb:e8:a8:1d:48 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:6a:93:15:2d:35:af Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 
Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.408122 4727 manager_no_libpfm.go:29] cAdvisor is build 
without cgo and/or libpfm support. Perf event counters are not available. Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.408290 4727 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.409531 4727 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.409878 4727 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.409940 4727 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Q
uantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.410340 4727 topology_manager.go:138] "Creating topology manager with none policy" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.410362 4727 container_manager_linux.go:303] "Creating device plugin manager" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.410933 4727 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.411010 4727 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.411310 4727 state_mem.go:36] "Initialized new in-memory state store" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.411463 4727 server.go:1245] "Using root directory" path="/var/lib/kubelet" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.414336 4727 kubelet.go:418] "Attempting to sync node with API server" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.414370 4727 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.414416 4727 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.414441 4727 kubelet.go:324] "Adding apiserver pod source" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.414460 4727 apiserver.go:42] "Waiting for node sync before watching 
apiserver pods" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.418285 4727 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.419143 4727 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.419453 4727 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.179:6443: connect: connection refused Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.419482 4727 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.179:6443: connect: connection refused Nov 21 20:06:35 crc kubenswrapper[4727]: E1121 20:06:35.419623 4727 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.179:6443: connect: connection refused" logger="UnhandledError" Nov 21 20:06:35 crc kubenswrapper[4727]: E1121 20:06:35.419664 4727 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.179:6443: connect: connection refused" logger="UnhandledError" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.421751 4727 kubelet.go:854] "Not starting ClusterTrustBundle 
informer because we are in static kubelet mode" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.423791 4727 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.423815 4727 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.423823 4727 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.423830 4727 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.423850 4727 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.423857 4727 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.423864 4727 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.423874 4727 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.423883 4727 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.423892 4727 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.423903 4727 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.423911 4727 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.425996 4727 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.426451 4727 server.go:1280] "Started 
kubelet" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.427474 4727 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.427483 4727 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 21 20:06:35 crc systemd[1]: Started Kubernetes Kubelet. Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.431926 4727 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.433786 4727 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.179:6443: connect: connection refused Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.439649 4727 server.go:460] "Adding debug handlers to kubelet server" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.440766 4727 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.440796 4727 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.441012 4727 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 12:18:22.345555808 +0000 UTC Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.441068 4727 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 688h11m46.904491586s for next certificate rotation Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.441158 4727 volume_manager.go:287] "The desired_state_of_world populator starts" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.441190 4727 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 21 
20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.441195 4727 volume_manager.go:289] "Starting Kubelet Volume Manager" Nov 21 20:06:35 crc kubenswrapper[4727]: E1121 20:06:35.441148 4727 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.441828 4727 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.179:6443: connect: connection refused Nov 21 20:06:35 crc kubenswrapper[4727]: E1121 20:06:35.441861 4727 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.179:6443: connect: connection refused" interval="200ms" Nov 21 20:06:35 crc kubenswrapper[4727]: E1121 20:06:35.441882 4727 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.179:6443: connect: connection refused" logger="UnhandledError" Nov 21 20:06:35 crc kubenswrapper[4727]: E1121 20:06:35.441086 4727 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.179:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187a1e5e87ffe335 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-21 20:06:35.426423605 +0000 UTC 
m=+0.612608649,LastTimestamp:2025-11-21 20:06:35.426423605 +0000 UTC m=+0.612608649,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.442489 4727 factory.go:55] Registering systemd factory Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.442505 4727 factory.go:221] Registration of the systemd container factory successfully Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.442986 4727 factory.go:153] Registering CRI-O factory Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.443010 4727 factory.go:221] Registration of the crio container factory successfully Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.443069 4727 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.443088 4727 factory.go:103] Registering Raw factory Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.443102 4727 manager.go:1196] Started watching for new ooms in manager Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.443601 4727 manager.go:319] Starting recovery of all containers Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.453885 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.454256 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" 
volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.454423 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.454552 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.454704 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.454823 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.454944 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.455266 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.455440 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.455565 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.455710 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.455836 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.456043 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.456181 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.456311 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.456444 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.456560 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.457828 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.457884 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.457900 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.457920 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.457934 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.457946 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.457977 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.457990 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.458020 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" 
seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.458043 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.458063 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.458080 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.458096 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.458111 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.458133 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 
20:06:35.458147 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.458246 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.458261 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.458274 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.458347 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.458359 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.458374 4727 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.458387 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.458400 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.458414 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.458426 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.458446 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.458458 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.458472 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.458489 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.458500 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.458514 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.458528 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.458541 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.458557 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.458583 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.458595 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.458616 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.458633 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.458645 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.458662 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.458676 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.458689 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.458703 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.458715 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.458728 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.458748 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.458760 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.458776 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.459874 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.459928 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.459945 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.459986 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.460001 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.460016 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.460033 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.460049 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.460064 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.460079 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.460094 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.460108 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.460121 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.460136 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.460154 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.460167 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.460184 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.460197 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.460211 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.460226 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.460241 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.460254 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.460270 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.460284 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.460298 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.460314 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.460327 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.460343 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.460362 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.460381 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.460395 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.460411 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.460427 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.460443 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.460456 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.460470 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.460483 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.460497 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.460536 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.460551 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.460568 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.460583 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.460598 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.460615 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.460631 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.460648 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.460666 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.460679 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.460692 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.460705 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.460718 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.460731 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.460744 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.460761 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.460773 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.460785 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.460797 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.460812 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.460824 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.460839 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.460855 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.460875 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.460889 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.460908 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.460920 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.460931 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.460946 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.460976 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.460988 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.461001 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.461014 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.461027 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.461042 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.461054 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.461067 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.461079 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.461091 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.461104 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.461118 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.461150 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.461170 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.462839 4727 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.462870 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.462885 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.462904 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.462919 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.462933 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.462946 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.462977 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.462991 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.463005 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.463018 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.463032 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.463048 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.463067 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.463084 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.463098 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.463110 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.463122 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.463135 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.463154 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.463173 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.463189 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.463203 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.463216 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.463221 4727 manager.go:324] Recovery completed
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.463228 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.463427 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.463490 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.463513 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.463534 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.463554 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.463573 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.463592 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.463610 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.463627 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.463642 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.463659 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.463678 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.463695 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.463710 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b"
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.463728 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.463748 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.463769 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.463804 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.463826 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.463850 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" 
volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.463873 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.463898 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.463921 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.463944 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.463994 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.464014 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" 
volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.464033 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.464059 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.464082 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.464101 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.464119 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.464137 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" 
volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.464156 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.464191 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.464209 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.464231 4727 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.464246 4727 reconstruct.go:97] "Volume reconstruction finished" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.464258 4727 reconciler.go:26] "Reconciler: start to sync state" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.479019 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.481388 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 
20:06:35.481430 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.481445 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.482352 4727 cpu_manager.go:225] "Starting CPU manager" policy="none" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.482368 4727 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.482387 4727 state_mem.go:36] "Initialized new in-memory state store" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.493839 4727 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.497729 4727 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.497802 4727 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.497844 4727 kubelet.go:2335] "Starting kubelet main sync loop" Nov 21 20:06:35 crc kubenswrapper[4727]: E1121 20:06:35.497924 4727 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.498717 4727 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.179:6443: connect: connection refused Nov 21 20:06:35 crc kubenswrapper[4727]: E1121 20:06:35.498805 4727 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.179:6443: connect: connection refused" logger="UnhandledError" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.503860 4727 policy_none.go:49] "None policy: Start" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.504761 4727 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.504798 4727 state_mem.go:35] "Initializing new in-memory state store" Nov 21 20:06:35 crc kubenswrapper[4727]: E1121 20:06:35.541557 4727 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.560144 4727 manager.go:334] "Starting Device Plugin manager" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.560261 4727 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.560274 4727 server.go:79] "Starting device plugin registration server" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.560663 4727 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.560682 4727 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.561001 4727 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.561073 4727 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.561085 4727 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 21 20:06:35 crc kubenswrapper[4727]: E1121 20:06:35.567553 4727 eviction_manager.go:285] "Eviction 
manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.598872 4727 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.599005 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.600263 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.600313 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.600325 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.600494 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.600708 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.600748 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.601506 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.601510 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.601567 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.601580 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.601535 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.601615 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.601687 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.601742 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.601766 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.602622 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.602665 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.602682 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.602645 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.602750 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.602763 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.602857 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.602990 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.603025 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.603770 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.603792 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.603800 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.603772 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.603831 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.603841 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.603972 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.604082 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.604111 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.604584 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.604609 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.604617 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.604696 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.604724 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.604735 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.604726 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.604846 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.605575 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.605605 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.605619 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:06:35 crc kubenswrapper[4727]: E1121 20:06:35.645428 4727 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.179:6443: connect: connection refused" interval="400ms" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.660900 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.662237 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.662262 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.662271 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.662292 4727 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 21 20:06:35 crc kubenswrapper[4727]: E1121 20:06:35.662720 4727 
kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.179:6443: connect: connection refused" node="crc" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.666656 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.666687 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.666704 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.666721 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.666736 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod 
\"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.666818 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.666835 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.666851 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.666866 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.666983 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod 
\"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.667055 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.667099 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.667129 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.667168 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.667182 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " 
pod="openshift-etcd/etcd-crc" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.768808 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.768873 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.768898 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.768919 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.768941 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.768999 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.769035 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.769055 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.769077 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.769098 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.769117 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: 
\"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.769135 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.769158 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.769203 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.769228 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.769709 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.769796 4727 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.769841 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.769881 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.769899 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.769998 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.769929 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.770065 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.770063 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.770114 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.770138 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.770148 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.770184 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.769976 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.770194 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.863477 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.865754 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.865803 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.865815 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.865844 4727 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 21 20:06:35 crc kubenswrapper[4727]: E1121 20:06:35.866465 4727 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.179:6443: connect: 
connection refused" node="crc" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.942168 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.962877 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 21 20:06:35 crc kubenswrapper[4727]: I1121 20:06:35.980658 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 21 20:06:35 crc kubenswrapper[4727]: W1121 20:06:35.986554 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-274bd35bd6d2aae5f056e533d1560b4ba68d0a6b6b9a3ac6c2bccb98fd486781 WatchSource:0}: Error finding container 274bd35bd6d2aae5f056e533d1560b4ba68d0a6b6b9a3ac6c2bccb98fd486781: Status 404 returned error can't find the container with id 274bd35bd6d2aae5f056e533d1560b4ba68d0a6b6b9a3ac6c2bccb98fd486781 Nov 21 20:06:36 crc kubenswrapper[4727]: I1121 20:06:36.003805 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 21 20:06:36 crc kubenswrapper[4727]: W1121 20:06:36.008555 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-717d733be10baa0d3af7ed3d0e718e4caf5fbc8104bb4df1fc421151e11260fa WatchSource:0}: Error finding container 717d733be10baa0d3af7ed3d0e718e4caf5fbc8104bb4df1fc421151e11260fa: Status 404 returned error can't find the container with id 717d733be10baa0d3af7ed3d0e718e4caf5fbc8104bb4df1fc421151e11260fa Nov 21 20:06:36 crc kubenswrapper[4727]: I1121 20:06:36.010864 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 21 20:06:36 crc kubenswrapper[4727]: W1121 20:06:36.025349 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-7528ade5b4733eb1367876795aa0b240d43f53917202a037f478a4da9d4ab9a2 WatchSource:0}: Error finding container 7528ade5b4733eb1367876795aa0b240d43f53917202a037f478a4da9d4ab9a2: Status 404 returned error can't find the container with id 7528ade5b4733eb1367876795aa0b240d43f53917202a037f478a4da9d4ab9a2 Nov 21 20:06:36 crc kubenswrapper[4727]: W1121 20:06:36.033652 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-b999813d1bb26d79ec4fe4e5858e466a500bb975a6e0b2cc41f98f7ec8206014 WatchSource:0}: Error finding container b999813d1bb26d79ec4fe4e5858e466a500bb975a6e0b2cc41f98f7ec8206014: Status 404 returned error can't find the container with id b999813d1bb26d79ec4fe4e5858e466a500bb975a6e0b2cc41f98f7ec8206014 Nov 21 20:06:36 crc kubenswrapper[4727]: E1121 20:06:36.047229 4727 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.179:6443: connect: connection refused" interval="800ms" Nov 21 20:06:36 crc kubenswrapper[4727]: I1121 20:06:36.267110 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 20:06:36 crc kubenswrapper[4727]: I1121 20:06:36.268820 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:06:36 crc kubenswrapper[4727]: I1121 20:06:36.268862 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 
21 20:06:36 crc kubenswrapper[4727]: I1121 20:06:36.268878 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:06:36 crc kubenswrapper[4727]: I1121 20:06:36.268909 4727 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 21 20:06:36 crc kubenswrapper[4727]: E1121 20:06:36.269392 4727 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.179:6443: connect: connection refused" node="crc" Nov 21 20:06:36 crc kubenswrapper[4727]: I1121 20:06:36.435054 4727 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.179:6443: connect: connection refused Nov 21 20:06:36 crc kubenswrapper[4727]: I1121 20:06:36.502568 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"7528ade5b4733eb1367876795aa0b240d43f53917202a037f478a4da9d4ab9a2"} Nov 21 20:06:36 crc kubenswrapper[4727]: I1121 20:06:36.503347 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"08aba6d8a3558ec72c4854334eba68a1acefbc558e9b971df68f88c0a7ef9cd8"} Nov 21 20:06:36 crc kubenswrapper[4727]: I1121 20:06:36.504037 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"717d733be10baa0d3af7ed3d0e718e4caf5fbc8104bb4df1fc421151e11260fa"} Nov 21 20:06:36 crc kubenswrapper[4727]: I1121 20:06:36.504798 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"274bd35bd6d2aae5f056e533d1560b4ba68d0a6b6b9a3ac6c2bccb98fd486781"} Nov 21 20:06:36 crc kubenswrapper[4727]: I1121 20:06:36.505399 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"b999813d1bb26d79ec4fe4e5858e466a500bb975a6e0b2cc41f98f7ec8206014"} Nov 21 20:06:36 crc kubenswrapper[4727]: W1121 20:06:36.721905 4727 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.179:6443: connect: connection refused Nov 21 20:06:36 crc kubenswrapper[4727]: E1121 20:06:36.722055 4727 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.179:6443: connect: connection refused" logger="UnhandledError" Nov 21 20:06:36 crc kubenswrapper[4727]: W1121 20:06:36.770378 4727 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.179:6443: connect: connection refused Nov 21 20:06:36 crc kubenswrapper[4727]: E1121 20:06:36.770468 4727 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.179:6443: connect: connection refused" logger="UnhandledError" Nov 
21 20:06:36 crc kubenswrapper[4727]: W1121 20:06:36.777203 4727 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.179:6443: connect: connection refused Nov 21 20:06:36 crc kubenswrapper[4727]: E1121 20:06:36.777242 4727 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.179:6443: connect: connection refused" logger="UnhandledError" Nov 21 20:06:36 crc kubenswrapper[4727]: E1121 20:06:36.848179 4727 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.179:6443: connect: connection refused" interval="1.6s" Nov 21 20:06:37 crc kubenswrapper[4727]: W1121 20:06:37.020235 4727 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.179:6443: connect: connection refused Nov 21 20:06:37 crc kubenswrapper[4727]: E1121 20:06:37.020349 4727 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.179:6443: connect: connection refused" logger="UnhandledError" Nov 21 20:06:37 crc kubenswrapper[4727]: I1121 20:06:37.070270 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 20:06:37 crc 
kubenswrapper[4727]: I1121 20:06:37.071897 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:06:37 crc kubenswrapper[4727]: I1121 20:06:37.071987 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:06:37 crc kubenswrapper[4727]: I1121 20:06:37.072008 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:06:37 crc kubenswrapper[4727]: I1121 20:06:37.072047 4727 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 21 20:06:37 crc kubenswrapper[4727]: E1121 20:06:37.072684 4727 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.179:6443: connect: connection refused" node="crc" Nov 21 20:06:37 crc kubenswrapper[4727]: I1121 20:06:37.401522 4727 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Nov 21 20:06:37 crc kubenswrapper[4727]: E1121 20:06:37.402797 4727 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.179:6443: connect: connection refused" logger="UnhandledError" Nov 21 20:06:37 crc kubenswrapper[4727]: I1121 20:06:37.435695 4727 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.179:6443: connect: connection refused Nov 21 20:06:37 crc kubenswrapper[4727]: I1121 20:06:37.512054 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2b7c202de2c7a6ec653f0c7f816dcbf64fbf6a5c9a779d29403a29a1003ad201"} Nov 21 20:06:37 crc kubenswrapper[4727]: I1121 20:06:37.512126 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"08a09f48855f360ffe892f48d4efa156a6774a0be33504e0d664f6a90f1144d0"} Nov 21 20:06:37 crc kubenswrapper[4727]: I1121 20:06:37.512141 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"539bc94dd8a3fd65aeae82170fe889bc42c364cfcf52005760432629911dfa4f"} Nov 21 20:06:37 crc kubenswrapper[4727]: I1121 20:06:37.512155 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2bd3db39641919d1b93ecc771bce800100e7a8b6200d6fc606dbdc50179b8255"} Nov 21 20:06:37 crc kubenswrapper[4727]: I1121 20:06:37.512175 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 20:06:37 crc kubenswrapper[4727]: I1121 20:06:37.513714 4727 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a91fc11f996bd226ec44a8afe092a0939a489f4fc8904c1532586d9e36b69817" exitCode=0 Nov 21 20:06:37 crc kubenswrapper[4727]: I1121 20:06:37.513806 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"a91fc11f996bd226ec44a8afe092a0939a489f4fc8904c1532586d9e36b69817"} Nov 21 20:06:37 crc kubenswrapper[4727]: I1121 
20:06:37.513899 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 20:06:37 crc kubenswrapper[4727]: I1121 20:06:37.514103 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:06:37 crc kubenswrapper[4727]: I1121 20:06:37.514156 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:06:37 crc kubenswrapper[4727]: I1121 20:06:37.514169 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:06:37 crc kubenswrapper[4727]: I1121 20:06:37.515798 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:06:37 crc kubenswrapper[4727]: I1121 20:06:37.515825 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:06:37 crc kubenswrapper[4727]: I1121 20:06:37.515837 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:06:37 crc kubenswrapper[4727]: I1121 20:06:37.516484 4727 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="363e33e0f6686c6a97437b91ff8d5be211717d527dac5d1bfad6613849c1a8fc" exitCode=0 Nov 21 20:06:37 crc kubenswrapper[4727]: I1121 20:06:37.516519 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"363e33e0f6686c6a97437b91ff8d5be211717d527dac5d1bfad6613849c1a8fc"} Nov 21 20:06:37 crc kubenswrapper[4727]: I1121 20:06:37.516881 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 20:06:37 crc kubenswrapper[4727]: I1121 20:06:37.517562 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume 
controller attach/detach" Nov 21 20:06:37 crc kubenswrapper[4727]: I1121 20:06:37.518425 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:06:37 crc kubenswrapper[4727]: I1121 20:06:37.518453 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:06:37 crc kubenswrapper[4727]: I1121 20:06:37.518464 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:06:37 crc kubenswrapper[4727]: I1121 20:06:37.518577 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:06:37 crc kubenswrapper[4727]: I1121 20:06:37.518632 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:06:37 crc kubenswrapper[4727]: I1121 20:06:37.518653 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:06:37 crc kubenswrapper[4727]: I1121 20:06:37.519252 4727 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="138286bcd4ee791d6db84d43810f66356d58b95eab57e8b04c630269683243f2" exitCode=0 Nov 21 20:06:37 crc kubenswrapper[4727]: I1121 20:06:37.519329 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"138286bcd4ee791d6db84d43810f66356d58b95eab57e8b04c630269683243f2"} Nov 21 20:06:37 crc kubenswrapper[4727]: I1121 20:06:37.519337 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 20:06:37 crc kubenswrapper[4727]: I1121 20:06:37.520293 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:06:37 crc 
kubenswrapper[4727]: I1121 20:06:37.520366 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:06:37 crc kubenswrapper[4727]: I1121 20:06:37.520390 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:06:37 crc kubenswrapper[4727]: I1121 20:06:37.521413 4727 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="630bac7109e66956147024ba0b3c44511098cd3af401507803e4a31afaedbe22" exitCode=0 Nov 21 20:06:37 crc kubenswrapper[4727]: I1121 20:06:37.521445 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"630bac7109e66956147024ba0b3c44511098cd3af401507803e4a31afaedbe22"} Nov 21 20:06:37 crc kubenswrapper[4727]: I1121 20:06:37.521543 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 20:06:37 crc kubenswrapper[4727]: I1121 20:06:37.523487 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:06:37 crc kubenswrapper[4727]: I1121 20:06:37.523562 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:06:37 crc kubenswrapper[4727]: I1121 20:06:37.523587 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:06:38 crc kubenswrapper[4727]: I1121 20:06:38.152763 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 21 20:06:38 crc kubenswrapper[4727]: I1121 20:06:38.435018 4727 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.179:6443: connect: connection refused Nov 21 20:06:38 crc kubenswrapper[4727]: E1121 20:06:38.449292 4727 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.179:6443: connect: connection refused" interval="3.2s" Nov 21 20:06:38 crc kubenswrapper[4727]: I1121 20:06:38.525348 4727 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="16d2632c704cf56d6b416cdcc6213e3b60836d12cfae28af4bd06f99dff56cd9" exitCode=0 Nov 21 20:06:38 crc kubenswrapper[4727]: I1121 20:06:38.525439 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"16d2632c704cf56d6b416cdcc6213e3b60836d12cfae28af4bd06f99dff56cd9"} Nov 21 20:06:38 crc kubenswrapper[4727]: I1121 20:06:38.525469 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 20:06:38 crc kubenswrapper[4727]: I1121 20:06:38.526360 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:06:38 crc kubenswrapper[4727]: I1121 20:06:38.526392 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:06:38 crc kubenswrapper[4727]: I1121 20:06:38.526404 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:06:38 crc kubenswrapper[4727]: I1121 20:06:38.527129 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 20:06:38 crc kubenswrapper[4727]: I1121 20:06:38.527141 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"333e986e96f39eed917fea4527a9e2bc0dcdfe7824b75aefc2fdc8e747b49300"} Nov 21 20:06:38 crc kubenswrapper[4727]: I1121 20:06:38.528115 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:06:38 crc kubenswrapper[4727]: I1121 20:06:38.528151 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:06:38 crc kubenswrapper[4727]: I1121 20:06:38.528164 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:06:38 crc kubenswrapper[4727]: I1121 20:06:38.529660 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"9032fe5d80139b864a703f431942616401793036b9960ff2c001b76a3d850b4b"} Nov 21 20:06:38 crc kubenswrapper[4727]: I1121 20:06:38.529706 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 20:06:38 crc kubenswrapper[4727]: I1121 20:06:38.529712 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"9541542fc2e4eda864eda99995554289d32061e0488f0bf4af04411d3fba3d5b"} Nov 21 20:06:38 crc kubenswrapper[4727]: I1121 20:06:38.529730 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e07188732e38e314d39b245e64c5ff2882e3f9622a4f4361b7b1b2730fa5693d"} Nov 21 20:06:38 crc kubenswrapper[4727]: I1121 20:06:38.530469 4727 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:06:38 crc kubenswrapper[4727]: I1121 20:06:38.530496 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:06:38 crc kubenswrapper[4727]: I1121 20:06:38.530504 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:06:38 crc kubenswrapper[4727]: I1121 20:06:38.531921 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"de7aca7b376d53b74769b52a2cd9d5e1f4736551b442f71e77b2fac5fb4bd1c6"} Nov 21 20:06:38 crc kubenswrapper[4727]: I1121 20:06:38.531952 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 20:06:38 crc kubenswrapper[4727]: I1121 20:06:38.531999 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 20:06:38 crc kubenswrapper[4727]: I1121 20:06:38.531972 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"82ceae36169c61dcc484be2881c9562b4024866925aab81778ab9e171413f119"} Nov 21 20:06:38 crc kubenswrapper[4727]: I1121 20:06:38.532104 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7cc2199f65adaa48cd1726ea812b332e4f40291e094ccfc215343bdb2f2bd2f1"} Nov 21 20:06:38 crc kubenswrapper[4727]: I1121 20:06:38.532121 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1f3b9bf6cf2709d4559bf6ce2ddeed47f8ebacd7878509b0190ed24b2e55380e"} Nov 21 20:06:38 
crc kubenswrapper[4727]: I1121 20:06:38.532134 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"27c37b29dda7ae2cc145e0770879e433aa03e277121be3b7acfe22d41d27801f"} Nov 21 20:06:38 crc kubenswrapper[4727]: I1121 20:06:38.533870 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:06:38 crc kubenswrapper[4727]: I1121 20:06:38.533905 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:06:38 crc kubenswrapper[4727]: I1121 20:06:38.533916 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:06:38 crc kubenswrapper[4727]: I1121 20:06:38.534714 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:06:38 crc kubenswrapper[4727]: I1121 20:06:38.534754 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:06:38 crc kubenswrapper[4727]: I1121 20:06:38.534766 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:06:38 crc kubenswrapper[4727]: I1121 20:06:38.673645 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 20:06:38 crc kubenswrapper[4727]: I1121 20:06:38.674744 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:06:38 crc kubenswrapper[4727]: I1121 20:06:38.674787 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:06:38 crc kubenswrapper[4727]: I1121 20:06:38.674798 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Nov 21 20:06:38 crc kubenswrapper[4727]: I1121 20:06:38.674830 4727 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 21 20:06:38 crc kubenswrapper[4727]: E1121 20:06:38.675264 4727 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.179:6443: connect: connection refused" node="crc" Nov 21 20:06:39 crc kubenswrapper[4727]: W1121 20:06:39.063726 4727 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.179:6443: connect: connection refused Nov 21 20:06:39 crc kubenswrapper[4727]: E1121 20:06:39.063808 4727 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.179:6443: connect: connection refused" logger="UnhandledError" Nov 21 20:06:39 crc kubenswrapper[4727]: I1121 20:06:39.535677 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 21 20:06:39 crc kubenswrapper[4727]: I1121 20:06:39.537368 4727 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="de7aca7b376d53b74769b52a2cd9d5e1f4736551b442f71e77b2fac5fb4bd1c6" exitCode=255 Nov 21 20:06:39 crc kubenswrapper[4727]: I1121 20:06:39.537450 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"de7aca7b376d53b74769b52a2cd9d5e1f4736551b442f71e77b2fac5fb4bd1c6"} Nov 21 20:06:39 crc 
kubenswrapper[4727]: I1121 20:06:39.537493 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 20:06:39 crc kubenswrapper[4727]: I1121 20:06:39.538521 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:06:39 crc kubenswrapper[4727]: I1121 20:06:39.538562 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:06:39 crc kubenswrapper[4727]: I1121 20:06:39.538578 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:06:39 crc kubenswrapper[4727]: I1121 20:06:39.539461 4727 scope.go:117] "RemoveContainer" containerID="de7aca7b376d53b74769b52a2cd9d5e1f4736551b442f71e77b2fac5fb4bd1c6" Nov 21 20:06:39 crc kubenswrapper[4727]: I1121 20:06:39.540483 4727 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="91dcb1c5c16b4720f4c812b25a87ebd8a0f3a334cc714233b6852cd39322fbf0" exitCode=0 Nov 21 20:06:39 crc kubenswrapper[4727]: I1121 20:06:39.540562 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 20:06:39 crc kubenswrapper[4727]: I1121 20:06:39.540553 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"91dcb1c5c16b4720f4c812b25a87ebd8a0f3a334cc714233b6852cd39322fbf0"} Nov 21 20:06:39 crc kubenswrapper[4727]: I1121 20:06:39.540607 4727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 21 20:06:39 crc kubenswrapper[4727]: I1121 20:06:39.540650 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 20:06:39 crc kubenswrapper[4727]: I1121 20:06:39.540651 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume 
controller attach/detach" Nov 21 20:06:39 crc kubenswrapper[4727]: I1121 20:06:39.540650 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 20:06:39 crc kubenswrapper[4727]: I1121 20:06:39.541785 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:06:39 crc kubenswrapper[4727]: I1121 20:06:39.541803 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:06:39 crc kubenswrapper[4727]: I1121 20:06:39.541811 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:06:39 crc kubenswrapper[4727]: I1121 20:06:39.542292 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:06:39 crc kubenswrapper[4727]: I1121 20:06:39.542310 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:06:39 crc kubenswrapper[4727]: I1121 20:06:39.542318 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:06:39 crc kubenswrapper[4727]: I1121 20:06:39.542722 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:06:39 crc kubenswrapper[4727]: I1121 20:06:39.542778 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:06:39 crc kubenswrapper[4727]: I1121 20:06:39.542801 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:06:39 crc kubenswrapper[4727]: I1121 20:06:39.542856 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:06:39 crc kubenswrapper[4727]: I1121 20:06:39.542873 4727 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:06:39 crc kubenswrapper[4727]: I1121 20:06:39.542884 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:06:39 crc kubenswrapper[4727]: I1121 20:06:39.813772 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 21 20:06:40 crc kubenswrapper[4727]: I1121 20:06:40.114432 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 21 20:06:40 crc kubenswrapper[4727]: I1121 20:06:40.544988 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 21 20:06:40 crc kubenswrapper[4727]: I1121 20:06:40.547380 4727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 21 20:06:40 crc kubenswrapper[4727]: I1121 20:06:40.547393 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"dd16de1da3c89b579d33c34b4dd24d8ae9df35e17819135165afc45db1297c61"} Nov 21 20:06:40 crc kubenswrapper[4727]: I1121 20:06:40.547430 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 20:06:40 crc kubenswrapper[4727]: I1121 20:06:40.548880 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:06:40 crc kubenswrapper[4727]: I1121 20:06:40.548938 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:06:40 crc kubenswrapper[4727]: I1121 20:06:40.548993 4727 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 21 20:06:40 crc kubenswrapper[4727]: I1121 20:06:40.553052 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"395c324d3a95c173c7dd0c9206831fe07c220fb6ef12f7b25444a120c5183202"} Nov 21 20:06:40 crc kubenswrapper[4727]: I1121 20:06:40.553115 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"1324bc3b6958e18a7c496fdb513726b921435f35dde2064fcaf9a6fca06eb8b7"} Nov 21 20:06:40 crc kubenswrapper[4727]: I1121 20:06:40.553128 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"cde1196a3cb3b4d0acbb70ac333211099ec6f26a975e2de7c07d059983024368"} Nov 21 20:06:40 crc kubenswrapper[4727]: I1121 20:06:40.553137 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"de2d5f0603d9b7d189293750a83d5a3957e98f0390659f8c88d35ae1cae6b18e"} Nov 21 20:06:40 crc kubenswrapper[4727]: I1121 20:06:40.553148 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6021787d6acdc5734a82ef2ed53add13275c3461c1ffd38219c264e7fca3fab7"} Nov 21 20:06:40 crc kubenswrapper[4727]: I1121 20:06:40.553201 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 20:06:40 crc kubenswrapper[4727]: I1121 20:06:40.553201 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 20:06:40 crc kubenswrapper[4727]: I1121 20:06:40.554665 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 21 20:06:40 crc kubenswrapper[4727]: I1121 20:06:40.554694 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:06:40 crc kubenswrapper[4727]: I1121 20:06:40.554705 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:06:40 crc kubenswrapper[4727]: I1121 20:06:40.554741 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:06:40 crc kubenswrapper[4727]: I1121 20:06:40.554765 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:06:40 crc kubenswrapper[4727]: I1121 20:06:40.554775 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:06:41 crc kubenswrapper[4727]: I1121 20:06:41.556518 4727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 21 20:06:41 crc kubenswrapper[4727]: I1121 20:06:41.556582 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 20:06:41 crc kubenswrapper[4727]: I1121 20:06:41.556581 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 20:06:41 crc kubenswrapper[4727]: I1121 20:06:41.557652 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:06:41 crc kubenswrapper[4727]: I1121 20:06:41.557689 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:06:41 crc kubenswrapper[4727]: I1121 20:06:41.557701 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:06:41 crc kubenswrapper[4727]: I1121 20:06:41.558055 4727 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:06:41 crc kubenswrapper[4727]: I1121 20:06:41.558098 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:06:41 crc kubenswrapper[4727]: I1121 20:06:41.558114 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:06:41 crc kubenswrapper[4727]: I1121 20:06:41.568097 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 21 20:06:41 crc kubenswrapper[4727]: I1121 20:06:41.619572 4727 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Nov 21 20:06:41 crc kubenswrapper[4727]: I1121 20:06:41.645476 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 21 20:06:41 crc kubenswrapper[4727]: I1121 20:06:41.645745 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 20:06:41 crc kubenswrapper[4727]: I1121 20:06:41.647546 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:06:41 crc kubenswrapper[4727]: I1121 20:06:41.647580 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:06:41 crc kubenswrapper[4727]: I1121 20:06:41.647591 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:06:41 crc kubenswrapper[4727]: I1121 20:06:41.656149 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 21 20:06:41 crc kubenswrapper[4727]: I1121 20:06:41.876305 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Nov 21 20:06:41 crc kubenswrapper[4727]: I1121 20:06:41.877713 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:06:41 crc kubenswrapper[4727]: I1121 20:06:41.877753 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:06:41 crc kubenswrapper[4727]: I1121 20:06:41.877766 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:06:41 crc kubenswrapper[4727]: I1121 20:06:41.877790 4727 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 21 20:06:42 crc kubenswrapper[4727]: I1121 20:06:42.559857 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 20:06:42 crc kubenswrapper[4727]: I1121 20:06:42.559906 4727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 21 20:06:42 crc kubenswrapper[4727]: I1121 20:06:42.559979 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 20:06:42 crc kubenswrapper[4727]: I1121 20:06:42.560922 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:06:42 crc kubenswrapper[4727]: I1121 20:06:42.560950 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:06:42 crc kubenswrapper[4727]: I1121 20:06:42.560988 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:06:42 crc kubenswrapper[4727]: I1121 20:06:42.560953 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:06:42 crc kubenswrapper[4727]: I1121 20:06:42.561036 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 21 20:06:42 crc kubenswrapper[4727]: I1121 20:06:42.561068 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:06:43 crc kubenswrapper[4727]: I1121 20:06:43.121124 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 21 20:06:43 crc kubenswrapper[4727]: I1121 20:06:43.562430 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 20:06:43 crc kubenswrapper[4727]: I1121 20:06:43.563303 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:06:43 crc kubenswrapper[4727]: I1121 20:06:43.563339 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:06:43 crc kubenswrapper[4727]: I1121 20:06:43.563351 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:06:44 crc kubenswrapper[4727]: I1121 20:06:44.905246 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Nov 21 20:06:44 crc kubenswrapper[4727]: I1121 20:06:44.905566 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 20:06:44 crc kubenswrapper[4727]: I1121 20:06:44.907361 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:06:44 crc kubenswrapper[4727]: I1121 20:06:44.907408 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:06:44 crc kubenswrapper[4727]: I1121 20:06:44.907427 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:06:45 crc kubenswrapper[4727]: E1121 20:06:45.568019 4727 eviction_manager.go:285] 
"Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 21 20:06:46 crc kubenswrapper[4727]: I1121 20:06:46.720373 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 21 20:06:46 crc kubenswrapper[4727]: I1121 20:06:46.720565 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 20:06:46 crc kubenswrapper[4727]: I1121 20:06:46.722158 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:06:46 crc kubenswrapper[4727]: I1121 20:06:46.722218 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:06:46 crc kubenswrapper[4727]: I1121 20:06:46.722236 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:06:46 crc kubenswrapper[4727]: I1121 20:06:46.726383 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 21 20:06:46 crc kubenswrapper[4727]: I1121 20:06:46.831513 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Nov 21 20:06:46 crc kubenswrapper[4727]: I1121 20:06:46.831705 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 20:06:46 crc kubenswrapper[4727]: I1121 20:06:46.832916 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:06:46 crc kubenswrapper[4727]: I1121 20:06:46.832944 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:06:46 crc kubenswrapper[4727]: I1121 20:06:46.832952 4727 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientPID" Nov 21 20:06:47 crc kubenswrapper[4727]: I1121 20:06:47.135872 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 21 20:06:47 crc kubenswrapper[4727]: I1121 20:06:47.572235 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 20:06:47 crc kubenswrapper[4727]: I1121 20:06:47.573004 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:06:47 crc kubenswrapper[4727]: I1121 20:06:47.573031 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:06:47 crc kubenswrapper[4727]: I1121 20:06:47.573039 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:06:48 crc kubenswrapper[4727]: I1121 20:06:48.574567 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 20:06:48 crc kubenswrapper[4727]: I1121 20:06:48.575515 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:06:48 crc kubenswrapper[4727]: I1121 20:06:48.575550 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:06:48 crc kubenswrapper[4727]: I1121 20:06:48.575564 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:06:49 crc kubenswrapper[4727]: I1121 20:06:49.181367 4727 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" 
cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 21 20:06:49 crc kubenswrapper[4727]: I1121 20:06:49.181664 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 21 20:06:49 crc kubenswrapper[4727]: I1121 20:06:49.185739 4727 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 21 20:06:49 crc kubenswrapper[4727]: I1121 20:06:49.185830 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 21 20:06:49 crc kubenswrapper[4727]: I1121 20:06:49.721450 4727 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 21 20:06:49 crc kubenswrapper[4727]: I1121 20:06:49.721529 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout 
exceeded while awaiting headers)" Nov 21 20:06:51 crc kubenswrapper[4727]: I1121 20:06:51.344277 4727 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Nov 21 20:06:51 crc kubenswrapper[4727]: I1121 20:06:51.344334 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Nov 21 20:06:51 crc kubenswrapper[4727]: I1121 20:06:51.573378 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 21 20:06:51 crc kubenswrapper[4727]: I1121 20:06:51.573634 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 20:06:51 crc kubenswrapper[4727]: I1121 20:06:51.574183 4727 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Nov 21 20:06:51 crc kubenswrapper[4727]: I1121 20:06:51.574254 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Nov 21 20:06:51 crc kubenswrapper[4727]: I1121 20:06:51.578294 4727 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:06:51 crc kubenswrapper[4727]: I1121 20:06:51.578339 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:06:51 crc kubenswrapper[4727]: I1121 20:06:51.578352 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:06:51 crc kubenswrapper[4727]: I1121 20:06:51.579202 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 21 20:06:52 crc kubenswrapper[4727]: I1121 20:06:52.582267 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 20:06:52 crc kubenswrapper[4727]: I1121 20:06:52.582670 4727 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Nov 21 20:06:52 crc kubenswrapper[4727]: I1121 20:06:52.582832 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Nov 21 20:06:52 crc kubenswrapper[4727]: I1121 20:06:52.583583 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:06:52 crc kubenswrapper[4727]: I1121 20:06:52.583674 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:06:52 crc kubenswrapper[4727]: I1121 20:06:52.583777 4727 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 21 20:06:53 crc kubenswrapper[4727]: I1121 20:06:53.121941 4727 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Nov 21 20:06:53 crc kubenswrapper[4727]: I1121 20:06:53.122042 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Nov 21 20:06:54 crc kubenswrapper[4727]: E1121 20:06:54.181850 4727 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.183354 4727 trace.go:236] Trace[1518735802]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Nov-2025 20:06:39.443) (total time: 14739ms): Nov 21 20:06:54 crc kubenswrapper[4727]: Trace[1518735802]: ---"Objects listed" error: 14739ms (20:06:54.183) Nov 21 20:06:54 crc kubenswrapper[4727]: Trace[1518735802]: [14.73930711s] [14.73930711s] END Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.183376 4727 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.184207 4727 trace.go:236] Trace[2046675545]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Nov-2025 20:06:39.910) (total time: 14273ms): Nov 21 20:06:54 crc kubenswrapper[4727]: Trace[2046675545]: ---"Objects listed" 
error: 14273ms (20:06:54.184) Nov 21 20:06:54 crc kubenswrapper[4727]: Trace[2046675545]: [14.273811573s] [14.273811573s] END Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.184252 4727 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.184312 4727 trace.go:236] Trace[1152950340]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Nov-2025 20:06:44.094) (total time: 10089ms): Nov 21 20:06:54 crc kubenswrapper[4727]: Trace[1152950340]: ---"Objects listed" error: 10089ms (20:06:54.184) Nov 21 20:06:54 crc kubenswrapper[4727]: Trace[1152950340]: [10.089979619s] [10.089979619s] END Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.184339 4727 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.184673 4727 trace.go:236] Trace[443044052]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Nov-2025 20:06:39.673) (total time: 14510ms): Nov 21 20:06:54 crc kubenswrapper[4727]: Trace[443044052]: ---"Objects listed" error: 14510ms (20:06:54.184) Nov 21 20:06:54 crc kubenswrapper[4727]: Trace[443044052]: [14.510822126s] [14.510822126s] END Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.184697 4727 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.184995 4727 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Nov 21 20:06:54 crc kubenswrapper[4727]: E1121 20:06:54.187045 4727 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.196234 4727 reflector.go:368] Caches populated for 
*v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.210975 4727 csr.go:261] certificate signing request csr-sn2t5 is approved, waiting to be issued Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.219483 4727 csr.go:257] certificate signing request csr-sn2t5 is issued Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.423358 4727 apiserver.go:52] "Watching apiserver" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.426354 4727 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.426486 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf"] Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.426816 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.426854 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.426933 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.426985 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.426940 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 20:06:54 crc kubenswrapper[4727]: E1121 20:06:54.427079 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.427097 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 21 20:06:54 crc kubenswrapper[4727]: E1121 20:06:54.427184 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 20:06:54 crc kubenswrapper[4727]: E1121 20:06:54.427284 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.428270 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.429051 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.429051 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.429677 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.429691 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.429985 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.430299 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.430479 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.431139 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.442001 4727 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 21 20:06:54 crc kubenswrapper[4727]: 
I1121 20:06:54.472245 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.486252 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.486294 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.486315 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.486331 4727 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.486347 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.486365 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.486381 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.486395 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.486409 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod 
\"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.486424 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.486438 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.486454 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.486470 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.486488 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.486503 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.486518 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.486532 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.486549 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.486566 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.486581 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: 
\"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.486598 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.486613 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.486630 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.486645 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.486670 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.486685 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.486707 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.486697 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.486698 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.486752 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.486716 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.486725 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.486862 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.486864 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.486897 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). 
InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.486906 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.486923 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.486908 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.486988 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.486997 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.487028 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.487051 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.487074 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.487091 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.487096 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.487154 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.487209 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.487237 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.487258 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.487279 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.487301 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.487322 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.487344 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.487367 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.487389 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.487412 
4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.487437 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.487460 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.487486 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.487506 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.487527 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: 
\"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.487550 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.487572 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.487593 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.487619 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.487640 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 21 
20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.487662 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.487682 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.487705 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.487728 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.487748 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.487782 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: 
\"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.487804 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.487827 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.487849 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.487869 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.487893 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 21 20:06:54 crc 
kubenswrapper[4727]: I1121 20:06:54.487914 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.487935 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.487983 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.488009 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.488032 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.488056 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod 
\"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.488080 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.488102 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.488343 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.488365 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.488387 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.488415 4727 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.488439 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.488464 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.488490 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.488512 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.488534 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod 
\"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.488556 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.488576 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.488597 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.488618 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.488639 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.488662 4727 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.488684 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.488705 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.488726 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.488745 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.488765 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.488787 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.488811 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.488835 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.488895 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.488922 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.488944 4727 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.488985 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.489014 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.489038 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.489060 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.489081 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.489103 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.489124 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.489146 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.489166 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.489187 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.489209 4727 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.489233 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.489255 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.489279 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.489303 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.489325 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") 
pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.489351 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.489375 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.489398 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.489419 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.489440 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.489462 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.489484 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.489506 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.489532 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.489553 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.489576 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod 
\"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.489598 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.489619 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.489643 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.489667 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.489689 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.489710 4727 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.489732 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.489755 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.489778 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.489802 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.489825 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: 
\"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.489848 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.489870 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.489893 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.489914 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.489936 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.489975 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.489997 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.490023 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.490044 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.490065 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.490087 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.490111 4727 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.490134 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.490156 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.490241 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.490264 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.490285 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod 
\"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.490306 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.490328 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.490350 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.490371 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.490394 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.490419 4727 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.490442 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.490464 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.490485 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.490508 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.490532 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.490610 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.490635 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.490659 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.490681 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.490703 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 21 20:06:54 crc kubenswrapper[4727]: 
I1121 20:06:54.490724 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.490745 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.490768 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.490791 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.490812 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.490834 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.490855 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.490876 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.490898 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.490922 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.490946 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.491000 4727 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.491023 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.491045 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.491066 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.491088 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.491113 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod 
\"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.491137 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.491159 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.491191 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.491212 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.491239 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.491265 4727 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.491314 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.491345 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.491372 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.491397 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 21 20:06:54 
crc kubenswrapper[4727]: I1121 20:06:54.491420 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.491445 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.491469 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.491494 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.491518 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: 
\"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.491540 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.491562 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.491589 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.491616 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.491640 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.491744 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.491761 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.491776 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.491788 4727 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.491800 4727 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.491813 4727 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.491825 4727 reconciler_common.go:293] 
"Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.491837 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.491850 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.491862 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.487119 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.493827 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.493876 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.494033 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.494118 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.494376 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.494376 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.487131 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.487160 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.487197 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.487295 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.487391 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.487455 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.487447 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.487524 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.487532 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.487631 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.487676 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.487673 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.487805 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.487830 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.487987 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.488103 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.488215 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.488317 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.488375 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.488405 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.488587 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.488611 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.488784 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.488826 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.488970 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.488980 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.489007 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.489141 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.489160 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.489313 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.489317 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.492412 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.492836 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.492910 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.493002 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.493068 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.493265 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.493282 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.493311 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.493397 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.493503 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.493532 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.493690 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.494563 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.493797 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.494634 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.494985 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.495156 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.495188 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.495480 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.495662 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.495716 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.495828 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.495880 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.495933 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.496006 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.496027 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.496136 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.496264 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.496416 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.496469 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.496572 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.496861 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.497034 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.497214 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.497627 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.497651 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.497862 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.497940 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.498036 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.498040 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.500116 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.500312 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.500529 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.500577 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.500775 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.500973 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.501133 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.501271 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.501430 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.501624 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.501672 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.501802 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.504277 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.505261 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.505501 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.505606 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.505621 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.505848 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.506203 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.506382 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.506449 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.506539 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.506727 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.506754 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.506968 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.507012 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.507103 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.507284 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.507316 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.507327 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.507461 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.507530 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.507776 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.507821 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.507927 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.508167 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.508309 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.508340 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.508520 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.509042 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.509490 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.509611 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.509749 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.509801 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.510052 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.510105 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.510283 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.510457 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.510589 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.510634 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.511103 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: E1121 20:06:54.511209 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 20:06:55.011187823 +0000 UTC m=+20.197372947 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.511371 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.511409 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.511553 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.512846 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.513398 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.513557 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.513610 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.513612 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.513742 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.514841 4727 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.515595 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.515637 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.516109 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.516345 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.516587 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.516759 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.516894 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.517340 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.517440 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.517464 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.517750 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.518282 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.518609 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.519058 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: E1121 20:06:54.519177 4727 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 21 20:06:54 crc kubenswrapper[4727]: E1121 20:06:54.519252 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-21 20:06:55.019232954 +0000 UTC m=+20.205418068 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.519252 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.521119 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.521624 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.521698 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.521898 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.522097 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.523096 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 21 20:06:54 crc kubenswrapper[4727]: E1121 20:06:54.523239 4727 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 21 20:06:54 crc kubenswrapper[4727]: E1121 20:06:54.523298 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-21 20:06:55.023277371 +0000 UTC m=+20.209462415 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.524494 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.524689 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.524942 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.524988 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.525074 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.525557 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.525722 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.525896 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.530139 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.530401 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: E1121 20:06:54.533275 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 21 20:06:54 crc kubenswrapper[4727]: E1121 20:06:54.533297 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 21 20:06:54 crc kubenswrapper[4727]: E1121 20:06:54.533308 4727 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 20:06:54 crc kubenswrapper[4727]: E1121 20:06:54.533355 4727 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-21 20:06:55.033340336 +0000 UTC m=+20.219525380 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.534400 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.536471 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.536932 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.536978 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.537509 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 21 20:06:54 crc kubenswrapper[4727]: E1121 20:06:54.537979 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 21 20:06:54 crc kubenswrapper[4727]: E1121 20:06:54.538001 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 21 20:06:54 crc kubenswrapper[4727]: E1121 20:06:54.538012 4727 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 20:06:54 crc kubenswrapper[4727]: E1121 20:06:54.538046 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" 
failed. No retries permitted until 2025-11-21 20:06:55.038034637 +0000 UTC m=+20.224219681 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.540619 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.542101 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.542732 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.544726 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.546018 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.546171 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.546223 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.546337 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.546837 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.546948 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.547174 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.547297 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.547368 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.550617 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.551818 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.553611 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.565034 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.578311 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.579359 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.580736 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.592216 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.592318 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.592384 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.592398 4727 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.592411 4727 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: 
\"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.592422 4727 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.592434 4727 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.592445 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.592458 4727 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.592469 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.592480 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.592501 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on 
node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.592512 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.592520 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.592528 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.592536 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.592544 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.592551 4727 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.592559 4727 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 
20:06:54.592567 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.592575 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.592586 4727 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.592599 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.592610 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.592623 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.592635 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc 
kubenswrapper[4727]: I1121 20:06:54.592645 4727 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.592653 4727 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.592661 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.592669 4727 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.592680 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.592691 4727 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.592704 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.592715 4727 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.592727 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.592739 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.592751 4727 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.592763 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.592775 4727 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.592785 4727 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.592795 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") 
on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.592806 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.592816 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.592951 4727 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.593038 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.593090 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.593235 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod 
\"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.593358 4727 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.593371 4727 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.593381 4727 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.593391 4727 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.593399 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.593408 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.593420 4727 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: 
\"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.593432 4727 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.593460 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.593479 4727 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.593492 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.593506 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.593518 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.593529 4727 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.593540 4727 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.593552 4727 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.593564 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.593577 4727 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.593589 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.593604 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.593616 4727 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node 
\"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.593628 4727 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.593640 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.593652 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.593664 4727 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.593675 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.593686 4727 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.593699 4727 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.593711 4727 
reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.593723 4727 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.593736 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.593748 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.593760 4727 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.593771 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.593782 4727 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.593793 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.593803 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.593815 4727 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.593826 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.593837 4727 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.593848 4727 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.593858 4727 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.593867 4727 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc 
kubenswrapper[4727]: I1121 20:06:54.593878 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.593888 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.593900 4727 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.593913 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.593923 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.593935 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.593947 4727 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.593976 4727 reconciler_common.go:293] 
"Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.593989 4727 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594005 4727 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594017 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594027 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594037 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594048 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594060 4727 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594071 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594081 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594091 4727 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594101 4727 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594111 4727 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594120 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594130 4727 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594141 4727 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594152 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594162 4727 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594174 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594186 4727 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594197 4727 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594207 4727 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 21 
20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594235 4727 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594246 4727 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594256 4727 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594266 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594275 4727 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594286 4727 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594298 4727 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594313 4727 
reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594324 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594336 4727 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594346 4727 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594357 4727 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594369 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594380 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594390 4727 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" 
(UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594400 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594411 4727 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594424 4727 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594435 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594445 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594457 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594469 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: 
\"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594481 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594489 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594497 4727 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594504 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594512 4727 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594520 4727 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594528 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node 
\"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594535 4727 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594543 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594552 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594561 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594571 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594579 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594587 4727 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 
crc kubenswrapper[4727]: I1121 20:06:54.594594 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594602 4727 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594609 4727 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594618 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594625 4727 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594633 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594641 4727 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594648 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: 
\"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594656 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594664 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594673 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594681 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594689 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594697 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594705 4727 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594712 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594720 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594727 4727 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594736 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594744 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594752 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594760 4727 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594769 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594777 4727 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594784 4727 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594792 4727 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594800 4727 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594809 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594820 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc 
kubenswrapper[4727]: I1121 20:06:54.594828 4727 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594836 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594844 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.594860 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.597404 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.603936 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.613393 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.622802 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.632430 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.695469 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.742466 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.755456 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.761786 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 21 20:06:54 crc kubenswrapper[4727]: W1121 20:06:54.767303 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-3d465144f50c8a17a1f94ba202df446f1841f0e1a213ee3de94de3d580104376 WatchSource:0}: Error finding container 3d465144f50c8a17a1f94ba202df446f1841f0e1a213ee3de94de3d580104376: Status 404 returned error can't find the container with id 3d465144f50c8a17a1f94ba202df446f1841f0e1a213ee3de94de3d580104376 Nov 21 20:06:54 crc kubenswrapper[4727]: W1121 20:06:54.778482 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-b276863efdd611292f3e7779ef552212ddb4eca65c22bd19dc94fc062961a263 WatchSource:0}: Error finding container b276863efdd611292f3e7779ef552212ddb4eca65c22bd19dc94fc062961a263: Status 404 returned error can't find the container with id b276863efdd611292f3e7779ef552212ddb4eca65c22bd19dc94fc062961a263 Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.861151 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-5k2kk"] Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.861513 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-ccfbn"] Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.861682 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.861744 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-ccfbn" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.864584 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.864706 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.864827 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.864872 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.864990 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.865238 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.865291 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.865418 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.886692 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.902677 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.914515 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.923200 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ccfbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bffa327-bdf2-45d4-93ab-40152e82d177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ccfbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.933773 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.945372 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.957349 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.966832 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b58aef8f-f223-47d8-a2e6-4a80aeeeec42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5k2kk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.978890 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.989003 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b58aef8f-f223-47d8-a2e6-4a80aeeeec42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5k2kk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.997786 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kk6qf\" (UniqueName: \"kubernetes.io/projected/0bffa327-bdf2-45d4-93ab-40152e82d177-kube-api-access-kk6qf\") pod \"node-resolver-ccfbn\" (UID: \"0bffa327-bdf2-45d4-93ab-40152e82d177\") " pod="openshift-dns/node-resolver-ccfbn" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.997833 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b58aef8f-f223-47d8-a2e6-4a80aeeeec42-mcd-auth-proxy-config\") pod \"machine-config-daemon-5k2kk\" (UID: \"b58aef8f-f223-47d8-a2e6-4a80aeeeec42\") " pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.997857 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg9tm\" (UniqueName: \"kubernetes.io/projected/b58aef8f-f223-47d8-a2e6-4a80aeeeec42-kube-api-access-zg9tm\") pod \"machine-config-daemon-5k2kk\" (UID: \"b58aef8f-f223-47d8-a2e6-4a80aeeeec42\") " pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.997882 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/0bffa327-bdf2-45d4-93ab-40152e82d177-hosts-file\") pod \"node-resolver-ccfbn\" (UID: \"0bffa327-bdf2-45d4-93ab-40152e82d177\") " 
pod="openshift-dns/node-resolver-ccfbn" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.997898 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/b58aef8f-f223-47d8-a2e6-4a80aeeeec42-rootfs\") pod \"machine-config-daemon-5k2kk\" (UID: \"b58aef8f-f223-47d8-a2e6-4a80aeeeec42\") " pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.997914 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b58aef8f-f223-47d8-a2e6-4a80aeeeec42-proxy-tls\") pod \"machine-config-daemon-5k2kk\" (UID: \"b58aef8f-f223-47d8-a2e6-4a80aeeeec42\") " pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" Nov 21 20:06:54 crc kubenswrapper[4727]: I1121 20:06:54.999660 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.009535 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.017810 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ccfbn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bffa327-bdf2-45d4-93ab-40152e82d177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ccfbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.028405 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.040345 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.050813 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.099049 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.099108 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/b58aef8f-f223-47d8-a2e6-4a80aeeeec42-rootfs\") pod \"machine-config-daemon-5k2kk\" (UID: \"b58aef8f-f223-47d8-a2e6-4a80aeeeec42\") " pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.099128 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/b58aef8f-f223-47d8-a2e6-4a80aeeeec42-proxy-tls\") pod \"machine-config-daemon-5k2kk\" (UID: \"b58aef8f-f223-47d8-a2e6-4a80aeeeec42\") " pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.099146 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.099163 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.099201 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kk6qf\" (UniqueName: \"kubernetes.io/projected/0bffa327-bdf2-45d4-93ab-40152e82d177-kube-api-access-kk6qf\") pod \"node-resolver-ccfbn\" (UID: \"0bffa327-bdf2-45d4-93ab-40152e82d177\") " pod="openshift-dns/node-resolver-ccfbn" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.099219 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b58aef8f-f223-47d8-a2e6-4a80aeeeec42-mcd-auth-proxy-config\") pod \"machine-config-daemon-5k2kk\" (UID: \"b58aef8f-f223-47d8-a2e6-4a80aeeeec42\") " pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.099216 4727 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/b58aef8f-f223-47d8-a2e6-4a80aeeeec42-rootfs\") pod \"machine-config-daemon-5k2kk\" (UID: \"b58aef8f-f223-47d8-a2e6-4a80aeeeec42\") " pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.099237 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 20:06:55 crc kubenswrapper[4727]: E1121 20:06:55.099289 4727 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.099296 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zg9tm\" (UniqueName: \"kubernetes.io/projected/b58aef8f-f223-47d8-a2e6-4a80aeeeec42-kube-api-access-zg9tm\") pod \"machine-config-daemon-5k2kk\" (UID: \"b58aef8f-f223-47d8-a2e6-4a80aeeeec42\") " pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" Nov 21 20:06:55 crc kubenswrapper[4727]: E1121 20:06:55.099320 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 21 20:06:55 crc kubenswrapper[4727]: E1121 20:06:55.099354 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.099329 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 20:06:55 crc kubenswrapper[4727]: E1121 20:06:55.099358 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-21 20:06:56.099326088 +0000 UTC m=+21.285511192 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.099389 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/0bffa327-bdf2-45d4-93ab-40152e82d177-hosts-file\") pod \"node-resolver-ccfbn\" (UID: \"0bffa327-bdf2-45d4-93ab-40152e82d177\") " pod="openshift-dns/node-resolver-ccfbn" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.099457 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/0bffa327-bdf2-45d4-93ab-40152e82d177-hosts-file\") pod \"node-resolver-ccfbn\" (UID: \"0bffa327-bdf2-45d4-93ab-40152e82d177\") " pod="openshift-dns/node-resolver-ccfbn" Nov 21 20:06:55 crc kubenswrapper[4727]: E1121 20:06:55.099367 4727 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 20:06:55 crc kubenswrapper[4727]: E1121 20:06:55.099321 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 21 20:06:55 crc kubenswrapper[4727]: E1121 20:06:55.099553 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-21 20:06:56.099533402 +0000 UTC m=+21.285718506 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 20:06:55 crc kubenswrapper[4727]: E1121 20:06:55.099555 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 21 20:06:55 crc kubenswrapper[4727]: E1121 20:06:55.099587 4727 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 20:06:55 crc kubenswrapper[4727]: E1121 20:06:55.099618 4727 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-21 20:06:56.099609783 +0000 UTC m=+21.285794927 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 20:06:55 crc kubenswrapper[4727]: E1121 20:06:55.099529 4727 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 21 20:06:55 crc kubenswrapper[4727]: E1121 20:06:55.099651 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-21 20:06:56.099643784 +0000 UTC m=+21.285828828 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 21 20:06:55 crc kubenswrapper[4727]: E1121 20:06:55.099668 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-21 20:06:56.099658014 +0000 UTC m=+21.285843178 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.099994 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b58aef8f-f223-47d8-a2e6-4a80aeeeec42-mcd-auth-proxy-config\") pod \"machine-config-daemon-5k2kk\" (UID: \"b58aef8f-f223-47d8-a2e6-4a80aeeeec42\") " pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.103289 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b58aef8f-f223-47d8-a2e6-4a80aeeeec42-proxy-tls\") pod \"machine-config-daemon-5k2kk\" (UID: \"b58aef8f-f223-47d8-a2e6-4a80aeeeec42\") " pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.115775 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kk6qf\" (UniqueName: \"kubernetes.io/projected/0bffa327-bdf2-45d4-93ab-40152e82d177-kube-api-access-kk6qf\") pod \"node-resolver-ccfbn\" (UID: \"0bffa327-bdf2-45d4-93ab-40152e82d177\") " pod="openshift-dns/node-resolver-ccfbn" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.124106 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zg9tm\" (UniqueName: 
\"kubernetes.io/projected/b58aef8f-f223-47d8-a2e6-4a80aeeeec42-kube-api-access-zg9tm\") pod \"machine-config-daemon-5k2kk\" (UID: \"b58aef8f-f223-47d8-a2e6-4a80aeeeec42\") " pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.192046 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" Nov 21 20:06:55 crc kubenswrapper[4727]: W1121 20:06:55.201246 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb58aef8f_f223_47d8_a2e6_4a80aeeeec42.slice/crio-d90de66751b8be0ffa49c17938c61a9fa4fc34a97f506cae730241f5e353dadd WatchSource:0}: Error finding container d90de66751b8be0ffa49c17938c61a9fa4fc34a97f506cae730241f5e353dadd: Status 404 returned error can't find the container with id d90de66751b8be0ffa49c17938c61a9fa4fc34a97f506cae730241f5e353dadd Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.219579 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-ccfbn" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.220217 4727 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-11-21 20:01:54 +0000 UTC, rotation deadline is 2026-08-12 12:51:12.112572269 +0000 UTC Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.220319 4727 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6328h44m16.892256585s for next certificate rotation Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.255246 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-7rvdc"] Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.255764 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-7rvdc" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.260365 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.260824 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.263584 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.263649 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.263768 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.267554 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-74crp"] Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.268351 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-74crp" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.270264 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.270982 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.278569 4727 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Nov 21 20:06:55 crc kubenswrapper[4727]: W1121 20:06:55.278822 4727 reflector.go:484] object-"openshift-network-node-identity"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Nov 21 20:06:55 crc kubenswrapper[4727]: W1121 20:06:55.278856 4727 reflector.go:484] object-"openshift-multus"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Nov 21 20:06:55 crc kubenswrapper[4727]: W1121 20:06:55.278888 4727 reflector.go:484] object-"openshift-multus"/"default-dockercfg-2q5b6": watch of *v1.Secret ended with: very short watch: object-"openshift-multus"/"default-dockercfg-2q5b6": Unexpected watch close - watch lasted less than a second and no items received Nov 21 20:06:55 crc kubenswrapper[4727]: W1121 20:06:55.278907 4727 reflector.go:484] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": watch of *v1.Secret ended with: very short watch: object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": Unexpected watch close - watch lasted less than a second and no items received Nov 21 20:06:55 crc kubenswrapper[4727]: 
W1121 20:06:55.278830 4727 reflector.go:484] object-"openshift-multus"/"multus-daemon-config": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"multus-daemon-config": Unexpected watch close - watch lasted less than a second and no items received Nov 21 20:06:55 crc kubenswrapper[4727]: W1121 20:06:55.278886 4727 reflector.go:484] object-"openshift-network-operator"/"metrics-tls": watch of *v1.Secret ended with: very short watch: object-"openshift-network-operator"/"metrics-tls": Unexpected watch close - watch lasted less than a second and no items received Nov 21 20:06:55 crc kubenswrapper[4727]: W1121 20:06:55.279093 4727 reflector.go:484] object-"openshift-machine-config-operator"/"kube-rbac-proxy": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"kube-rbac-proxy": Unexpected watch close - watch lasted less than a second and no items received Nov 21 20:06:55 crc kubenswrapper[4727]: W1121 20:06:55.279116 4727 reflector.go:484] object-"openshift-network-node-identity"/"env-overrides": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"env-overrides": Unexpected watch close - watch lasted less than a second and no items received Nov 21 20:06:55 crc kubenswrapper[4727]: W1121 20:06:55.279120 4727 reflector.go:484] object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": watch of *v1.Secret ended with: very short watch: object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": Unexpected watch close - watch lasted less than a second and no items received Nov 21 20:06:55 crc kubenswrapper[4727]: W1121 20:06:55.279138 4727 reflector.go:484] object-"openshift-network-node-identity"/"ovnkube-identity-cm": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"ovnkube-identity-cm": Unexpected watch close - watch lasted less than a second and no items received 
Nov 21 20:06:55 crc kubenswrapper[4727]: W1121 20:06:55.279160 4727 reflector.go:484] object-"openshift-network-node-identity"/"network-node-identity-cert": watch of *v1.Secret ended with: very short watch: object-"openshift-network-node-identity"/"network-node-identity-cert": Unexpected watch close - watch lasted less than a second and no items received Nov 21 20:06:55 crc kubenswrapper[4727]: W1121 20:06:55.279163 4727 reflector.go:484] object-"openshift-network-operator"/"iptables-alerter-script": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"iptables-alerter-script": Unexpected watch close - watch lasted less than a second and no items received Nov 21 20:06:55 crc kubenswrapper[4727]: W1121 20:06:55.279182 4727 reflector.go:484] object-"openshift-network-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Nov 21 20:06:55 crc kubenswrapper[4727]: W1121 20:06:55.279140 4727 reflector.go:484] object-"openshift-multus"/"default-cni-sysctl-allowlist": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"default-cni-sysctl-allowlist": Unexpected watch close - watch lasted less than a second and no items received Nov 21 20:06:55 crc kubenswrapper[4727]: W1121 20:06:55.279205 4727 reflector.go:484] object-"openshift-network-node-identity"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Nov 21 20:06:55 crc kubenswrapper[4727]: W1121 20:06:55.279222 4727 reflector.go:484] object-"openshift-network-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"kube-root-ca.crt": Unexpected 
watch close - watch lasted less than a second and no items received Nov 21 20:06:55 crc kubenswrapper[4727]: W1121 20:06:55.279323 4727 reflector.go:484] object-"openshift-dns"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Nov 21 20:06:55 crc kubenswrapper[4727]: W1121 20:06:55.279376 4727 reflector.go:484] object-"openshift-machine-config-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Nov 21 20:06:55 crc kubenswrapper[4727]: W1121 20:06:55.279409 4727 reflector.go:484] object-"openshift-multus"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Nov 21 20:06:55 crc kubenswrapper[4727]: W1121 20:06:55.279423 4727 reflector.go:484] object-"openshift-machine-config-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Nov 21 20:06:55 crc kubenswrapper[4727]: W1121 20:06:55.279436 4727 reflector.go:484] object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": watch of *v1.Secret ended with: very short watch: object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": Unexpected watch close - watch lasted less than a second and no items received Nov 21 20:06:55 crc kubenswrapper[4727]: W1121 20:06:55.279467 4727 reflector.go:484] object-"openshift-machine-config-operator"/"proxy-tls": watch of *v1.Secret ended with: very short watch: 
object-"openshift-machine-config-operator"/"proxy-tls": Unexpected watch close - watch lasted less than a second and no items received Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.279513 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/iptables-alerter-4ln5h/status\": read tcp 38.102.83.179:36356->38.102.83.179:6443: use of closed network connection" Nov 21 20:06:55 crc kubenswrapper[4727]: W1121 20:06:55.279663 4727 reflector.go:484] object-"openshift-dns"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Nov 21 20:06:55 crc kubenswrapper[4727]: W1121 20:06:55.279749 4727 reflector.go:484] object-"openshift-multus"/"cni-copy-resources": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"cni-copy-resources": Unexpected watch close - watch lasted less than a second and no items received Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.313688 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ccfbn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bffa327-bdf2-45d4-93ab-40152e82d177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ccfbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.344337 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.354818 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.364174 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.376631 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.384395 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b58aef8f-f223-47d8-a2e6-4a80aeeeec42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5k2kk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.392385 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7rvdc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07dba644-eb6f-45c3-b373-7a1610c569aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7rvdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.401726 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/07dba644-eb6f-45c3-b373-7a1610c569aa-host-var-lib-cni-bin\") pod \"multus-7rvdc\" (UID: \"07dba644-eb6f-45c3-b373-7a1610c569aa\") " pod="openshift-multus/multus-7rvdc" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.401805 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/07dba644-eb6f-45c3-b373-7a1610c569aa-multus-daemon-config\") pod \"multus-7rvdc\" (UID: \"07dba644-eb6f-45c3-b373-7a1610c569aa\") " pod="openshift-multus/multus-7rvdc" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.401823 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/db8a4e70-c074-4f90-aebe-444078f3337f-cni-binary-copy\") pod \"multus-additional-cni-plugins-74crp\" (UID: \"db8a4e70-c074-4f90-aebe-444078f3337f\") " pod="openshift-multus/multus-additional-cni-plugins-74crp" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.401873 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/db8a4e70-c074-4f90-aebe-444078f3337f-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-74crp\" (UID: \"db8a4e70-c074-4f90-aebe-444078f3337f\") " pod="openshift-multus/multus-additional-cni-plugins-74crp" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.401892 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/07dba644-eb6f-45c3-b373-7a1610c569aa-host-run-k8s-cni-cncf-io\") pod \"multus-7rvdc\" (UID: \"07dba644-eb6f-45c3-b373-7a1610c569aa\") " pod="openshift-multus/multus-7rvdc" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.401914 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brm4g\" (UniqueName: \"kubernetes.io/projected/db8a4e70-c074-4f90-aebe-444078f3337f-kube-api-access-brm4g\") pod \"multus-additional-cni-plugins-74crp\" (UID: \"db8a4e70-c074-4f90-aebe-444078f3337f\") " pod="openshift-multus/multus-additional-cni-plugins-74crp" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.401931 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/db8a4e70-c074-4f90-aebe-444078f3337f-cnibin\") pod \"multus-additional-cni-plugins-74crp\" (UID: \"db8a4e70-c074-4f90-aebe-444078f3337f\") " pod="openshift-multus/multus-additional-cni-plugins-74crp" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.401946 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/db8a4e70-c074-4f90-aebe-444078f3337f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-74crp\" (UID: \"db8a4e70-c074-4f90-aebe-444078f3337f\") " 
pod="openshift-multus/multus-additional-cni-plugins-74crp" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.401986 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/07dba644-eb6f-45c3-b373-7a1610c569aa-hostroot\") pod \"multus-7rvdc\" (UID: \"07dba644-eb6f-45c3-b373-7a1610c569aa\") " pod="openshift-multus/multus-7rvdc" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.402005 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/07dba644-eb6f-45c3-b373-7a1610c569aa-etc-kubernetes\") pod \"multus-7rvdc\" (UID: \"07dba644-eb6f-45c3-b373-7a1610c569aa\") " pod="openshift-multus/multus-7rvdc" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.402188 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/07dba644-eb6f-45c3-b373-7a1610c569aa-multus-socket-dir-parent\") pod \"multus-7rvdc\" (UID: \"07dba644-eb6f-45c3-b373-7a1610c569aa\") " pod="openshift-multus/multus-7rvdc" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.402281 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/07dba644-eb6f-45c3-b373-7a1610c569aa-host-run-netns\") pod \"multus-7rvdc\" (UID: \"07dba644-eb6f-45c3-b373-7a1610c569aa\") " pod="openshift-multus/multus-7rvdc" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.402317 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/07dba644-eb6f-45c3-b373-7a1610c569aa-multus-conf-dir\") pod \"multus-7rvdc\" (UID: \"07dba644-eb6f-45c3-b373-7a1610c569aa\") " pod="openshift-multus/multus-7rvdc" Nov 21 20:06:55 
crc kubenswrapper[4727]: I1121 20:06:55.402365 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/07dba644-eb6f-45c3-b373-7a1610c569aa-multus-cni-dir\") pod \"multus-7rvdc\" (UID: \"07dba644-eb6f-45c3-b373-7a1610c569aa\") " pod="openshift-multus/multus-7rvdc" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.402405 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/07dba644-eb6f-45c3-b373-7a1610c569aa-os-release\") pod \"multus-7rvdc\" (UID: \"07dba644-eb6f-45c3-b373-7a1610c569aa\") " pod="openshift-multus/multus-7rvdc" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.402484 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/07dba644-eb6f-45c3-b373-7a1610c569aa-system-cni-dir\") pod \"multus-7rvdc\" (UID: \"07dba644-eb6f-45c3-b373-7a1610c569aa\") " pod="openshift-multus/multus-7rvdc" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.402534 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/07dba644-eb6f-45c3-b373-7a1610c569aa-host-var-lib-cni-multus\") pod \"multus-7rvdc\" (UID: \"07dba644-eb6f-45c3-b373-7a1610c569aa\") " pod="openshift-multus/multus-7rvdc" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.402585 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/db8a4e70-c074-4f90-aebe-444078f3337f-os-release\") pod \"multus-additional-cni-plugins-74crp\" (UID: \"db8a4e70-c074-4f90-aebe-444078f3337f\") " pod="openshift-multus/multus-additional-cni-plugins-74crp" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 
20:06:55.402619 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/07dba644-eb6f-45c3-b373-7a1610c569aa-cnibin\") pod \"multus-7rvdc\" (UID: \"07dba644-eb6f-45c3-b373-7a1610c569aa\") " pod="openshift-multus/multus-7rvdc" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.402657 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/07dba644-eb6f-45c3-b373-7a1610c569aa-cni-binary-copy\") pod \"multus-7rvdc\" (UID: \"07dba644-eb6f-45c3-b373-7a1610c569aa\") " pod="openshift-multus/multus-7rvdc" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.402697 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/07dba644-eb6f-45c3-b373-7a1610c569aa-host-var-lib-kubelet\") pod \"multus-7rvdc\" (UID: \"07dba644-eb6f-45c3-b373-7a1610c569aa\") " pod="openshift-multus/multus-7rvdc" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.402740 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/07dba644-eb6f-45c3-b373-7a1610c569aa-host-run-multus-certs\") pod \"multus-7rvdc\" (UID: \"07dba644-eb6f-45c3-b373-7a1610c569aa\") " pod="openshift-multus/multus-7rvdc" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.402773 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnm5x\" (UniqueName: \"kubernetes.io/projected/07dba644-eb6f-45c3-b373-7a1610c569aa-kube-api-access-wnm5x\") pod \"multus-7rvdc\" (UID: \"07dba644-eb6f-45c3-b373-7a1610c569aa\") " pod="openshift-multus/multus-7rvdc" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.402808 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/db8a4e70-c074-4f90-aebe-444078f3337f-system-cni-dir\") pod \"multus-additional-cni-plugins-74crp\" (UID: \"db8a4e70-c074-4f90-aebe-444078f3337f\") " pod="openshift-multus/multus-additional-cni-plugins-74crp" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.402809 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.415141 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.427931 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.436764 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ccfbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bffa327-bdf2-45d4-93ab-40152e82d177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ccfbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.452117 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-74crp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db8a4e70-c074-4f90-aebe-444078f3337f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-74crp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.461973 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7rvdc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07dba644-eb6f-45c3-b373-7a1610c569aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7rvdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.473607 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.486370 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:55Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.498487 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 20:06:55 crc kubenswrapper[4727]: E1121 20:06:55.498614 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.498677 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:55Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.503199 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/07dba644-eb6f-45c3-b373-7a1610c569aa-system-cni-dir\") pod \"multus-7rvdc\" (UID: \"07dba644-eb6f-45c3-b373-7a1610c569aa\") " pod="openshift-multus/multus-7rvdc" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.503238 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/07dba644-eb6f-45c3-b373-7a1610c569aa-host-var-lib-cni-multus\") pod \"multus-7rvdc\" (UID: \"07dba644-eb6f-45c3-b373-7a1610c569aa\") " pod="openshift-multus/multus-7rvdc" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.503258 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/db8a4e70-c074-4f90-aebe-444078f3337f-os-release\") pod 
\"multus-additional-cni-plugins-74crp\" (UID: \"db8a4e70-c074-4f90-aebe-444078f3337f\") " pod="openshift-multus/multus-additional-cni-plugins-74crp" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.503280 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/07dba644-eb6f-45c3-b373-7a1610c569aa-cnibin\") pod \"multus-7rvdc\" (UID: \"07dba644-eb6f-45c3-b373-7a1610c569aa\") " pod="openshift-multus/multus-7rvdc" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.503300 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/07dba644-eb6f-45c3-b373-7a1610c569aa-cni-binary-copy\") pod \"multus-7rvdc\" (UID: \"07dba644-eb6f-45c3-b373-7a1610c569aa\") " pod="openshift-multus/multus-7rvdc" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.503326 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/07dba644-eb6f-45c3-b373-7a1610c569aa-host-var-lib-kubelet\") pod \"multus-7rvdc\" (UID: \"07dba644-eb6f-45c3-b373-7a1610c569aa\") " pod="openshift-multus/multus-7rvdc" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.503353 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/07dba644-eb6f-45c3-b373-7a1610c569aa-host-run-multus-certs\") pod \"multus-7rvdc\" (UID: \"07dba644-eb6f-45c3-b373-7a1610c569aa\") " pod="openshift-multus/multus-7rvdc" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.503374 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnm5x\" (UniqueName: \"kubernetes.io/projected/07dba644-eb6f-45c3-b373-7a1610c569aa-kube-api-access-wnm5x\") pod \"multus-7rvdc\" (UID: \"07dba644-eb6f-45c3-b373-7a1610c569aa\") " pod="openshift-multus/multus-7rvdc" Nov 
21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.503398 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/db8a4e70-c074-4f90-aebe-444078f3337f-system-cni-dir\") pod \"multus-additional-cni-plugins-74crp\" (UID: \"db8a4e70-c074-4f90-aebe-444078f3337f\") " pod="openshift-multus/multus-additional-cni-plugins-74crp" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.503423 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/07dba644-eb6f-45c3-b373-7a1610c569aa-host-var-lib-cni-bin\") pod \"multus-7rvdc\" (UID: \"07dba644-eb6f-45c3-b373-7a1610c569aa\") " pod="openshift-multus/multus-7rvdc" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.503428 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/07dba644-eb6f-45c3-b373-7a1610c569aa-host-var-lib-cni-multus\") pod \"multus-7rvdc\" (UID: \"07dba644-eb6f-45c3-b373-7a1610c569aa\") " pod="openshift-multus/multus-7rvdc" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.503447 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/07dba644-eb6f-45c3-b373-7a1610c569aa-multus-daemon-config\") pod \"multus-7rvdc\" (UID: \"07dba644-eb6f-45c3-b373-7a1610c569aa\") " pod="openshift-multus/multus-7rvdc" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.503483 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/db8a4e70-c074-4f90-aebe-444078f3337f-cni-binary-copy\") pod \"multus-additional-cni-plugins-74crp\" (UID: \"db8a4e70-c074-4f90-aebe-444078f3337f\") " pod="openshift-multus/multus-additional-cni-plugins-74crp" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 
20:06:55.503508 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/db8a4e70-c074-4f90-aebe-444078f3337f-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-74crp\" (UID: \"db8a4e70-c074-4f90-aebe-444078f3337f\") " pod="openshift-multus/multus-additional-cni-plugins-74crp" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.503532 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/07dba644-eb6f-45c3-b373-7a1610c569aa-host-run-k8s-cni-cncf-io\") pod \"multus-7rvdc\" (UID: \"07dba644-eb6f-45c3-b373-7a1610c569aa\") " pod="openshift-multus/multus-7rvdc" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.503555 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brm4g\" (UniqueName: \"kubernetes.io/projected/db8a4e70-c074-4f90-aebe-444078f3337f-kube-api-access-brm4g\") pod \"multus-additional-cni-plugins-74crp\" (UID: \"db8a4e70-c074-4f90-aebe-444078f3337f\") " pod="openshift-multus/multus-additional-cni-plugins-74crp" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.503586 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/07dba644-eb6f-45c3-b373-7a1610c569aa-hostroot\") pod \"multus-7rvdc\" (UID: \"07dba644-eb6f-45c3-b373-7a1610c569aa\") " pod="openshift-multus/multus-7rvdc" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.503606 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/07dba644-eb6f-45c3-b373-7a1610c569aa-etc-kubernetes\") pod \"multus-7rvdc\" (UID: \"07dba644-eb6f-45c3-b373-7a1610c569aa\") " pod="openshift-multus/multus-7rvdc" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.503625 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/db8a4e70-c074-4f90-aebe-444078f3337f-cnibin\") pod \"multus-additional-cni-plugins-74crp\" (UID: \"db8a4e70-c074-4f90-aebe-444078f3337f\") " pod="openshift-multus/multus-additional-cni-plugins-74crp" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.503647 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/db8a4e70-c074-4f90-aebe-444078f3337f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-74crp\" (UID: \"db8a4e70-c074-4f90-aebe-444078f3337f\") " pod="openshift-multus/multus-additional-cni-plugins-74crp" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.503678 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/07dba644-eb6f-45c3-b373-7a1610c569aa-multus-socket-dir-parent\") pod \"multus-7rvdc\" (UID: \"07dba644-eb6f-45c3-b373-7a1610c569aa\") " pod="openshift-multus/multus-7rvdc" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.503689 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.503687 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/07dba644-eb6f-45c3-b373-7a1610c569aa-host-run-k8s-cni-cncf-io\") pod \"multus-7rvdc\" (UID: \"07dba644-eb6f-45c3-b373-7a1610c569aa\") " pod="openshift-multus/multus-7rvdc" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.503725 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/07dba644-eb6f-45c3-b373-7a1610c569aa-host-run-netns\") pod \"multus-7rvdc\" (UID: 
\"07dba644-eb6f-45c3-b373-7a1610c569aa\") " pod="openshift-multus/multus-7rvdc" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.503720 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/07dba644-eb6f-45c3-b373-7a1610c569aa-hostroot\") pod \"multus-7rvdc\" (UID: \"07dba644-eb6f-45c3-b373-7a1610c569aa\") " pod="openshift-multus/multus-7rvdc" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.503733 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/07dba644-eb6f-45c3-b373-7a1610c569aa-etc-kubernetes\") pod \"multus-7rvdc\" (UID: \"07dba644-eb6f-45c3-b373-7a1610c569aa\") " pod="openshift-multus/multus-7rvdc" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.503350 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/07dba644-eb6f-45c3-b373-7a1610c569aa-system-cni-dir\") pod \"multus-7rvdc\" (UID: \"07dba644-eb6f-45c3-b373-7a1610c569aa\") " pod="openshift-multus/multus-7rvdc" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.503774 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/db8a4e70-c074-4f90-aebe-444078f3337f-cnibin\") pod \"multus-additional-cni-plugins-74crp\" (UID: \"db8a4e70-c074-4f90-aebe-444078f3337f\") " pod="openshift-multus/multus-additional-cni-plugins-74crp" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.503821 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/07dba644-eb6f-45c3-b373-7a1610c569aa-multus-socket-dir-parent\") pod \"multus-7rvdc\" (UID: \"07dba644-eb6f-45c3-b373-7a1610c569aa\") " pod="openshift-multus/multus-7rvdc" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.503815 4727 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/db8a4e70-c074-4f90-aebe-444078f3337f-os-release\") pod \"multus-additional-cni-plugins-74crp\" (UID: \"db8a4e70-c074-4f90-aebe-444078f3337f\") " pod="openshift-multus/multus-additional-cni-plugins-74crp" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.503857 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/07dba644-eb6f-45c3-b373-7a1610c569aa-host-var-lib-kubelet\") pod \"multus-7rvdc\" (UID: \"07dba644-eb6f-45c3-b373-7a1610c569aa\") " pod="openshift-multus/multus-7rvdc" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.503874 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/07dba644-eb6f-45c3-b373-7a1610c569aa-host-run-multus-certs\") pod \"multus-7rvdc\" (UID: \"07dba644-eb6f-45c3-b373-7a1610c569aa\") " pod="openshift-multus/multus-7rvdc" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.503399 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/07dba644-eb6f-45c3-b373-7a1610c569aa-cnibin\") pod \"multus-7rvdc\" (UID: \"07dba644-eb6f-45c3-b373-7a1610c569aa\") " pod="openshift-multus/multus-7rvdc" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.503910 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/db8a4e70-c074-4f90-aebe-444078f3337f-system-cni-dir\") pod \"multus-additional-cni-plugins-74crp\" (UID: \"db8a4e70-c074-4f90-aebe-444078f3337f\") " pod="openshift-multus/multus-additional-cni-plugins-74crp" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.503934 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/07dba644-eb6f-45c3-b373-7a1610c569aa-host-var-lib-cni-bin\") pod \"multus-7rvdc\" (UID: \"07dba644-eb6f-45c3-b373-7a1610c569aa\") " pod="openshift-multus/multus-7rvdc" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.503700 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/07dba644-eb6f-45c3-b373-7a1610c569aa-host-run-netns\") pod \"multus-7rvdc\" (UID: \"07dba644-eb6f-45c3-b373-7a1610c569aa\") " pod="openshift-multus/multus-7rvdc" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.504037 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/07dba644-eb6f-45c3-b373-7a1610c569aa-multus-conf-dir\") pod \"multus-7rvdc\" (UID: \"07dba644-eb6f-45c3-b373-7a1610c569aa\") " pod="openshift-multus/multus-7rvdc" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.504077 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/07dba644-eb6f-45c3-b373-7a1610c569aa-multus-cni-dir\") pod \"multus-7rvdc\" (UID: \"07dba644-eb6f-45c3-b373-7a1610c569aa\") " pod="openshift-multus/multus-7rvdc" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.504095 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/07dba644-eb6f-45c3-b373-7a1610c569aa-os-release\") pod \"multus-7rvdc\" (UID: \"07dba644-eb6f-45c3-b373-7a1610c569aa\") " pod="openshift-multus/multus-7rvdc" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.504114 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/07dba644-eb6f-45c3-b373-7a1610c569aa-multus-conf-dir\") pod \"multus-7rvdc\" (UID: \"07dba644-eb6f-45c3-b373-7a1610c569aa\") " 
pod="openshift-multus/multus-7rvdc" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.504181 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/07dba644-eb6f-45c3-b373-7a1610c569aa-multus-cni-dir\") pod \"multus-7rvdc\" (UID: \"07dba644-eb6f-45c3-b373-7a1610c569aa\") " pod="openshift-multus/multus-7rvdc" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.504215 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/07dba644-eb6f-45c3-b373-7a1610c569aa-os-release\") pod \"multus-7rvdc\" (UID: \"07dba644-eb6f-45c3-b373-7a1610c569aa\") " pod="openshift-multus/multus-7rvdc" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.504349 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/07dba644-eb6f-45c3-b373-7a1610c569aa-multus-daemon-config\") pod \"multus-7rvdc\" (UID: \"07dba644-eb6f-45c3-b373-7a1610c569aa\") " pod="openshift-multus/multus-7rvdc" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.504486 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/07dba644-eb6f-45c3-b373-7a1610c569aa-cni-binary-copy\") pod \"multus-7rvdc\" (UID: \"07dba644-eb6f-45c3-b373-7a1610c569aa\") " pod="openshift-multus/multus-7rvdc" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.504541 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/db8a4e70-c074-4f90-aebe-444078f3337f-cni-binary-copy\") pod \"multus-additional-cni-plugins-74crp\" (UID: \"db8a4e70-c074-4f90-aebe-444078f3337f\") " pod="openshift-multus/multus-additional-cni-plugins-74crp" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.504607 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/db8a4e70-c074-4f90-aebe-444078f3337f-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-74crp\" (UID: \"db8a4e70-c074-4f90-aebe-444078f3337f\") " pod="openshift-multus/multus-additional-cni-plugins-74crp" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.504675 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.505838 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.506342 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/db8a4e70-c074-4f90-aebe-444078f3337f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-74crp\" (UID: \"db8a4e70-c074-4f90-aebe-444078f3337f\") " pod="openshift-multus/multus-additional-cni-plugins-74crp" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.506537 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.507543 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.508101 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 
20:06:55.508734 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.509811 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.510468 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.511511 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.511617 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b58aef8f-f223-47d8-a2e6-4a80aeeeec42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5k2kk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:55Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.512096 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.513300 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.513994 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.514601 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.515783 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.516460 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 
20:06:55.517638 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.518145 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.518852 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.520251 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.520384 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brm4g\" (UniqueName: \"kubernetes.io/projected/db8a4e70-c074-4f90-aebe-444078f3337f-kube-api-access-brm4g\") pod \"multus-additional-cni-plugins-74crp\" (UID: \"db8a4e70-c074-4f90-aebe-444078f3337f\") " pod="openshift-multus/multus-additional-cni-plugins-74crp" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.520833 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.521824 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.522297 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.523343 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnm5x\" (UniqueName: \"kubernetes.io/projected/07dba644-eb6f-45c3-b373-7a1610c569aa-kube-api-access-wnm5x\") pod \"multus-7rvdc\" (UID: \"07dba644-eb6f-45c3-b373-7a1610c569aa\") " pod="openshift-multus/multus-7rvdc" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.523489 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.524058 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.525314 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.526127 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.527205 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.527892 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Nov 21 
20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.528511 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:55Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.528924 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.529550 4727 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.529671 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.531545 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Nov 21 
20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.532520 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.532916 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.534735 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.535750 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.536278 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.537405 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.538078 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.539009 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Nov 21 
20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.539678 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.540760 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.541442 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.542332 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.542886 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.542886 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:55Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.543768 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.544513 4727 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.545436 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.545899 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.546779 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.547445 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.548073 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.549122 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.560016 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-74crp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db8a4e70-c074-4f90-aebe-444078f3337f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-74crp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:55Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.568680 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-7rvdc" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.576487 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:55Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:55 crc kubenswrapper[4727]: W1121 20:06:55.580724 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07dba644_eb6f_45c3_b373_7a1610c569aa.slice/crio-091eb8aadffdbe2de6bbfb3e736474b8195198f21cb4e485add93d68db1b8c94 WatchSource:0}: Error finding container 
091eb8aadffdbe2de6bbfb3e736474b8195198f21cb4e485add93d68db1b8c94: Status 404 returned error can't find the container with id 091eb8aadffdbe2de6bbfb3e736474b8195198f21cb4e485add93d68db1b8c94 Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.589895 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:55Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.591558 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"c3a97e4b6438d588dce44178e2e245d1fc4fa954f6cd02b230ca5c34e3d32294"} Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.591601 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"4c90d2d9f22924396549d0d1263f8cb07b41a77f23f7bb48aa9749caabff6147"} Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.591615 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"3d465144f50c8a17a1f94ba202df446f1841f0e1a213ee3de94de3d580104376"} Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.592486 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-74crp" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.594296 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" event={"ID":"b58aef8f-f223-47d8-a2e6-4a80aeeeec42","Type":"ContainerStarted","Data":"db38e6270d9dcedf99c669fa60a075a25d7fd9bb753702551fe5f4610ecaf815"} Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.594422 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" event={"ID":"b58aef8f-f223-47d8-a2e6-4a80aeeeec42","Type":"ContainerStarted","Data":"443c4ad2ecea2e9d360b9300cb0c384fb5afc27e6ac44394f985adbeab354a9c"} Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.594511 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" event={"ID":"b58aef8f-f223-47d8-a2e6-4a80aeeeec42","Type":"ContainerStarted","Data":"d90de66751b8be0ffa49c17938c61a9fa4fc34a97f506cae730241f5e353dadd"} Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.595368 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"b276863efdd611292f3e7779ef552212ddb4eca65c22bd19dc94fc062961a263"} Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.597341 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"ffd614574e076bedcf841406bd5accc4387a8f2dbc55cbfbb5a1064e69a861d5"} Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.597381 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"bb8380e046f6152ca952fa2d3e2d404137d834232952f49a5addd064a8b10db3"} Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.599069 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.599452 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.601260 4727 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="dd16de1da3c89b579d33c34b4dd24d8ae9df35e17819135165afc45db1297c61" exitCode=255 Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.601319 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"dd16de1da3c89b579d33c34b4dd24d8ae9df35e17819135165afc45db1297c61"} Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.601371 4727 scope.go:117] "RemoveContainer" containerID="de7aca7b376d53b74769b52a2cd9d5e1f4736551b442f71e77b2fac5fb4bd1c6" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.602043 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ccfbn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bffa327-bdf2-45d4-93ab-40152e82d177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ccfbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:55Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.608317 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7rvdc" event={"ID":"07dba644-eb6f-45c3-b373-7a1610c569aa","Type":"ContainerStarted","Data":"091eb8aadffdbe2de6bbfb3e736474b8195198f21cb4e485add93d68db1b8c94"} Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.612724 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-ccfbn" 
event={"ID":"0bffa327-bdf2-45d4-93ab-40152e82d177","Type":"ContainerStarted","Data":"18e9f8fd8f35e2f7821de2d9f9040dbc5160d8f114d1a54d8e0a0dabab520d15"} Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.612790 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-ccfbn" event={"ID":"0bffa327-bdf2-45d4-93ab-40152e82d177","Type":"ContainerStarted","Data":"cd0c9338dfdb3d8adea7dc7b55d4bb19b5dd45a434c925d9be8a58dd9acc9513"} Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.622683 4727 scope.go:117] "RemoveContainer" containerID="dd16de1da3c89b579d33c34b4dd24d8ae9df35e17819135165afc45db1297c61" Nov 21 20:06:55 crc kubenswrapper[4727]: E1121 20:06:55.622893 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.623206 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.624554 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:55Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.640595 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:55Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.653420 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-tfd4j"] Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 
20:06:55.654401 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.656297 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:55Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.656840 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.657014 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.657078 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.657322 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.657366 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.657490 4727 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.658447 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.672251 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b58aef8f-f223-47d8-a2e6-4a80aeeeec42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5k2kk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:55Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.690035 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7rvdc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07dba644-eb6f-45c3-b373-7a1610c569aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7rvdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:55Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.702235 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-74crp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db8a4e70-c074-4f90-aebe-444078f3337f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-74crp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:55Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.719948 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tfd4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:55Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.734456 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffd614574e076bedcf841406bd5accc4387a8f2dbc55cbfbb5a1064e69a861d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:55Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.747280 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:55Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.756840 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ccfbn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bffa327-bdf2-45d4-93ab-40152e82d177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18e9f8fd8f35e2f7821de2d9f9040dbc5160d8f114d1a54d8e0a0dabab520d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ccfbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:55Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.768367 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a97e4b6438d588dce44178e2e245d1fc4fa954f6cd02b230ca5c34e3d32294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c90d2d9f22924396549d0d1263f8cb07b41a77f23f7bb48aa9749caabff6147\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:55Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.778735 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:55Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.790593 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:55Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.801320 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b58aef8f-f223-47d8-a2e6-4a80aeeeec42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db38e6270d9dcedf99c669fa60a075a25d7fd9bb753702551fe5f4610ecaf815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443c4ad2ecea2e9d360b9300cb0c384fb5afc27e
6ac44394f985adbeab354a9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5k2kk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:55Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.812012 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-systemd-units\") pod \"ovnkube-node-tfd4j\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.812048 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-run-ovn\") pod \"ovnkube-node-tfd4j\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.812081 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-host-kubelet\") pod \"ovnkube-node-tfd4j\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.812097 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-log-socket\") pod \"ovnkube-node-tfd4j\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.812112 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-host-slash\") pod \"ovnkube-node-tfd4j\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.812187 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-host-cni-bin\") pod \"ovnkube-node-tfd4j\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.812298 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-tfd4j\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.812359 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-env-overrides\") pod \"ovnkube-node-tfd4j\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.812387 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rx4mt\" (UniqueName: \"kubernetes.io/projected/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-kube-api-access-rx4mt\") pod \"ovnkube-node-tfd4j\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.812416 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-host-run-netns\") pod \"ovnkube-node-tfd4j\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.812438 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-host-run-ovn-kubernetes\") pod \"ovnkube-node-tfd4j\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.812496 
4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-var-lib-openvswitch\") pod \"ovnkube-node-tfd4j\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.812761 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-run-systemd\") pod \"ovnkube-node-tfd4j\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.812785 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-node-log\") pod \"ovnkube-node-tfd4j\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.812806 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-ovn-node-metrics-cert\") pod \"ovnkube-node-tfd4j\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.812824 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-run-openvswitch\") pod \"ovnkube-node-tfd4j\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" Nov 21 20:06:55 crc kubenswrapper[4727]: 
I1121 20:06:55.812842 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-ovnkube-script-lib\") pod \"ovnkube-node-tfd4j\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.812881 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-host-cni-netd\") pod \"ovnkube-node-tfd4j\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.812915 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-ovnkube-config\") pod \"ovnkube-node-tfd4j\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.812931 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-etc-openvswitch\") pod \"ovnkube-node-tfd4j\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.834258 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7rvdc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07dba644-eb6f-45c3-b373-7a1610c569aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7rvdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:55Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.873890 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"080ce4fe-3e82-4e15-9340-4fa88a29da04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27c37b29dda7ae2cc145e0770879e433aa03e277121be3b7acfe22d41d27801f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cc2199f65adaa48cd1726ea812b332e4f40291e094ccfc215343bdb2f2bd2f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://1f3b9bf6cf2709d4559bf6ce2ddeed47f8ebacd7878509b0190ed24b2e55380e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd16de1da3c89b579d33c34b4dd24d8ae9df35e17819135165afc45db1297c61\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de7aca7b376d53b74769b52a2cd9d5e1f4736551b442f71e77b2fac5fb4bd1c6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T20:06:39Z\\\",\\\"message\\\":\\\"W1121 20:06:38.621471 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1121 20:06:38.621888 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763755598 cert, and key in /tmp/serving-cert-3083069496/serving-signer.crt, /tmp/serving-cert-3083069496/serving-signer.key\\\\nI1121 20:06:38.831519 1 observer_polling.go:159] Starting file observer\\\\nW1121 20:06:38.834793 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1121 20:06:38.834948 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 20:06:38.836414 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3083069496/tls.crt::/tmp/serving-cert-3083069496/tls.key\\\\\\\"\\\\nF1121 20:06:39.147124 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd16de1da3c89b579d33c34b4dd24d8ae9df35e17819135165afc45db1297c61\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"file observer\\\\nW1121 20:06:54.183363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 20:06:54.183552 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 20:06:54.184374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-552169000/tls.crt::/tmp/serving-cert-552169000/tls.key\\\\\\\"\\\\nI1121 20:06:54.597449 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 20:06:54.600431 1 
maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 20:06:54.600448 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 20:06:54.600471 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 20:06:54.600476 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 20:06:54.608444 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 20:06:54.608470 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 20:06:54.608475 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 20:06:54.608479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 20:06:54.608482 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 20:06:54.608484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 20:06:54.608487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 20:06:54.608489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 20:06:54.610022 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ceae36169c61dcc484be2881c9562b4024866925aab81778ab9e171413f119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fc11f996bd226ec44a8afe092a0939a489f4fc8904c1532586d9e36b69817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91fc11f996bd226ec44a8afe092a0939a489f4fc8904c1532586d9e36b69817\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:55Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.913942 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:55Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.914189 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-run-systemd\") pod \"ovnkube-node-tfd4j\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.914221 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-node-log\") pod \"ovnkube-node-tfd4j\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.914243 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-ovn-node-metrics-cert\") pod \"ovnkube-node-tfd4j\" (UID: 
\"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.914266 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-run-openvswitch\") pod \"ovnkube-node-tfd4j\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.914291 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-ovnkube-script-lib\") pod \"ovnkube-node-tfd4j\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.914343 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-run-openvswitch\") pod \"ovnkube-node-tfd4j\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.914356 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-etc-openvswitch\") pod \"ovnkube-node-tfd4j\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.914383 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-host-cni-netd\") pod \"ovnkube-node-tfd4j\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.914384 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-node-log\") pod \"ovnkube-node-tfd4j\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.914318 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-run-systemd\") pod \"ovnkube-node-tfd4j\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.914452 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-host-cni-netd\") pod \"ovnkube-node-tfd4j\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.914399 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-ovnkube-config\") pod \"ovnkube-node-tfd4j\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.914480 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-etc-openvswitch\") pod \"ovnkube-node-tfd4j\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.914524 4727 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-systemd-units\") pod \"ovnkube-node-tfd4j\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.914507 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-systemd-units\") pod \"ovnkube-node-tfd4j\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.914575 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-run-ovn\") pod \"ovnkube-node-tfd4j\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.914616 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-host-kubelet\") pod \"ovnkube-node-tfd4j\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.914637 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-log-socket\") pod \"ovnkube-node-tfd4j\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.914652 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-host-slash\") pod \"ovnkube-node-tfd4j\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.914668 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-host-cni-bin\") pod \"ovnkube-node-tfd4j\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.914685 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-log-socket\") pod \"ovnkube-node-tfd4j\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.914688 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-tfd4j\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.914708 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-tfd4j\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.914735 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-run-ovn\") pod \"ovnkube-node-tfd4j\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.914745 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-host-slash\") pod \"ovnkube-node-tfd4j\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.914743 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-env-overrides\") pod \"ovnkube-node-tfd4j\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.914801 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rx4mt\" (UniqueName: \"kubernetes.io/projected/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-kube-api-access-rx4mt\") pod \"ovnkube-node-tfd4j\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.914834 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-host-run-netns\") pod \"ovnkube-node-tfd4j\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.914757 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-host-cni-bin\") pod 
\"ovnkube-node-tfd4j\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.914856 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-host-run-ovn-kubernetes\") pod \"ovnkube-node-tfd4j\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.914872 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-host-run-netns\") pod \"ovnkube-node-tfd4j\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.914688 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-host-kubelet\") pod \"ovnkube-node-tfd4j\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.914885 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-var-lib-openvswitch\") pod \"ovnkube-node-tfd4j\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.914906 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-host-run-ovn-kubernetes\") pod \"ovnkube-node-tfd4j\" (UID: 
\"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.914979 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-var-lib-openvswitch\") pod \"ovnkube-node-tfd4j\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.915435 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-ovnkube-config\") pod \"ovnkube-node-tfd4j\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.915466 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-env-overrides\") pod \"ovnkube-node-tfd4j\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.915437 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-ovnkube-script-lib\") pod \"ovnkube-node-tfd4j\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.980722 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-ovn-node-metrics-cert\") pod \"ovnkube-node-tfd4j\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" Nov 21 20:06:55 crc kubenswrapper[4727]: I1121 20:06:55.981359 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rx4mt\" (UniqueName: \"kubernetes.io/projected/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-kube-api-access-rx4mt\") pod \"ovnkube-node-tfd4j\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" Nov 21 20:06:56 crc kubenswrapper[4727]: I1121 20:06:56.092121 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 21 20:06:56 crc kubenswrapper[4727]: I1121 20:06:56.116501 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 20:06:56 crc kubenswrapper[4727]: I1121 20:06:56.116602 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 20:06:56 crc kubenswrapper[4727]: I1121 20:06:56.116628 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 20:06:56 crc kubenswrapper[4727]: I1121 20:06:56.116647 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 20:06:56 crc kubenswrapper[4727]: E1121 20:06:56.116725 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 20:06:58.116695948 +0000 UTC m=+23.302881002 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:06:56 crc kubenswrapper[4727]: E1121 20:06:56.116738 4727 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 21 20:06:56 crc kubenswrapper[4727]: E1121 20:06:56.116745 4727 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 21 20:06:56 crc kubenswrapper[4727]: E1121 20:06:56.116777 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 21 20:06:56 crc kubenswrapper[4727]: E1121 20:06:56.116813 4727 
projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 21 20:06:56 crc kubenswrapper[4727]: E1121 20:06:56.116788 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-21 20:06:58.11677559 +0000 UTC m=+23.302960634 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 21 20:06:56 crc kubenswrapper[4727]: E1121 20:06:56.116837 4727 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 20:06:56 crc kubenswrapper[4727]: E1121 20:06:56.116845 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-21 20:06:58.116835521 +0000 UTC m=+23.303020575 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 21 20:06:56 crc kubenswrapper[4727]: I1121 20:06:56.116886 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 20:06:56 crc kubenswrapper[4727]: E1121 20:06:56.116907 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-21 20:06:58.116883682 +0000 UTC m=+23.303068796 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 20:06:56 crc kubenswrapper[4727]: E1121 20:06:56.117042 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 21 20:06:56 crc kubenswrapper[4727]: E1121 20:06:56.117062 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 21 20:06:56 crc kubenswrapper[4727]: E1121 20:06:56.117074 4727 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 20:06:56 crc kubenswrapper[4727]: E1121 20:06:56.117111 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-21 20:06:58.117102277 +0000 UTC m=+23.303287321 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 20:06:56 crc kubenswrapper[4727]: I1121 20:06:56.118167 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 21 20:06:56 crc kubenswrapper[4727]: I1121 20:06:56.191073 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 21 20:06:56 crc kubenswrapper[4727]: I1121 20:06:56.232777 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 21 20:06:56 crc kubenswrapper[4727]: I1121 20:06:56.240453 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 21 20:06:56 crc kubenswrapper[4727]: I1121 20:06:56.243558 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 21 20:06:56 crc kubenswrapper[4727]: I1121 20:06:56.276452 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" Nov 21 20:06:56 crc kubenswrapper[4727]: I1121 20:06:56.284378 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 21 20:06:56 crc kubenswrapper[4727]: W1121 20:06:56.287303 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70d2ca13_a8f7_43dc_8ad0_142d99ccde18.slice/crio-427acc50520025071d33ce155ca67808c9d7cf83b3761db2a5f2fc6e0bb22af9 WatchSource:0}: Error finding container 427acc50520025071d33ce155ca67808c9d7cf83b3761db2a5f2fc6e0bb22af9: Status 404 returned error can't find the container with id 427acc50520025071d33ce155ca67808c9d7cf83b3761db2a5f2fc6e0bb22af9 Nov 21 20:06:56 crc kubenswrapper[4727]: I1121 20:06:56.300807 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 21 20:06:56 crc kubenswrapper[4727]: I1121 20:06:56.312818 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Nov 21 20:06:56 crc kubenswrapper[4727]: I1121 20:06:56.329667 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 21 20:06:56 crc kubenswrapper[4727]: I1121 20:06:56.377788 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 21 20:06:56 crc kubenswrapper[4727]: I1121 20:06:56.397420 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Nov 21 20:06:56 crc kubenswrapper[4727]: I1121 20:06:56.402040 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 21 20:06:56 crc kubenswrapper[4727]: I1121 20:06:56.431170 4727 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 21 20:06:56 crc kubenswrapper[4727]: I1121 20:06:56.449130 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 21 20:06:56 crc kubenswrapper[4727]: I1121 20:06:56.466919 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 21 20:06:56 crc kubenswrapper[4727]: I1121 20:06:56.498625 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 20:06:56 crc kubenswrapper[4727]: I1121 20:06:56.498652 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 20:06:56 crc kubenswrapper[4727]: E1121 20:06:56.498742 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 20:06:56 crc kubenswrapper[4727]: E1121 20:06:56.498849 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 20:06:56 crc kubenswrapper[4727]: I1121 20:06:56.596277 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 21 20:06:56 crc kubenswrapper[4727]: I1121 20:06:56.611535 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 21 20:06:56 crc kubenswrapper[4727]: I1121 20:06:56.616640 4727 generic.go:334] "Generic (PLEG): container finished" podID="70d2ca13-a8f7-43dc-8ad0-142d99ccde18" containerID="f5fa8be5e6ea2c20dd597fb7ef2fdd3355045b310d1a99bb1a89a8207e505577" exitCode=0 Nov 21 20:06:56 crc kubenswrapper[4727]: I1121 20:06:56.616712 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" event={"ID":"70d2ca13-a8f7-43dc-8ad0-142d99ccde18","Type":"ContainerDied","Data":"f5fa8be5e6ea2c20dd597fb7ef2fdd3355045b310d1a99bb1a89a8207e505577"} Nov 21 20:06:56 crc kubenswrapper[4727]: I1121 20:06:56.616747 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" event={"ID":"70d2ca13-a8f7-43dc-8ad0-142d99ccde18","Type":"ContainerStarted","Data":"427acc50520025071d33ce155ca67808c9d7cf83b3761db2a5f2fc6e0bb22af9"} Nov 21 20:06:56 crc kubenswrapper[4727]: I1121 20:06:56.618379 4727 generic.go:334] "Generic (PLEG): container finished" podID="db8a4e70-c074-4f90-aebe-444078f3337f" containerID="6a2b3626be8b73b407cdeae484730c70f4caa2e66e0a815cb8c047f667254092" exitCode=0 Nov 21 20:06:56 crc kubenswrapper[4727]: I1121 20:06:56.618451 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-74crp" event={"ID":"db8a4e70-c074-4f90-aebe-444078f3337f","Type":"ContainerDied","Data":"6a2b3626be8b73b407cdeae484730c70f4caa2e66e0a815cb8c047f667254092"} Nov 21 
20:06:56 crc kubenswrapper[4727]: I1121 20:06:56.618487 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-74crp" event={"ID":"db8a4e70-c074-4f90-aebe-444078f3337f","Type":"ContainerStarted","Data":"332955b11e9a46eaa02f1a6aa80a158c0b239258ceff81d1724bcd1321ae39d8"} Nov 21 20:06:56 crc kubenswrapper[4727]: I1121 20:06:56.620589 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Nov 21 20:06:56 crc kubenswrapper[4727]: I1121 20:06:56.622823 4727 scope.go:117] "RemoveContainer" containerID="dd16de1da3c89b579d33c34b4dd24d8ae9df35e17819135165afc45db1297c61" Nov 21 20:06:56 crc kubenswrapper[4727]: E1121 20:06:56.623035 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Nov 21 20:06:56 crc kubenswrapper[4727]: I1121 20:06:56.623381 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7rvdc" event={"ID":"07dba644-eb6f-45c3-b373-7a1610c569aa","Type":"ContainerStarted","Data":"7fc26f04e78b8405547a4fa225524347eabce94dfa5b4bf2448db66e36aaf006"} Nov 21 20:06:56 crc kubenswrapper[4727]: I1121 20:06:56.640024 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:56Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:56 crc kubenswrapper[4727]: I1121 20:06:56.658155 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffd614574e076bedcf841406bd5accc4387a8f2dbc55cbfbb5a1064e69a861d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:56Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:56 crc kubenswrapper[4727]: I1121 20:06:56.669278 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:56Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:56 crc kubenswrapper[4727]: I1121 20:06:56.681984 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ccfbn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bffa327-bdf2-45d4-93ab-40152e82d177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18e9f8fd8f35e2f7821de2d9f9040dbc5160d8f114d1a54d8e0a0dabab520d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ccfbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:56Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:56 crc kubenswrapper[4727]: I1121 20:06:56.685592 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 21 20:06:56 crc kubenswrapper[4727]: I1121 20:06:56.700668 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-74crp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db8a4e70-c074-4f90-aebe-444078f3337f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-74crp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:56Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:56 crc kubenswrapper[4727]: I1121 20:06:56.725686 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 21 20:06:56 crc kubenswrapper[4727]: I1121 20:06:56.725618 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5fa8be5e6ea2c20dd597fb7ef2fdd3355045b310d1a99bb1a89a8207e505577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5fa8be5e6ea2c20dd597fb7ef2fdd3355045b310d1a99bb1a89a8207e505577\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tfd4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:56Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:56 crc kubenswrapper[4727]: I1121 20:06:56.727642 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 21 20:06:56 crc kubenswrapper[4727]: I1121 20:06:56.729740 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 21 20:06:56 crc kubenswrapper[4727]: I1121 20:06:56.734574 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Nov 21 20:06:56 crc kubenswrapper[4727]: I1121 20:06:56.741297 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b58aef8f-f223-47d8-a2e6-4a80aeeeec42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db38e6270d9dcedf99c669fa60a075a25d7fd9bb753702551fe5f4610ecaf815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443c4ad2ecea2e9d360b9300cb0c384fb5afc27e
6ac44394f985adbeab354a9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5k2kk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:56Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:56 crc kubenswrapper[4727]: I1121 20:06:56.744156 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 21 20:06:56 crc kubenswrapper[4727]: I1121 20:06:56.745322 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 21 20:06:56 crc kubenswrapper[4727]: I1121 20:06:56.758600 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7rvdc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07dba644-eb6f-45c3-b373-7a1610c569aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7rvdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:56Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:56 crc kubenswrapper[4727]: I1121 20:06:56.758687 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-7444p"] Nov 21 20:06:56 crc kubenswrapper[4727]: I1121 20:06:56.759132 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-7444p" Nov 21 20:06:56 crc kubenswrapper[4727]: I1121 20:06:56.761942 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 21 20:06:56 crc kubenswrapper[4727]: I1121 20:06:56.765406 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 21 20:06:56 crc kubenswrapper[4727]: I1121 20:06:56.784861 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 21 20:06:56 crc kubenswrapper[4727]: I1121 20:06:56.805557 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 21 20:06:56 crc kubenswrapper[4727]: I1121 20:06:56.823218 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w57k\" (UniqueName: \"kubernetes.io/projected/75a43503-e538-4964-9789-322839cc4c48-kube-api-access-9w57k\") pod \"node-ca-7444p\" (UID: \"75a43503-e538-4964-9789-322839cc4c48\") " pod="openshift-image-registry/node-ca-7444p" Nov 21 20:06:56 crc kubenswrapper[4727]: I1121 20:06:56.823271 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/75a43503-e538-4964-9789-322839cc4c48-host\") pod \"node-ca-7444p\" (UID: \"75a43503-e538-4964-9789-322839cc4c48\") " pod="openshift-image-registry/node-ca-7444p" Nov 21 20:06:56 crc kubenswrapper[4727]: I1121 20:06:56.823298 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/75a43503-e538-4964-9789-322839cc4c48-serviceca\") pod \"node-ca-7444p\" (UID: \"75a43503-e538-4964-9789-322839cc4c48\") " pod="openshift-image-registry/node-ca-7444p" Nov 21 20:06:56 crc 
kubenswrapper[4727]: I1121 20:06:56.835085 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"080ce4fe-3e82-4e15-9340-4fa88a29da04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27c37b29dda7ae2cc145e0770879e433aa03e277121be3b7acfe22d41d27801f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cc2199f65adaa48cd1726ea812b332e4f40291e094ccfc215343bdb2f2bd2f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://1f3b9bf6cf2709d4559bf6ce2ddeed47f8ebacd7878509b0190ed24b2e55380e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd16de1da3c89b579d33c34b4dd24d8ae9df35e17819135165afc45db1297c61\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de7aca7b376d53b74769b52a2cd9d5e1f4736551b442f71e77b2fac5fb4bd1c6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T20:06:39Z\\\",\\\"message\\\":\\\"W1121 20:06:38.621471 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1121 20:06:38.621888 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763755598 cert, and key in /tmp/serving-cert-3083069496/serving-signer.crt, /tmp/serving-cert-3083069496/serving-signer.key\\\\nI1121 20:06:38.831519 1 observer_polling.go:159] Starting file observer\\\\nW1121 20:06:38.834793 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1121 20:06:38.834948 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 20:06:38.836414 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3083069496/tls.crt::/tmp/serving-cert-3083069496/tls.key\\\\\\\"\\\\nF1121 20:06:39.147124 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd16de1da3c89b579d33c34b4dd24d8ae9df35e17819135165afc45db1297c61\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"file observer\\\\nW1121 20:06:54.183363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 20:06:54.183552 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 20:06:54.184374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-552169000/tls.crt::/tmp/serving-cert-552169000/tls.key\\\\\\\"\\\\nI1121 20:06:54.597449 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 20:06:54.600431 1 
maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 20:06:54.600448 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 20:06:54.600471 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 20:06:54.600476 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 20:06:54.608444 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 20:06:54.608470 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 20:06:54.608475 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 20:06:54.608479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 20:06:54.608482 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 20:06:54.608484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 20:06:54.608487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 20:06:54.608489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 20:06:54.610022 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ceae36169c61dcc484be2881c9562b4024866925aab81778ab9e171413f119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fc11f996bd226ec44a8afe092a0939a489f4fc8904c1532586d9e36b69817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91fc11f996bd226ec44a8afe092a0939a489f4fc8904c1532586d9e36b69817\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:56Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:56 crc kubenswrapper[4727]: I1121 20:06:56.845814 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 21 20:06:56 crc kubenswrapper[4727]: I1121 20:06:56.853762 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Nov 21 20:06:56 crc kubenswrapper[4727]: I1121 20:06:56.865534 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 21 20:06:56 crc kubenswrapper[4727]: I1121 20:06:56.867719 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Nov 21 20:06:56 crc kubenswrapper[4727]: I1121 20:06:56.924389 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/75a43503-e538-4964-9789-322839cc4c48-serviceca\") pod \"node-ca-7444p\" (UID: \"75a43503-e538-4964-9789-322839cc4c48\") " pod="openshift-image-registry/node-ca-7444p" Nov 21 20:06:56 crc kubenswrapper[4727]: I1121 20:06:56.924458 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/75a43503-e538-4964-9789-322839cc4c48-host\") pod \"node-ca-7444p\" (UID: \"75a43503-e538-4964-9789-322839cc4c48\") " pod="openshift-image-registry/node-ca-7444p" Nov 21 20:06:56 crc kubenswrapper[4727]: I1121 20:06:56.924477 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-9w57k\" (UniqueName: \"kubernetes.io/projected/75a43503-e538-4964-9789-322839cc4c48-kube-api-access-9w57k\") pod \"node-ca-7444p\" (UID: \"75a43503-e538-4964-9789-322839cc4c48\") " pod="openshift-image-registry/node-ca-7444p" Nov 21 20:06:56 crc kubenswrapper[4727]: I1121 20:06:56.924544 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/75a43503-e538-4964-9789-322839cc4c48-host\") pod \"node-ca-7444p\" (UID: \"75a43503-e538-4964-9789-322839cc4c48\") " pod="openshift-image-registry/node-ca-7444p" Nov 21 20:06:56 crc kubenswrapper[4727]: I1121 20:06:56.925451 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/75a43503-e538-4964-9789-322839cc4c48-serviceca\") pod \"node-ca-7444p\" (UID: \"75a43503-e538-4964-9789-322839cc4c48\") " pod="openshift-image-registry/node-ca-7444p" Nov 21 20:06:56 crc kubenswrapper[4727]: I1121 20:06:56.938728 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Nov 21 20:06:56 crc kubenswrapper[4727]: I1121 20:06:56.952363 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a97e4b6438d588dce44178e2e245d1fc4fa954f6cd02b230ca5c34e3d32294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c90d2d9f22924396549d0d1263f8cb07b41a77f23f7bb48aa9749caabff6147\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:56Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:56 crc kubenswrapper[4727]: I1121 20:06:56.987786 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9w57k\" (UniqueName: \"kubernetes.io/projected/75a43503-e538-4964-9789-322839cc4c48-kube-api-access-9w57k\") pod \"node-ca-7444p\" (UID: \"75a43503-e538-4964-9789-322839cc4c48\") " pod="openshift-image-registry/node-ca-7444p" Nov 21 20:06:57 crc kubenswrapper[4727]: I1121 20:06:57.001273 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:56Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:57 crc kubenswrapper[4727]: I1121 20:06:57.032433 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:57Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:57 crc kubenswrapper[4727]: I1121 20:06:57.072432 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:57Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:57 crc kubenswrapper[4727]: I1121 20:06:57.116442 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ccfbn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bffa327-bdf2-45d4-93ab-40152e82d177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18e9f8fd8f35e2f7821de2d9f9040dbc5160d8f114d1a54d8e0a0dabab520d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ccfbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:57Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:57 crc kubenswrapper[4727]: I1121 20:06:57.157517 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-74crp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db8a4e70-c074-4f90-aebe-444078f3337f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a2b3626be8b73b407cdeae484730c70f4caa2e66e0a815cb8c047f667254092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2b3626be8b73b407cdeae484730c70f4caa2e66e0a815cb8c047f667254092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-74crp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:57Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:57 crc kubenswrapper[4727]: I1121 20:06:57.197893 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5fa8be5e6ea2c20dd597fb7ef2fdd3355045b310d1a99bb1a89a8207e505577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5fa8be5e6ea2c20dd597fb7ef2fdd3355045b310d1a99bb1a89a8207e505577\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tfd4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:57Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:57 crc kubenswrapper[4727]: I1121 20:06:57.218480 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-7444p" Nov 21 20:06:57 crc kubenswrapper[4727]: I1121 20:06:57.239565 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7444p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75a43503-e538-4964-9789-322839cc4c48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9w57k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7444p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:57Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:57 crc kubenswrapper[4727]: I1121 20:06:57.277343 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffd614574e076bedcf841406bd5accc4387a8f2dbc55cbfbb5a1064e69a861d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:57Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:57 crc kubenswrapper[4727]: I1121 20:06:57.319734 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a1f6ee3-6ebf-4422-91ed-105ccb19af8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de2d5f0603d9b7d189293750a83d5a3957e98f0390659f8c88d35ae1cae6b18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc
/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cde1196a3cb3b4d0acbb70ac333211099ec6f26a975e2de7c07d059983024368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1324bc3b6958e18a7c496fdb513726b921435f35dde2064fcaf9a6fca06eb8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://395c324d3a95c173c7dd0c9206831fe07c220fb6ef12f7b25444a120c5183202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f
58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6021787d6acdc5734a82ef2ed53add13275c3461c1ffd38219c264e7fca3fab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://363e33e0f6686c6a97437b91ff8d5be211717d527dac5d1bfad6613849c1a8fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b
4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://363e33e0f6686c6a97437b91ff8d5be211717d527dac5d1bfad6613849c1a8fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d2632c704cf56d6b416cdcc6213e3b60836d12cfae28af4bd06f99dff56cd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16d2632c704cf56d6b416cdcc6213e3b60836d12cfae28af4bd06f99dff56cd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91dcb1c5c16b4720f4c812b25a87ebd8a0f3a334cc714233b6852cd39322fbf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91dcb1c5c16b4720f4c812b25a87ebd8a0f3a334cc714233b6852cd39322fbf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"20
25-11-21T20:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:57Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:57 crc kubenswrapper[4727]: I1121 20:06:57.355079 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"080ce4fe-3e82-4e15-9340-4fa88a29da04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27c37b29dda7ae2cc145e0770879e433aa03e277121be3b7acfe22d41d27801f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cc2199f65adaa48cd1726ea812b332e4f40291e094ccfc215343bdb2f2bd2f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://1f3b9bf6cf2709d4559bf6ce2ddeed47f8ebacd7878509b0190ed24b2e55380e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd16de1da3c89b579d33c34b4dd24d8ae9df35e17819135165afc45db1297c61\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd16de1da3c89b579d33c34b4dd24d8ae9df35e17819135165afc45db1297c61\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"file observer\\\\nW1121 20:06:54.183363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 20:06:54.183552 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 20:06:54.184374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-552169000/tls.crt::/tmp/serving-cert-552169000/tls.key\\\\\\\"\\\\nI1121 20:06:54.597449 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 20:06:54.600431 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 20:06:54.600448 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 20:06:54.600471 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 20:06:54.600476 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 20:06:54.608444 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 20:06:54.608470 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 20:06:54.608475 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 20:06:54.608479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 20:06:54.608482 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 20:06:54.608484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 20:06:54.608487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 20:06:54.608489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 20:06:54.610022 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ceae36169c61dcc484be2881c9562b4024866925aab81778ab9e171413f119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fc11f996bd226ec44a8afe092a0939a489f4fc8904c1532586d9e36b69817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91fc11f996bd226ec44a8afe092a0939a489f4fc8904c1532586d9e36b69817\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:57Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:57 crc kubenswrapper[4727]: I1121 20:06:57.393253 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a97e4b6438d588dce44178e2e245d1fc4fa954f6cd02b230ca5c34e3d32294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c90d2d9f22924396549d0d1263f8cb07b41a77f23f7bb48aa9749caabff6147\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:57Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:57 crc kubenswrapper[4727]: I1121 20:06:57.433231 4727 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:57Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:57 crc kubenswrapper[4727]: I1121 20:06:57.472680 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:57Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:57 crc kubenswrapper[4727]: I1121 20:06:57.498914 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 20:06:57 crc kubenswrapper[4727]: E1121 20:06:57.499039 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 20:06:57 crc kubenswrapper[4727]: I1121 20:06:57.512392 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b58aef8f-f223-47d8-a2e6-4a80aeeeec42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db38e6270d9dcedf99c669fa60a075a25d7fd9bb753702551fe5f4610ecaf815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08
aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443c4ad2ecea2e9d360b9300cb0c384fb5afc27e6ac44394f985adbeab354a9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5k2kk\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:57Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:57 crc kubenswrapper[4727]: I1121 20:06:57.556358 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7rvdc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07dba644-eb6f-45c3-b373-7a1610c569aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fc26f04e78b8405547a4fa225524347eabce94dfa5b4bf2448db66e36aaf006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7rvdc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:57Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:57 crc kubenswrapper[4727]: I1121 20:06:57.592843 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:57Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:57 crc kubenswrapper[4727]: I1121 20:06:57.630023 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" event={"ID":"70d2ca13-a8f7-43dc-8ad0-142d99ccde18","Type":"ContainerStarted","Data":"f04b4d4857a8309f30f22774c8a2faeb752998b094d70898a986e04038771b01"} Nov 21 20:06:57 crc kubenswrapper[4727]: I1121 20:06:57.630079 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" event={"ID":"70d2ca13-a8f7-43dc-8ad0-142d99ccde18","Type":"ContainerStarted","Data":"08f02a4cae48dc4ac9d7da3d367fe5137fb012ee8a28f45c9cf4af098f1d93ce"} Nov 21 20:06:57 crc kubenswrapper[4727]: I1121 20:06:57.630093 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" event={"ID":"70d2ca13-a8f7-43dc-8ad0-142d99ccde18","Type":"ContainerStarted","Data":"060f5b533d83fc7cf02d24bf7b12e2a4972ce1d7ed4e62c4c7788467ae2229c3"} Nov 21 20:06:57 crc kubenswrapper[4727]: I1121 20:06:57.630106 4727 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" event={"ID":"70d2ca13-a8f7-43dc-8ad0-142d99ccde18","Type":"ContainerStarted","Data":"8fae7d81bbd4cc7c05885d66b30b4a4f5422c74aecc5439700c1fe7eb3c28a4b"} Nov 21 20:06:57 crc kubenswrapper[4727]: I1121 20:06:57.630118 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" event={"ID":"70d2ca13-a8f7-43dc-8ad0-142d99ccde18","Type":"ContainerStarted","Data":"cf216dac56f09909db88c9ea0ef9d0d4fb8bc2e63a02fd5631c7d4369080f88f"} Nov 21 20:06:57 crc kubenswrapper[4727]: I1121 20:06:57.630128 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" event={"ID":"70d2ca13-a8f7-43dc-8ad0-142d99ccde18","Type":"ContainerStarted","Data":"be1909b9178d7fb508200d34ad33f7d2c2aa2ae9756b15c17fd7c49bb019cc65"} Nov 21 20:06:57 crc kubenswrapper[4727]: I1121 20:06:57.631331 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-7444p" event={"ID":"75a43503-e538-4964-9789-322839cc4c48","Type":"ContainerStarted","Data":"6d61ca28ee5e9b8e1b56afa45b32036c825b6299dc855cb7ce38dde4649c2371"} Nov 21 20:06:57 crc kubenswrapper[4727]: I1121 20:06:57.631377 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-7444p" event={"ID":"75a43503-e538-4964-9789-322839cc4c48","Type":"ContainerStarted","Data":"97715aa4dcf1a6e52c5211d6f79ee5b2e6778ea49fc1b46501f7ea93995f10e0"} Nov 21 20:06:57 crc kubenswrapper[4727]: I1121 20:06:57.632556 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"b974a1dc77049e8c632b0acbe8dedf6d3f391811208d0d1084c75e4fdb0908dc"} Nov 21 20:06:57 crc kubenswrapper[4727]: I1121 20:06:57.634630 4727 generic.go:334] "Generic (PLEG): container finished" podID="db8a4e70-c074-4f90-aebe-444078f3337f" 
containerID="8d4360080de9d788a9ac8bad5b4c679aef3367c8c4e06fd00a28ff4b23406cc5" exitCode=0 Nov 21 20:06:57 crc kubenswrapper[4727]: I1121 20:06:57.634722 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-74crp" event={"ID":"db8a4e70-c074-4f90-aebe-444078f3337f","Type":"ContainerDied","Data":"8d4360080de9d788a9ac8bad5b4c679aef3367c8c4e06fd00a28ff4b23406cc5"} Nov 21 20:06:57 crc kubenswrapper[4727]: I1121 20:06:57.638653 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3ca80-2251-484d-9025-61aed82e16d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://539bc94dd8a3fd65aeae82170fe889bc42c364cfcf52005760432629911dfa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bd3db39641919d1b93ecc771bce800100e7a8b6200d6fc606dbdc50179b8255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08a09f48855f360ffe892f48d4efa156a6774a0be33504e0d664f6a90f1144d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7c202de2c7a6ec653f0c7f816dcbf64fbf6a5c9a779d29403a29a1003ad201\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:57Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:57 crc kubenswrapper[4727]: E1121 20:06:57.649517 4727 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 21 20:06:57 crc kubenswrapper[4727]: I1121 20:06:57.694248 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3ca80-2251-484d-9025-61aed82e16d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://539bc94dd8a3fd65aeae82170fe889bc42c364cfcf52005760432629911dfa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bd3db39641919d1b93ecc771bce800100e7a8b6200d6fc606dbdc50179b8255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08a09f48855f360ffe892f48d4efa156a6774a0be33504e0d664f6a90f1144d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7c202de2c7a6ec653f0c7f816dcbf64fbf6a5c9a779d29403a29a1003ad201\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\
\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:57Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:57 crc kubenswrapper[4727]: I1121 20:06:57.731941 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:57Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:57 crc kubenswrapper[4727]: I1121 20:06:57.779400 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5fa8be5e6ea2c20dd597fb7ef2fdd3355045b310d1a99bb1a89a8207e505577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5fa8be5e6ea2c20dd597fb7ef2fdd3355045b310d1a99bb1a89a8207e505577\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tfd4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:57Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:57 crc kubenswrapper[4727]: I1121 20:06:57.810820 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7444p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75a43503-e538-4964-9789-322839cc4c48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d61ca28ee5e9b8e1b56afa45b32036c825b6299dc855cb7ce38dde4649c2371\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9w57k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7444p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:57Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:57 crc kubenswrapper[4727]: I1121 20:06:57.855014 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffd614574e076bedcf841406bd5accc4387a8f2dbc55cbfbb5a1064e69a861d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:57Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:57 crc kubenswrapper[4727]: I1121 20:06:57.892546 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b974a1dc77049e8c632b0acbe8dedf6d3f391811208d0d1084c75e4fdb0908dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:57Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:57 crc kubenswrapper[4727]: I1121 20:06:57.931436 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ccfbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bffa327-bdf2-45d4-93ab-40152e82d177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18e9f8fd8f35e2f7821de2d9f9040dbc5160d8f114d1a54d8e0a0dabab520d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-no
de-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ccfbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:57Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:57 crc kubenswrapper[4727]: I1121 20:06:57.975284 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-74crp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db8a4e70-c074-4f90-aebe-444078f3337f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a2b3626be8b73b407cdeae484730c70f4caa2e66e0a815cb8c047f667254092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://6a2b3626be8b73b407cdeae484730c70f4caa2e66e0a815cb8c047f667254092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4360080de9d788a9ac8bad5b4c679aef3367c8c4e06fd00a28ff4b23406cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d4360080de9d788a9ac8bad5b4c679aef3367c8c4e06fd00a28ff4b23406cc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-74crp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-11-21T20:06:57Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:58 crc kubenswrapper[4727]: I1121 20:06:58.014418 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:58Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:58 crc kubenswrapper[4727]: I1121 20:06:58.052871 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:58Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:58 crc kubenswrapper[4727]: I1121 20:06:58.092752 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b58aef8f-f223-47d8-a2e6-4a80aeeeec42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db38e6270d9dcedf99c669fa60a075a25d7fd9bb753702551fe5f4610ecaf815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443c4ad2ecea2e9d360b9300cb0c384fb5afc27e
6ac44394f985adbeab354a9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5k2kk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:58Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:58 crc kubenswrapper[4727]: I1121 20:06:58.133652 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7rvdc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07dba644-eb6f-45c3-b373-7a1610c569aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fc26f04e78b8405547a4fa225524347eabce94dfa5b4bf2448db66e36aaf006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7rvdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:58Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:58 crc kubenswrapper[4727]: I1121 20:06:58.137984 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 20:06:58 crc kubenswrapper[4727]: I1121 20:06:58.138084 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 20:06:58 crc kubenswrapper[4727]: I1121 20:06:58.138131 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 20:06:58 crc kubenswrapper[4727]: E1121 20:06:58.138182 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 20:07:02.138162117 +0000 UTC m=+27.324347171 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:06:58 crc kubenswrapper[4727]: E1121 20:06:58.138219 4727 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 21 20:06:58 crc kubenswrapper[4727]: I1121 20:06:58.138217 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 20:06:58 crc kubenswrapper[4727]: E1121 20:06:58.138278 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 21 20:06:58 crc kubenswrapper[4727]: E1121 20:06:58.138285 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-21 20:07:02.138261759 +0000 UTC m=+27.324446803 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 21 20:06:58 crc kubenswrapper[4727]: E1121 20:06:58.138295 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 21 20:06:58 crc kubenswrapper[4727]: E1121 20:06:58.138225 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 21 20:06:58 crc kubenswrapper[4727]: E1121 20:06:58.138309 4727 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 20:06:58 crc kubenswrapper[4727]: E1121 20:06:58.138321 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 21 20:06:58 crc kubenswrapper[4727]: E1121 20:06:58.138329 4727 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 20:06:58 crc kubenswrapper[4727]: E1121 20:06:58.138343 4727 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-21 20:07:02.138334821 +0000 UTC m=+27.324519955 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 20:06:58 crc kubenswrapper[4727]: I1121 20:06:58.138311 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 20:06:58 crc kubenswrapper[4727]: E1121 20:06:58.138357 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-21 20:07:02.138350121 +0000 UTC m=+27.324535165 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 20:06:58 crc kubenswrapper[4727]: E1121 20:06:58.138364 4727 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 21 20:06:58 crc kubenswrapper[4727]: E1121 20:06:58.138404 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-21 20:07:02.138392362 +0000 UTC m=+27.324577396 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 21 20:06:58 crc kubenswrapper[4727]: I1121 20:06:58.185893 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a1f6ee3-6ebf-4422-91ed-105ccb19af8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de2d5f0603d9b7d189293750a83d5a3957e98f0390659f8c88d35ae1cae6b18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPat
h\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cde1196a3cb3b4d0acbb70ac333211099ec6f26a975e2de7c07d059983024368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1324bc3b6958e18a7c496fdb513726b921435f35dde2064fcaf9a6fca06eb8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerI
D\\\":\\\"cri-o://395c324d3a95c173c7dd0c9206831fe07c220fb6ef12f7b25444a120c5183202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6021787d6acdc5734a82ef2ed53add13275c3461c1ffd38219c264e7fca3fab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://363e33e0f6686c6a97437b91ff8d5be211717d527dac5d1bfad6613849c1a8fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://363e33e0f6686c6a97437b91ff8d5be211717d527dac5d1bfad6613849c1a8fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d2632c704cf56d6b416cdcc6213e3b60836d12cfae28af4bd06f99dff56cd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16d2632c704cf56d6b416cdcc6213e3b60836d12cfae28af4bd06f99dff56cd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91dcb1c5c16b4720f4c812b25a87ebd8a0f3a334cc714233b6852cd39322fbf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0
,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91dcb1c5c16b4720f4c812b25a87ebd8a0f3a334cc714233b6852cd39322fbf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:58Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:58 crc kubenswrapper[4727]: I1121 20:06:58.215304 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"080ce4fe-3e82-4e15-9340-4fa88a29da04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27c37b29dda7ae2cc145e0770879e433aa03e277121be3b7acfe22d41d27801f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cc2199f65adaa48cd1726ea812b332e4f40291e094ccfc215343bdb2f2bd2f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f3b9bf6cf2709d4559bf6ce2ddeed47f8ebacd7878509b0190ed24b2e55380e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd16de1da3c89b579d33c34b4dd24d8ae9df35e17819135165afc45db1297c61\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd16de1da3c89b579d33c34b4dd24d8ae9df35e17819135165afc45db1297c61\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"file observer\\\\nW1121 20:06:54.183363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 20:06:54.183552 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 20:06:54.184374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-552169000/tls.crt::/tmp/serving-cert-552169000/tls.key\\\\\\\"\\\\nI1121 20:06:54.597449 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 20:06:54.600431 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 20:06:54.600448 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 20:06:54.600471 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 20:06:54.600476 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 20:06:54.608444 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 20:06:54.608470 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 20:06:54.608475 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 20:06:54.608479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 20:06:54.608482 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 20:06:54.608484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 20:06:54.608487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 20:06:54.608489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 20:06:54.610022 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ceae36169c61dcc484be2881c9562b4024866925aab81778ab9e171413f119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fc11f996bd226ec44a8afe092a0939a489f4fc8904c1532586d9e36b69817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91
fc11f996bd226ec44a8afe092a0939a489f4fc8904c1532586d9e36b69817\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:58Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:58 crc kubenswrapper[4727]: I1121 20:06:58.255188 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a97e4b6438d588dce44178e2e245d1fc4fa954f6cd02b230ca5c34e3d32294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c90d2d9f22924396549d0d1263f8cb07b41a77f23f7bb48aa9749caabff6147\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:58Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:58 crc kubenswrapper[4727]: I1121 20:06:58.498562 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 20:06:58 crc kubenswrapper[4727]: I1121 20:06:58.498583 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 20:06:58 crc kubenswrapper[4727]: E1121 20:06:58.498698 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 20:06:58 crc kubenswrapper[4727]: E1121 20:06:58.498753 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 20:06:58 crc kubenswrapper[4727]: I1121 20:06:58.639400 4727 generic.go:334] "Generic (PLEG): container finished" podID="db8a4e70-c074-4f90-aebe-444078f3337f" containerID="68b04ba74eb7a875559096df2051e2d99535444795021756d6072246fdf21ee1" exitCode=0 Nov 21 20:06:58 crc kubenswrapper[4727]: I1121 20:06:58.639474 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-74crp" event={"ID":"db8a4e70-c074-4f90-aebe-444078f3337f","Type":"ContainerDied","Data":"68b04ba74eb7a875559096df2051e2d99535444795021756d6072246fdf21ee1"} Nov 21 20:06:58 crc kubenswrapper[4727]: I1121 20:06:58.652399 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffd614574e076bedcf841406bd5accc4387a8f2dbc55cbfbb5a1064e69a861d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:58Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:58 crc kubenswrapper[4727]: I1121 20:06:58.665047 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b974a1dc77049e8c632b0acbe8dedf6d3f391811208d0d1084c75e4fdb0908dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:58Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:58 crc kubenswrapper[4727]: I1121 20:06:58.674198 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ccfbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bffa327-bdf2-45d4-93ab-40152e82d177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18e9f8fd8f35e2f7821de2d9f9040dbc5160d8f114d1a54d8e0a0dabab520d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-no
de-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ccfbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:58Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:58 crc kubenswrapper[4727]: I1121 20:06:58.693426 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-74crp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db8a4e70-c074-4f90-aebe-444078f3337f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a2b3626be8b73b407cdeae484730c70f4caa2e66e0a815cb8c047f667254092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://6a2b3626be8b73b407cdeae484730c70f4caa2e66e0a815cb8c047f667254092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4360080de9d788a9ac8bad5b4c679aef3367c8c4e06fd00a28ff4b23406cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d4360080de9d788a9ac8bad5b4c679aef3367c8c4e06fd00a28ff4b23406cc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68b04ba74eb7a875559096df2051e2d99535444795021756d6072246fdf21ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68b04ba74eb7a875559096df2051e2d99535444795021756d6072246fdf21ee1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-74crp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:58Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:58 crc kubenswrapper[4727]: I1121 20:06:58.713607 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5fa8be5e6ea2c20dd597fb7ef2fdd3355045b310d1a99bb1a89a8207e505577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5fa8be5e6ea2c20dd597fb7ef2fdd3355045b310d1a99bb1a89a8207e505577\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tfd4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:58Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:58 crc kubenswrapper[4727]: I1121 20:06:58.728498 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7444p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75a43503-e538-4964-9789-322839cc4c48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d61ca28ee5e9b8e1b56afa45b32036c825b6299dc855cb7ce38dde4649c2371\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9w57k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7444p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:58Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:58 crc kubenswrapper[4727]: I1121 20:06:58.746890 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a1f6ee3-6ebf-4422-91ed-105ccb19af8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de2d5f0603d9b7d189293750a83d5a3957e98f0390659f8c88d35ae1cae6b18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cde1196a3cb3b4d0acbb70ac333211099ec6f26a975e2de7c07d059983024368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1324bc3b6958e18a7c496fdb513726b921435f35dde2064fcaf9a6fca06eb8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://395c324d3a95c173c7dd0c9206831fe07c220fb6ef12f7b25444a120c5183202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6021787d6acdc5734a82ef2ed53add13275c3461c1ffd38219c264e7fca3fab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://363e33e0f6686c6a97437b91ff8d5be211717d527dac5d1bfad6613849c1a8fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://363e33e0f6686c6a97437b91ff8d5be211717d527dac5d1bfad6613849c1a8fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d2632c704cf56d6b416cdcc6213e3b60836d12cfae28af4bd06f99dff56cd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16d2632c704cf56d6b416cdcc6213e3b60836d12cfae28af4bd06f99dff56cd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91dcb1c5c16b4720f4c812b25a87ebd8a0f3a334cc714233b6852cd39322fbf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91dcb1c5c16b4720f4c812b25a87ebd8a0f3a334cc714233b6852cd39322fbf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:58Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:58 crc kubenswrapper[4727]: I1121 20:06:58.759074 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"080ce4fe-3e82-4e15-9340-4fa88a29da04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27c37b29dda7ae2cc145e0770879e433aa03e277121be3b7acfe22d41d27801f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cc2199f65adaa48cd1726ea812b332e4f40291e094ccfc215343bdb2f2bd2f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://1f3b9bf6cf2709d4559bf6ce2ddeed47f8ebacd7878509b0190ed24b2e55380e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd16de1da3c89b579d33c34b4dd24d8ae9df35e17819135165afc45db1297c61\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd16de1da3c89b579d33c34b4dd24d8ae9df35e17819135165afc45db1297c61\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"file observer\\\\nW1121 20:06:54.183363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 20:06:54.183552 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 20:06:54.184374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-552169000/tls.crt::/tmp/serving-cert-552169000/tls.key\\\\\\\"\\\\nI1121 20:06:54.597449 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 20:06:54.600431 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 20:06:54.600448 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 20:06:54.600471 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 20:06:54.600476 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 20:06:54.608444 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 20:06:54.608470 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 20:06:54.608475 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 20:06:54.608479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 20:06:54.608482 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 20:06:54.608484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 20:06:54.608487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 20:06:54.608489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 20:06:54.610022 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ceae36169c61dcc484be2881c9562b4024866925aab81778ab9e171413f119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fc11f996bd226ec44a8afe092a0939a489f4fc8904c1532586d9e36b69817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91fc11f996bd226ec44a8afe092a0939a489f4fc8904c1532586d9e36b69817\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:58Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:58 crc kubenswrapper[4727]: I1121 20:06:58.770975 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a97e4b6438d588dce44178e2e245d1fc4fa954f6cd02b230ca5c34e3d32294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c90d2d9f22924396549d0d1263f8cb07b41a77f23f7bb48aa9749caabff6147\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:58Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:58 crc kubenswrapper[4727]: I1121 20:06:58.782217 4727 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:58Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:58 crc kubenswrapper[4727]: I1121 20:06:58.793913 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:58Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:58 crc kubenswrapper[4727]: I1121 20:06:58.808161 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b58aef8f-f223-47d8-a2e6-4a80aeeeec42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db38e6270d9dcedf99c669fa60a075a25d7fd9bb753702551fe5f4610ecaf815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443c4ad2ecea2e9d360b9300cb0c384fb5afc27e
6ac44394f985adbeab354a9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5k2kk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:58Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:58 crc kubenswrapper[4727]: I1121 20:06:58.821252 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7rvdc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07dba644-eb6f-45c3-b373-7a1610c569aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fc26f04e78b8405547a4fa225524347eabce94dfa5b4bf2448db66e36aaf006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7rvdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:58Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:58 crc kubenswrapper[4727]: I1121 20:06:58.833094 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3ca80-2251-484d-9025-61aed82e16d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://539bc94dd8a3fd65aeae82170fe889bc42c364cfcf52005760432629911dfa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bd3db39641919d1b93ecc771bce800100e7a8b6200d6fc606dbdc50179b8255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08a09f48855f360ffe892f48d4efa156a6774a0be33504e0d664f6a90f1144d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7c202de2c7a6ec653f0c7f816dcbf64fbf6a5c9a779d29403a29a1003ad201\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:58Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:58 crc kubenswrapper[4727]: I1121 20:06:58.853311 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:58Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:59 crc kubenswrapper[4727]: I1121 20:06:59.498509 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 20:06:59 crc kubenswrapper[4727]: E1121 20:06:59.498717 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 20:06:59 crc kubenswrapper[4727]: I1121 20:06:59.644476 4727 generic.go:334] "Generic (PLEG): container finished" podID="db8a4e70-c074-4f90-aebe-444078f3337f" containerID="ab4be59a65e9ffdf24c7d6dcaf699e0c86095ab2e7156771e4cca86c3fb264ff" exitCode=0 Nov 21 20:06:59 crc kubenswrapper[4727]: I1121 20:06:59.644519 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-74crp" event={"ID":"db8a4e70-c074-4f90-aebe-444078f3337f","Type":"ContainerDied","Data":"ab4be59a65e9ffdf24c7d6dcaf699e0c86095ab2e7156771e4cca86c3fb264ff"} Nov 21 20:06:59 crc kubenswrapper[4727]: I1121 20:06:59.671187 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a1f6ee3-6ebf-4422-91ed-105ccb19af8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de2d5f0603d9b7d189293750a83d5a3957e98f0390659f8c88d35ae1cae6b18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a56
46fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cde1196a3cb3b4d0acbb70ac333211099ec6f26a975e2de7c07d059983024368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1324bc3b6958e18a7c496fdb513726b921435f35dde2064fcaf9a6fca06eb8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f
8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://395c324d3a95c173c7dd0c9206831fe07c220fb6ef12f7b25444a120c5183202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6021787d6acdc5734a82ef2ed53add13275c3461c1ffd38219c264e7fca3fab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernete
s/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://363e33e0f6686c6a97437b91ff8d5be211717d527dac5d1bfad6613849c1a8fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://363e33e0f6686c6a97437b91ff8d5be211717d527dac5d1bfad6613849c1a8fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d2632c704cf56d6b416cdcc6213e3b60836d12cfae28af4bd06f99dff56cd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16d2632c704cf56d6b416cdcc6213e3b60836d12cfae28af4bd06f99dff56cd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91dcb1c5c16b4720f4c812b25a87ebd8a0f3a
334cc714233b6852cd39322fbf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91dcb1c5c16b4720f4c812b25a87ebd8a0f3a334cc714233b6852cd39322fbf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:59Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:59 crc kubenswrapper[4727]: I1121 20:06:59.687154 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"080ce4fe-3e82-4e15-9340-4fa88a29da04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27c37b29dda7ae2cc145e0770879e433aa03e277121be3b7acfe22d41d27801f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cc2199f65adaa48cd1726ea812b332e4f40291e094ccfc215343bdb2f2bd2f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f3b9bf6cf2709d4559bf6ce2ddeed47f8ebacd7878509b0190ed24b2e55380e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd16de1da3c89b579d33c34b4dd24d8ae9df35e17819135165afc45db1297c61\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd16de1da3c89b579d33c34b4dd24d8ae9df35e17819135165afc45db1297c61\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"file observer\\\\nW1121 20:06:54.183363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 20:06:54.183552 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 20:06:54.184374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-552169000/tls.crt::/tmp/serving-cert-552169000/tls.key\\\\\\\"\\\\nI1121 20:06:54.597449 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 20:06:54.600431 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 20:06:54.600448 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 20:06:54.600471 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 20:06:54.600476 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 20:06:54.608444 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 20:06:54.608470 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 20:06:54.608475 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 20:06:54.608479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 20:06:54.608482 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 20:06:54.608484 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 20:06:54.608487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 20:06:54.608489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 20:06:54.610022 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ceae36169c61dcc484be2881c9562b4024866925aab81778ab9e171413f119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fc11f996bd226ec44a8afe092a0939a489f4fc8904c1532586d9e36b69817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91fc11f996bd226ec44a8afe092a0939a489f4fc8904c1532586d9e36b69817\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:59Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:59 crc kubenswrapper[4727]: I1121 20:06:59.708199 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a97e4b6438d588dce44178e2e245d1fc4fa954f6cd02b230ca5c34e3d32294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c90d2d9f22924396549d0d1263f8cb07b41a77f23f7bb48aa9749caabff6147\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:59Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:59 crc kubenswrapper[4727]: I1121 20:06:59.723666 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:59Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:59 crc kubenswrapper[4727]: I1121 20:06:59.736243 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:59Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:59 crc kubenswrapper[4727]: I1121 20:06:59.749701 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b58aef8f-f223-47d8-a2e6-4a80aeeeec42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db38e6270d9dcedf99c669fa60a075a25d7fd9bb753702551fe5f4610ecaf815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443c4ad2ecea2e9d360b9300cb0c384fb5afc27e
6ac44394f985adbeab354a9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5k2kk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:59Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:59 crc kubenswrapper[4727]: I1121 20:06:59.760418 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7rvdc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07dba644-eb6f-45c3-b373-7a1610c569aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fc26f04e78b8405547a4fa225524347eabce94dfa5b4bf2448db66e36aaf006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7rvdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:59Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:59 crc kubenswrapper[4727]: I1121 20:06:59.772324 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:59Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:59 crc kubenswrapper[4727]: I1121 20:06:59.785814 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3ca80-2251-484d-9025-61aed82e16d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://539bc94dd8a3fd65aeae82170fe889bc42c364cfcf52005760432629911dfa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bd3db39641919d1b93ecc771bce800100e7a8b6200d6fc606dbdc50179b8255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08a09f48855f360ffe892f48d4efa156a6774a0be33504e0d664f6a90f1144d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7c202de2c7a6ec653f0c7f816dcbf64fbf6a5c9a779d29403a29a1003ad201\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:59Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:59 crc kubenswrapper[4727]: I1121 20:06:59.797074 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b974a1dc77049e8c632b0acbe8dedf6d3f391811208d0d1084c75e4fdb0908dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-21T20:06:59Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:59 crc kubenswrapper[4727]: I1121 20:06:59.806914 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ccfbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bffa327-bdf2-45d4-93ab-40152e82d177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18e9f8fd8f35e2f7821de2d9f9040dbc5160d8f114d1a54d8e0a0dabab520d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-kk6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ccfbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:59Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:59 crc kubenswrapper[4727]: I1121 20:06:59.820599 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-74crp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db8a4e70-c074-4f90-aebe-444078f3337f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a2b3626be8b73b407cdeae484730c70f4caa2e66e0a815cb8c047f667254092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2b3626be8b73b407cdeae484730c70f4caa2e66e0a815cb8c047f667254092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4360080de9d788a9ac8bad5b4c679aef3367c8c4e06fd00a28ff4b23406cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d4360080de9d788a9ac8bad5b4c679aef3367c8c4e06fd00a28ff4b23406cc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68b04ba74eb7a875559096df2051e2d995354
44795021756d6072246fdf21ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68b04ba74eb7a875559096df2051e2d99535444795021756d6072246fdf21ee1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4be59a65e9ffdf24c7d6dcaf699e0c86095ab2e7156771e4cca86c3fb264ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab4be59a65e9ffdf24c7d6dcaf699e0c86095ab2e7156771e4cca86c3fb264ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-1
1-21T20:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-74crp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:59Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:59 crc kubenswrapper[4727]: I1121 20:06:59.837475 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919
d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCou
nt\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-con
troller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5fa8be5e6ea2c20dd597fb7ef2fdd3355045b310d1a99bb1a89a8207e505577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5fa8be5e6ea2c20dd597fb7ef2fdd3355045b310d1a99bb1a89a8207e505577\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:56Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tfd4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:59Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:59 crc kubenswrapper[4727]: I1121 20:06:59.847360 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7444p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75a43503-e538-4964-9789-322839cc4c48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d61ca28ee5
e9b8e1b56afa45b32036c825b6299dc855cb7ce38dde4649c2371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9w57k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7444p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:59Z is after 2025-08-24T17:21:41Z" Nov 21 20:06:59 crc kubenswrapper[4727]: I1121 20:06:59.859506 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffd614574e076bedcf841406bd5accc4387a8f2dbc55cbfbb5a1064e69a861d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-21T20:06:59Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:00 crc kubenswrapper[4727]: I1121 20:07:00.498790 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 20:07:00 crc kubenswrapper[4727]: I1121 20:07:00.498897 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 20:07:00 crc kubenswrapper[4727]: E1121 20:07:00.499165 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 20:07:00 crc kubenswrapper[4727]: E1121 20:07:00.498942 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 20:07:00 crc kubenswrapper[4727]: I1121 20:07:00.587702 4727 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 20:07:00 crc kubenswrapper[4727]: I1121 20:07:00.589721 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:00 crc kubenswrapper[4727]: I1121 20:07:00.589757 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:00 crc kubenswrapper[4727]: I1121 20:07:00.589775 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:00 crc kubenswrapper[4727]: I1121 20:07:00.589884 4727 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 21 20:07:00 crc kubenswrapper[4727]: I1121 20:07:00.597213 4727 kubelet_node_status.go:115] "Node was previously registered" node="crc" Nov 21 20:07:00 crc kubenswrapper[4727]: I1121 20:07:00.597449 4727 kubelet_node_status.go:79] "Successfully registered node" node="crc" Nov 21 20:07:00 crc kubenswrapper[4727]: I1121 20:07:00.598573 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:00 crc kubenswrapper[4727]: I1121 20:07:00.598605 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:00 crc kubenswrapper[4727]: I1121 20:07:00.598613 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:00 crc kubenswrapper[4727]: I1121 20:07:00.598627 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:00 crc kubenswrapper[4727]: I1121 20:07:00.598636 4727 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:00Z","lastTransitionTime":"2025-11-21T20:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:00 crc kubenswrapper[4727]: E1121 20:07:00.614378 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"32d79a59-978c-42a8-bb0b-33b3c3206f66\\\",\\\"systemUUID\\\":\\\"b99fd01f-0947-456f-ae40-db84d60b2190\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:00Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:00 crc kubenswrapper[4727]: I1121 20:07:00.622355 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:00 crc kubenswrapper[4727]: I1121 20:07:00.622473 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:00 crc kubenswrapper[4727]: I1121 20:07:00.622552 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:00 crc kubenswrapper[4727]: I1121 20:07:00.622646 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:00 crc kubenswrapper[4727]: I1121 20:07:00.622729 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:00Z","lastTransitionTime":"2025-11-21T20:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:00 crc kubenswrapper[4727]: E1121 20:07:00.633888 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"32d79a59-978c-42a8-bb0b-33b3c3206f66\\\",\\\"systemUUID\\\":\\\"b99fd01f-0947-456f-ae40-db84d60b2190\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:00Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:00 crc kubenswrapper[4727]: I1121 20:07:00.637143 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:00 crc kubenswrapper[4727]: I1121 20:07:00.637170 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:00 crc kubenswrapper[4727]: I1121 20:07:00.637179 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:00 crc kubenswrapper[4727]: I1121 20:07:00.637193 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:00 crc kubenswrapper[4727]: I1121 20:07:00.637204 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:00Z","lastTransitionTime":"2025-11-21T20:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:00 crc kubenswrapper[4727]: E1121 20:07:00.649147 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"32d79a59-978c-42a8-bb0b-33b3c3206f66\\\",\\\"systemUUID\\\":\\\"b99fd01f-0947-456f-ae40-db84d60b2190\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:00Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:00 crc kubenswrapper[4727]: I1121 20:07:00.651258 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" event={"ID":"70d2ca13-a8f7-43dc-8ad0-142d99ccde18","Type":"ContainerStarted","Data":"68979c9b5ab69852e6cc4293ef0e77454dcbb7fdaeb61b7b6af5c1a3ac101e47"} Nov 21 20:07:00 crc kubenswrapper[4727]: I1121 20:07:00.652873 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:00 crc kubenswrapper[4727]: I1121 20:07:00.652894 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:00 crc kubenswrapper[4727]: I1121 20:07:00.652902 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:00 crc kubenswrapper[4727]: I1121 20:07:00.652913 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:00 crc kubenswrapper[4727]: I1121 20:07:00.652922 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:00Z","lastTransitionTime":"2025-11-21T20:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:00 crc kubenswrapper[4727]: I1121 20:07:00.655507 4727 generic.go:334] "Generic (PLEG): container finished" podID="db8a4e70-c074-4f90-aebe-444078f3337f" containerID="465a44322e7e3ff27479dabcdd1a2ba9102e10c73b75d51d0639b4a8ebeda54e" exitCode=0 Nov 21 20:07:00 crc kubenswrapper[4727]: I1121 20:07:00.655532 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-74crp" event={"ID":"db8a4e70-c074-4f90-aebe-444078f3337f","Type":"ContainerDied","Data":"465a44322e7e3ff27479dabcdd1a2ba9102e10c73b75d51d0639b4a8ebeda54e"} Nov 21 20:07:00 crc kubenswrapper[4727]: E1121 20:07:00.664818 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"32d79a59-978c-42a8-bb0b-33b3c3206f66\\\",\\\"systemUUID\\\":\\\"b99fd01f-0947-456f-ae40-db84d60b2190\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:00Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:00 crc kubenswrapper[4727]: I1121 20:07:00.668438 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:00 crc kubenswrapper[4727]: I1121 20:07:00.668469 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:00 crc kubenswrapper[4727]: I1121 20:07:00.668487 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:00 crc kubenswrapper[4727]: I1121 20:07:00.668506 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:00 crc kubenswrapper[4727]: I1121 20:07:00.668520 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:00Z","lastTransitionTime":"2025-11-21T20:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:00 crc kubenswrapper[4727]: I1121 20:07:00.671746 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:00Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:00 crc kubenswrapper[4727]: E1121 20:07:00.682801 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"32d79a59-978c-42a8-bb0b-33b3c3206f66\\\",\\\"systemUUID\\\":\\\"b99fd01f-0947-456f-ae40-db84d60b2190\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:00Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:00 crc kubenswrapper[4727]: E1121 20:07:00.682914 4727 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 21 20:07:00 crc kubenswrapper[4727]: I1121 20:07:00.684397 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:00 crc kubenswrapper[4727]: I1121 20:07:00.684426 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:00 crc kubenswrapper[4727]: I1121 20:07:00.684438 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:00 crc kubenswrapper[4727]: I1121 20:07:00.684453 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:00 crc kubenswrapper[4727]: I1121 20:07:00.684466 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:00Z","lastTransitionTime":"2025-11-21T20:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:00 crc kubenswrapper[4727]: I1121 20:07:00.693170 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3ca80-2251-484d-9025-61aed82e16d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://539bc94dd8a3fd65aeae82170fe889bc42c364cfcf52005760432629911dfa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bd3db39641
919d1b93ecc771bce800100e7a8b6200d6fc606dbdc50179b8255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08a09f48855f360ffe892f48d4efa156a6774a0be33504e0d664f6a90f1144d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7c202de2c7a6ec653f0c7f816dcbf64fbf6a5c9a779d29403a29a1003ad201\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:00Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:00 crc kubenswrapper[4727]: I1121 20:07:00.708199 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b974a1dc77049e8c632b0acbe8dedf6d3f391811208d0d1084c75e4fdb0908dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-21T20:07:00Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:00 crc kubenswrapper[4727]: I1121 20:07:00.719117 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ccfbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bffa327-bdf2-45d4-93ab-40152e82d177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18e9f8fd8f35e2f7821de2d9f9040dbc5160d8f114d1a54d8e0a0dabab520d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-kk6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ccfbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:00Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:00 crc kubenswrapper[4727]: I1121 20:07:00.740091 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-74crp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db8a4e70-c074-4f90-aebe-444078f3337f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a2b3626be8b73b407cdeae484730c70f4caa2e66e0a815cb8c047f667254092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2b3626be8b73b407cdeae484730c70f4caa2e66e0a815cb8c047f667254092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4360080de9d788a9ac8bad5b4c679aef3367c8c4e06fd00a28ff4b23406cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d4360080de9d788a9ac8bad5b4c679aef3367c8c4e06fd00a28ff4b23406cc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68b04ba74eb7a875559096df2051e2d995354
44795021756d6072246fdf21ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68b04ba74eb7a875559096df2051e2d99535444795021756d6072246fdf21ee1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4be59a65e9ffdf24c7d6dcaf699e0c86095ab2e7156771e4cca86c3fb264ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab4be59a65e9ffdf24c7d6dcaf699e0c86095ab2e7156771e4cca86c3fb264ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-1
1-21T20:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465a44322e7e3ff27479dabcdd1a2ba9102e10c73b75d51d0639b4a8ebeda54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://465a44322e7e3ff27479dabcdd1a2ba9102e10c73b75d51d0639b4a8ebeda54e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-74crp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:00Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:00 crc kubenswrapper[4727]: I1121 20:07:00.764414 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5fa8be5e6ea2c20dd597fb7ef2fdd3355045b310d1a99bb1a89a8207e505577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5fa8be5e6ea2c20dd597fb7ef2fdd3355045b310d1a99bb1a89a8207e505577\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tfd4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:00Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:00 crc kubenswrapper[4727]: I1121 20:07:00.780061 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7444p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75a43503-e538-4964-9789-322839cc4c48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d61ca28ee5e9b8e1b56afa45b32036c825b6299dc855cb7ce38dde4649c2371\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9w57k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7444p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:00Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:00 crc kubenswrapper[4727]: I1121 20:07:00.794833 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:00 crc kubenswrapper[4727]: I1121 20:07:00.794872 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:00 crc kubenswrapper[4727]: I1121 20:07:00.794822 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffd614574e076bedcf841406bd5accc4387a8f2dbc55cbfbb5a1064e69a861d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:00Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:00 crc kubenswrapper[4727]: I1121 20:07:00.794881 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:00 crc kubenswrapper[4727]: I1121 20:07:00.795083 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:00 crc kubenswrapper[4727]: I1121 20:07:00.795093 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:00Z","lastTransitionTime":"2025-11-21T20:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:00 crc kubenswrapper[4727]: I1121 20:07:00.813786 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a1f6ee3-6ebf-4422-91ed-105ccb19af8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de2d5f0603d9b7d189293750a83d5a3957e98f0390659f8c88d35ae1cae6b18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cde1196a3cb3b4d0acbb70ac333211099ec6f26a975e2de7c07d059983024368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1324bc3b6958e18a7c496fdb513726b921435f35dde2064fcaf9a6fca06eb8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://395c324d3a95c173c7dd0c9206831fe07c220fb6ef12f7b25444a120c5183202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6021787d6acdc5734a82ef2ed53add13275c3461c1ffd38219c264e7fca3fab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://363e33e0f6686c6a97437b91ff8d5be211717d527dac5d1bfad6613849c1a8fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://363e33e0f6686c6a97437b91ff8d5be211717d527dac5d1bfad6613849c1a8fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d2632c704cf56d6b416cdcc6213e3b60836d12cfae28af4bd06f99dff56cd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16d2632c704cf56d6b416cdcc6213e3b60836d12cfae28af4bd06f99dff56cd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91dcb1c5c16b4720f4c812b25a87ebd8a0f3a334cc714233b6852cd39322fbf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91dcb1c5c16b4720f4c812b25a87ebd8a0f3a334cc714233b6852cd39322fbf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:00Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:00 crc kubenswrapper[4727]: I1121 20:07:00.838294 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"080ce4fe-3e82-4e15-9340-4fa88a29da04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27c37b29dda7ae2cc145e0770879e433aa03e277121be3b7acfe22d41d27801f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cc2199f65adaa48cd1726ea812b332e4f40291e094ccfc215343bdb2f2bd2f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://1f3b9bf6cf2709d4559bf6ce2ddeed47f8ebacd7878509b0190ed24b2e55380e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd16de1da3c89b579d33c34b4dd24d8ae9df35e17819135165afc45db1297c61\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd16de1da3c89b579d33c34b4dd24d8ae9df35e17819135165afc45db1297c61\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"file observer\\\\nW1121 20:06:54.183363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 20:06:54.183552 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 20:06:54.184374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-552169000/tls.crt::/tmp/serving-cert-552169000/tls.key\\\\\\\"\\\\nI1121 20:06:54.597449 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 20:06:54.600431 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 20:06:54.600448 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 20:06:54.600471 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 20:06:54.600476 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 20:06:54.608444 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 20:06:54.608470 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 20:06:54.608475 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 20:06:54.608479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 20:06:54.608482 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 20:06:54.608484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 20:06:54.608487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 20:06:54.608489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 20:06:54.610022 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ceae36169c61dcc484be2881c9562b4024866925aab81778ab9e171413f119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fc11f996bd226ec44a8afe092a0939a489f4fc8904c1532586d9e36b69817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91fc11f996bd226ec44a8afe092a0939a489f4fc8904c1532586d9e36b69817\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:00Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:00 crc kubenswrapper[4727]: I1121 20:07:00.850055 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a97e4b6438d588dce44178e2e245d1fc4fa954f6cd02b230ca5c34e3d32294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c90d2d9f22924396549d0d1263f8cb07b41a77f23f7bb48aa9749caabff6147\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:00Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:00 crc kubenswrapper[4727]: I1121 20:07:00.863518 4727 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:00Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:00 crc kubenswrapper[4727]: I1121 20:07:00.874931 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:00Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:00 crc kubenswrapper[4727]: I1121 20:07:00.885108 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b58aef8f-f223-47d8-a2e6-4a80aeeeec42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db38e6270d9dcedf99c669fa60a075a25d7fd9bb753702551fe5f4610ecaf815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443c4ad2ecea2e9d360b9300cb0c384fb5afc27e
6ac44394f985adbeab354a9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5k2kk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:00Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:00 crc kubenswrapper[4727]: I1121 20:07:00.897230 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7rvdc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07dba644-eb6f-45c3-b373-7a1610c569aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fc26f04e78b8405547a4fa225524347eabce94dfa5b4bf2448db66e36aaf006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7rvdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:00Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:00 crc kubenswrapper[4727]: I1121 20:07:00.897287 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:00 crc 
kubenswrapper[4727]: I1121 20:07:00.897329 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:00 crc kubenswrapper[4727]: I1121 20:07:00.897341 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:00 crc kubenswrapper[4727]: I1121 20:07:00.897358 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:00 crc kubenswrapper[4727]: I1121 20:07:00.897373 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:00Z","lastTransitionTime":"2025-11-21T20:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:00 crc kubenswrapper[4727]: I1121 20:07:00.999848 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:00 crc kubenswrapper[4727]: I1121 20:07:00.999888 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:00 crc kubenswrapper[4727]: I1121 20:07:00.999898 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:00 crc kubenswrapper[4727]: I1121 20:07:00.999912 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:01 crc kubenswrapper[4727]: I1121 20:07:00.999923 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:00Z","lastTransitionTime":"2025-11-21T20:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:01 crc kubenswrapper[4727]: I1121 20:07:01.102391 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:01 crc kubenswrapper[4727]: I1121 20:07:01.102436 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:01 crc kubenswrapper[4727]: I1121 20:07:01.102447 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:01 crc kubenswrapper[4727]: I1121 20:07:01.102466 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:01 crc kubenswrapper[4727]: I1121 20:07:01.102476 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:01Z","lastTransitionTime":"2025-11-21T20:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:01 crc kubenswrapper[4727]: I1121 20:07:01.204374 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:01 crc kubenswrapper[4727]: I1121 20:07:01.204603 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:01 crc kubenswrapper[4727]: I1121 20:07:01.204674 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:01 crc kubenswrapper[4727]: I1121 20:07:01.204737 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:01 crc kubenswrapper[4727]: I1121 20:07:01.204792 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:01Z","lastTransitionTime":"2025-11-21T20:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:01 crc kubenswrapper[4727]: I1121 20:07:01.308184 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:01 crc kubenswrapper[4727]: I1121 20:07:01.308245 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:01 crc kubenswrapper[4727]: I1121 20:07:01.308257 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:01 crc kubenswrapper[4727]: I1121 20:07:01.308280 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:01 crc kubenswrapper[4727]: I1121 20:07:01.308294 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:01Z","lastTransitionTime":"2025-11-21T20:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:01 crc kubenswrapper[4727]: I1121 20:07:01.344069 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 21 20:07:01 crc kubenswrapper[4727]: I1121 20:07:01.345143 4727 scope.go:117] "RemoveContainer" containerID="dd16de1da3c89b579d33c34b4dd24d8ae9df35e17819135165afc45db1297c61" Nov 21 20:07:01 crc kubenswrapper[4727]: E1121 20:07:01.345456 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Nov 21 20:07:01 crc kubenswrapper[4727]: I1121 20:07:01.411515 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:01 crc kubenswrapper[4727]: I1121 20:07:01.411763 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:01 crc kubenswrapper[4727]: I1121 20:07:01.411827 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:01 crc kubenswrapper[4727]: I1121 20:07:01.411888 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:01 crc kubenswrapper[4727]: I1121 20:07:01.411945 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:01Z","lastTransitionTime":"2025-11-21T20:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:01 crc kubenswrapper[4727]: I1121 20:07:01.498786 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 20:07:01 crc kubenswrapper[4727]: E1121 20:07:01.499026 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 20:07:01 crc kubenswrapper[4727]: I1121 20:07:01.514691 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:01 crc kubenswrapper[4727]: I1121 20:07:01.515099 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:01 crc kubenswrapper[4727]: I1121 20:07:01.515334 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:01 crc kubenswrapper[4727]: I1121 20:07:01.515481 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:01 crc kubenswrapper[4727]: I1121 20:07:01.515617 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:01Z","lastTransitionTime":"2025-11-21T20:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:01 crc kubenswrapper[4727]: I1121 20:07:01.619887 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:01 crc kubenswrapper[4727]: I1121 20:07:01.619993 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:01 crc kubenswrapper[4727]: I1121 20:07:01.620016 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:01 crc kubenswrapper[4727]: I1121 20:07:01.620048 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:01 crc kubenswrapper[4727]: I1121 20:07:01.620067 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:01Z","lastTransitionTime":"2025-11-21T20:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:01 crc kubenswrapper[4727]: I1121 20:07:01.662230 4727 generic.go:334] "Generic (PLEG): container finished" podID="db8a4e70-c074-4f90-aebe-444078f3337f" containerID="ea5b2f1f6cf3b490899d65086f99f448f2bdbfd859fa5b283af3fb108479902e" exitCode=0 Nov 21 20:07:01 crc kubenswrapper[4727]: I1121 20:07:01.662252 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-74crp" event={"ID":"db8a4e70-c074-4f90-aebe-444078f3337f","Type":"ContainerDied","Data":"ea5b2f1f6cf3b490899d65086f99f448f2bdbfd859fa5b283af3fb108479902e"} Nov 21 20:07:01 crc kubenswrapper[4727]: I1121 20:07:01.686771 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a1f6ee3-6ebf-4422-91ed-105ccb19af8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de2d5f0603d9b7d189293750a83d5a3957e98f0390659f8c88d35ae1cae6b18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cde1196a3cb3b4d0acbb70ac333211099ec6f26a975e2de7c07d059983024368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1324bc3b6958e18a7c496fdb513726b921435f35dde2064fcaf9a6fca06eb8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://395c324d3a95c173c7dd0c9206831fe07c220fb6ef12f7b25444a120c5183202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6021787d6acdc5734a82ef2ed53add13275c3461c1ffd38219c264e7fca3fab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-d
ir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://363e33e0f6686c6a97437b91ff8d5be211717d527dac5d1bfad6613849c1a8fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://363e33e0f6686c6a97437b91ff8d5be211717d527dac5d1bfad6613849c1a8fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d2632c704cf56d6b416cdcc6213e3b60836d12cfae28af4bd06f99dff56cd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16d2632c704cf56d6b416cdcc6213e3b60836d12cfae28af4bd06f99dff56cd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91dcb1c5c16b4720f4c812b25a87ebd8a0f3a334cc714233b6852cd39322fbf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd
6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91dcb1c5c16b4720f4c812b25a87ebd8a0f3a334cc714233b6852cd39322fbf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:01Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:01 crc kubenswrapper[4727]: I1121 20:07:01.710299 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"080ce4fe-3e82-4e15-9340-4fa88a29da04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27c37b29dda7ae2cc145e0770879e433aa03e277121be3b7acfe22d41d27801f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cc2199f65adaa48cd1726ea812b332e4f40291e094ccfc215343bdb2f2bd2f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f3b9bf6cf2709d4559bf6ce2ddeed47f8ebacd7878509b0190ed24b2e55380e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd16de1da3c89b579d33c34b4dd24d8ae9df35e17819135165afc45db1297c61\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd16de1da3c89b579d33c34b4dd24d8ae9df35e17819135165afc45db1297c61\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"file observer\\\\nW1121 20:06:54.183363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 20:06:54.183552 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 20:06:54.184374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-552169000/tls.crt::/tmp/serving-cert-552169000/tls.key\\\\\\\"\\\\nI1121 20:06:54.597449 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 20:06:54.600431 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 20:06:54.600448 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 20:06:54.600471 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 20:06:54.600476 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 20:06:54.608444 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 20:06:54.608470 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 20:06:54.608475 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 20:06:54.608479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 20:06:54.608482 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 20:06:54.608484 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 20:06:54.608487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 20:06:54.608489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 20:06:54.610022 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ceae36169c61dcc484be2881c9562b4024866925aab81778ab9e171413f119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fc11f996bd226ec44a8afe092a0939a489f4fc8904c1532586d9e36b69817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91fc11f996bd226ec44a8afe092a0939a489f4fc8904c1532586d9e36b69817\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:01Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:01 crc kubenswrapper[4727]: I1121 20:07:01.722663 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:01 crc kubenswrapper[4727]: I1121 20:07:01.722725 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:01 crc kubenswrapper[4727]: I1121 20:07:01.722747 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:01 crc kubenswrapper[4727]: I1121 20:07:01.722775 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:01 crc kubenswrapper[4727]: 
I1121 20:07:01.722793 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:01Z","lastTransitionTime":"2025-11-21T20:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:01 crc kubenswrapper[4727]: I1121 20:07:01.728127 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a97e4b6438d588dce44178e2e245d1fc4fa954f6cd02b230ca5c34e3d32294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c90d2d9f22924396549d0d1263f8cb07b41a77f23f7bb48aa9749caabff6147\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:01Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:01 crc kubenswrapper[4727]: I1121 20:07:01.744930 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:01Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:01 crc kubenswrapper[4727]: I1121 20:07:01.763638 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:01Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:01 crc kubenswrapper[4727]: I1121 20:07:01.780876 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b58aef8f-f223-47d8-a2e6-4a80aeeeec42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db38e6270d9dcedf99c669fa60a075a25d7fd9bb753702551fe5f4610ecaf815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443c4ad2ecea2e9d360b9300cb0c384fb5afc27e
6ac44394f985adbeab354a9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5k2kk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:01Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:01 crc kubenswrapper[4727]: I1121 20:07:01.795302 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7rvdc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07dba644-eb6f-45c3-b373-7a1610c569aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fc26f04e78b8405547a4fa225524347eabce94dfa5b4bf2448db66e36aaf006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7rvdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:01Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:01 crc kubenswrapper[4727]: I1121 20:07:01.811350 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3ca80-2251-484d-9025-61aed82e16d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://539bc94dd8a3fd65aeae82170fe889bc42c364cfcf52005760432629911dfa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bd3db39641919d1b93ecc771bce800100e7a8b6200d6fc606dbdc50179b8255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08a09f48855f360ffe892f48d4efa156a6774a0be33504e0d664f6a90f1144d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7c202de2c7a6ec653f0c7f816dcbf64fbf6a5c9a779d29403a29a1003ad201\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:01Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:01 crc kubenswrapper[4727]: I1121 20:07:01.826652 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:01 crc kubenswrapper[4727]: I1121 20:07:01.826699 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:01 crc kubenswrapper[4727]: I1121 20:07:01.826721 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:01 crc kubenswrapper[4727]: I1121 20:07:01.826746 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:01 crc kubenswrapper[4727]: I1121 20:07:01.826763 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:01Z","lastTransitionTime":"2025-11-21T20:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:01 crc kubenswrapper[4727]: I1121 20:07:01.826829 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:01Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:01 crc kubenswrapper[4727]: I1121 20:07:01.844010 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffd614574e076bedcf841406bd5accc4387a8f2dbc55cbfbb5a1064e69a861d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:01Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:01 crc kubenswrapper[4727]: I1121 20:07:01.858388 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b974a1dc77049e8c632b0acbe8dedf6d3f391811208d0d1084c75e4fdb0908dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:01Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:01 crc kubenswrapper[4727]: I1121 20:07:01.871178 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ccfbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bffa327-bdf2-45d4-93ab-40152e82d177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18e9f8fd8f35e2f7821de2d9f9040dbc5160d8f114d1a54d8e0a0dabab520d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-no
de-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ccfbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:01Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:01 crc kubenswrapper[4727]: I1121 20:07:01.886691 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-74crp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db8a4e70-c074-4f90-aebe-444078f3337f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a2b3626be8b73b407cdeae484730c70f4caa2e66e0a815cb8c047f667254092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2b3626be8b73b407cdeae484730c70f4caa2e66e0a815cb8c047f667254092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4360080de9d788a9ac8bad5b4c679aef3367c8c4e06fd00a28ff4b23406cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d4360080de9d788a9ac8bad5b4c679aef3367c8c4e06fd00a28ff4b23406cc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68b04ba74eb7a875559096df2051e2d99535444795021756d6072246fdf21ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68b04ba74eb7a875559096df2051e2d99535444795021756d6072246fdf21ee1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4be59a65e9ffdf24c7d6dcaf699e0c86095ab2e7156771e4cca86c3fb264ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab4be59a65e9ffdf24c7d6dcaf699e0c86095ab2e7156771e4cca86c3fb264ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465a44322e7e3ff27479dabcdd1a2ba9102e10c73b75d51d0639b4a8ebeda54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://465a44322e7e3ff27479dabcdd1a2ba9102e10c73b75d51d0639b4a8ebeda54e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea5b2f1f6cf3b490899d65086f99f448f2bdbfd859fa5b283af3fb108479902e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea5b2f1f6cf3b490899d65086f99f448f2bdbfd859fa5b283af3fb108479902e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-74crp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:01Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:01 crc kubenswrapper[4727]: I1121 20:07:01.907808 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5fa8be5e6ea2c20dd597fb7ef2fdd3355045b310d1a99bb1a89a8207e505577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5fa8be5e6ea2c20dd597fb7ef2fdd3355045b310d1a99bb1a89a8207e505577\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tfd4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:01Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:01 crc kubenswrapper[4727]: I1121 20:07:01.921404 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7444p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75a43503-e538-4964-9789-322839cc4c48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d61ca28ee5e9b8e1b56afa45b32036c825b6299dc855cb7ce38dde4649c2371\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9w57k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7444p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:01Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:01 crc kubenswrapper[4727]: I1121 20:07:01.928646 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:01 crc kubenswrapper[4727]: I1121 20:07:01.928674 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:01 crc kubenswrapper[4727]: I1121 20:07:01.928692 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:01 crc kubenswrapper[4727]: I1121 
20:07:01.928707 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:01 crc kubenswrapper[4727]: I1121 20:07:01.928716 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:01Z","lastTransitionTime":"2025-11-21T20:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.035913 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.035998 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.036014 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.036040 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.036063 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:02Z","lastTransitionTime":"2025-11-21T20:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.139526 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.139597 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.139618 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.139649 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.139670 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:02Z","lastTransitionTime":"2025-11-21T20:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.179061 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.179252 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 20:07:02 crc kubenswrapper[4727]: E1121 20:07:02.179327 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 20:07:10.179264996 +0000 UTC m=+35.365450080 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:07:02 crc kubenswrapper[4727]: E1121 20:07:02.179353 4727 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.179403 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 20:07:02 crc kubenswrapper[4727]: E1121 20:07:02.179424 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-21 20:07:10.179402928 +0000 UTC m=+35.365587982 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.179546 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.179605 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 20:07:02 crc kubenswrapper[4727]: E1121 20:07:02.179795 4727 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 21 20:07:02 crc kubenswrapper[4727]: E1121 20:07:02.179825 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 21 20:07:02 crc kubenswrapper[4727]: E1121 20:07:02.180009 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 21 20:07:02 crc kubenswrapper[4727]: E1121 20:07:02.180023 4727 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-21 20:07:10.179951731 +0000 UTC m=+35.366136805 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 21 20:07:02 crc kubenswrapper[4727]: E1121 20:07:02.180028 4727 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 20:07:02 crc kubenswrapper[4727]: E1121 20:07:02.180132 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-21 20:07:10.180104674 +0000 UTC m=+35.366289758 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 20:07:02 crc kubenswrapper[4727]: E1121 20:07:02.179849 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 21 20:07:02 crc kubenswrapper[4727]: E1121 20:07:02.180219 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 21 20:07:02 crc kubenswrapper[4727]: E1121 20:07:02.180242 4727 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 20:07:02 crc kubenswrapper[4727]: E1121 20:07:02.180319 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-21 20:07:10.180302168 +0000 UTC m=+35.366487242 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.243050 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.243131 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.243154 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.243179 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.243197 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:02Z","lastTransitionTime":"2025-11-21T20:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.346501 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.346575 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.346617 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.346668 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.346775 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:02Z","lastTransitionTime":"2025-11-21T20:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.449949 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.450055 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.450077 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.450108 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.450127 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:02Z","lastTransitionTime":"2025-11-21T20:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.498935 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.499018 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 20:07:02 crc kubenswrapper[4727]: E1121 20:07:02.499166 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 20:07:02 crc kubenswrapper[4727]: E1121 20:07:02.499258 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.554239 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.554935 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.554990 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.555128 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.555148 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:02Z","lastTransitionTime":"2025-11-21T20:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.657258 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.657301 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.657314 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.657335 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.657347 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:02Z","lastTransitionTime":"2025-11-21T20:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.669258 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" event={"ID":"70d2ca13-a8f7-43dc-8ad0-142d99ccde18","Type":"ContainerStarted","Data":"4af19875f0693a845b5f41d02ca0996608550af14b0a9b567da38e51e520c977"} Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.669889 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.669929 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.669942 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.674107 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-74crp" event={"ID":"db8a4e70-c074-4f90-aebe-444078f3337f","Type":"ContainerStarted","Data":"8d8ec54f8edeaa80fb39b26f0cd9c0d4b2456c149245b1c6c1585bf28b08ec13"} Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.690441 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fae7d81bbd4cc7c05885d66b30b4a4f5422c74aecc5439700c1fe7eb3c28a4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://060f5b533d83fc7cf02d24bf7b12e2a4972ce1d7ed4e62c4c7788467ae2229c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f04b4d4857a8309f30f22774c8a2faeb752998b094d70898a986e04038771b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f02a4cae48dc4ac9d7da3d367fe5137fb012ee8a28f45c9cf4af098f1d93ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf216dac56f09909db88c9ea0ef9d0d4fb8bc2e63a02fd5631c7d4369080f88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1909b9178d7fb508200d34ad33f7d2c2aa2ae9756b15c17fd7c49bb019cc65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4af19875f0693a845b5f41d02ca0996608550af14b0a9b567da38e51e520c977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68979c9b5ab69852e6cc4293ef0e77454dcbb7fdaeb61b7b6af5c1a3ac101e47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5fa8be5e6ea2c20dd597fb7ef2fdd3355045b310d1a99bb1a89a8207e505577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5fa8be5e6ea2c20dd597fb7ef2fdd3355045b310d1a99bb1a89a8207e505577\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tfd4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:02Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.697758 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.697837 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.702978 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7444p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75a43503-e538-4964-9789-322839cc4c48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d61ca28ee5e9b8e1b56afa45b32036c825b6299dc855cb7ce38dde4649c2371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9w57k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7444p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:02Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.722081 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffd614574e076bedcf841406bd5accc4387a8f2dbc55cbfbb5a1064e69a861d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:02Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.735592 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b974a1dc77049e8c632b0acbe8dedf6d3f391811208d0d1084c75e4fdb0908dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-21T20:07:02Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.749032 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ccfbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bffa327-bdf2-45d4-93ab-40152e82d177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18e9f8fd8f35e2f7821de2d9f9040dbc5160d8f114d1a54d8e0a0dabab520d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-kk6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ccfbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:02Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.759096 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.759155 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.759173 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.759197 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.759213 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:02Z","lastTransitionTime":"2025-11-21T20:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.768460 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-74crp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db8a4e70-c074-4f90-aebe-444078f3337f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a2b3626be8b73b407cdeae484730c70f4caa2e66e0a815cb8c047f667254092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2b3626be8b73b407cdeae484730c70f4caa2e66e0a815cb8c047f667254092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4360080de9d788a9ac8bad5b4c679aef3367c8c4e06fd00a28ff4b23406cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d4360080de9d788a9ac8bad5b4c679aef3367c8c4e06fd00a28ff4b23406cc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68b04ba74eb7a875559096df2051e2d99535444795021756d6072246fdf21ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68b04ba74eb7a875559096df2051e2d99535444795021756d6072246fdf21ee1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4be59a65e9ffdf24c7d6dcaf699e0c86095ab2e7156771e4cca86c3fb264ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab4be59a65e9ffdf24c7d6dcaf699e0c86095ab2e7156771e4cca86c3fb264ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465a44322e7e3ff27479dabcdd1a2ba9102e10c73b75d51d0639b4a8ebeda54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://465a44322e7e3ff27479dabcdd1a2ba9102e10c73b75d51d0639b4a8ebeda54e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea5b2f1f6cf3b490899d65086f99f448f2bdbfd859fa5b283af3fb108479902e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea5b2f1f6cf3b490899d65086f99f448f2bdbfd859fa5b283af3fb108479902e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-74crp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:02Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.780303 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:02Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.793783 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:02Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.805807 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b58aef8f-f223-47d8-a2e6-4a80aeeeec42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db38e6270d9dcedf99c669fa60a075a25d7fd9bb753702551fe5f4610ecaf815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443c4ad2ecea2e9d360b9300cb0c384fb5afc27e
6ac44394f985adbeab354a9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5k2kk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:02Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.817872 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7rvdc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07dba644-eb6f-45c3-b373-7a1610c569aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fc26f04e78b8405547a4fa225524347eabce94dfa5b4bf2448db66e36aaf006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7rvdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:02Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.837023 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a1f6ee3-6ebf-4422-91ed-105ccb19af8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de2d5f0603d9b7d189293750a83d5a3957e98f0390659f8c88d35ae1cae6b18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cde1196a3cb3b4d0acbb70ac333211099ec6f26a975e2de7c07d059983024368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1324bc3b6958e18a7c496fdb513726b921435f35dde2064fcaf9a6fca06eb8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://395c324d3a95c173c7dd0c9206831fe07c220fb6ef12f7b25444a120c5183202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6021787d6acdc5734a82ef2ed53add13275c3461c1ffd38219c264e7fca3fab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://363e33e0f6686c6a97437b91ff8d5be211717d527dac5d1bfad6613849c1a8fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://363e33e0f6686c6a97437b91ff8d5be211717d527dac5d1bfad6613849c1a8fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d2632c704cf56d6b416cdcc6213e3b60836d12cfae28af4bd06f99dff56cd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16d2632c704cf56d6b416cdcc6213e3b60836d12cfae28af4bd06f99dff56cd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91dcb1c5c16b4720f4c812b25a87ebd8a0f3a334cc714233b6852cd39322fbf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91dcb1c5c16b4720f4c812b25a87ebd8a0f3a334cc714233b6852cd39322fbf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:02Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.855570 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"080ce4fe-3e82-4e15-9340-4fa88a29da04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27c37b29dda7ae2cc145e0770879e433aa03e277121be3b7acfe22d41d27801f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cc2199f65adaa48cd1726ea812b332e4f40291e094ccfc215343bdb2f2bd2f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://1f3b9bf6cf2709d4559bf6ce2ddeed47f8ebacd7878509b0190ed24b2e55380e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd16de1da3c89b579d33c34b4dd24d8ae9df35e17819135165afc45db1297c61\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd16de1da3c89b579d33c34b4dd24d8ae9df35e17819135165afc45db1297c61\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"file observer\\\\nW1121 20:06:54.183363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 20:06:54.183552 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 20:06:54.184374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-552169000/tls.crt::/tmp/serving-cert-552169000/tls.key\\\\\\\"\\\\nI1121 20:06:54.597449 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 20:06:54.600431 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 20:06:54.600448 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 20:06:54.600471 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 20:06:54.600476 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 20:06:54.608444 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 20:06:54.608470 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 20:06:54.608475 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 20:06:54.608479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 20:06:54.608482 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 20:06:54.608484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 20:06:54.608487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 20:06:54.608489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 20:06:54.610022 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ceae36169c61dcc484be2881c9562b4024866925aab81778ab9e171413f119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fc11f996bd226ec44a8afe092a0939a489f4fc8904c1532586d9e36b69817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91fc11f996bd226ec44a8afe092a0939a489f4fc8904c1532586d9e36b69817\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:02Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.861824 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.861886 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.861903 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.861926 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.861943 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:02Z","lastTransitionTime":"2025-11-21T20:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.867786 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a97e4b6438d588dce44178e2e245d1fc4fa954f6cd02b230ca5c34e3d32294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c90d2d9f22924396549d0d1263f8cb07b41a77f23f7bb48aa9749caabff6147\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:02Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.881211 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3ca80-2251-484d-9025-61aed82e16d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://539bc94dd8a3fd65aeae82170fe889bc42c364cfcf52005760432629911dfa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bd3db39641919d1b93ecc771bce800100e7a8b6200d6fc606dbdc50179b8255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08a09f48855f360ffe892f48d4efa156a6774a0be33504e0d664f6a90f1144d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7c202de2c7a6ec653f0c7f816dcbf64fbf6a5c9a779d29403a29a1003ad201\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:02Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.896386 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:02Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.907555 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffd614574e076bedcf841406bd5accc4387a8f2dbc55cbfbb5a1064e69a861d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:02Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.917342 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b974a1dc77049e8c632b0acbe8dedf6d3f391811208d0d1084c75e4fdb0908dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:02Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.926937 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ccfbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bffa327-bdf2-45d4-93ab-40152e82d177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18e9f8fd8f35e2f7821de2d9f9040dbc5160d8f114d1a54d8e0a0dabab520d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-no
de-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ccfbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:02Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.939135 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-74crp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db8a4e70-c074-4f90-aebe-444078f3337f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d8ec54f8edeaa80fb39b26f0cd9c0d4b2456c149245b1c6c1585bf28b08ec13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a2b3626be8b73b407cdeae484730c70f4caa2e66e0a815cb8c047f667254092\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2b3626be8b73b407cdeae484730c70f4caa2e66e0a815cb8c047f667254092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4360080de9d788a9ac8bad5b4c679aef3367c8c4e06fd00a28ff4b23406cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d4360080de9d788a9ac8bad5b4c679aef3367c8c4e06fd00a28ff4b23406cc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68b04ba74eb7a875559096df2051e2d99535444795021756d6072246fdf21ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68b04ba74eb7a875559096df2051e2d99535444795021756d6072246fdf21ee1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4be
59a65e9ffdf24c7d6dcaf699e0c86095ab2e7156771e4cca86c3fb264ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab4be59a65e9ffdf24c7d6dcaf699e0c86095ab2e7156771e4cca86c3fb264ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465a44322e7e3ff27479dabcdd1a2ba9102e10c73b75d51d0639b4a8ebeda54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://465a44322e7e3ff27479dabcdd1a2ba9102e10c73b75d51d0639b4a8ebeda54e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:07:00Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea5b2f1f6cf3b490899d65086f99f448f2bdbfd859fa5b283af3fb108479902e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea5b2f1f6cf3b490899d65086f99f448f2bdbfd859fa5b283af3fb108479902e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-74crp\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:02Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.956427 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fae7d81bbd4cc7c05885d66b30b4a4f5422c74aecc5439700c1fe7eb3c28a4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://060f5b533d83fc7cf02d24bf7b12e2a4972ce1d7ed4e62c4c7788467ae2229c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f04b4d4857a8309f30f22774c8a2faeb752998b094d70898a986e04038771b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f02a4cae48dc4ac9d7da3d367fe5137fb012ee8a28f45c9cf4af098f1d93ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf216dac56f09909db88c9ea0ef9d0d4fb8bc2e63a02fd5631c7d4369080f88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1909b9178d7fb508200d34ad33f7d2c2aa2ae9756b15c17fd7c49bb019cc65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4af19875f0693a845b5f41d02ca0996608550af14b0a9b567da38e51e520c977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68979c9b5ab69852e6cc4293ef0e77454dcbb7fdaeb61b7b6af5c1a3ac101e47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5fa8be5e6ea2c20dd597fb7ef2fdd3355045b310d1a99bb1a89a8207e505577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5fa8be5e6ea2c20dd597fb7ef2fdd3355045b310d1a99bb1a89a8207e505577\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tfd4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:02Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.968090 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.968138 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.968148 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.968164 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.968174 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:02Z","lastTransitionTime":"2025-11-21T20:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.970425 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7444p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75a43503-e538-4964-9789-322839cc4c48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d61ca28ee5e9b8e1b56afa45b32036c825b6299dc855cb7ce38dde4649c2371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9w57k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7444p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:02Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:02 crc kubenswrapper[4727]: I1121 20:07:02.991124 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a1f6ee3-6ebf-4422-91ed-105ccb19af8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de2d5f0603d9b7d189293750a83d5a3957e98f0390659f8c88d35ae1cae6b18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cde1196a3cb3b4d0acbb70ac333211099ec6f26a975e2de7c07d059983024368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1324bc3b6958e18a7c496fdb513726b921435f35dde2064fcaf9a6fca06eb8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1
847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://395c324d3a95c173c7dd0c9206831fe07c220fb6ef12f7b25444a120c5183202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6021787d6acdc5734a82ef2ed53add13275c3461c1ffd38219c264e7fca3fab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://363e33e0f6686c6a97437b91ff8d5be211717d527dac5d1bfad6613849c1a8fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://363e33e0f6686c6a97437b91ff8d5be211717d527dac5d1bfad6613849c1a8fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d2632c704cf56d6b416cdcc6213e3b60836d12cfae28af4bd06f99dff56cd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16d2632c704cf56d6b416cdcc6213e3b60836d12cfae28af4bd06f99dff56cd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91dcb
1c5c16b4720f4c812b25a87ebd8a0f3a334cc714233b6852cd39322fbf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91dcb1c5c16b4720f4c812b25a87ebd8a0f3a334cc714233b6852cd39322fbf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:02Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:03 crc kubenswrapper[4727]: I1121 20:07:03.004698 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"080ce4fe-3e82-4e15-9340-4fa88a29da04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27c37b29dda7ae2cc145e0770879e433aa03e277121be3b7acfe22d41d27801f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cc2199f65adaa48cd1726ea812b332e4f40291e094ccfc215343bdb2f2bd2f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f3b9bf6cf2709d4559bf6ce2ddeed47f8ebacd7878509b0190ed24b2e55380e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd16de1da3c89b579d33c34b4dd24d8ae9df35e17819135165afc45db1297c61\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd16de1da3c89b579d33c34b4dd24d8ae9df35e17819135165afc45db1297c61\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"file observer\\\\nW1121 20:06:54.183363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 20:06:54.183552 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 20:06:54.184374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-552169000/tls.crt::/tmp/serving-cert-552169000/tls.key\\\\\\\"\\\\nI1121 20:06:54.597449 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 20:06:54.600431 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 20:06:54.600448 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 20:06:54.600471 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 20:06:54.600476 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 20:06:54.608444 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 20:06:54.608470 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 20:06:54.608475 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 20:06:54.608479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 20:06:54.608482 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 20:06:54.608484 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 20:06:54.608487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 20:06:54.608489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 20:06:54.610022 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ceae36169c61dcc484be2881c9562b4024866925aab81778ab9e171413f119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fc11f996bd226ec44a8afe092a0939a489f4fc8904c1532586d9e36b69817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91fc11f996bd226ec44a8afe092a0939a489f4fc8904c1532586d9e36b69817\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:03Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:03 crc kubenswrapper[4727]: I1121 20:07:03.018715 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a97e4b6438d588dce44178e2e245d1fc4fa954f6cd02b230ca5c34e3d32294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c90d2d9f22924396549d0d1263f8cb07b41a77f23f7bb48aa9749caabff6147\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:03Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:03 crc kubenswrapper[4727]: I1121 20:07:03.028686 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:03Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:03 crc kubenswrapper[4727]: I1121 20:07:03.040837 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:03Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:03 crc kubenswrapper[4727]: I1121 20:07:03.051866 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b58aef8f-f223-47d8-a2e6-4a80aeeeec42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db38e6270d9dcedf99c669fa60a075a25d7fd9bb753702551fe5f4610ecaf815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443c4ad2ecea2e9d360b9300cb0c384fb5afc27e
6ac44394f985adbeab354a9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5k2kk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:03Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:03 crc kubenswrapper[4727]: I1121 20:07:03.070940 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:03 crc kubenswrapper[4727]: I1121 20:07:03.071012 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:03 crc kubenswrapper[4727]: I1121 20:07:03.071027 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:03 crc 
kubenswrapper[4727]: I1121 20:07:03.071047 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:03 crc kubenswrapper[4727]: I1121 20:07:03.071062 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:03Z","lastTransitionTime":"2025-11-21T20:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:03 crc kubenswrapper[4727]: I1121 20:07:03.077831 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7rvdc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07dba644-eb6f-45c3-b373-7a1610c569aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fc26f04e78b8405547a4fa225524347eabce94dfa5b4bf2448db66e36aaf006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7rvdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:03Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:03 crc kubenswrapper[4727]: I1121 20:07:03.100126 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3ca80-2251-484d-9025-61aed82e16d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://539bc94dd8a3fd65aeae82170fe889bc42c364cfcf52005760432629911dfa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageI
D\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bd3db39641919d1b93ecc771bce800100e7a8b6200d6fc606dbdc50179b8255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08a09f48855f360ffe892f48d4efa156a6774a0be33504e0d664f6a90f1144d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025
-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7c202de2c7a6ec653f0c7f816dcbf64fbf6a5c9a779d29403a29a1003ad201\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:03Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:03 crc kubenswrapper[4727]: I1121 20:07:03.116824 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:03Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:03 crc kubenswrapper[4727]: I1121 20:07:03.173867 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:03 crc kubenswrapper[4727]: I1121 20:07:03.173910 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:03 crc kubenswrapper[4727]: I1121 20:07:03.173919 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:03 crc kubenswrapper[4727]: I1121 20:07:03.173934 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:03 crc kubenswrapper[4727]: I1121 20:07:03.173944 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:03Z","lastTransitionTime":"2025-11-21T20:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:03 crc kubenswrapper[4727]: I1121 20:07:03.275902 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:03 crc kubenswrapper[4727]: I1121 20:07:03.275943 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:03 crc kubenswrapper[4727]: I1121 20:07:03.275975 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:03 crc kubenswrapper[4727]: I1121 20:07:03.275995 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:03 crc kubenswrapper[4727]: I1121 20:07:03.276006 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:03Z","lastTransitionTime":"2025-11-21T20:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:03 crc kubenswrapper[4727]: I1121 20:07:03.377617 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:03 crc kubenswrapper[4727]: I1121 20:07:03.377657 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:03 crc kubenswrapper[4727]: I1121 20:07:03.377666 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:03 crc kubenswrapper[4727]: I1121 20:07:03.377681 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:03 crc kubenswrapper[4727]: I1121 20:07:03.377691 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:03Z","lastTransitionTime":"2025-11-21T20:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:03 crc kubenswrapper[4727]: I1121 20:07:03.480079 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:03 crc kubenswrapper[4727]: I1121 20:07:03.480152 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:03 crc kubenswrapper[4727]: I1121 20:07:03.480171 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:03 crc kubenswrapper[4727]: I1121 20:07:03.480203 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:03 crc kubenswrapper[4727]: I1121 20:07:03.480218 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:03Z","lastTransitionTime":"2025-11-21T20:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:03 crc kubenswrapper[4727]: I1121 20:07:03.498619 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 20:07:03 crc kubenswrapper[4727]: E1121 20:07:03.498760 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 20:07:03 crc kubenswrapper[4727]: I1121 20:07:03.583332 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:03 crc kubenswrapper[4727]: I1121 20:07:03.583368 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:03 crc kubenswrapper[4727]: I1121 20:07:03.583377 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:03 crc kubenswrapper[4727]: I1121 20:07:03.583391 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:03 crc kubenswrapper[4727]: I1121 20:07:03.583402 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:03Z","lastTransitionTime":"2025-11-21T20:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:03 crc kubenswrapper[4727]: I1121 20:07:03.685202 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:03 crc kubenswrapper[4727]: I1121 20:07:03.685269 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:03 crc kubenswrapper[4727]: I1121 20:07:03.685286 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:03 crc kubenswrapper[4727]: I1121 20:07:03.685312 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:03 crc kubenswrapper[4727]: I1121 20:07:03.685330 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:03Z","lastTransitionTime":"2025-11-21T20:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:03 crc kubenswrapper[4727]: I1121 20:07:03.788012 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:03 crc kubenswrapper[4727]: I1121 20:07:03.788083 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:03 crc kubenswrapper[4727]: I1121 20:07:03.788102 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:03 crc kubenswrapper[4727]: I1121 20:07:03.788127 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:03 crc kubenswrapper[4727]: I1121 20:07:03.788144 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:03Z","lastTransitionTime":"2025-11-21T20:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:03 crc kubenswrapper[4727]: I1121 20:07:03.890290 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:03 crc kubenswrapper[4727]: I1121 20:07:03.890343 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:03 crc kubenswrapper[4727]: I1121 20:07:03.890356 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:03 crc kubenswrapper[4727]: I1121 20:07:03.890374 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:03 crc kubenswrapper[4727]: I1121 20:07:03.890387 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:03Z","lastTransitionTime":"2025-11-21T20:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:03 crc kubenswrapper[4727]: I1121 20:07:03.993817 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:03 crc kubenswrapper[4727]: I1121 20:07:03.993860 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:03 crc kubenswrapper[4727]: I1121 20:07:03.993873 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:03 crc kubenswrapper[4727]: I1121 20:07:03.993897 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:03 crc kubenswrapper[4727]: I1121 20:07:03.993910 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:03Z","lastTransitionTime":"2025-11-21T20:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:04 crc kubenswrapper[4727]: I1121 20:07:04.096667 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:04 crc kubenswrapper[4727]: I1121 20:07:04.096707 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:04 crc kubenswrapper[4727]: I1121 20:07:04.096715 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:04 crc kubenswrapper[4727]: I1121 20:07:04.096729 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:04 crc kubenswrapper[4727]: I1121 20:07:04.096740 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:04Z","lastTransitionTime":"2025-11-21T20:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:04 crc kubenswrapper[4727]: I1121 20:07:04.198902 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:04 crc kubenswrapper[4727]: I1121 20:07:04.198936 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:04 crc kubenswrapper[4727]: I1121 20:07:04.198948 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:04 crc kubenswrapper[4727]: I1121 20:07:04.198980 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:04 crc kubenswrapper[4727]: I1121 20:07:04.198991 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:04Z","lastTransitionTime":"2025-11-21T20:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:04 crc kubenswrapper[4727]: I1121 20:07:04.301487 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:04 crc kubenswrapper[4727]: I1121 20:07:04.301529 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:04 crc kubenswrapper[4727]: I1121 20:07:04.301540 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:04 crc kubenswrapper[4727]: I1121 20:07:04.301557 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:04 crc kubenswrapper[4727]: I1121 20:07:04.301569 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:04Z","lastTransitionTime":"2025-11-21T20:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:04 crc kubenswrapper[4727]: I1121 20:07:04.403803 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:04 crc kubenswrapper[4727]: I1121 20:07:04.403836 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:04 crc kubenswrapper[4727]: I1121 20:07:04.403844 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:04 crc kubenswrapper[4727]: I1121 20:07:04.403858 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:04 crc kubenswrapper[4727]: I1121 20:07:04.403867 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:04Z","lastTransitionTime":"2025-11-21T20:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:04 crc kubenswrapper[4727]: I1121 20:07:04.498532 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 20:07:04 crc kubenswrapper[4727]: I1121 20:07:04.498553 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 20:07:04 crc kubenswrapper[4727]: E1121 20:07:04.498666 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 20:07:04 crc kubenswrapper[4727]: E1121 20:07:04.498741 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 20:07:04 crc kubenswrapper[4727]: I1121 20:07:04.505424 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:04 crc kubenswrapper[4727]: I1121 20:07:04.505605 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:04 crc kubenswrapper[4727]: I1121 20:07:04.505685 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:04 crc kubenswrapper[4727]: I1121 20:07:04.505755 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:04 crc kubenswrapper[4727]: I1121 20:07:04.506129 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:04Z","lastTransitionTime":"2025-11-21T20:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:04 crc kubenswrapper[4727]: I1121 20:07:04.607970 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:04 crc kubenswrapper[4727]: I1121 20:07:04.608037 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:04 crc kubenswrapper[4727]: I1121 20:07:04.608050 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:04 crc kubenswrapper[4727]: I1121 20:07:04.608069 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:04 crc kubenswrapper[4727]: I1121 20:07:04.608082 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:04Z","lastTransitionTime":"2025-11-21T20:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:04 crc kubenswrapper[4727]: I1121 20:07:04.681887 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tfd4j_70d2ca13-a8f7-43dc-8ad0-142d99ccde18/ovnkube-controller/0.log" Nov 21 20:07:04 crc kubenswrapper[4727]: I1121 20:07:04.684318 4727 generic.go:334] "Generic (PLEG): container finished" podID="70d2ca13-a8f7-43dc-8ad0-142d99ccde18" containerID="4af19875f0693a845b5f41d02ca0996608550af14b0a9b567da38e51e520c977" exitCode=1 Nov 21 20:07:04 crc kubenswrapper[4727]: I1121 20:07:04.684364 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" event={"ID":"70d2ca13-a8f7-43dc-8ad0-142d99ccde18","Type":"ContainerDied","Data":"4af19875f0693a845b5f41d02ca0996608550af14b0a9b567da38e51e520c977"} Nov 21 20:07:04 crc kubenswrapper[4727]: I1121 20:07:04.684934 4727 scope.go:117] "RemoveContainer" containerID="4af19875f0693a845b5f41d02ca0996608550af14b0a9b567da38e51e520c977" Nov 21 20:07:04 crc kubenswrapper[4727]: I1121 20:07:04.695835 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:04Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:04 crc kubenswrapper[4727]: I1121 20:07:04.712509 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:04 crc kubenswrapper[4727]: I1121 20:07:04.712545 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:04 crc kubenswrapper[4727]: I1121 20:07:04.712555 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:04 crc 
kubenswrapper[4727]: I1121 20:07:04.712570 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:04 crc kubenswrapper[4727]: I1121 20:07:04.712580 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:04Z","lastTransitionTime":"2025-11-21T20:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:04 crc kubenswrapper[4727]: I1121 20:07:04.712537 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3ca80-2251-484d-9025-61aed82e16d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://539bc94dd8a3fd65aeae82170fe889bc42c364cfcf52005760432629911dfa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bd3db39641919d1b93ecc771bce800100e7a8b6200d6fc606dbdc50179b8255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08a09f48855f360ffe892f48d4efa156a6774a0be33504e0d664f6a90f1144d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\
\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7c202de2c7a6ec653f0c7f816dcbf64fbf6a5c9a779d29403a29a1003ad201\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:04Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:04 crc kubenswrapper[4727]: I1121 20:07:04.724125 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b974a1dc77049e8c632b0acbe8dedf6d3f391811208d0d1084c75e4fdb0908dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-21T20:07:04Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:04 crc kubenswrapper[4727]: I1121 20:07:04.733509 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ccfbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bffa327-bdf2-45d4-93ab-40152e82d177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18e9f8fd8f35e2f7821de2d9f9040dbc5160d8f114d1a54d8e0a0dabab520d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-kk6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ccfbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:04Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:04 crc kubenswrapper[4727]: I1121 20:07:04.748045 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-74crp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db8a4e70-c074-4f90-aebe-444078f3337f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d8ec54f8edeaa80fb39b26f0cd9c0d4b2456c14
9245b1c6c1585bf28b08ec13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a2b3626be8b73b407cdeae484730c70f4caa2e66e0a815cb8c047f667254092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2b3626be8b73b407cdeae484730c70f4caa2e66e0a815cb8c047f667254092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4360080de9d788a9ac8bad5b4c679aef3367c8c4e06fd00a28ff4b23406cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d4360080de9d788a9ac8bad5b4c679aef3367c8c4e06fd00a28ff4b23406cc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68b04ba74eb7a875559096df2051e2d99535444795021756d6072246fdf21ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e49
6fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68b04ba74eb7a875559096df2051e2d99535444795021756d6072246fdf21ee1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4be59a65e9ffdf24c7d6dcaf699e0c86095ab2e7156771e4cca86c3fb264ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab4be59a65e9ffdf24c7d6dcaf699e0c86095ab2e7156771e4cca86c3fb264ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465a44322e7e3ff27479dabcdd1a2ba9102e10c73b75d51d0639b4a8ebeda54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://465a44322e7e3ff27479dabcdd1a2ba9102e10c73b75d51d0639b4a8ebeda54e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea5b2f1f6cf3b490899d65086f99f448f2bdbfd859fa5b283af3fb108479902e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea5b2f1f6cf3b490899d65086f99f448f2bdbfd859fa5b283af3fb108479902e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-74crp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:04Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:04 crc kubenswrapper[4727]: I1121 20:07:04.767312 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fae7d81bbd4cc7c05885d66b30b4a4f5422c74aecc5439700c1fe7eb3c28a4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://060f5b533d83fc7cf02d24bf7b12e2a4972ce1d7ed4e62c4c7788467ae2229c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f04b4d4857a8309f30f22774c8a2faeb752998b094d70898a986e04038771b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f02a4cae48dc4ac9d7da3d367fe5137fb012ee8a28f45c9cf4af098f1d93ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf216dac56f09909db88c9ea0ef9d0d4fb8bc2e63a02fd5631c7d4369080f88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1909b9178d7fb508200d34ad33f7d2c2aa2ae9756b15c17fd7c49bb019cc65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4af19875f0693a845b5f41d02ca0996608550af14b0a9b567da38e51e520c977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4af19875f0693a845b5f41d02ca0996608550af14b0a9b567da38e51e520c977\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T20:07:04Z\\\",\\\"message\\\":\\\"7:04.581860 6005 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1121 20:07:04.581882 6005 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1121 20:07:04.581880 6005 handler.go:208] Removed *v1.Node event 
handler 2\\\\nI1121 20:07:04.581900 6005 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1121 20:07:04.581907 6005 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1121 20:07:04.581935 6005 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1121 20:07:04.581947 6005 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1121 20:07:04.581952 6005 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1121 20:07:04.581980 6005 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1121 20:07:04.581992 6005 factory.go:656] Stopping watch factory\\\\nI1121 20:07:04.581995 6005 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1121 20:07:04.582004 6005 ovnkube.go:599] Stopped ovnkube\\\\nI1121 20:07:04.582005 6005 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1121 20:07:04.582016 6005 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1121 20:07:04.582023 6005 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1121 20:07:04.582025 6005 handler.go:208] Removed *v1.Namespace event handler 
5\\\\nI11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T20:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\
":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68979c9b5ab69852e6cc4293ef0e77454dcbb7fdaeb61b7b6af5c1a3ac101e47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5fa8be5e6ea2c20dd597fb7ef2fdd3355045b310d1a99bb1a89a8207e505577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5fa8be5e6ea2c20dd597fb7ef2fdd3355045b310d1a99bb1a89a8207e
505577\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tfd4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:04Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:04 crc kubenswrapper[4727]: I1121 20:07:04.776461 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7444p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75a43503-e538-4964-9789-322839cc4c48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d61ca28ee5e9b8e1b56afa45b32036c825b6299dc855cb7ce38dde4649c2371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9w57k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7444p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:04Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:04 crc kubenswrapper[4727]: I1121 20:07:04.789528 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffd614574e076bedcf841406bd5accc4387a8f2dbc55cbfbb5a1064e69a861d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:04Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:04 crc kubenswrapper[4727]: I1121 20:07:04.809587 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a1f6ee3-6ebf-4422-91ed-105ccb19af8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de2d5f0603d9b7d189293750a83d5a3957e98f0390659f8c88d35ae1cae6b18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cde1196a3cb3b4d0acbb70ac333211099ec6f26a975e2de7c07d059983024368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1324bc3b6958e18a7c496fdb513726b921435f35dde2064fcaf9a6fca06eb8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://395c324d3a95c173c7dd0c9206831fe07c220fb6ef12f7b25444a120c5183202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6021787d6acdc5734a82ef2ed53add13275c3461c1ffd38219c264e7fca3fab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://363e33e0f6686c6a97437b91ff8d5be211717d527dac5d1bfad6613849c1a8fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://363e33e0f6686c6a97437b91ff8d5be211717d527dac5d1bfad6613849c1a8fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d2632c704cf56d6b416cdcc6213e3b60836d12cfae28af4bd06f99dff56cd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16d2632c704cf56d6b416cdcc6213e3b60836d12cfae28af4bd06f99dff56cd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91dcb1c5c16b4720f4c812b25a87ebd8a0f3a334cc714233b6852cd39322fbf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91dcb1c5c16b4720f4c812b25a87ebd8a0f3a334cc714233b6852cd39322fbf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:04Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:04 crc kubenswrapper[4727]: I1121 20:07:04.814544 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:04 crc kubenswrapper[4727]: I1121 20:07:04.814566 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:04 crc kubenswrapper[4727]: I1121 20:07:04.814574 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:04 crc kubenswrapper[4727]: I1121 20:07:04.814587 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:04 crc kubenswrapper[4727]: I1121 20:07:04.814596 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:04Z","lastTransitionTime":"2025-11-21T20:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:04 crc kubenswrapper[4727]: I1121 20:07:04.831381 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"080ce4fe-3e82-4e15-9340-4fa88a29da04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27c37b29dda7ae2cc145e0770879e433aa03e277121be3b7acfe22d41d27801f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cc2199f65adaa48cd1726ea812b332e4f40291e094ccfc215343bdb2f2bd2f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://1f3b9bf6cf2709d4559bf6ce2ddeed47f8ebacd7878509b0190ed24b2e55380e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd16de1da3c89b579d33c34b4dd24d8ae9df35e17819135165afc45db1297c61\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd16de1da3c89b579d33c34b4dd24d8ae9df35e17819135165afc45db1297c61\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"file observer\\\\nW1121 20:06:54.183363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 20:06:54.183552 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 20:06:54.184374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-552169000/tls.crt::/tmp/serving-cert-552169000/tls.key\\\\\\\"\\\\nI1121 20:06:54.597449 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 20:06:54.600431 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 20:06:54.600448 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 20:06:54.600471 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 20:06:54.600476 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 20:06:54.608444 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 20:06:54.608470 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 20:06:54.608475 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 20:06:54.608479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 20:06:54.608482 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 20:06:54.608484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 20:06:54.608487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 20:06:54.608489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 20:06:54.610022 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ceae36169c61dcc484be2881c9562b4024866925aab81778ab9e171413f119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fc11f996bd226ec44a8afe092a0939a489f4fc8904c1532586d9e36b69817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91fc11f996bd226ec44a8afe092a0939a489f4fc8904c1532586d9e36b69817\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:04Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:04 crc kubenswrapper[4727]: I1121 20:07:04.842775 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a97e4b6438d588dce44178e2e245d1fc4fa954f6cd02b230ca5c34e3d32294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c90d2d9f22924396549d0d1263f8cb07b41a77f23f7bb48aa9749caabff6147\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:04Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:04 crc kubenswrapper[4727]: I1121 20:07:04.854524 4727 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:04Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:04 crc kubenswrapper[4727]: I1121 20:07:04.866554 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:04Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:04 crc kubenswrapper[4727]: I1121 20:07:04.879262 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b58aef8f-f223-47d8-a2e6-4a80aeeeec42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db38e6270d9dcedf99c669fa60a075a25d7fd9bb753702551fe5f4610ecaf815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443c4ad2ecea2e9d360b9300cb0c384fb5afc27e
6ac44394f985adbeab354a9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5k2kk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:04Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:04 crc kubenswrapper[4727]: I1121 20:07:04.892344 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7rvdc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07dba644-eb6f-45c3-b373-7a1610c569aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fc26f04e78b8405547a4fa225524347eabce94dfa5b4bf2448db66e36aaf006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7rvdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:04Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:04 crc kubenswrapper[4727]: I1121 20:07:04.917201 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:04 crc 
kubenswrapper[4727]: I1121 20:07:04.917243 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:04 crc kubenswrapper[4727]: I1121 20:07:04.917254 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:04 crc kubenswrapper[4727]: I1121 20:07:04.917269 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:04 crc kubenswrapper[4727]: I1121 20:07:04.917282 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:04Z","lastTransitionTime":"2025-11-21T20:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.019319 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.019347 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.019356 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.019374 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.019382 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:05Z","lastTransitionTime":"2025-11-21T20:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.120759 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.120805 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.120817 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.120835 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.120849 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:05Z","lastTransitionTime":"2025-11-21T20:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.222761 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.222815 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.222828 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.222852 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.222868 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:05Z","lastTransitionTime":"2025-11-21T20:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.325557 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.325588 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.325597 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.325609 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.325618 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:05Z","lastTransitionTime":"2025-11-21T20:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.427937 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.427993 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.428003 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.428019 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.428029 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:05Z","lastTransitionTime":"2025-11-21T20:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.498597 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 20:07:05 crc kubenswrapper[4727]: E1121 20:07:05.498713 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.514585 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-74crp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db8a4e70-c074-4f90-aebe-444078f3337f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d8ec54f8edeaa80fb39b26f0cd9c0d4b2456c149245b1c6c1585bf28b08ec13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a2b3626be8b73b407cdeae484730c70f4caa2e66e0a815cb8c047f667254092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2b3626be8b73b407cdeae484730c70f4caa2e66e0a815cb8c047f667254092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4360080de9d788a9ac8bad5b4c679aef3367c8c4e06fd00a28ff4b23406cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d4360080de9d788a9ac8bad5b4c679aef3367c8c4e06fd00a28ff4b23406cc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68b04ba74eb7a875559096df2051e2d99535444795021756d6072246fdf21ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68b04ba74eb7a875559096df2051e2d99535444795021756d6072246fdf21ee1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-
release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4be59a65e9ffdf24c7d6dcaf699e0c86095ab2e7156771e4cca86c3fb264ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab4be59a65e9ffdf24c7d6dcaf699e0c86095ab2e7156771e4cca86c3fb264ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465a44322e7e3ff27479dabcdd1a2ba9102e10c73b75d51d0639b4a8ebeda54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://465a44322e7e3ff27479dabcdd1a2ba9102e10c73b75d51d0639b4a8ebeda54e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea5b2f1f6cf3b490899d65086f99f448f2bdbfd859fa5b283af3fb108479902e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea5b2f1f6cf3b490899d65086f99f448f2bdbfd859fa5b283af3fb108479902e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-74crp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:05Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.530704 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.530744 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.530754 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.530768 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.530777 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:05Z","lastTransitionTime":"2025-11-21T20:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.532907 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fae7d81bbd4cc7c05885d66b30b4a4f5422c74aecc5439700c1fe7eb3c28a4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://060f5b533d83fc7cf02d24bf7b12e2a4972ce1d7ed4e62c4c7788467ae2229c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f04b4d4857a8309f30f22774c8a2faeb752998b094d70898a986e04038771b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f02a4cae48dc4ac9d7da3d367fe5137fb012ee8a28f45c9cf4af098f1d93ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf216dac56f09909db88c9ea0ef9d0d4fb8bc2e63a02fd5631c7d4369080f88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1909b9178d7fb508200d34ad33f7d2c2aa2ae9756b15c17fd7c49bb019cc65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4af19875f0693a845b5f41d02ca0996608550af14b0a9b567da38e51e520c977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4af19875f0693a845b5f41d02ca0996608550af14b0a9b567da38e51e520c977\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T20:07:04Z\\\",\\\"message\\\":\\\"7:04.581860 6005 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1121 20:07:04.581882 6005 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1121 20:07:04.581880 6005 handler.go:208] Removed *v1.Node event 
handler 2\\\\nI1121 20:07:04.581900 6005 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1121 20:07:04.581907 6005 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1121 20:07:04.581935 6005 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1121 20:07:04.581947 6005 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1121 20:07:04.581952 6005 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1121 20:07:04.581980 6005 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1121 20:07:04.581992 6005 factory.go:656] Stopping watch factory\\\\nI1121 20:07:04.581995 6005 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1121 20:07:04.582004 6005 ovnkube.go:599] Stopped ovnkube\\\\nI1121 20:07:04.582005 6005 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1121 20:07:04.582016 6005 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1121 20:07:04.582023 6005 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1121 20:07:04.582025 6005 handler.go:208] Removed *v1.Namespace event handler 
5\\\\nI11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T20:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\
":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68979c9b5ab69852e6cc4293ef0e77454dcbb7fdaeb61b7b6af5c1a3ac101e47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5fa8be5e6ea2c20dd597fb7ef2fdd3355045b310d1a99bb1a89a8207e505577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5fa8be5e6ea2c20dd597fb7ef2fdd3355045b310d1a99bb1a89a8207e
505577\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tfd4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:05Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.542892 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7444p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75a43503-e538-4964-9789-322839cc4c48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d61ca28ee5e9b8e1b56afa45b32036c825b6299dc855cb7ce38dde4649c2371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9w57k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7444p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:05Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.555320 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffd614574e076bedcf841406bd5accc4387a8f2dbc55cbfbb5a1064e69a861d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:05Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.567174 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b974a1dc77049e8c632b0acbe8dedf6d3f391811208d0d1084c75e4fdb0908dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-21T20:07:05Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.580929 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ccfbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bffa327-bdf2-45d4-93ab-40152e82d177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18e9f8fd8f35e2f7821de2d9f9040dbc5160d8f114d1a54d8e0a0dabab520d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-kk6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ccfbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:05Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.591131 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a97e4b6438d588dce44178e2e245d1fc4fa954f6cd02b230ca5c34e3d32294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732
57453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c90d2d9f22924396549d0d1263f8cb07b41a77f23f7bb48aa9749caabff6147\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-21T20:07:05Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.602133 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:05Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.613203 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:05Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.625491 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b58aef8f-f223-47d8-a2e6-4a80aeeeec42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db38e6270d9dcedf99c669fa60a075a25d7fd9bb753702551fe5f4610ecaf815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443c4ad2ecea2e9d360b9300cb0c384fb5afc27e
6ac44394f985adbeab354a9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5k2kk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:05Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.632541 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.632580 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.632591 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:05 crc 
kubenswrapper[4727]: I1121 20:07:05.632605 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.632616 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:05Z","lastTransitionTime":"2025-11-21T20:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.638905 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7rvdc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07dba644-eb6f-45c3-b373-7a1610c569aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fc26f04e78b8405547a4fa225524347eabce94dfa5b4bf2448db66e36aaf006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7rvdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:05Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.657127 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a1f6ee3-6ebf-4422-91ed-105ccb19af8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de2d5f0603d9b7d189293750a83d5a3957e98f0390659f8c88d35ae1cae6b18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cde1196a3cb3b4d0acbb70ac333211099ec6f26a975e2de7c07d059983024368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1324bc3b6958e18a7c496fdb513726b921435f35dde2064fcaf9a6fca06eb8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://395c324d3a95c173c7dd0c9206831fe07c220fb6ef12f7b25444a120c5183202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6021787d6acdc5734a82ef2ed53add13275c3461c1ffd38219c264e7fca3fab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\
\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://363e33e0f6686c6a97437b91ff8d5be211717d527dac5d1bfad6613849c1a8fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://363e33e0f6686c6a97437b91ff8d5be211717d527dac5d1bfad6613849c1a8fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d2632c704cf56d6b416cdcc6213e3b60836d12cfae28af4bd06f99dff56cd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16d2632c704cf56d6b416cdcc6213e3b60836d12cfae28af4bd06f99dff56cd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91dcb1c5c16b4720f4c812b25a87ebd8a0f3a334cc714233b6852cd39322fbf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91dcb1c5c16b4720f4c812b25a87ebd8a0f3a334cc714233b6852cd39322fbf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:05Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.672520 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"080ce4fe-3e82-4e15-9340-4fa88a29da04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27c37b29dda7ae2cc145e0770879e433aa03e277121be3b7acfe22d41d27801f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cc2199f65adaa48cd1726ea812b332e4f40291e094ccfc215343bdb2f2bd2f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f3b9bf6cf2709d4559bf6ce2ddeed47f8ebacd7878509b0190ed24b2e55380e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd16de1da3c89b579d33c34b4dd24d8ae9df35e17819135165afc45db1297c61\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd16de1da3c89b579d33c34b4dd24d8ae9df35e17819135165afc45db1297c61\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"file observer\\\\nW1121 20:06:54.183363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 20:06:54.183552 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 20:06:54.184374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-552169000/tls.crt::/tmp/serving-cert-552169000/tls.key\\\\\\\"\\\\nI1121 20:06:54.597449 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 20:06:54.600431 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 20:06:54.600448 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 20:06:54.600471 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 20:06:54.600476 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 20:06:54.608444 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 20:06:54.608470 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 20:06:54.608475 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 20:06:54.608479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 20:06:54.608482 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 20:06:54.608484 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 20:06:54.608487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 20:06:54.608489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 20:06:54.610022 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ceae36169c61dcc484be2881c9562b4024866925aab81778ab9e171413f119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fc11f996bd226ec44a8afe092a0939a489f4fc8904c1532586d9e36b69817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91fc11f996bd226ec44a8afe092a0939a489f4fc8904c1532586d9e36b69817\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:05Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.685013 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3ca80-2251-484d-9025-61aed82e16d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://539bc94dd8a3fd65aeae82170fe889bc42c364cfcf52005760432629911dfa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bd3db39641919d1b93ecc771bce800100e7a8b6200d6fc606dbdc50179b8255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08a09f48855f360ffe892f48d4efa156a6774a0be33504e0d664f6a90f1144d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7c202de2c7a6ec653f0c7f816dcbf64fbf6a5c9a779d29403a29a1003ad201\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:05Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.688279 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tfd4j_70d2ca13-a8f7-43dc-8ad0-142d99ccde18/ovnkube-controller/1.log" Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.688810 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tfd4j_70d2ca13-a8f7-43dc-8ad0-142d99ccde18/ovnkube-controller/0.log" Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.691129 4727 generic.go:334] "Generic (PLEG): container finished" podID="70d2ca13-a8f7-43dc-8ad0-142d99ccde18" containerID="0f79346c51ffca5d379c209b668becf14adfd042f222d7ec2c0cd80ac2755720" exitCode=1 Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.691168 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" event={"ID":"70d2ca13-a8f7-43dc-8ad0-142d99ccde18","Type":"ContainerDied","Data":"0f79346c51ffca5d379c209b668becf14adfd042f222d7ec2c0cd80ac2755720"} Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 
20:07:05.691205 4727 scope.go:117] "RemoveContainer" containerID="4af19875f0693a845b5f41d02ca0996608550af14b0a9b567da38e51e520c977" Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.691950 4727 scope.go:117] "RemoveContainer" containerID="0f79346c51ffca5d379c209b668becf14adfd042f222d7ec2c0cd80ac2755720" Nov 21 20:07:05 crc kubenswrapper[4727]: E1121 20:07:05.692167 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-tfd4j_openshift-ovn-kubernetes(70d2ca13-a8f7-43dc-8ad0-142d99ccde18)\"" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" podUID="70d2ca13-a8f7-43dc-8ad0-142d99ccde18" Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.702271 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:05Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.714719 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3ca80-2251-484d-9025-61aed82e16d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://539bc94dd8a3fd65aeae82170fe889bc42c364cfcf52005760432629911dfa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bd3db39641919d1b93ecc771bce800100e7a8b6200d6fc606dbdc50179b8255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08a09f48855f360ffe892f48d4efa156a6774a0be33504e0d664f6a90f1144d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7c202de2c7a6ec653f0c7f816dcbf64fbf6a5c9a779d29403a29a1003ad201\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:05Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.728437 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:05Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.734556 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.734692 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.734751 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:05 crc 
kubenswrapper[4727]: I1121 20:07:05.734807 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.734864 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:05Z","lastTransitionTime":"2025-11-21T20:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.741222 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffd614574e076bedcf841406bd5accc4387a8f2dbc55cbfbb5a1064e69a861d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:05Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.752765 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b974a1dc77049e8c632b0acbe8dedf6d3f391811208d0d1084c75e4fdb0908dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-21T20:07:05Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.762502 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ccfbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bffa327-bdf2-45d4-93ab-40152e82d177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18e9f8fd8f35e2f7821de2d9f9040dbc5160d8f114d1a54d8e0a0dabab520d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-kk6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ccfbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:05Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.777757 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-74crp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db8a4e70-c074-4f90-aebe-444078f3337f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d8ec54f8edeaa80fb39b26f0cd9c0d4b2456c14
9245b1c6c1585bf28b08ec13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a2b3626be8b73b407cdeae484730c70f4caa2e66e0a815cb8c047f667254092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2b3626be8b73b407cdeae484730c70f4caa2e66e0a815cb8c047f667254092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4360080de9d788a9ac8bad5b4c679aef3367c8c4e06fd00a28ff4b23406cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d4360080de9d788a9ac8bad5b4c679aef3367c8c4e06fd00a28ff4b23406cc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68b04ba74eb7a875559096df2051e2d99535444795021756d6072246fdf21ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e49
6fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68b04ba74eb7a875559096df2051e2d99535444795021756d6072246fdf21ee1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4be59a65e9ffdf24c7d6dcaf699e0c86095ab2e7156771e4cca86c3fb264ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab4be59a65e9ffdf24c7d6dcaf699e0c86095ab2e7156771e4cca86c3fb264ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465a44322e7e3ff27479dabcdd1a2ba9102e10c73b75d51d0639b4a8ebeda54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://465a44322e7e3ff27479dabcdd1a2ba9102e10c73b75d51d0639b4a8ebeda54e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea5b2f1f6cf3b490899d65086f99f448f2bdbfd859fa5b283af3fb108479902e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea5b2f1f6cf3b490899d65086f99f448f2bdbfd859fa5b283af3fb108479902e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-74crp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:05Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.794630 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fae7d81bbd4cc7c05885d66b30b4a4f5422c74aecc5439700c1fe7eb3c28a4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://060f5b533d83fc7cf02d24bf7b12e2a4972ce1d7ed4e62c4c7788467ae2229c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f04b4d4857a8309f30f22774c8a2faeb752998b094d70898a986e04038771b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f02a4cae48dc4ac9d7da3d367fe5137fb012ee8a28f45c9cf4af098f1d93ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf216dac56f09909db88c9ea0ef9d0d4fb8bc2e63a02fd5631c7d4369080f88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1909b9178d7fb508200d34ad33f7d2c2aa2ae9756b15c17fd7c49bb019cc65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f79346c51ffca5d379c209b668becf14adfd042f222d7ec2c0cd80ac2755720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4af19875f0693a845b5f41d02ca0996608550af14b0a9b567da38e51e520c977\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T20:07:04Z\\\",\\\"message\\\":\\\"7:04.581860 6005 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1121 20:07:04.581882 6005 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1121 20:07:04.581880 6005 handler.go:208] Removed *v1.Node event handler 2\\\\nI1121 20:07:04.581900 6005 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1121 
20:07:04.581907 6005 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1121 20:07:04.581935 6005 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1121 20:07:04.581947 6005 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1121 20:07:04.581952 6005 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1121 20:07:04.581980 6005 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1121 20:07:04.581992 6005 factory.go:656] Stopping watch factory\\\\nI1121 20:07:04.581995 6005 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1121 20:07:04.582004 6005 ovnkube.go:599] Stopped ovnkube\\\\nI1121 20:07:04.582005 6005 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1121 20:07:04.582016 6005 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1121 20:07:04.582023 6005 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1121 20:07:04.582025 6005 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T20:07:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f79346c51ffca5d379c209b668becf14adfd042f222d7ec2c0cd80ac2755720\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T20:07:05Z\\\",\\\"message\\\":\\\")\\\\nI1121 20:07:05.650928 6127 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI1121 20:07:05.650932 6127 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1121 20:07:05.650760 6127 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-operator/metrics]} 
name:Service_openshift-ingress-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.244:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d8772e82-b0a4-4596-87d3-3d517c13344b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1121 20:07:05.650938 6127 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nF1121 20:07:05.650939 6127 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initializa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T20:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\
"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68979c9b5ab69852e6cc4293ef0e77454dcbb7fdaeb61b7b6af5c1a3ac101e47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5fa8be5e6ea2c20dd597fb7ef2fdd3355045b310d1a99bb1a89a8207e505577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5fa8be5e6ea2c20dd597fb7ef2fdd3355045b310d1a99bb1a89a8207e505577\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tfd4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:05Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.804918 4727 
status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7444p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75a43503-e538-4964-9789-322839cc4c48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d61ca28ee5e9b8e1b56afa45b32036c825b6299dc855cb7ce38dde4649c2371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9w57k\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7444p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:05Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.829387 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a1f6ee3-6ebf-4422-91ed-105ccb19af8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de2d5f0603d9b7d189293750a83d5a3957e98f0390659f8c88d35ae1cae6b18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cde1196a3cb3b4d0acbb70ac333211099ec6f26a975e2de7c07d059983024368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1324bc3b6958e18a7c496fdb513726b921435f35dde2064fcaf9a6fca06eb8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://395c324d3a95c173c7dd0c9206831fe07c220fb6ef12f7b25444a120c5183202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6021787d6acdc5734a82ef2ed53add13275c3461c1ffd38219c264e7fca3fab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":
\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://363e33e0f6686c6a97437b91ff8d5be211717d527dac5d1bfad6613849c1a8fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://363e33e0f6686c6a97437b91ff8d5be211717d527dac5d1bfad6613849c1a8fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d2632c704cf56d6b416cdcc6213e3b60836d12cfae28af4bd06f99dff56cd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16d2632c704cf56d6b416cdcc6213e3b60836d12cfae28af4bd06f99dff56cd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91dcb1c5c16b4720f4c812b25a87ebd8a0f3a334cc714233b6852cd39322fbf0\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91dcb1c5c16b4720f4c812b25a87ebd8a0f3a334cc714233b6852cd39322fbf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:05Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.836593 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.836629 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.836641 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 
20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.836655 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.836666 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:05Z","lastTransitionTime":"2025-11-21T20:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.847541 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"080ce4fe-3e82-4e15-9340-4fa88a29da04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27c37b29dda7ae2cc145e0770879e433aa03e277121be3b7acfe22d41d27801f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cc2199f65adaa48cd1726ea812b332e4f40291e094ccfc215343bdb2f2bd2f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://1f3b9bf6cf2709d4559bf6ce2ddeed47f8ebacd7878509b0190ed24b2e55380e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd16de1da3c89b579d33c34b4dd24d8ae9df35e17819135165afc45db1297c61\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd16de1da3c89b579d33c34b4dd24d8ae9df35e17819135165afc45db1297c61\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"file observer\\\\nW1121 20:06:54.183363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 20:06:54.183552 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 20:06:54.184374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-552169000/tls.crt::/tmp/serving-cert-552169000/tls.key\\\\\\\"\\\\nI1121 20:06:54.597449 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 20:06:54.600431 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 20:06:54.600448 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 20:06:54.600471 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 20:06:54.600476 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 20:06:54.608444 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 20:06:54.608470 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 20:06:54.608475 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 20:06:54.608479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 20:06:54.608482 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 20:06:54.608484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 20:06:54.608487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 20:06:54.608489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 20:06:54.610022 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ceae36169c61dcc484be2881c9562b4024866925aab81778ab9e171413f119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fc11f996bd226ec44a8afe092a0939a489f4fc8904c1532586d9e36b69817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91fc11f996bd226ec44a8afe092a0939a489f4fc8904c1532586d9e36b69817\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:05Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.858221 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a97e4b6438d588dce44178e2e245d1fc4fa954f6cd02b230ca5c34e3d32294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c90d2d9f22924396549d0d1263f8cb07b41a77f23f7bb48aa9749caabff6147\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:05Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.868812 4727 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:05Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.879512 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:05Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.889600 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b58aef8f-f223-47d8-a2e6-4a80aeeeec42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db38e6270d9dcedf99c669fa60a075a25d7fd9bb753702551fe5f4610ecaf815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443c4ad2ecea2e9d360b9300cb0c384fb5afc27e
6ac44394f985adbeab354a9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5k2kk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:05Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.899814 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7rvdc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07dba644-eb6f-45c3-b373-7a1610c569aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fc26f04e78b8405547a4fa225524347eabce94dfa5b4bf2448db66e36aaf006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7rvdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:05Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.938628 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:05 crc 
kubenswrapper[4727]: I1121 20:07:05.938666 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.938677 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.938693 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:05 crc kubenswrapper[4727]: I1121 20:07:05.938705 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:05Z","lastTransitionTime":"2025-11-21T20:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:06 crc kubenswrapper[4727]: I1121 20:07:06.042063 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:06 crc kubenswrapper[4727]: I1121 20:07:06.042119 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:06 crc kubenswrapper[4727]: I1121 20:07:06.042137 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:06 crc kubenswrapper[4727]: I1121 20:07:06.042159 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:06 crc kubenswrapper[4727]: I1121 20:07:06.042176 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:06Z","lastTransitionTime":"2025-11-21T20:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:06 crc kubenswrapper[4727]: I1121 20:07:06.145291 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:06 crc kubenswrapper[4727]: I1121 20:07:06.145357 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:06 crc kubenswrapper[4727]: I1121 20:07:06.145379 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:06 crc kubenswrapper[4727]: I1121 20:07:06.145409 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:06 crc kubenswrapper[4727]: I1121 20:07:06.145431 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:06Z","lastTransitionTime":"2025-11-21T20:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:06 crc kubenswrapper[4727]: I1121 20:07:06.249033 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:06 crc kubenswrapper[4727]: I1121 20:07:06.249102 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:06 crc kubenswrapper[4727]: I1121 20:07:06.249128 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:06 crc kubenswrapper[4727]: I1121 20:07:06.249159 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:06 crc kubenswrapper[4727]: I1121 20:07:06.249178 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:06Z","lastTransitionTime":"2025-11-21T20:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:06 crc kubenswrapper[4727]: I1121 20:07:06.351690 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:06 crc kubenswrapper[4727]: I1121 20:07:06.351741 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:06 crc kubenswrapper[4727]: I1121 20:07:06.351752 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:06 crc kubenswrapper[4727]: I1121 20:07:06.351772 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:06 crc kubenswrapper[4727]: I1121 20:07:06.351786 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:06Z","lastTransitionTime":"2025-11-21T20:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:06 crc kubenswrapper[4727]: I1121 20:07:06.454857 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:06 crc kubenswrapper[4727]: I1121 20:07:06.454910 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:06 crc kubenswrapper[4727]: I1121 20:07:06.454924 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:06 crc kubenswrapper[4727]: I1121 20:07:06.454943 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:06 crc kubenswrapper[4727]: I1121 20:07:06.454976 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:06Z","lastTransitionTime":"2025-11-21T20:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:06 crc kubenswrapper[4727]: I1121 20:07:06.498665 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 20:07:06 crc kubenswrapper[4727]: I1121 20:07:06.498747 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 20:07:06 crc kubenswrapper[4727]: E1121 20:07:06.498788 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 20:07:06 crc kubenswrapper[4727]: E1121 20:07:06.498986 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 20:07:06 crc kubenswrapper[4727]: I1121 20:07:06.560948 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:06 crc kubenswrapper[4727]: I1121 20:07:06.561028 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:06 crc kubenswrapper[4727]: I1121 20:07:06.561047 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:06 crc kubenswrapper[4727]: I1121 20:07:06.561070 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:06 crc kubenswrapper[4727]: I1121 20:07:06.561089 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:06Z","lastTransitionTime":"2025-11-21T20:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:06 crc kubenswrapper[4727]: I1121 20:07:06.663126 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:06 crc kubenswrapper[4727]: I1121 20:07:06.663162 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:06 crc kubenswrapper[4727]: I1121 20:07:06.663172 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:06 crc kubenswrapper[4727]: I1121 20:07:06.663184 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:06 crc kubenswrapper[4727]: I1121 20:07:06.663192 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:06Z","lastTransitionTime":"2025-11-21T20:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:06 crc kubenswrapper[4727]: I1121 20:07:06.696686 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tfd4j_70d2ca13-a8f7-43dc-8ad0-142d99ccde18/ovnkube-controller/1.log" Nov 21 20:07:06 crc kubenswrapper[4727]: I1121 20:07:06.703400 4727 scope.go:117] "RemoveContainer" containerID="0f79346c51ffca5d379c209b668becf14adfd042f222d7ec2c0cd80ac2755720" Nov 21 20:07:06 crc kubenswrapper[4727]: E1121 20:07:06.703726 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-tfd4j_openshift-ovn-kubernetes(70d2ca13-a8f7-43dc-8ad0-142d99ccde18)\"" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" podUID="70d2ca13-a8f7-43dc-8ad0-142d99ccde18" Nov 21 20:07:06 crc kubenswrapper[4727]: I1121 20:07:06.729937 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3ca80-2251-484d-9025-61aed82e16d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://539bc94dd8a3fd65aeae82170fe889bc42c364cfcf52005760432629911dfa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bd3db39641919d1b93ecc771bce800100e7a8b6200d6fc606dbdc50179b8255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08a09f48855f360ffe892f48d4efa156a6774a0be33504e0d664f6a90f1144d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7c202de2c7a6ec653f0c7f816dcbf64fbf6a5c9a779d29403a29a1003ad201\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:06Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:06 crc kubenswrapper[4727]: I1121 20:07:06.747381 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:06Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:06 crc kubenswrapper[4727]: I1121 20:07:06.761226 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-74crp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db8a4e70-c074-4f90-aebe-444078f3337f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d8ec54f8edeaa80fb39b26f0cd9c0d4b2456c149245b1c6c1585bf28b08ec13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a2b3626be8b73b407cdeae484730c70f4caa2e66e0a815cb8c047f667254092\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2b3626be8b73b407cdeae484730c70f4caa2e66e0a815cb8c047f667254092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4360080de9d788a9ac8bad5b4c679aef3367c8c4e06fd00a28ff4b23406cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d4360080de9d788a9ac8bad5b4c679aef3367c8c4e06fd00a28ff4b23406cc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68b04ba74eb7a875559096df2051e2d99535444795021756d6072246fdf21ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68b04ba74eb7a875559096df2051e2d99535444795021756d6072246fdf21ee1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4be
59a65e9ffdf24c7d6dcaf699e0c86095ab2e7156771e4cca86c3fb264ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab4be59a65e9ffdf24c7d6dcaf699e0c86095ab2e7156771e4cca86c3fb264ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465a44322e7e3ff27479dabcdd1a2ba9102e10c73b75d51d0639b4a8ebeda54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://465a44322e7e3ff27479dabcdd1a2ba9102e10c73b75d51d0639b4a8ebeda54e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:07:00Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea5b2f1f6cf3b490899d65086f99f448f2bdbfd859fa5b283af3fb108479902e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea5b2f1f6cf3b490899d65086f99f448f2bdbfd859fa5b283af3fb108479902e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-74crp\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:06Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:06 crc kubenswrapper[4727]: I1121 20:07:06.765936 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:06 crc kubenswrapper[4727]: I1121 20:07:06.765987 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:06 crc kubenswrapper[4727]: I1121 20:07:06.765996 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:06 crc kubenswrapper[4727]: I1121 20:07:06.766014 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:06 crc kubenswrapper[4727]: I1121 20:07:06.766025 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:06Z","lastTransitionTime":"2025-11-21T20:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:06 crc kubenswrapper[4727]: I1121 20:07:06.778785 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fae7d81bbd4cc7c05885d66b30b4a4f5422c74aecc5439700c1fe7eb3c28a4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://060f5b533d83fc7cf02d24bf7b12e2a4972ce1d7ed4e62c4c7788467ae2229c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f04b4d4857a8309f30f22774c8a2faeb752998b094d70898a986e04038771b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f02a4cae48dc4ac9d7da3d367fe5137fb012ee8a28f45c9cf4af098f1d93ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf216dac56f09909db88c9ea0ef9d0d4fb8bc2e63a02fd5631c7d4369080f88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1909b9178d7fb508200d34ad33f7d2c2aa2ae9756b15c17fd7c49bb019cc65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f79346c51ffca5d379c209b668becf14adfd042f222d7ec2c0cd80ac2755720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f79346c51ffca5d379c209b668becf14adfd042f222d7ec2c0cd80ac2755720\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T20:07:05Z\\\",\\\"message\\\":\\\")\\\\nI1121 20:07:05.650928 6127 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI1121 20:07:05.650932 6127 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1121 20:07:05.650760 6127 transact.go:42] Configuring 
OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-operator/metrics]} name:Service_openshift-ingress-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.244:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d8772e82-b0a4-4596-87d3-3d517c13344b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1121 20:07:05.650938 6127 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nF1121 20:07:05.650939 6127 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initializa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T20:07:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tfd4j_openshift-ovn-kubernetes(70d2ca13-a8f7-43dc-8ad0-142d99ccde18)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68979c9b5ab69852e6cc4293ef0e77454dcbb7fdaeb61b7b6af5c1a3ac101e47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5fa8be5e6ea2c20dd597fb7ef2fdd3355045b310d1a99bb1a89a8207e505577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5fa8be5e6ea2c20dd
597fb7ef2fdd3355045b310d1a99bb1a89a8207e505577\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tfd4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:06Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:06 crc kubenswrapper[4727]: I1121 20:07:06.789146 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7444p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75a43503-e538-4964-9789-322839cc4c48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d61ca28ee5e9b8e1b56afa45b32036c825b6299dc855cb7ce38dde4649c2371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9w57k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7444p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:06Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:06 crc kubenswrapper[4727]: I1121 20:07:06.807283 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffd614574e076bedcf841406bd5accc4387a8f2dbc55cbfbb5a1064e69a861d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:06Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:06 crc kubenswrapper[4727]: I1121 20:07:06.822950 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b974a1dc77049e8c632b0acbe8dedf6d3f391811208d0d1084c75e4fdb0908dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-21T20:07:06Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:06 crc kubenswrapper[4727]: I1121 20:07:06.833908 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ccfbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bffa327-bdf2-45d4-93ab-40152e82d177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18e9f8fd8f35e2f7821de2d9f9040dbc5160d8f114d1a54d8e0a0dabab520d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-kk6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ccfbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:06Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:06 crc kubenswrapper[4727]: I1121 20:07:06.844137 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a97e4b6438d588dce44178e2e245d1fc4fa954f6cd02b230ca5c34e3d32294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732
57453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c90d2d9f22924396549d0d1263f8cb07b41a77f23f7bb48aa9749caabff6147\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-21T20:07:06Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:06 crc kubenswrapper[4727]: I1121 20:07:06.854432 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:06Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:06 crc kubenswrapper[4727]: I1121 20:07:06.867095 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:06Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:06 crc kubenswrapper[4727]: I1121 20:07:06.869160 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:06 crc kubenswrapper[4727]: I1121 20:07:06.869197 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:06 crc 
kubenswrapper[4727]: I1121 20:07:06.869207 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:06 crc kubenswrapper[4727]: I1121 20:07:06.869224 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:06 crc kubenswrapper[4727]: I1121 20:07:06.869235 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:06Z","lastTransitionTime":"2025-11-21T20:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:06 crc kubenswrapper[4727]: I1121 20:07:06.878134 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b58aef8f-f223-47d8-a2e6-4a80aeeeec42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db38e6270d9dcedf99c669fa60a075a25d7fd9bb753702551fe5f4610ecaf815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443c4ad2ecea2e9d360b9300cb0c384fb5afc27e
6ac44394f985adbeab354a9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5k2kk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:06Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:06 crc kubenswrapper[4727]: I1121 20:07:06.890415 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7rvdc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07dba644-eb6f-45c3-b373-7a1610c569aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fc26f04e78b8405547a4fa225524347eabce94dfa5b4bf2448db66e36aaf006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7rvdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:06Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:06 crc kubenswrapper[4727]: I1121 20:07:06.911281 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a1f6ee3-6ebf-4422-91ed-105ccb19af8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de2d5f0603d9b7d189293750a83d5a3957e98f0390659f8c88d35ae1cae6b18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cde1196a3cb3b4d0acbb70ac333211099ec6f26a975e2de7c07d059983024368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1324bc3b6958e18a7c496fdb513726b921435f35dde2064fcaf9a6fca06eb8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://395c324d3a95c173c7dd0c9206831fe07c220fb6ef12f7b25444a120c5183202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6021787d6acdc5734a82ef2ed53add13275c3461c1ffd38219c264e7fca3fab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://363e33e0f6686c6a97437b91ff8d5be211717d527dac5d1bfad6613849c1a8fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://363e33e0f6686c6a97437b91ff8d5be211717d527dac5d1bfad6613849c1a8fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d2632c704cf56d6b416cdcc6213e3b60836d12cfae28af4bd06f99dff56cd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16d2632c704cf56d6b416cdcc6213e3b60836d12cfae28af4bd06f99dff56cd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91dcb1c5c16b4720f4c812b25a87ebd8a0f3a334cc714233b6852cd39322fbf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91dcb1c5c16b4720f4c812b25a87ebd8a0f3a334cc714233b6852cd39322fbf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:06Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:06 crc kubenswrapper[4727]: I1121 20:07:06.927483 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"080ce4fe-3e82-4e15-9340-4fa88a29da04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27c37b29dda7ae2cc145e0770879e433aa03e277121be3b7acfe22d41d27801f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cc2199f65adaa48cd1726ea812b332e4f40291e094ccfc215343bdb2f2bd2f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://1f3b9bf6cf2709d4559bf6ce2ddeed47f8ebacd7878509b0190ed24b2e55380e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd16de1da3c89b579d33c34b4dd24d8ae9df35e17819135165afc45db1297c61\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd16de1da3c89b579d33c34b4dd24d8ae9df35e17819135165afc45db1297c61\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"file observer\\\\nW1121 20:06:54.183363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 20:06:54.183552 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 20:06:54.184374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-552169000/tls.crt::/tmp/serving-cert-552169000/tls.key\\\\\\\"\\\\nI1121 20:06:54.597449 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 20:06:54.600431 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 20:06:54.600448 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 20:06:54.600471 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 20:06:54.600476 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 20:06:54.608444 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 20:06:54.608470 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 20:06:54.608475 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 20:06:54.608479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 20:06:54.608482 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 20:06:54.608484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 20:06:54.608487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 20:06:54.608489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 20:06:54.610022 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ceae36169c61dcc484be2881c9562b4024866925aab81778ab9e171413f119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fc11f996bd226ec44a8afe092a0939a489f4fc8904c1532586d9e36b69817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91fc11f996bd226ec44a8afe092a0939a489f4fc8904c1532586d9e36b69817\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:06Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:06 crc kubenswrapper[4727]: I1121 20:07:06.971731 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:06 crc kubenswrapper[4727]: I1121 20:07:06.971802 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:06 crc kubenswrapper[4727]: I1121 20:07:06.971824 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:06 crc kubenswrapper[4727]: I1121 20:07:06.971853 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:06 crc kubenswrapper[4727]: I1121 20:07:06.971879 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:06Z","lastTransitionTime":"2025-11-21T20:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.074240 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.074288 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.074303 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.074323 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.074337 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:07Z","lastTransitionTime":"2025-11-21T20:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.177331 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.177369 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.177379 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.177394 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.177404 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:07Z","lastTransitionTime":"2025-11-21T20:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.280005 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.280052 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.280066 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.280084 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.280098 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:07Z","lastTransitionTime":"2025-11-21T20:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.382459 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.382508 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.382524 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.382548 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.382564 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:07Z","lastTransitionTime":"2025-11-21T20:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.447564 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-drnwx"] Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.448109 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-drnwx" Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.451008 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.451123 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.468456 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:07Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.485106 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.485165 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.485182 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.485205 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.485235 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:07Z","lastTransitionTime":"2025-11-21T20:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.487409 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3ca80-2251-484d-9025-61aed82e16d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://539bc94dd8a3fd65aeae82170fe889bc42c364cfcf52005760432629911dfa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bd3db39641919d1b93ecc771bce800100e7a8b6200d6fc606dbdc50179b8255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08a09f48855f360ffe892f48d4efa156a6774a0be33504e0d664f6a90f1144d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7c202de2c7a6ec653f0c7f816dcbf64fbf6a5c9a779d29403a29a1003ad201\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"i
mageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:07Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.498500 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 20:07:07 crc kubenswrapper[4727]: E1121 20:07:07.498701 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.506674 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b974a1dc77049e8c632b0acbe8dedf6d3f391811208d0d1084c75e4fdb0908dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:07Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.523082 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ccfbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bffa327-bdf2-45d4-93ab-40152e82d177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18e9f8fd8f35e2f7821de2d9f9040dbc5160d8f114d1a54d8e0a0dabab520d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dn
s-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ccfbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:07Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.533289 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5c073d13-aa4f-401c-9684-36980fe94cb5-env-overrides\") pod \"ovnkube-control-plane-749d76644c-drnwx\" (UID: \"5c073d13-aa4f-401c-9684-36980fe94cb5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-drnwx" Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.533345 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5c073d13-aa4f-401c-9684-36980fe94cb5-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-drnwx\" (UID: \"5c073d13-aa4f-401c-9684-36980fe94cb5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-drnwx" Nov 21 20:07:07 crc kubenswrapper[4727]: 
I1121 20:07:07.533452 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5c073d13-aa4f-401c-9684-36980fe94cb5-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-drnwx\" (UID: \"5c073d13-aa4f-401c-9684-36980fe94cb5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-drnwx" Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.533513 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hssmd\" (UniqueName: \"kubernetes.io/projected/5c073d13-aa4f-401c-9684-36980fe94cb5-kube-api-access-hssmd\") pod \"ovnkube-control-plane-749d76644c-drnwx\" (UID: \"5c073d13-aa4f-401c-9684-36980fe94cb5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-drnwx" Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.544165 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-74crp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db8a4e70-c074-4f90-aebe-444078f3337f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d8ec54f8edeaa80fb39b26f0cd9c0d4b2456c149245b1c6c1585bf28b08ec13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a2b3626be8b73b407cdeae484730c70f4caa2e66e0a815cb8c047f667254092\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2b3626be8b73b407cdeae484730c70f4caa2e66e0a815cb8c047f667254092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4360080de9d788a9ac8bad5b4c679aef3367c8c4e06fd00a28ff4b23406cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d4360080de9d788a9ac8bad5b4c679aef3367c8c4e06fd00a28ff4b23406cc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68b04ba74eb7a875559096df2051e2d99535444795021756d6072246fdf21ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68b04ba74eb7a875559096df2051e2d99535444795021756d6072246fdf21ee1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4be
59a65e9ffdf24c7d6dcaf699e0c86095ab2e7156771e4cca86c3fb264ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab4be59a65e9ffdf24c7d6dcaf699e0c86095ab2e7156771e4cca86c3fb264ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465a44322e7e3ff27479dabcdd1a2ba9102e10c73b75d51d0639b4a8ebeda54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://465a44322e7e3ff27479dabcdd1a2ba9102e10c73b75d51d0639b4a8ebeda54e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:07:00Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea5b2f1f6cf3b490899d65086f99f448f2bdbfd859fa5b283af3fb108479902e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea5b2f1f6cf3b490899d65086f99f448f2bdbfd859fa5b283af3fb108479902e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-74crp\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:07Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.561266 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fae7d81bbd4cc7c05885d66b30b4a4f5422c74aecc5439700c1fe7eb3c28a4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://060f5b533d83fc7cf02d24bf7b12e2a4972ce1d7ed4e62c4c7788467ae2229c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f04b4d4857a8309f30f22774c8a2faeb752998b094d70898a986e04038771b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f02a4cae48dc4ac9d7da3d367fe5137fb012ee8a28f45c9cf4af098f1d93ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf216dac56f09909db88c9ea0ef9d0d4fb8bc2e63a02fd5631c7d4369080f88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1909b9178d7fb508200d34ad33f7d2c2aa2ae9756b15c17fd7c49bb019cc65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f79346c51ffca5d379c209b668becf14adfd042f222d7ec2c0cd80ac2755720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f79346c51ffca5d379c209b668becf14adfd042f222d7ec2c0cd80ac2755720\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T20:07:05Z\\\",\\\"message\\\":\\\")\\\\nI1121 20:07:05.650928 6127 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI1121 20:07:05.650932 6127 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1121 20:07:05.650760 6127 transact.go:42] Configuring 
OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-operator/metrics]} name:Service_openshift-ingress-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.244:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d8772e82-b0a4-4596-87d3-3d517c13344b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1121 20:07:05.650938 6127 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nF1121 20:07:05.650939 6127 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initializa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T20:07:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tfd4j_openshift-ovn-kubernetes(70d2ca13-a8f7-43dc-8ad0-142d99ccde18)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68979c9b5ab69852e6cc4293ef0e77454dcbb7fdaeb61b7b6af5c1a3ac101e47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5fa8be5e6ea2c20dd597fb7ef2fdd3355045b310d1a99bb1a89a8207e505577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5fa8be5e6ea2c20dd
597fb7ef2fdd3355045b310d1a99bb1a89a8207e505577\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tfd4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:07Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.571883 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7444p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75a43503-e538-4964-9789-322839cc4c48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d61ca28ee5e9b8e1b56afa45b32036c825b6299dc855cb7ce38dde4649c2371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9w57k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7444p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:07Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.585907 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-drnwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c073d13-aa4f-401c-9684-36980fe94cb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hssmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hssmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:07:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-drnwx\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:07Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.587605 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.587648 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.587663 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.587680 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.587692 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:07Z","lastTransitionTime":"2025-11-21T20:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.603636 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffd614574e076bedcf841406bd5accc4387a8f2dbc55cbfbb5a1064e69a861d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:07Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.633263 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a1f6ee3-6ebf-4422-91ed-105ccb19af8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de2d5f0603d9b7d189293750a83d5a3957e98f0390659f8c88d35ae1cae6b18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2025-11-21T20:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cde1196a3cb3b4d0acbb70ac333211099ec6f26a975e2de7c07d059983024368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1324bc3b6958e18a7c496fdb513726b921435f35dde2064fcaf9a6fca06eb8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/st
atic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://395c324d3a95c173c7dd0c9206831fe07c220fb6ef12f7b25444a120c5183202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6021787d6acdc5734a82ef2ed53add13275c3461c1ffd38219c264e7fca3fab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://363e33e0f6686c6a97437b91ff8d5be211717d527dac5d1bfad6613849c1a8fc\\
\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://363e33e0f6686c6a97437b91ff8d5be211717d527dac5d1bfad6613849c1a8fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d2632c704cf56d6b416cdcc6213e3b60836d12cfae28af4bd06f99dff56cd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16d2632c704cf56d6b416cdcc6213e3b60836d12cfae28af4bd06f99dff56cd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91dcb1c5c16b4720f4c812b25a87ebd8a0f3a334cc714233b6852cd39322fbf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\
\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91dcb1c5c16b4720f4c812b25a87ebd8a0f3a334cc714233b6852cd39322fbf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:07Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.634533 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5c073d13-aa4f-401c-9684-36980fe94cb5-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-drnwx\" (UID: \"5c073d13-aa4f-401c-9684-36980fe94cb5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-drnwx" Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.634607 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hssmd\" (UniqueName: \"kubernetes.io/projected/5c073d13-aa4f-401c-9684-36980fe94cb5-kube-api-access-hssmd\") pod \"ovnkube-control-plane-749d76644c-drnwx\" (UID: 
\"5c073d13-aa4f-401c-9684-36980fe94cb5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-drnwx" Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.634661 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5c073d13-aa4f-401c-9684-36980fe94cb5-env-overrides\") pod \"ovnkube-control-plane-749d76644c-drnwx\" (UID: \"5c073d13-aa4f-401c-9684-36980fe94cb5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-drnwx" Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.634696 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5c073d13-aa4f-401c-9684-36980fe94cb5-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-drnwx\" (UID: \"5c073d13-aa4f-401c-9684-36980fe94cb5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-drnwx" Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.635497 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5c073d13-aa4f-401c-9684-36980fe94cb5-env-overrides\") pod \"ovnkube-control-plane-749d76644c-drnwx\" (UID: \"5c073d13-aa4f-401c-9684-36980fe94cb5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-drnwx" Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.635673 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5c073d13-aa4f-401c-9684-36980fe94cb5-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-drnwx\" (UID: \"5c073d13-aa4f-401c-9684-36980fe94cb5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-drnwx" Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.643081 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/5c073d13-aa4f-401c-9684-36980fe94cb5-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-drnwx\" (UID: \"5c073d13-aa4f-401c-9684-36980fe94cb5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-drnwx" Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.648130 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"080ce4fe-3e82-4e15-9340-4fa88a29da04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27c37b29dda7ae2cc145e0770879e433aa03e277121be3b7acfe22d41d27801f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cc2199f65adaa48cd1726ea812b332e4f40291e094ccfc215343bdb2f2bd2f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://1f3b9bf6cf2709d4559bf6ce2ddeed47f8ebacd7878509b0190ed24b2e55380e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd16de1da3c89b579d33c34b4dd24d8ae9df35e17819135165afc45db1297c61\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd16de1da3c89b579d33c34b4dd24d8ae9df35e17819135165afc45db1297c61\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"file observer\\\\nW1121 20:06:54.183363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 20:06:54.183552 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 20:06:54.184374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-552169000/tls.crt::/tmp/serving-cert-552169000/tls.key\\\\\\\"\\\\nI1121 20:06:54.597449 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 20:06:54.600431 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 20:06:54.600448 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 20:06:54.600471 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 20:06:54.600476 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 20:06:54.608444 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 20:06:54.608470 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 20:06:54.608475 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 20:06:54.608479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 20:06:54.608482 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 20:06:54.608484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 20:06:54.608487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 20:06:54.608489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 20:06:54.610022 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ceae36169c61dcc484be2881c9562b4024866925aab81778ab9e171413f119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fc11f996bd226ec44a8afe092a0939a489f4fc8904c1532586d9e36b69817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91fc11f996bd226ec44a8afe092a0939a489f4fc8904c1532586d9e36b69817\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:07Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.663497 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hssmd\" (UniqueName: \"kubernetes.io/projected/5c073d13-aa4f-401c-9684-36980fe94cb5-kube-api-access-hssmd\") pod \"ovnkube-control-plane-749d76644c-drnwx\" (UID: \"5c073d13-aa4f-401c-9684-36980fe94cb5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-drnwx" Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.663725 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a97e4b6438d588dce44178e2e245d1fc4fa954f6cd02b230ca5c34e3d32294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c90d2d9f22924396549d0d1263f8cb07b41a77f23f7bb48aa9749caabff6147\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:07Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.676171 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:07Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.690100 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:07Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.690984 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.691028 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.691042 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.691062 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.691077 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:07Z","lastTransitionTime":"2025-11-21T20:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.702087 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b58aef8f-f223-47d8-a2e6-4a80aeeeec42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db38e6270d9dcedf99c669fa60a075a25d7fd9bb753702551fe5f4610ecaf815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\
\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443c4ad2ecea2e9d360b9300cb0c384fb5afc27e6ac44394f985adbeab354a9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5k2kk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:07Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.714515 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7rvdc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07dba644-eb6f-45c3-b373-7a1610c569aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fc26f04e78b8405547a4fa225524347eabce94dfa5b4bf2448db66e36aaf006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7rvdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:07Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.767179 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-drnwx" Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.793638 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.793707 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.793731 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.793762 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.793787 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:07Z","lastTransitionTime":"2025-11-21T20:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.897089 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.897130 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.897139 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.897156 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.897165 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:07Z","lastTransitionTime":"2025-11-21T20:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.999687 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.999748 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.999765 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.999788 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:07 crc kubenswrapper[4727]: I1121 20:07:07.999805 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:07Z","lastTransitionTime":"2025-11-21T20:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.102793 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.102856 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.102874 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.102900 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.102919 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:08Z","lastTransitionTime":"2025-11-21T20:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.205784 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.205836 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.205848 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.205877 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.205892 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:08Z","lastTransitionTime":"2025-11-21T20:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.308242 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.308280 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.308291 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.308307 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.308318 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:08Z","lastTransitionTime":"2025-11-21T20:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.410803 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.410847 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.410859 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.410876 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.410887 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:08Z","lastTransitionTime":"2025-11-21T20:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.498730 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 20:07:08 crc kubenswrapper[4727]: E1121 20:07:08.498874 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.498745 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 20:07:08 crc kubenswrapper[4727]: E1121 20:07:08.499107 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.513074 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.513118 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.513132 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.513151 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.513166 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:08Z","lastTransitionTime":"2025-11-21T20:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.535630 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-rs9rv"] Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.536129 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rs9rv" Nov 21 20:07:08 crc kubenswrapper[4727]: E1121 20:07:08.536197 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rs9rv" podUID="f8318f96-4402-4567-a432-6cf3897e218d" Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.548253 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffd614574e076bedcf841406bd5accc4387a8f2dbc55cbfbb5a1064e69a861d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:08Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.558259 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b974a1dc77049e8c632b0acbe8dedf6d3f391811208d0d1084c75e4fdb0908dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:08Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.568068 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ccfbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bffa327-bdf2-45d4-93ab-40152e82d177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18e9f8fd8f35e2f7821de2d9f9040dbc5160d8f114d1a54d8e0a0dabab520d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-no
de-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ccfbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:08Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.581591 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-74crp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db8a4e70-c074-4f90-aebe-444078f3337f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d8ec54f8edeaa80fb39b26f0cd9c0d4b2456c149245b1c6c1585bf28b08ec13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a2b3626be8b73b407cdeae484730c70f4caa2e66e0a815cb8c047f667254092\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2b3626be8b73b407cdeae484730c70f4caa2e66e0a815cb8c047f667254092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4360080de9d788a9ac8bad5b4c679aef3367c8c4e06fd00a28ff4b23406cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d4360080de9d788a9ac8bad5b4c679aef3367c8c4e06fd00a28ff4b23406cc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68b04ba74eb7a875559096df2051e2d99535444795021756d6072246fdf21ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68b04ba74eb7a875559096df2051e2d99535444795021756d6072246fdf21ee1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4be
59a65e9ffdf24c7d6dcaf699e0c86095ab2e7156771e4cca86c3fb264ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab4be59a65e9ffdf24c7d6dcaf699e0c86095ab2e7156771e4cca86c3fb264ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465a44322e7e3ff27479dabcdd1a2ba9102e10c73b75d51d0639b4a8ebeda54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://465a44322e7e3ff27479dabcdd1a2ba9102e10c73b75d51d0639b4a8ebeda54e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:07:00Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea5b2f1f6cf3b490899d65086f99f448f2bdbfd859fa5b283af3fb108479902e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea5b2f1f6cf3b490899d65086f99f448f2bdbfd859fa5b283af3fb108479902e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-74crp\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:08Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.601388 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fae7d81bbd4cc7c05885d66b30b4a4f5422c74aecc5439700c1fe7eb3c28a4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://060f5b533d83fc7cf02d24bf7b12e2a4972ce1d7ed4e62c4c7788467ae2229c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f04b4d4857a8309f30f22774c8a2faeb752998b094d70898a986e04038771b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f02a4cae48dc4ac9d7da3d367fe5137fb012ee8a28f45c9cf4af098f1d93ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf216dac56f09909db88c9ea0ef9d0d4fb8bc2e63a02fd5631c7d4369080f88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1909b9178d7fb508200d34ad33f7d2c2aa2ae9756b15c17fd7c49bb019cc65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f79346c51ffca5d379c209b668becf14adfd042f222d7ec2c0cd80ac2755720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f79346c51ffca5d379c209b668becf14adfd042f222d7ec2c0cd80ac2755720\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T20:07:05Z\\\",\\\"message\\\":\\\")\\\\nI1121 20:07:05.650928 6127 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI1121 20:07:05.650932 6127 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1121 20:07:05.650760 6127 transact.go:42] Configuring 
OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-operator/metrics]} name:Service_openshift-ingress-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.244:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d8772e82-b0a4-4596-87d3-3d517c13344b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1121 20:07:05.650938 6127 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nF1121 20:07:05.650939 6127 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initializa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T20:07:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tfd4j_openshift-ovn-kubernetes(70d2ca13-a8f7-43dc-8ad0-142d99ccde18)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68979c9b5ab69852e6cc4293ef0e77454dcbb7fdaeb61b7b6af5c1a3ac101e47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5fa8be5e6ea2c20dd597fb7ef2fdd3355045b310d1a99bb1a89a8207e505577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5fa8be5e6ea2c20dd
597fb7ef2fdd3355045b310d1a99bb1a89a8207e505577\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tfd4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:08Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.610548 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7444p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75a43503-e538-4964-9789-322839cc4c48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d61ca28ee5e9b8e1b56afa45b32036c825b6299dc855cb7ce38dde4649c2371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9w57k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7444p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:08Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.615280 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.615321 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.615332 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.615348 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.615360 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:08Z","lastTransitionTime":"2025-11-21T20:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.620732 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-drnwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c073d13-aa4f-401c-9684-36980fe94cb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hssmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hssmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:07:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-drnwx\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:08Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.641021 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a1f6ee3-6ebf-4422-91ed-105ccb19af8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de2d5f0603d9b7d189293750a83d5a3957e98f0390659f8c88d35ae1cae6b18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\
"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cde1196a3cb3b4d0acbb70ac333211099ec6f26a975e2de7c07d059983024368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1324bc3b6958e18a7c496fdb513726b921435f35dde2064fcaf9a6fca06eb8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://395c324d3a95c173c7dd0c920
6831fe07c220fb6ef12f7b25444a120c5183202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6021787d6acdc5734a82ef2ed53add13275c3461c1ffd38219c264e7fca3fab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://363e33e0f6686c6a97437b91ff8d5be211717d527dac5d1bfad6613849c1a8fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9
0092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://363e33e0f6686c6a97437b91ff8d5be211717d527dac5d1bfad6613849c1a8fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d2632c704cf56d6b416cdcc6213e3b60836d12cfae28af4bd06f99dff56cd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16d2632c704cf56d6b416cdcc6213e3b60836d12cfae28af4bd06f99dff56cd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91dcb1c5c16b4720f4c812b25a87ebd8a0f3a334cc714233b6852cd39322fbf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"t
erminated\\\":{\\\"containerID\\\":\\\"cri-o://91dcb1c5c16b4720f4c812b25a87ebd8a0f3a334cc714233b6852cd39322fbf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:08Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.643952 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f8318f96-4402-4567-a432-6cf3897e218d-metrics-certs\") pod \"network-metrics-daemon-rs9rv\" (UID: \"f8318f96-4402-4567-a432-6cf3897e218d\") " pod="openshift-multus/network-metrics-daemon-rs9rv" Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.644016 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bcnw\" (UniqueName: \"kubernetes.io/projected/f8318f96-4402-4567-a432-6cf3897e218d-kube-api-access-8bcnw\") pod \"network-metrics-daemon-rs9rv\" (UID: \"f8318f96-4402-4567-a432-6cf3897e218d\") " pod="openshift-multus/network-metrics-daemon-rs9rv" Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.654994 4727 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"080ce4fe-3e82-4e15-9340-4fa88a29da04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27c37b29dda7ae2cc145e0770879e433aa03e277121be3b7acfe22d41d27801f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mo
untPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cc2199f65adaa48cd1726ea812b332e4f40291e094ccfc215343bdb2f2bd2f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f3b9bf6cf2709d4559bf6ce2ddeed47f8ebacd7878509b0190ed24b2e55380e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd16de1da3c89b579d33c34b4dd24d8ae9df35e17819135165afc45db1297c61\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f89
45c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd16de1da3c89b579d33c34b4dd24d8ae9df35e17819135165afc45db1297c61\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"file observer\\\\nW1121 20:06:54.183363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 20:06:54.183552 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 20:06:54.184374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-552169000/tls.crt::/tmp/serving-cert-552169000/tls.key\\\\\\\"\\\\nI1121 20:06:54.597449 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 20:06:54.600431 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 20:06:54.600448 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 20:06:54.600471 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 20:06:54.600476 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 20:06:54.608444 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 20:06:54.608470 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 20:06:54.608475 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 20:06:54.608479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 20:06:54.608482 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 20:06:54.608484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 20:06:54.608487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 20:06:54.608489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 20:06:54.610022 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ceae36169c61dcc484be2881c9562b4024866925aab81778ab9e171413f119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\
\"cri-o://a91fc11f996bd226ec44a8afe092a0939a489f4fc8904c1532586d9e36b69817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91fc11f996bd226ec44a8afe092a0939a489f4fc8904c1532586d9e36b69817\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:08Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.666482 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a97e4b6438d588dce44178e2e245d1fc4fa954f6cd02b230ca5c34e3d32294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c90d2d9f22924396549d0d1263f8cb07b41a77f23f7bb48aa9749caabff6147\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:08Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.678348 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:08Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.691657 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:08Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.702659 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b58aef8f-f223-47d8-a2e6-4a80aeeeec42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db38e6270d9dcedf99c669fa60a075a25d7fd9bb753702551fe5f4610ecaf815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443c4ad2ecea2e9d360b9300cb0c384fb5afc27e
6ac44394f985adbeab354a9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5k2kk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:08Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.708230 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-drnwx" event={"ID":"5c073d13-aa4f-401c-9684-36980fe94cb5","Type":"ContainerStarted","Data":"ec82846684441a347ddef8d336b04c6a776dbe2a5ed7894b813378790e12f7b7"} Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.708296 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-drnwx" 
event={"ID":"5c073d13-aa4f-401c-9684-36980fe94cb5","Type":"ContainerStarted","Data":"2c68719b36c0e9c62dcfa24171d18f9ad5a578884b8ca0da8b5827d6656e890c"} Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.708318 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-drnwx" event={"ID":"5c073d13-aa4f-401c-9684-36980fe94cb5","Type":"ContainerStarted","Data":"717fc7f79eedcc55a7048956aa536e016634e42795a94a12e9816ac708ef602d"} Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.717233 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.717272 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.717286 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.717303 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.717315 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:08Z","lastTransitionTime":"2025-11-21T20:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.717711 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7rvdc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07dba644-eb6f-45c3-b373-7a1610c569aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fc26f04e78b8405547a4fa225524347eabce94dfa5b4bf2448db66e36aaf006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7rvdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:08Z 
is after 2025-08-24T17:21:41Z" Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.731619 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3ca80-2251-484d-9025-61aed82e16d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://539bc94dd8a3fd65aeae82170fe889bc42c364cfcf52005760432629911dfa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bd3db39641919d1b
93ecc771bce800100e7a8b6200d6fc606dbdc50179b8255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08a09f48855f360ffe892f48d4efa156a6774a0be33504e0d664f6a90f1144d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7c202de2c7a6ec653f0c7f816dcbf64fbf6a5c9a779d29403a29a1003ad201\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a5
78bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:08Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.743763 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:08Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.745380 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f8318f96-4402-4567-a432-6cf3897e218d-metrics-certs\") pod \"network-metrics-daemon-rs9rv\" (UID: \"f8318f96-4402-4567-a432-6cf3897e218d\") 
" pod="openshift-multus/network-metrics-daemon-rs9rv" Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.745466 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bcnw\" (UniqueName: \"kubernetes.io/projected/f8318f96-4402-4567-a432-6cf3897e218d-kube-api-access-8bcnw\") pod \"network-metrics-daemon-rs9rv\" (UID: \"f8318f96-4402-4567-a432-6cf3897e218d\") " pod="openshift-multus/network-metrics-daemon-rs9rv" Nov 21 20:07:08 crc kubenswrapper[4727]: E1121 20:07:08.745647 4727 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 21 20:07:08 crc kubenswrapper[4727]: E1121 20:07:08.745714 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f8318f96-4402-4567-a432-6cf3897e218d-metrics-certs podName:f8318f96-4402-4567-a432-6cf3897e218d nodeName:}" failed. No retries permitted until 2025-11-21 20:07:09.245696407 +0000 UTC m=+34.431881461 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f8318f96-4402-4567-a432-6cf3897e218d-metrics-certs") pod "network-metrics-daemon-rs9rv" (UID: "f8318f96-4402-4567-a432-6cf3897e218d") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.754147 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rs9rv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8318f96-4402-4567-a432-6cf3897e218d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8bcnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8bcnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rs9rv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:08Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:08 crc 
kubenswrapper[4727]: I1121 20:07:08.761870 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bcnw\" (UniqueName: \"kubernetes.io/projected/f8318f96-4402-4567-a432-6cf3897e218d-kube-api-access-8bcnw\") pod \"network-metrics-daemon-rs9rv\" (UID: \"f8318f96-4402-4567-a432-6cf3897e218d\") " pod="openshift-multus/network-metrics-daemon-rs9rv" Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.775124 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fae7d81bbd4cc7c05885d66b30b4a4f5422c74aecc5439700c1fe7eb3c28a4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://060f5b533d83fc7cf02d24bf7b12e2a4972ce1d7ed4e62c4c7788467ae2229c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f04b4d4857a8309f30f22774c8a2faeb752998b094d70898a986e04038771b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f02a4cae48dc4ac9d7da3d367fe5137fb012ee8a28f45c9cf4af098f1d93ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf216dac56f09909db88c9ea0ef9d0d4fb8bc2e63a02fd5631c7d4369080f88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1909b9178d7fb508200d34ad33f7d2c2aa2ae9756b15c17fd7c49bb019cc65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f79346c51ffca5d379c209b668becf14adfd042f222d7ec2c0cd80ac2755720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f79346c51ffca5d379c209b668becf14adfd042f222d7ec2c0cd80ac2755720\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T20:07:05Z\\\",\\\"message\\\":\\\")\\\\nI1121 20:07:05.650928 6127 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI1121 20:07:05.650932 6127 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1121 20:07:05.650760 6127 transact.go:42] Configuring 
OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-operator/metrics]} name:Service_openshift-ingress-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.244:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d8772e82-b0a4-4596-87d3-3d517c13344b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1121 20:07:05.650938 6127 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nF1121 20:07:05.650939 6127 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initializa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T20:07:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tfd4j_openshift-ovn-kubernetes(70d2ca13-a8f7-43dc-8ad0-142d99ccde18)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68979c9b5ab69852e6cc4293ef0e77454dcbb7fdaeb61b7b6af5c1a3ac101e47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5fa8be5e6ea2c20dd597fb7ef2fdd3355045b310d1a99bb1a89a8207e505577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5fa8be5e6ea2c20dd
597fb7ef2fdd3355045b310d1a99bb1a89a8207e505577\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tfd4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:08Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.786919 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7444p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75a43503-e538-4964-9789-322839cc4c48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d61ca28ee5e9b8e1b56afa45b32036c825b6299dc855cb7ce38dde4649c2371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9w57k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7444p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:08Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.801186 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-drnwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c073d13-aa4f-401c-9684-36980fe94cb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c68719b36c0e9c62dcfa24171d18f9ad5a578884b8ca0da8b5827d6656e890c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hssmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec82846684441a347ddef8d336b04c6a776dbe2a5ed7894b813378790e12f7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hssmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:07:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-drnwx\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:08Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.814341 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffd614574e076bedcf841406bd5accc4387a8f2dbc55cbfbb5a1064e69a861d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secr
ets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:08Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.819279 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.819332 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.819350 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.819374 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.819392 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:08Z","lastTransitionTime":"2025-11-21T20:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.825623 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b974a1dc77049e8c632b0acbe8dedf6d3f391811208d0d1084c75e4fdb0908dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:08Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.837702 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ccfbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bffa327-bdf2-45d4-93ab-40152e82d177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18e9f8fd8f35e2f7821de2d9f9040dbc5160d8f114d1a54d8e0a0dabab520d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ccfbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:08Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.855070 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-74crp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db8a4e70-c074-4f90-aebe-444078f3337f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d8ec54f8edeaa80fb39b26f0cd9c0d4b2456c149245b1c6c1585bf28b08ec13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a2b3626be8b73b407cdeae484730c70f4caa2e66e0a815cb8c047f667254092\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2b3626be8b73b407cdeae484730c70f4caa2e66e0a815cb8c047f667254092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4360080de9d788a9ac8bad5b4c679aef3367c8c4e06fd00a28ff4b23406cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d4360080de9d788a9ac8bad5b4c679aef3367c8c4e06fd00a28ff4b23406cc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68b04ba74eb7a875559096df2051e2d99535444795021756d6072246fdf21ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68b04ba74eb7a875559096df2051e2d99535444795021756d6072246fdf21ee1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4be
59a65e9ffdf24c7d6dcaf699e0c86095ab2e7156771e4cca86c3fb264ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab4be59a65e9ffdf24c7d6dcaf699e0c86095ab2e7156771e4cca86c3fb264ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465a44322e7e3ff27479dabcdd1a2ba9102e10c73b75d51d0639b4a8ebeda54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://465a44322e7e3ff27479dabcdd1a2ba9102e10c73b75d51d0639b4a8ebeda54e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:07:00Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea5b2f1f6cf3b490899d65086f99f448f2bdbfd859fa5b283af3fb108479902e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea5b2f1f6cf3b490899d65086f99f448f2bdbfd859fa5b283af3fb108479902e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-74crp\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:08Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.871763 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:08Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.884101 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:08Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.898125 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b58aef8f-f223-47d8-a2e6-4a80aeeeec42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db38e6270d9dcedf99c669fa60a075a25d7fd9bb753702551fe5f4610ecaf815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443c4ad2ecea2e9d360b9300cb0c384fb5afc27e
6ac44394f985adbeab354a9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5k2kk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:08Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.912152 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7rvdc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07dba644-eb6f-45c3-b373-7a1610c569aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fc26f04e78b8405547a4fa225524347eabce94dfa5b4bf2448db66e36aaf006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7rvdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:08Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.921747 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:08 crc 
kubenswrapper[4727]: I1121 20:07:08.921945 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.922084 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.922182 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.922270 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:08Z","lastTransitionTime":"2025-11-21T20:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.942853 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a1f6ee3-6ebf-4422-91ed-105ccb19af8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de2d5f0603d9b7d189293750a83d5a3957e98f0390659f8c88d35ae1cae6b18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cde1196a3cb3b4d0acbb70ac333211099ec6f26a975e2de7c07d059983024368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1324bc3b6958e18a7c496fdb513726b921435f35dde2064fcaf9a6fca06eb8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://395c324d3a95c173c7dd0c9206831fe07c220fb6ef12f7b25444a120c5183202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6021787d6acdc5734a82ef2ed53add13275c3461c1ffd38219c264e7fca3fab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://363e33e0f6686c6a97437b91ff8d5be211717d527dac5d1bfad6613849c1a8fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://363e33e0f6686c6a97437b91ff8d5be211717d527dac5d1bfad6613849c1a8fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d2632c704cf56d6b416cdcc6213e3b60836d12cfae28af4bd06f99dff56cd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16d2632c704cf56d6b416cdcc6213e3b60836d12cfae28af4bd06f99dff56cd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91dcb1c5c16b4720f4c812b25a87ebd8a0f3a334cc714233b6852cd39322fbf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91dcb1c5c16b4720f4c812b25a87ebd8a0f3a334cc714233b6852cd39322fbf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:08Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.963711 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"080ce4fe-3e82-4e15-9340-4fa88a29da04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27c37b29dda7ae2cc145e0770879e433aa03e277121be3b7acfe22d41d27801f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cc2199f65adaa48cd1726ea812b332e4f40291e094ccfc215343bdb2f2bd2f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://1f3b9bf6cf2709d4559bf6ce2ddeed47f8ebacd7878509b0190ed24b2e55380e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd16de1da3c89b579d33c34b4dd24d8ae9df35e17819135165afc45db1297c61\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd16de1da3c89b579d33c34b4dd24d8ae9df35e17819135165afc45db1297c61\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"file observer\\\\nW1121 20:06:54.183363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 20:06:54.183552 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 20:06:54.184374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-552169000/tls.crt::/tmp/serving-cert-552169000/tls.key\\\\\\\"\\\\nI1121 20:06:54.597449 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 20:06:54.600431 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 20:06:54.600448 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 20:06:54.600471 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 20:06:54.600476 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 20:06:54.608444 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 20:06:54.608470 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 20:06:54.608475 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 20:06:54.608479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 20:06:54.608482 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 20:06:54.608484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 20:06:54.608487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 20:06:54.608489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 20:06:54.610022 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ceae36169c61dcc484be2881c9562b4024866925aab81778ab9e171413f119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fc11f996bd226ec44a8afe092a0939a489f4fc8904c1532586d9e36b69817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91fc11f996bd226ec44a8afe092a0939a489f4fc8904c1532586d9e36b69817\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:08Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:08 crc kubenswrapper[4727]: I1121 20:07:08.983754 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a97e4b6438d588dce44178e2e245d1fc4fa954f6cd02b230ca5c34e3d32294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c90d2d9f22924396549d0d1263f8cb07b41a77f23f7bb48aa9749caabff6147\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:08Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:09 crc kubenswrapper[4727]: I1121 20:07:09.023174 4727 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3ca80-2251-484d-9025-61aed82e16d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://539bc94dd8a3fd65aeae82170fe889bc42c364cfcf52005760432629911dfa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bd3db39641919d1b93ecc771bce800100e7a8b6200d6fc606dbdc50179b8255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c
4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08a09f48855f360ffe892f48d4efa156a6774a0be33504e0d664f6a90f1144d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7c202de2c7a6ec653f0c7f816dcbf64fbf6a5c9a779d29403a29a1003ad201\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:09Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:09 crc kubenswrapper[4727]: I1121 20:07:09.025254 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:09 crc kubenswrapper[4727]: I1121 20:07:09.025309 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:09 crc kubenswrapper[4727]: I1121 20:07:09.025327 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:09 crc kubenswrapper[4727]: I1121 20:07:09.025355 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:09 crc kubenswrapper[4727]: I1121 20:07:09.025372 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:09Z","lastTransitionTime":"2025-11-21T20:07:09Z","reason":"KubeletNotReady","message":"container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:09 crc kubenswrapper[4727]: I1121 20:07:09.051225 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:09Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:09 crc kubenswrapper[4727]: I1121 20:07:09.067119 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rs9rv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8318f96-4402-4567-a432-6cf3897e218d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8bcnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8bcnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rs9rv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:09Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:09 crc kubenswrapper[4727]: I1121 20:07:09.128170 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:09 crc kubenswrapper[4727]: I1121 20:07:09.128213 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:09 crc kubenswrapper[4727]: I1121 20:07:09.128224 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:09 crc kubenswrapper[4727]: I1121 20:07:09.128242 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:09 crc kubenswrapper[4727]: I1121 20:07:09.128255 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:09Z","lastTransitionTime":"2025-11-21T20:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:09 crc kubenswrapper[4727]: I1121 20:07:09.231265 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:09 crc kubenswrapper[4727]: I1121 20:07:09.231307 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:09 crc kubenswrapper[4727]: I1121 20:07:09.231318 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:09 crc kubenswrapper[4727]: I1121 20:07:09.231333 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:09 crc kubenswrapper[4727]: I1121 20:07:09.231342 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:09Z","lastTransitionTime":"2025-11-21T20:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:09 crc kubenswrapper[4727]: I1121 20:07:09.249658 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f8318f96-4402-4567-a432-6cf3897e218d-metrics-certs\") pod \"network-metrics-daemon-rs9rv\" (UID: \"f8318f96-4402-4567-a432-6cf3897e218d\") " pod="openshift-multus/network-metrics-daemon-rs9rv" Nov 21 20:07:09 crc kubenswrapper[4727]: E1121 20:07:09.249820 4727 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 21 20:07:09 crc kubenswrapper[4727]: E1121 20:07:09.249901 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f8318f96-4402-4567-a432-6cf3897e218d-metrics-certs podName:f8318f96-4402-4567-a432-6cf3897e218d nodeName:}" failed. No retries permitted until 2025-11-21 20:07:10.249879396 +0000 UTC m=+35.436064460 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f8318f96-4402-4567-a432-6cf3897e218d-metrics-certs") pod "network-metrics-daemon-rs9rv" (UID: "f8318f96-4402-4567-a432-6cf3897e218d") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 21 20:07:09 crc kubenswrapper[4727]: I1121 20:07:09.334165 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:09 crc kubenswrapper[4727]: I1121 20:07:09.334211 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:09 crc kubenswrapper[4727]: I1121 20:07:09.334227 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:09 crc kubenswrapper[4727]: I1121 20:07:09.334250 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:09 crc kubenswrapper[4727]: I1121 20:07:09.334267 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:09Z","lastTransitionTime":"2025-11-21T20:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:09 crc kubenswrapper[4727]: I1121 20:07:09.436650 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:09 crc kubenswrapper[4727]: I1121 20:07:09.436694 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:09 crc kubenswrapper[4727]: I1121 20:07:09.436706 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:09 crc kubenswrapper[4727]: I1121 20:07:09.436723 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:09 crc kubenswrapper[4727]: I1121 20:07:09.436734 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:09Z","lastTransitionTime":"2025-11-21T20:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:09 crc kubenswrapper[4727]: I1121 20:07:09.498950 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 20:07:09 crc kubenswrapper[4727]: E1121 20:07:09.499171 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 20:07:09 crc kubenswrapper[4727]: I1121 20:07:09.539316 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:09 crc kubenswrapper[4727]: I1121 20:07:09.539862 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:09 crc kubenswrapper[4727]: I1121 20:07:09.540044 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:09 crc kubenswrapper[4727]: I1121 20:07:09.540201 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:09 crc kubenswrapper[4727]: I1121 20:07:09.540334 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:09Z","lastTransitionTime":"2025-11-21T20:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:09 crc kubenswrapper[4727]: I1121 20:07:09.643035 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:09 crc kubenswrapper[4727]: I1121 20:07:09.643100 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:09 crc kubenswrapper[4727]: I1121 20:07:09.643121 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:09 crc kubenswrapper[4727]: I1121 20:07:09.643145 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:09 crc kubenswrapper[4727]: I1121 20:07:09.643161 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:09Z","lastTransitionTime":"2025-11-21T20:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:09 crc kubenswrapper[4727]: I1121 20:07:09.746152 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:09 crc kubenswrapper[4727]: I1121 20:07:09.746214 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:09 crc kubenswrapper[4727]: I1121 20:07:09.746238 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:09 crc kubenswrapper[4727]: I1121 20:07:09.746266 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:09 crc kubenswrapper[4727]: I1121 20:07:09.746288 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:09Z","lastTransitionTime":"2025-11-21T20:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:09 crc kubenswrapper[4727]: I1121 20:07:09.849930 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:09 crc kubenswrapper[4727]: I1121 20:07:09.850022 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:09 crc kubenswrapper[4727]: I1121 20:07:09.850040 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:09 crc kubenswrapper[4727]: I1121 20:07:09.850060 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:09 crc kubenswrapper[4727]: I1121 20:07:09.850085 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:09Z","lastTransitionTime":"2025-11-21T20:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:09 crc kubenswrapper[4727]: I1121 20:07:09.952375 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:09 crc kubenswrapper[4727]: I1121 20:07:09.952459 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:09 crc kubenswrapper[4727]: I1121 20:07:09.952485 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:09 crc kubenswrapper[4727]: I1121 20:07:09.952517 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:09 crc kubenswrapper[4727]: I1121 20:07:09.952539 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:09Z","lastTransitionTime":"2025-11-21T20:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:10 crc kubenswrapper[4727]: I1121 20:07:10.055147 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:10 crc kubenswrapper[4727]: I1121 20:07:10.055199 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:10 crc kubenswrapper[4727]: I1121 20:07:10.055214 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:10 crc kubenswrapper[4727]: I1121 20:07:10.055238 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:10 crc kubenswrapper[4727]: I1121 20:07:10.055255 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:10Z","lastTransitionTime":"2025-11-21T20:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:10 crc kubenswrapper[4727]: I1121 20:07:10.158192 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:10 crc kubenswrapper[4727]: I1121 20:07:10.158241 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:10 crc kubenswrapper[4727]: I1121 20:07:10.158252 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:10 crc kubenswrapper[4727]: I1121 20:07:10.158269 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:10 crc kubenswrapper[4727]: I1121 20:07:10.158286 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:10Z","lastTransitionTime":"2025-11-21T20:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:10 crc kubenswrapper[4727]: I1121 20:07:10.260848 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 20:07:10 crc kubenswrapper[4727]: I1121 20:07:10.260985 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 20:07:10 crc kubenswrapper[4727]: I1121 20:07:10.261016 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 20:07:10 crc kubenswrapper[4727]: E1121 20:07:10.261056 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 20:07:26.261018454 +0000 UTC m=+51.447203538 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:07:10 crc kubenswrapper[4727]: I1121 20:07:10.261115 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 20:07:10 crc kubenswrapper[4727]: E1121 20:07:10.261148 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 21 20:07:10 crc kubenswrapper[4727]: E1121 20:07:10.261164 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 21 20:07:10 crc kubenswrapper[4727]: E1121 20:07:10.261173 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 21 20:07:10 crc kubenswrapper[4727]: I1121 20:07:10.261185 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 20:07:10 crc kubenswrapper[4727]: E1121 20:07:10.261193 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 21 20:07:10 crc kubenswrapper[4727]: I1121 20:07:10.261241 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f8318f96-4402-4567-a432-6cf3897e218d-metrics-certs\") pod \"network-metrics-daemon-rs9rv\" (UID: \"f8318f96-4402-4567-a432-6cf3897e218d\") " pod="openshift-multus/network-metrics-daemon-rs9rv" Nov 21 20:07:10 crc kubenswrapper[4727]: I1121 20:07:10.261287 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:10 crc kubenswrapper[4727]: E1121 20:07:10.261250 4727 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 20:07:10 crc kubenswrapper[4727]: E1121 20:07:10.261317 4727 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 21 20:07:10 crc kubenswrapper[4727]: I1121 20:07:10.261311 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:10 crc kubenswrapper[4727]: E1121 20:07:10.261197 4727 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 20:07:10 crc kubenswrapper[4727]: I1121 20:07:10.261380 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:10 crc kubenswrapper[4727]: I1121 20:07:10.261414 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:10 crc kubenswrapper[4727]: E1121 20:07:10.261200 4727 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 21 20:07:10 crc kubenswrapper[4727]: I1121 20:07:10.261443 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:10Z","lastTransitionTime":"2025-11-21T20:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:10 crc kubenswrapper[4727]: E1121 20:07:10.261272 4727 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 21 20:07:10 crc kubenswrapper[4727]: E1121 20:07:10.261406 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f8318f96-4402-4567-a432-6cf3897e218d-metrics-certs podName:f8318f96-4402-4567-a432-6cf3897e218d nodeName:}" failed. No retries permitted until 2025-11-21 20:07:12.261346552 +0000 UTC m=+37.447531616 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f8318f96-4402-4567-a432-6cf3897e218d-metrics-certs") pod "network-metrics-daemon-rs9rv" (UID: "f8318f96-4402-4567-a432-6cf3897e218d") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 21 20:07:10 crc kubenswrapper[4727]: E1121 20:07:10.261687 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-21 20:07:26.261667228 +0000 UTC m=+51.447852292 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 20:07:10 crc kubenswrapper[4727]: E1121 20:07:10.261708 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-21 20:07:26.261698509 +0000 UTC m=+51.447883563 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 20:07:10 crc kubenswrapper[4727]: E1121 20:07:10.261730 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-21 20:07:26.26172106 +0000 UTC m=+51.447906114 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 21 20:07:10 crc kubenswrapper[4727]: E1121 20:07:10.261754 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-21 20:07:26.26174516 +0000 UTC m=+51.447930214 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 21 20:07:10 crc kubenswrapper[4727]: I1121 20:07:10.364452 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:10 crc kubenswrapper[4727]: I1121 20:07:10.364525 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:10 crc kubenswrapper[4727]: I1121 20:07:10.364548 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:10 crc kubenswrapper[4727]: I1121 20:07:10.364577 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:10 crc kubenswrapper[4727]: I1121 20:07:10.364599 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:10Z","lastTransitionTime":"2025-11-21T20:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:10 crc kubenswrapper[4727]: I1121 20:07:10.467886 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:10 crc kubenswrapper[4727]: I1121 20:07:10.468006 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:10 crc kubenswrapper[4727]: I1121 20:07:10.468031 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:10 crc kubenswrapper[4727]: I1121 20:07:10.468062 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:10 crc kubenswrapper[4727]: I1121 20:07:10.468083 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:10Z","lastTransitionTime":"2025-11-21T20:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:10 crc kubenswrapper[4727]: I1121 20:07:10.498543 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rs9rv" Nov 21 20:07:10 crc kubenswrapper[4727]: I1121 20:07:10.498702 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 20:07:10 crc kubenswrapper[4727]: I1121 20:07:10.498544 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 20:07:10 crc kubenswrapper[4727]: E1121 20:07:10.498754 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rs9rv" podUID="f8318f96-4402-4567-a432-6cf3897e218d" Nov 21 20:07:10 crc kubenswrapper[4727]: E1121 20:07:10.498879 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 20:07:10 crc kubenswrapper[4727]: E1121 20:07:10.499058 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 20:07:10 crc kubenswrapper[4727]: I1121 20:07:10.571315 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:10 crc kubenswrapper[4727]: I1121 20:07:10.571369 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:10 crc kubenswrapper[4727]: I1121 20:07:10.571385 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:10 crc kubenswrapper[4727]: I1121 20:07:10.571403 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:10 crc kubenswrapper[4727]: I1121 20:07:10.571418 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:10Z","lastTransitionTime":"2025-11-21T20:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:10 crc kubenswrapper[4727]: I1121 20:07:10.674532 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:10 crc kubenswrapper[4727]: I1121 20:07:10.674584 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:10 crc kubenswrapper[4727]: I1121 20:07:10.674606 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:10 crc kubenswrapper[4727]: I1121 20:07:10.674629 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:10 crc kubenswrapper[4727]: I1121 20:07:10.674646 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:10Z","lastTransitionTime":"2025-11-21T20:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:10 crc kubenswrapper[4727]: I1121 20:07:10.777130 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:10 crc kubenswrapper[4727]: I1121 20:07:10.777217 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:10 crc kubenswrapper[4727]: I1121 20:07:10.777249 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:10 crc kubenswrapper[4727]: I1121 20:07:10.777279 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:10 crc kubenswrapper[4727]: I1121 20:07:10.777299 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:10Z","lastTransitionTime":"2025-11-21T20:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:10 crc kubenswrapper[4727]: I1121 20:07:10.880325 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:10 crc kubenswrapper[4727]: I1121 20:07:10.880398 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:10 crc kubenswrapper[4727]: I1121 20:07:10.880417 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:10 crc kubenswrapper[4727]: I1121 20:07:10.880450 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:10 crc kubenswrapper[4727]: I1121 20:07:10.880469 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:10Z","lastTransitionTime":"2025-11-21T20:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:10 crc kubenswrapper[4727]: I1121 20:07:10.921059 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:10 crc kubenswrapper[4727]: I1121 20:07:10.921129 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:10 crc kubenswrapper[4727]: I1121 20:07:10.921148 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:10 crc kubenswrapper[4727]: I1121 20:07:10.921173 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:10 crc kubenswrapper[4727]: I1121 20:07:10.921193 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:10Z","lastTransitionTime":"2025-11-21T20:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:10 crc kubenswrapper[4727]: E1121 20:07:10.941665 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"32d79a59-978c-42a8-bb0b-33b3c3206f66\\\",\\\"systemUUID\\\":\\\"b99fd01f-0947-456f-ae40-db84d60b2190\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:10Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:10 crc kubenswrapper[4727]: I1121 20:07:10.952333 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:10 crc kubenswrapper[4727]: I1121 20:07:10.952391 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:10 crc kubenswrapper[4727]: I1121 20:07:10.952410 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:10 crc kubenswrapper[4727]: I1121 20:07:10.952434 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:10 crc kubenswrapper[4727]: I1121 20:07:10.952450 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:10Z","lastTransitionTime":"2025-11-21T20:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:10 crc kubenswrapper[4727]: E1121 20:07:10.973422 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"32d79a59-978c-42a8-bb0b-33b3c3206f66\\\",\\\"systemUUID\\\":\\\"b99fd01f-0947-456f-ae40-db84d60b2190\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:10Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:10 crc kubenswrapper[4727]: I1121 20:07:10.978897 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:10 crc kubenswrapper[4727]: I1121 20:07:10.978944 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:10 crc kubenswrapper[4727]: I1121 20:07:10.978994 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:10 crc kubenswrapper[4727]: I1121 20:07:10.979022 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:10 crc kubenswrapper[4727]: I1121 20:07:10.979040 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:10Z","lastTransitionTime":"2025-11-21T20:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:11 crc kubenswrapper[4727]: E1121 20:07:11.001642 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"32d79a59-978c-42a8-bb0b-33b3c3206f66\\\",\\\"systemUUID\\\":\\\"b99fd01f-0947-456f-ae40-db84d60b2190\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:10Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:11 crc kubenswrapper[4727]: I1121 20:07:11.006891 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:11 crc kubenswrapper[4727]: I1121 20:07:11.006954 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:11 crc kubenswrapper[4727]: I1121 20:07:11.007007 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:11 crc kubenswrapper[4727]: I1121 20:07:11.007032 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:11 crc kubenswrapper[4727]: I1121 20:07:11.007049 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:11Z","lastTransitionTime":"2025-11-21T20:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:11 crc kubenswrapper[4727]: E1121 20:07:11.029221 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"32d79a59-978c-42a8-bb0b-33b3c3206f66\\\",\\\"systemUUID\\\":\\\"b99fd01f-0947-456f-ae40-db84d60b2190\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:11Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:11 crc kubenswrapper[4727]: I1121 20:07:11.034474 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:11 crc kubenswrapper[4727]: I1121 20:07:11.034549 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:11 crc kubenswrapper[4727]: I1121 20:07:11.034572 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:11 crc kubenswrapper[4727]: I1121 20:07:11.034600 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:11 crc kubenswrapper[4727]: I1121 20:07:11.034617 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:11Z","lastTransitionTime":"2025-11-21T20:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:11 crc kubenswrapper[4727]: E1121 20:07:11.055775 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"32d79a59-978c-42a8-bb0b-33b3c3206f66\\\",\\\"systemUUID\\\":\\\"b99fd01f-0947-456f-ae40-db84d60b2190\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:11Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:11 crc kubenswrapper[4727]: E1121 20:07:11.056012 4727 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 21 20:07:11 crc kubenswrapper[4727]: I1121 20:07:11.058173 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:11 crc kubenswrapper[4727]: I1121 20:07:11.058227 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:11 crc kubenswrapper[4727]: I1121 20:07:11.058244 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:11 crc kubenswrapper[4727]: I1121 20:07:11.058267 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:11 crc kubenswrapper[4727]: I1121 20:07:11.058286 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:11Z","lastTransitionTime":"2025-11-21T20:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:11 crc kubenswrapper[4727]: I1121 20:07:11.161450 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:11 crc kubenswrapper[4727]: I1121 20:07:11.161545 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:11 crc kubenswrapper[4727]: I1121 20:07:11.161566 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:11 crc kubenswrapper[4727]: I1121 20:07:11.161817 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:11 crc kubenswrapper[4727]: I1121 20:07:11.161838 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:11Z","lastTransitionTime":"2025-11-21T20:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:11 crc kubenswrapper[4727]: I1121 20:07:11.265282 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:11 crc kubenswrapper[4727]: I1121 20:07:11.265348 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:11 crc kubenswrapper[4727]: I1121 20:07:11.265371 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:11 crc kubenswrapper[4727]: I1121 20:07:11.265402 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:11 crc kubenswrapper[4727]: I1121 20:07:11.265420 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:11Z","lastTransitionTime":"2025-11-21T20:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:11 crc kubenswrapper[4727]: I1121 20:07:11.368408 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:11 crc kubenswrapper[4727]: I1121 20:07:11.368472 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:11 crc kubenswrapper[4727]: I1121 20:07:11.368492 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:11 crc kubenswrapper[4727]: I1121 20:07:11.368516 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:11 crc kubenswrapper[4727]: I1121 20:07:11.368534 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:11Z","lastTransitionTime":"2025-11-21T20:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:11 crc kubenswrapper[4727]: I1121 20:07:11.471379 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:11 crc kubenswrapper[4727]: I1121 20:07:11.471441 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:11 crc kubenswrapper[4727]: I1121 20:07:11.471460 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:11 crc kubenswrapper[4727]: I1121 20:07:11.471485 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:11 crc kubenswrapper[4727]: I1121 20:07:11.471505 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:11Z","lastTransitionTime":"2025-11-21T20:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:11 crc kubenswrapper[4727]: I1121 20:07:11.498522 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 20:07:11 crc kubenswrapper[4727]: E1121 20:07:11.498719 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 20:07:11 crc kubenswrapper[4727]: I1121 20:07:11.575071 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:11 crc kubenswrapper[4727]: I1121 20:07:11.575145 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:11 crc kubenswrapper[4727]: I1121 20:07:11.575169 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:11 crc kubenswrapper[4727]: I1121 20:07:11.575252 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:11 crc kubenswrapper[4727]: I1121 20:07:11.575277 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:11Z","lastTransitionTime":"2025-11-21T20:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:11 crc kubenswrapper[4727]: I1121 20:07:11.679167 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:11 crc kubenswrapper[4727]: I1121 20:07:11.679276 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:11 crc kubenswrapper[4727]: I1121 20:07:11.679375 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:11 crc kubenswrapper[4727]: I1121 20:07:11.679400 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:11 crc kubenswrapper[4727]: I1121 20:07:11.679416 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:11Z","lastTransitionTime":"2025-11-21T20:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:11 crc kubenswrapper[4727]: I1121 20:07:11.782513 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:11 crc kubenswrapper[4727]: I1121 20:07:11.782584 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:11 crc kubenswrapper[4727]: I1121 20:07:11.782602 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:11 crc kubenswrapper[4727]: I1121 20:07:11.782631 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:11 crc kubenswrapper[4727]: I1121 20:07:11.782648 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:11Z","lastTransitionTime":"2025-11-21T20:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:11 crc kubenswrapper[4727]: I1121 20:07:11.885476 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:11 crc kubenswrapper[4727]: I1121 20:07:11.885610 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:11 crc kubenswrapper[4727]: I1121 20:07:11.885639 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:11 crc kubenswrapper[4727]: I1121 20:07:11.885747 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:11 crc kubenswrapper[4727]: I1121 20:07:11.886005 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:11Z","lastTransitionTime":"2025-11-21T20:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:11 crc kubenswrapper[4727]: I1121 20:07:11.989138 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:11 crc kubenswrapper[4727]: I1121 20:07:11.989178 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:11 crc kubenswrapper[4727]: I1121 20:07:11.989193 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:11 crc kubenswrapper[4727]: I1121 20:07:11.989227 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:11 crc kubenswrapper[4727]: I1121 20:07:11.989238 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:11Z","lastTransitionTime":"2025-11-21T20:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:12 crc kubenswrapper[4727]: I1121 20:07:12.092162 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:12 crc kubenswrapper[4727]: I1121 20:07:12.092233 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:12 crc kubenswrapper[4727]: I1121 20:07:12.092256 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:12 crc kubenswrapper[4727]: I1121 20:07:12.092281 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:12 crc kubenswrapper[4727]: I1121 20:07:12.092303 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:12Z","lastTransitionTime":"2025-11-21T20:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:12 crc kubenswrapper[4727]: I1121 20:07:12.194733 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:12 crc kubenswrapper[4727]: I1121 20:07:12.194798 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:12 crc kubenswrapper[4727]: I1121 20:07:12.194811 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:12 crc kubenswrapper[4727]: I1121 20:07:12.194827 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:12 crc kubenswrapper[4727]: I1121 20:07:12.194863 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:12Z","lastTransitionTime":"2025-11-21T20:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:12 crc kubenswrapper[4727]: I1121 20:07:12.286767 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f8318f96-4402-4567-a432-6cf3897e218d-metrics-certs\") pod \"network-metrics-daemon-rs9rv\" (UID: \"f8318f96-4402-4567-a432-6cf3897e218d\") " pod="openshift-multus/network-metrics-daemon-rs9rv" Nov 21 20:07:12 crc kubenswrapper[4727]: E1121 20:07:12.286990 4727 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 21 20:07:12 crc kubenswrapper[4727]: E1121 20:07:12.287105 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f8318f96-4402-4567-a432-6cf3897e218d-metrics-certs podName:f8318f96-4402-4567-a432-6cf3897e218d nodeName:}" failed. No retries permitted until 2025-11-21 20:07:16.287073252 +0000 UTC m=+41.473258356 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f8318f96-4402-4567-a432-6cf3897e218d-metrics-certs") pod "network-metrics-daemon-rs9rv" (UID: "f8318f96-4402-4567-a432-6cf3897e218d") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 21 20:07:12 crc kubenswrapper[4727]: I1121 20:07:12.298080 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:12 crc kubenswrapper[4727]: I1121 20:07:12.298152 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:12 crc kubenswrapper[4727]: I1121 20:07:12.298176 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:12 crc kubenswrapper[4727]: I1121 20:07:12.298207 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:12 crc kubenswrapper[4727]: I1121 20:07:12.298233 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:12Z","lastTransitionTime":"2025-11-21T20:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:12 crc kubenswrapper[4727]: I1121 20:07:12.401542 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:12 crc kubenswrapper[4727]: I1121 20:07:12.401584 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:12 crc kubenswrapper[4727]: I1121 20:07:12.401595 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:12 crc kubenswrapper[4727]: I1121 20:07:12.401611 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:12 crc kubenswrapper[4727]: I1121 20:07:12.401623 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:12Z","lastTransitionTime":"2025-11-21T20:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:12 crc kubenswrapper[4727]: I1121 20:07:12.498937 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rs9rv" Nov 21 20:07:12 crc kubenswrapper[4727]: I1121 20:07:12.498985 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 20:07:12 crc kubenswrapper[4727]: I1121 20:07:12.499039 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 20:07:12 crc kubenswrapper[4727]: E1121 20:07:12.499069 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rs9rv" podUID="f8318f96-4402-4567-a432-6cf3897e218d" Nov 21 20:07:12 crc kubenswrapper[4727]: E1121 20:07:12.499169 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 20:07:12 crc kubenswrapper[4727]: E1121 20:07:12.499779 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 20:07:12 crc kubenswrapper[4727]: I1121 20:07:12.504528 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:12 crc kubenswrapper[4727]: I1121 20:07:12.504639 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:12 crc kubenswrapper[4727]: I1121 20:07:12.504703 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:12 crc kubenswrapper[4727]: I1121 20:07:12.504743 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:12 crc kubenswrapper[4727]: I1121 20:07:12.504765 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:12Z","lastTransitionTime":"2025-11-21T20:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:12 crc kubenswrapper[4727]: I1121 20:07:12.606683 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:12 crc kubenswrapper[4727]: I1121 20:07:12.606715 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:12 crc kubenswrapper[4727]: I1121 20:07:12.606725 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:12 crc kubenswrapper[4727]: I1121 20:07:12.606739 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:12 crc kubenswrapper[4727]: I1121 20:07:12.606756 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:12Z","lastTransitionTime":"2025-11-21T20:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:12 crc kubenswrapper[4727]: I1121 20:07:12.709980 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:12 crc kubenswrapper[4727]: I1121 20:07:12.710028 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:12 crc kubenswrapper[4727]: I1121 20:07:12.710037 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:12 crc kubenswrapper[4727]: I1121 20:07:12.710050 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:12 crc kubenswrapper[4727]: I1121 20:07:12.710059 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:12Z","lastTransitionTime":"2025-11-21T20:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:12 crc kubenswrapper[4727]: I1121 20:07:12.811988 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:12 crc kubenswrapper[4727]: I1121 20:07:12.812031 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:12 crc kubenswrapper[4727]: I1121 20:07:12.812042 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:12 crc kubenswrapper[4727]: I1121 20:07:12.812056 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:12 crc kubenswrapper[4727]: I1121 20:07:12.812067 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:12Z","lastTransitionTime":"2025-11-21T20:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:12 crc kubenswrapper[4727]: I1121 20:07:12.914054 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:12 crc kubenswrapper[4727]: I1121 20:07:12.914090 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:12 crc kubenswrapper[4727]: I1121 20:07:12.914099 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:12 crc kubenswrapper[4727]: I1121 20:07:12.914112 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:12 crc kubenswrapper[4727]: I1121 20:07:12.914121 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:12Z","lastTransitionTime":"2025-11-21T20:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:13 crc kubenswrapper[4727]: I1121 20:07:13.016896 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:13 crc kubenswrapper[4727]: I1121 20:07:13.016949 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:13 crc kubenswrapper[4727]: I1121 20:07:13.016992 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:13 crc kubenswrapper[4727]: I1121 20:07:13.017013 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:13 crc kubenswrapper[4727]: I1121 20:07:13.017026 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:13Z","lastTransitionTime":"2025-11-21T20:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:13 crc kubenswrapper[4727]: I1121 20:07:13.119799 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:13 crc kubenswrapper[4727]: I1121 20:07:13.119847 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:13 crc kubenswrapper[4727]: I1121 20:07:13.119855 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:13 crc kubenswrapper[4727]: I1121 20:07:13.119869 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:13 crc kubenswrapper[4727]: I1121 20:07:13.119879 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:13Z","lastTransitionTime":"2025-11-21T20:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:13 crc kubenswrapper[4727]: I1121 20:07:13.223371 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:13 crc kubenswrapper[4727]: I1121 20:07:13.223449 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:13 crc kubenswrapper[4727]: I1121 20:07:13.223469 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:13 crc kubenswrapper[4727]: I1121 20:07:13.223497 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:13 crc kubenswrapper[4727]: I1121 20:07:13.223515 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:13Z","lastTransitionTime":"2025-11-21T20:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:13 crc kubenswrapper[4727]: I1121 20:07:13.326413 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:13 crc kubenswrapper[4727]: I1121 20:07:13.326483 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:13 crc kubenswrapper[4727]: I1121 20:07:13.326501 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:13 crc kubenswrapper[4727]: I1121 20:07:13.326528 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:13 crc kubenswrapper[4727]: I1121 20:07:13.326549 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:13Z","lastTransitionTime":"2025-11-21T20:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:13 crc kubenswrapper[4727]: I1121 20:07:13.429261 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:13 crc kubenswrapper[4727]: I1121 20:07:13.429309 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:13 crc kubenswrapper[4727]: I1121 20:07:13.429330 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:13 crc kubenswrapper[4727]: I1121 20:07:13.429376 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:13 crc kubenswrapper[4727]: I1121 20:07:13.429397 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:13Z","lastTransitionTime":"2025-11-21T20:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:13 crc kubenswrapper[4727]: I1121 20:07:13.499230 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 20:07:13 crc kubenswrapper[4727]: E1121 20:07:13.499436 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 20:07:13 crc kubenswrapper[4727]: I1121 20:07:13.532127 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:13 crc kubenswrapper[4727]: I1121 20:07:13.532211 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:13 crc kubenswrapper[4727]: I1121 20:07:13.532235 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:13 crc kubenswrapper[4727]: I1121 20:07:13.532264 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:13 crc kubenswrapper[4727]: I1121 20:07:13.532288 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:13Z","lastTransitionTime":"2025-11-21T20:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:13 crc kubenswrapper[4727]: I1121 20:07:13.635850 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:13 crc kubenswrapper[4727]: I1121 20:07:13.635913 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:13 crc kubenswrapper[4727]: I1121 20:07:13.635931 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:13 crc kubenswrapper[4727]: I1121 20:07:13.635954 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:13 crc kubenswrapper[4727]: I1121 20:07:13.636000 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:13Z","lastTransitionTime":"2025-11-21T20:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:13 crc kubenswrapper[4727]: I1121 20:07:13.739075 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:13 crc kubenswrapper[4727]: I1121 20:07:13.739135 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:13 crc kubenswrapper[4727]: I1121 20:07:13.739151 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:13 crc kubenswrapper[4727]: I1121 20:07:13.739175 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:13 crc kubenswrapper[4727]: I1121 20:07:13.739193 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:13Z","lastTransitionTime":"2025-11-21T20:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:13 crc kubenswrapper[4727]: I1121 20:07:13.841366 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:13 crc kubenswrapper[4727]: I1121 20:07:13.841407 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:13 crc kubenswrapper[4727]: I1121 20:07:13.841418 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:13 crc kubenswrapper[4727]: I1121 20:07:13.841433 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:13 crc kubenswrapper[4727]: I1121 20:07:13.841444 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:13Z","lastTransitionTime":"2025-11-21T20:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:13 crc kubenswrapper[4727]: I1121 20:07:13.943411 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:13 crc kubenswrapper[4727]: I1121 20:07:13.943453 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:13 crc kubenswrapper[4727]: I1121 20:07:13.943464 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:13 crc kubenswrapper[4727]: I1121 20:07:13.943481 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:13 crc kubenswrapper[4727]: I1121 20:07:13.943493 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:13Z","lastTransitionTime":"2025-11-21T20:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:14 crc kubenswrapper[4727]: I1121 20:07:14.045541 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:14 crc kubenswrapper[4727]: I1121 20:07:14.045610 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:14 crc kubenswrapper[4727]: I1121 20:07:14.045634 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:14 crc kubenswrapper[4727]: I1121 20:07:14.045665 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:14 crc kubenswrapper[4727]: I1121 20:07:14.045691 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:14Z","lastTransitionTime":"2025-11-21T20:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:14 crc kubenswrapper[4727]: I1121 20:07:14.148840 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:14 crc kubenswrapper[4727]: I1121 20:07:14.148892 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:14 crc kubenswrapper[4727]: I1121 20:07:14.148908 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:14 crc kubenswrapper[4727]: I1121 20:07:14.148933 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:14 crc kubenswrapper[4727]: I1121 20:07:14.148950 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:14Z","lastTransitionTime":"2025-11-21T20:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:14 crc kubenswrapper[4727]: I1121 20:07:14.251224 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:14 crc kubenswrapper[4727]: I1121 20:07:14.251295 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:14 crc kubenswrapper[4727]: I1121 20:07:14.251313 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:14 crc kubenswrapper[4727]: I1121 20:07:14.251340 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:14 crc kubenswrapper[4727]: I1121 20:07:14.251358 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:14Z","lastTransitionTime":"2025-11-21T20:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:14 crc kubenswrapper[4727]: I1121 20:07:14.353694 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:14 crc kubenswrapper[4727]: I1121 20:07:14.353759 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:14 crc kubenswrapper[4727]: I1121 20:07:14.353778 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:14 crc kubenswrapper[4727]: I1121 20:07:14.353803 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:14 crc kubenswrapper[4727]: I1121 20:07:14.353820 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:14Z","lastTransitionTime":"2025-11-21T20:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:14 crc kubenswrapper[4727]: I1121 20:07:14.457130 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:14 crc kubenswrapper[4727]: I1121 20:07:14.457216 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:14 crc kubenswrapper[4727]: I1121 20:07:14.457241 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:14 crc kubenswrapper[4727]: I1121 20:07:14.457268 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:14 crc kubenswrapper[4727]: I1121 20:07:14.457289 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:14Z","lastTransitionTime":"2025-11-21T20:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:14 crc kubenswrapper[4727]: I1121 20:07:14.498828 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rs9rv" Nov 21 20:07:14 crc kubenswrapper[4727]: I1121 20:07:14.498870 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 20:07:14 crc kubenswrapper[4727]: I1121 20:07:14.498858 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 20:07:14 crc kubenswrapper[4727]: E1121 20:07:14.499064 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rs9rv" podUID="f8318f96-4402-4567-a432-6cf3897e218d" Nov 21 20:07:14 crc kubenswrapper[4727]: E1121 20:07:14.499157 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 20:07:14 crc kubenswrapper[4727]: E1121 20:07:14.499211 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 20:07:14 crc kubenswrapper[4727]: I1121 20:07:14.559344 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:14 crc kubenswrapper[4727]: I1121 20:07:14.559416 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:14 crc kubenswrapper[4727]: I1121 20:07:14.559439 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:14 crc kubenswrapper[4727]: I1121 20:07:14.559468 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:14 crc kubenswrapper[4727]: I1121 20:07:14.559491 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:14Z","lastTransitionTime":"2025-11-21T20:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:14 crc kubenswrapper[4727]: I1121 20:07:14.661935 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:14 crc kubenswrapper[4727]: I1121 20:07:14.662294 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:14 crc kubenswrapper[4727]: I1121 20:07:14.662430 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:14 crc kubenswrapper[4727]: I1121 20:07:14.662551 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:14 crc kubenswrapper[4727]: I1121 20:07:14.662672 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:14Z","lastTransitionTime":"2025-11-21T20:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:14 crc kubenswrapper[4727]: I1121 20:07:14.765895 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:14 crc kubenswrapper[4727]: I1121 20:07:14.765994 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:14 crc kubenswrapper[4727]: I1121 20:07:14.766014 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:14 crc kubenswrapper[4727]: I1121 20:07:14.766039 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:14 crc kubenswrapper[4727]: I1121 20:07:14.766056 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:14Z","lastTransitionTime":"2025-11-21T20:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:14 crc kubenswrapper[4727]: I1121 20:07:14.868083 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:14 crc kubenswrapper[4727]: I1121 20:07:14.868131 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:14 crc kubenswrapper[4727]: I1121 20:07:14.868145 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:14 crc kubenswrapper[4727]: I1121 20:07:14.868165 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:14 crc kubenswrapper[4727]: I1121 20:07:14.868180 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:14Z","lastTransitionTime":"2025-11-21T20:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:14 crc kubenswrapper[4727]: I1121 20:07:14.972352 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:14 crc kubenswrapper[4727]: I1121 20:07:14.972415 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:14 crc kubenswrapper[4727]: I1121 20:07:14.972438 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:14 crc kubenswrapper[4727]: I1121 20:07:14.972469 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:14 crc kubenswrapper[4727]: I1121 20:07:14.972493 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:14Z","lastTransitionTime":"2025-11-21T20:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:15 crc kubenswrapper[4727]: I1121 20:07:15.076393 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:15 crc kubenswrapper[4727]: I1121 20:07:15.076442 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:15 crc kubenswrapper[4727]: I1121 20:07:15.076457 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:15 crc kubenswrapper[4727]: I1121 20:07:15.076477 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:15 crc kubenswrapper[4727]: I1121 20:07:15.076495 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:15Z","lastTransitionTime":"2025-11-21T20:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:15 crc kubenswrapper[4727]: I1121 20:07:15.179137 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:15 crc kubenswrapper[4727]: I1121 20:07:15.179177 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:15 crc kubenswrapper[4727]: I1121 20:07:15.179187 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:15 crc kubenswrapper[4727]: I1121 20:07:15.179204 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:15 crc kubenswrapper[4727]: I1121 20:07:15.179215 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:15Z","lastTransitionTime":"2025-11-21T20:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:15 crc kubenswrapper[4727]: I1121 20:07:15.282001 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:15 crc kubenswrapper[4727]: I1121 20:07:15.282110 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:15 crc kubenswrapper[4727]: I1121 20:07:15.282129 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:15 crc kubenswrapper[4727]: I1121 20:07:15.282149 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:15 crc kubenswrapper[4727]: I1121 20:07:15.282162 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:15Z","lastTransitionTime":"2025-11-21T20:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:15 crc kubenswrapper[4727]: I1121 20:07:15.385192 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:15 crc kubenswrapper[4727]: I1121 20:07:15.385228 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:15 crc kubenswrapper[4727]: I1121 20:07:15.385235 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:15 crc kubenswrapper[4727]: I1121 20:07:15.385248 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:15 crc kubenswrapper[4727]: I1121 20:07:15.385257 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:15Z","lastTransitionTime":"2025-11-21T20:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:15 crc kubenswrapper[4727]: I1121 20:07:15.487307 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:15 crc kubenswrapper[4727]: I1121 20:07:15.487370 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:15 crc kubenswrapper[4727]: I1121 20:07:15.487380 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:15 crc kubenswrapper[4727]: I1121 20:07:15.487393 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:15 crc kubenswrapper[4727]: I1121 20:07:15.487401 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:15Z","lastTransitionTime":"2025-11-21T20:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:15 crc kubenswrapper[4727]: I1121 20:07:15.498611 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 20:07:15 crc kubenswrapper[4727]: E1121 20:07:15.498724 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 20:07:15 crc kubenswrapper[4727]: I1121 20:07:15.515029 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3ca80-2251-484d-9025-61aed82e16d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://539bc94dd8a3fd65aeae82170fe889bc42c364cfcf52005760432629911dfa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-cert
s\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bd3db39641919d1b93ecc771bce800100e7a8b6200d6fc606dbdc50179b8255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08a09f48855f360ffe892f48d4efa156a6774a0be33504e0d664f6a90f1144d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7c202de2c7a6ec653f0c7f816dcbf64fbf6a5c9a779d29403a29a1003ad201\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\
\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:15Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:15 crc kubenswrapper[4727]: I1121 20:07:15.529357 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:15Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:15 crc kubenswrapper[4727]: I1121 20:07:15.540016 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rs9rv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8318f96-4402-4567-a432-6cf3897e218d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8bcnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8bcnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rs9rv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:15Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:15 crc 
kubenswrapper[4727]: I1121 20:07:15.552519 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-74crp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db8a4e70-c074-4f90-aebe-444078f3337f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d8ec54f8edeaa80fb39b26f0cd9c0d4b2456c149245b1c6c1585bf28b08ec13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a2b3626be8b73b407cdeae484730c70f4caa2e66e0a815cb8c047f667254092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2b3626be8b73b407cdeae484730c70f4caa2e66e0a815cb8c047f667254092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4360080de9d788a9ac8bad5b4c679aef3367c8c4e06fd00a28ff4b23406cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d4360080de9d788a9ac8bad5b4c6
79aef3367c8c4e06fd00a28ff4b23406cc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68b04ba74eb7a875559096df2051e2d99535444795021756d6072246fdf21ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68b04ba74eb7a875559096df2051e2d99535444795021756d6072246fdf21ee1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4be59a65e9ffdf24c7d6dcaf699e0c86095ab2e7156771e4cca86c3fb264ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab4be59a65e9ffdf24c7d6dcaf699e0c86095ab2e7156771e4cca86c3fb264ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465a44322e7e3ff27479dabcdd1a2ba9102e10c73b75d51d0639b4a8ebeda54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://465a44322e7e3ff27479dabcdd1a2ba9102e10c73b75d51d0639b4a8ebeda54e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea5b2f1f6cf3b490899d65086f99f448f2bdbfd859fa5b283af3fb108479902e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea5b2f1f6cf3b490899d65086f99f448f2bdbfd859fa5b283af3fb108479902e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-74crp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:15Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:15 crc kubenswrapper[4727]: I1121 20:07:15.579770 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fae7d81bbd4cc7c05885d66b30b4a4f5422c74aecc5439700c1fe7eb3c28a4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://060f5b533d83fc7cf02d24bf7b12e2a4972ce1d7ed4e62c4c7788467ae2229c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f04b4d4857a8309f30f22774c8a2faeb752998b094d70898a986e04038771b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f02a4cae48dc4ac9d7da3d367fe5137fb012ee8a28f45c9cf4af098f1d93ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf216dac56f09909db88c9ea0ef9d0d4fb8bc2e63a02fd5631c7d4369080f88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1909b9178d7fb508200d34ad33f7d2c2aa2ae9756b15c17fd7c49bb019cc65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f79346c51ffca5d379c209b668becf14adfd042f222d7ec2c0cd80ac2755720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f79346c51ffca5d379c209b668becf14adfd042f222d7ec2c0cd80ac2755720\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T20:07:05Z\\\",\\\"message\\\":\\\")\\\\nI1121 20:07:05.650928 6127 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI1121 20:07:05.650932 6127 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1121 20:07:05.650760 6127 transact.go:42] Configuring 
OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-operator/metrics]} name:Service_openshift-ingress-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.244:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d8772e82-b0a4-4596-87d3-3d517c13344b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1121 20:07:05.650938 6127 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nF1121 20:07:05.650939 6127 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initializa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T20:07:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tfd4j_openshift-ovn-kubernetes(70d2ca13-a8f7-43dc-8ad0-142d99ccde18)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68979c9b5ab69852e6cc4293ef0e77454dcbb7fdaeb61b7b6af5c1a3ac101e47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5fa8be5e6ea2c20dd597fb7ef2fdd3355045b310d1a99bb1a89a8207e505577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5fa8be5e6ea2c20dd
597fb7ef2fdd3355045b310d1a99bb1a89a8207e505577\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tfd4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:15Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:15 crc kubenswrapper[4727]: I1121 20:07:15.589067 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7444p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75a43503-e538-4964-9789-322839cc4c48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d61ca28ee5e9b8e1b56afa45b32036c825b6299dc855cb7ce38dde4649c2371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9w57k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7444p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:15Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:15 crc kubenswrapper[4727]: I1121 20:07:15.589214 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:15 crc kubenswrapper[4727]: I1121 20:07:15.589236 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:15 crc kubenswrapper[4727]: I1121 20:07:15.589244 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:15 crc kubenswrapper[4727]: I1121 20:07:15.589290 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:15 crc kubenswrapper[4727]: I1121 20:07:15.589301 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:15Z","lastTransitionTime":"2025-11-21T20:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:15 crc kubenswrapper[4727]: I1121 20:07:15.598871 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-drnwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c073d13-aa4f-401c-9684-36980fe94cb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c68719b36c0e9c62dcfa24171d18f9ad5a578884b8ca0da8b5827d6656e890c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hssmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec82846684441a347ddef8d336b04c6a776dbe2a5ed7894b813378790e12f7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hssmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:07:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-drnwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:15Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:15 crc kubenswrapper[4727]: I1121 20:07:15.609341 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffd614574e076bedcf841406bd5accc4387a8f2dbc55cbfbb5a1064e69a861d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:15Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:15 crc kubenswrapper[4727]: I1121 20:07:15.621891 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b974a1dc77049e8c632b0acbe8dedf6d3f391811208d0d1084c75e4fdb0908dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:15Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:15 crc kubenswrapper[4727]: I1121 20:07:15.630559 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ccfbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bffa327-bdf2-45d4-93ab-40152e82d177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18e9f8fd8f35e2f7821de2d9f9040dbc5160d8f114d1a54d8e0a0dabab520d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},
\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ccfbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:15Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:15 crc kubenswrapper[4727]: I1121 20:07:15.641000 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a97e4b6438d588dce44178e2e245d1fc4fa954f6cd02b230ca5c34e3d32294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c90d2d9f22924396549d0d1263f8cb07b41a77f23f7bb48aa9749caabff6147\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:15Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:15 crc kubenswrapper[4727]: I1121 20:07:15.651253 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:15Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:15 crc kubenswrapper[4727]: I1121 20:07:15.660868 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:15Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:15 crc kubenswrapper[4727]: I1121 20:07:15.670457 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b58aef8f-f223-47d8-a2e6-4a80aeeeec42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db38e6270d9dcedf99c669fa60a075a25d7fd9bb753702551fe5f4610ecaf815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443c4ad2ecea2e9d360b9300cb0c384fb5afc27e
6ac44394f985adbeab354a9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5k2kk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:15Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:15 crc kubenswrapper[4727]: I1121 20:07:15.681849 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7rvdc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07dba644-eb6f-45c3-b373-7a1610c569aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fc26f04e78b8405547a4fa225524347eabce94dfa5b4bf2448db66e36aaf006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7rvdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:15Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:15 crc kubenswrapper[4727]: I1121 20:07:15.690701 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:15 crc 
kubenswrapper[4727]: I1121 20:07:15.690727 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:15 crc kubenswrapper[4727]: I1121 20:07:15.690735 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:15 crc kubenswrapper[4727]: I1121 20:07:15.690767 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:15 crc kubenswrapper[4727]: I1121 20:07:15.690778 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:15Z","lastTransitionTime":"2025-11-21T20:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:15 crc kubenswrapper[4727]: I1121 20:07:15.698712 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a1f6ee3-6ebf-4422-91ed-105ccb19af8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de2d5f0603d9b7d189293750a83d5a3957e98f0390659f8c88d35ae1cae6b18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cde1196a3cb3b4d0acbb70ac333211099ec6f26a975e2de7c07d059983024368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1324bc3b6958e18a7c496fdb513726b921435f35dde2064fcaf9a6fca06eb8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://395c324d3a95c173c7dd0c9206831fe07c220fb6ef12f7b25444a120c5183202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6021787d6acdc5734a82ef2ed53add13275c3461c1ffd38219c264e7fca3fab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://363e33e0f6686c6a97437b91ff8d5be211717d527dac5d1bfad6613849c1a8fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://363e33e0f6686c6a97437b91ff8d5be211717d527dac5d1bfad6613849c1a8fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d2632c704cf56d6b416cdcc6213e3b60836d12cfae28af4bd06f99dff56cd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16d2632c704cf56d6b416cdcc6213e3b60836d12cfae28af4bd06f99dff56cd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91dcb1c5c16b4720f4c812b25a87ebd8a0f3a334cc714233b6852cd39322fbf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91dcb1c5c16b4720f4c812b25a87ebd8a0f3a334cc714233b6852cd39322fbf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:15Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:15 crc kubenswrapper[4727]: I1121 20:07:15.711053 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"080ce4fe-3e82-4e15-9340-4fa88a29da04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27c37b29dda7ae2cc145e0770879e433aa03e277121be3b7acfe22d41d27801f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cc2199f65adaa48cd1726ea812b332e4f40291e094ccfc215343bdb2f2bd2f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://1f3b9bf6cf2709d4559bf6ce2ddeed47f8ebacd7878509b0190ed24b2e55380e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd16de1da3c89b579d33c34b4dd24d8ae9df35e17819135165afc45db1297c61\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd16de1da3c89b579d33c34b4dd24d8ae9df35e17819135165afc45db1297c61\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"file observer\\\\nW1121 20:06:54.183363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 20:06:54.183552 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 20:06:54.184374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-552169000/tls.crt::/tmp/serving-cert-552169000/tls.key\\\\\\\"\\\\nI1121 20:06:54.597449 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 20:06:54.600431 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 20:06:54.600448 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 20:06:54.600471 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 20:06:54.600476 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 20:06:54.608444 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 20:06:54.608470 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 20:06:54.608475 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 20:06:54.608479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 20:06:54.608482 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 20:06:54.608484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 20:06:54.608487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 20:06:54.608489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 20:06:54.610022 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ceae36169c61dcc484be2881c9562b4024866925aab81778ab9e171413f119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fc11f996bd226ec44a8afe092a0939a489f4fc8904c1532586d9e36b69817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91fc11f996bd226ec44a8afe092a0939a489f4fc8904c1532586d9e36b69817\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:15Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:15 crc kubenswrapper[4727]: I1121 20:07:15.792237 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:15 crc kubenswrapper[4727]: I1121 20:07:15.792276 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:15 crc kubenswrapper[4727]: I1121 20:07:15.792288 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:15 crc kubenswrapper[4727]: I1121 20:07:15.792304 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:15 crc kubenswrapper[4727]: I1121 20:07:15.792315 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:15Z","lastTransitionTime":"2025-11-21T20:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:15 crc kubenswrapper[4727]: I1121 20:07:15.894855 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:15 crc kubenswrapper[4727]: I1121 20:07:15.894894 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:15 crc kubenswrapper[4727]: I1121 20:07:15.894903 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:15 crc kubenswrapper[4727]: I1121 20:07:15.894917 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:15 crc kubenswrapper[4727]: I1121 20:07:15.894927 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:15Z","lastTransitionTime":"2025-11-21T20:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:15 crc kubenswrapper[4727]: I1121 20:07:15.998201 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:15 crc kubenswrapper[4727]: I1121 20:07:15.998237 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:15 crc kubenswrapper[4727]: I1121 20:07:15.998245 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:15 crc kubenswrapper[4727]: I1121 20:07:15.998260 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:15 crc kubenswrapper[4727]: I1121 20:07:15.998271 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:15Z","lastTransitionTime":"2025-11-21T20:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:16 crc kubenswrapper[4727]: I1121 20:07:16.100238 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:16 crc kubenswrapper[4727]: I1121 20:07:16.100271 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:16 crc kubenswrapper[4727]: I1121 20:07:16.100279 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:16 crc kubenswrapper[4727]: I1121 20:07:16.100291 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:16 crc kubenswrapper[4727]: I1121 20:07:16.100299 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:16Z","lastTransitionTime":"2025-11-21T20:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:16 crc kubenswrapper[4727]: I1121 20:07:16.203008 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:16 crc kubenswrapper[4727]: I1121 20:07:16.203056 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:16 crc kubenswrapper[4727]: I1121 20:07:16.203069 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:16 crc kubenswrapper[4727]: I1121 20:07:16.203086 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:16 crc kubenswrapper[4727]: I1121 20:07:16.203101 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:16Z","lastTransitionTime":"2025-11-21T20:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:16 crc kubenswrapper[4727]: I1121 20:07:16.306165 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:16 crc kubenswrapper[4727]: I1121 20:07:16.306206 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:16 crc kubenswrapper[4727]: I1121 20:07:16.306221 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:16 crc kubenswrapper[4727]: I1121 20:07:16.306242 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:16 crc kubenswrapper[4727]: I1121 20:07:16.306259 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:16Z","lastTransitionTime":"2025-11-21T20:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:16 crc kubenswrapper[4727]: I1121 20:07:16.326819 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f8318f96-4402-4567-a432-6cf3897e218d-metrics-certs\") pod \"network-metrics-daemon-rs9rv\" (UID: \"f8318f96-4402-4567-a432-6cf3897e218d\") " pod="openshift-multus/network-metrics-daemon-rs9rv" Nov 21 20:07:16 crc kubenswrapper[4727]: E1121 20:07:16.326942 4727 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 21 20:07:16 crc kubenswrapper[4727]: E1121 20:07:16.327182 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f8318f96-4402-4567-a432-6cf3897e218d-metrics-certs podName:f8318f96-4402-4567-a432-6cf3897e218d nodeName:}" failed. No retries permitted until 2025-11-21 20:07:24.327162669 +0000 UTC m=+49.513347713 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f8318f96-4402-4567-a432-6cf3897e218d-metrics-certs") pod "network-metrics-daemon-rs9rv" (UID: "f8318f96-4402-4567-a432-6cf3897e218d") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 21 20:07:16 crc kubenswrapper[4727]: I1121 20:07:16.408425 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:16 crc kubenswrapper[4727]: I1121 20:07:16.408458 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:16 crc kubenswrapper[4727]: I1121 20:07:16.408467 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:16 crc kubenswrapper[4727]: I1121 20:07:16.408479 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:16 crc kubenswrapper[4727]: I1121 20:07:16.408488 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:16Z","lastTransitionTime":"2025-11-21T20:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:16 crc kubenswrapper[4727]: I1121 20:07:16.498768 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 20:07:16 crc kubenswrapper[4727]: I1121 20:07:16.498796 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 20:07:16 crc kubenswrapper[4727]: I1121 20:07:16.498768 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rs9rv" Nov 21 20:07:16 crc kubenswrapper[4727]: E1121 20:07:16.499049 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 20:07:16 crc kubenswrapper[4727]: E1121 20:07:16.499261 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 20:07:16 crc kubenswrapper[4727]: I1121 20:07:16.499299 4727 scope.go:117] "RemoveContainer" containerID="dd16de1da3c89b579d33c34b4dd24d8ae9df35e17819135165afc45db1297c61" Nov 21 20:07:16 crc kubenswrapper[4727]: E1121 20:07:16.499370 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rs9rv" podUID="f8318f96-4402-4567-a432-6cf3897e218d" Nov 21 20:07:16 crc kubenswrapper[4727]: I1121 20:07:16.510337 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:16 crc kubenswrapper[4727]: I1121 20:07:16.510375 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:16 crc kubenswrapper[4727]: I1121 20:07:16.510388 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:16 crc kubenswrapper[4727]: I1121 20:07:16.510404 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:16 crc kubenswrapper[4727]: I1121 20:07:16.510418 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:16Z","lastTransitionTime":"2025-11-21T20:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:16 crc kubenswrapper[4727]: I1121 20:07:16.612861 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:16 crc kubenswrapper[4727]: I1121 20:07:16.612896 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:16 crc kubenswrapper[4727]: I1121 20:07:16.612903 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:16 crc kubenswrapper[4727]: I1121 20:07:16.612916 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:16 crc kubenswrapper[4727]: I1121 20:07:16.612924 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:16Z","lastTransitionTime":"2025-11-21T20:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:16 crc kubenswrapper[4727]: I1121 20:07:16.716282 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:16 crc kubenswrapper[4727]: I1121 20:07:16.716322 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:16 crc kubenswrapper[4727]: I1121 20:07:16.716332 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:16 crc kubenswrapper[4727]: I1121 20:07:16.716346 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:16 crc kubenswrapper[4727]: I1121 20:07:16.716357 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:16Z","lastTransitionTime":"2025-11-21T20:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:16 crc kubenswrapper[4727]: I1121 20:07:16.735871 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Nov 21 20:07:16 crc kubenswrapper[4727]: I1121 20:07:16.738041 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c6ac7a73d056ceeae89f31eed787e230193e82bad0a2c762fa35540edfaeee6e"} Nov 21 20:07:16 crc kubenswrapper[4727]: I1121 20:07:16.738287 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 21 20:07:16 crc kubenswrapper[4727]: I1121 20:07:16.750439 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rs9rv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8318f96-4402-4567-a432-6cf3897e218d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8bcnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8bcnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rs9rv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:16Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:16 crc kubenswrapper[4727]: I1121 20:07:16.762325 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-drnwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c073d13-aa4f-401c-9684-36980fe94cb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c68719b36c0e9c62dcfa24171d18f9ad5a578884b8ca0da8b5827d6656e890c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2025-11-21T20:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hssmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec82846684441a347ddef8d336b04c6a776dbe2a5ed7894b813378790e12f7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hssmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:07:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-drnwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-21T20:07:16Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:16 crc kubenswrapper[4727]: I1121 20:07:16.776704 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffd614574e076bedcf841406bd5accc4387a8f2dbc55cbfbb5a1064e69a861d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:16Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:16 crc kubenswrapper[4727]: I1121 20:07:16.791098 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b974a1dc77049e8c632b0acbe8dedf6d3f391811208d0d1084c75e4fdb0908dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:16Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:16 crc kubenswrapper[4727]: I1121 20:07:16.803645 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ccfbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bffa327-bdf2-45d4-93ab-40152e82d177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18e9f8fd8f35e2f7821de2d9f9040dbc5160d8f114d1a54d8e0a0dabab520d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ccfbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:16Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:16 crc kubenswrapper[4727]: I1121 20:07:16.818327 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:16 crc kubenswrapper[4727]: I1121 20:07:16.818357 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:16 crc kubenswrapper[4727]: I1121 20:07:16.818366 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:16 crc kubenswrapper[4727]: I1121 20:07:16.818384 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Nov 21 20:07:16 crc kubenswrapper[4727]: I1121 20:07:16.818393 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:16Z","lastTransitionTime":"2025-11-21T20:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:16 crc kubenswrapper[4727]: I1121 20:07:16.821246 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-74crp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db8a4e70-c074-4f90-aebe-444078f3337f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d8ec54f8edeaa80fb39b26f0cd9c0d4b2456c149245b1c6c1585bf28b08ec13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a2b3626be8b73b407cdeae484730c70f4caa2e66e0a815cb8c047f667254092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2b3626be8b73b407cdeae484730c70f4caa2e66e0a815cb8c047f667254092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4360080de9d788a9ac8b
ad5b4c679aef3367c8c4e06fd00a28ff4b23406cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d4360080de9d788a9ac8bad5b4c679aef3367c8c4e06fd00a28ff4b23406cc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68b04ba74eb7a875559096df2051e2d99535444795021756d6072246fdf21ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68b04ba74eb7a875559096
df2051e2d99535444795021756d6072246fdf21ee1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4be59a65e9ffdf24c7d6dcaf699e0c86095ab2e7156771e4cca86c3fb264ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab4be59a65e9ffdf24c7d6dcaf699e0c86095ab2e7156771e4cca86c3fb264ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\
\"containerID\\\":\\\"cri-o://465a44322e7e3ff27479dabcdd1a2ba9102e10c73b75d51d0639b4a8ebeda54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://465a44322e7e3ff27479dabcdd1a2ba9102e10c73b75d51d0639b4a8ebeda54e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea5b2f1f6cf3b490899d65086f99f448f2bdbfd859fa5b283af3fb108479902e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea5b2f1f6cf3b490899d65086f99f448f2bdbfd859fa5b283af3fb108479902e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20
:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-74crp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:16Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:16 crc kubenswrapper[4727]: I1121 20:07:16.843298 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fae7d81bbd4cc7c05885d66b30b4a4f5422c74aecc5439700c1fe7eb3c28a4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://060f5b533d83fc7cf02d24bf7b12e2a4972ce1d7ed4e62c4c7788467ae2229c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state
\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f04b4d4857a8309f30f22774c8a2faeb752998b094d70898a986e04038771b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f02a4cae48dc4ac9d7da3d367fe5137fb012ee8a28f45c9cf4af098f1d93ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d
2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf216dac56f09909db88c9ea0ef9d0d4fb8bc2e63a02fd5631c7d4369080f88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1909b9178d7fb508200d34ad33f7d2c2aa2ae9756b15c17fd7c49bb019cc65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f79346c51ffca5d379c209b668becf14adfd042f222d7ec2c0cd80ac2755720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f79346c51ffca5d379c209b668becf14adfd042f222d7ec2c0cd80ac2755720\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T20:07:05Z\\\",\\\"message\\\":\\\")\\\\nI1121 20:07:05.650928 6127 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-network-operator/iptables-alerter-4ln5h\\\\nI1121 20:07:05.650932 6127 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1121 20:07:05.650760 6127 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-operator/metrics]} name:Service_openshift-ingress-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.244:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d8772e82-b0a4-4596-87d3-3d517c13344b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1121 20:07:05.650938 6127 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nF1121 20:07:05.650939 6127 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initializa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T20:07:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tfd4j_openshift-ovn-kubernetes(70d2ca13-a8f7-43dc-8ad0-142d99ccde18)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68979c9b5ab69852e6cc4293ef0e77454dcbb7fdaeb61b7b6af5c1a3ac101e47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5fa8be5e6ea2c20dd597fb7ef2fdd3355045b310d1a99bb1a89a8207e505577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5fa8be5e6ea2c20dd
597fb7ef2fdd3355045b310d1a99bb1a89a8207e505577\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tfd4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:16Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:16 crc kubenswrapper[4727]: I1121 20:07:16.854612 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7444p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75a43503-e538-4964-9789-322839cc4c48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d61ca28ee5e9b8e1b56afa45b32036c825b6299dc855cb7ce38dde4649c2371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9w57k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7444p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:16Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:16 crc kubenswrapper[4727]: I1121 20:07:16.867372 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b58aef8f-f223-47d8-a2e6-4a80aeeeec42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db38e6270d9dcedf99c669fa60a075a25d7fd9bb753702551fe5f4610ecaf815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443c4ad2ecea2e9d360b9300cb0c384fb5afc27e6ac44394f985adbeab354a9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5k2kk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:16Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:16 crc kubenswrapper[4727]: I1121 20:07:16.880991 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7rvdc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07dba644-eb6f-45c3-b373-7a1610c569aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fc26f04e78b8405547a4fa225524347eabce94dfa5b4bf2448db66e36aaf006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypo
int\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7rvdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:16Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:16 crc kubenswrapper[4727]: I1121 20:07:16.900807 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a1f6ee3-6ebf-4422-91ed-105ccb19af8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de2d5f0603d9b7d189293750a83d5a3957e98f0390659f8c88d35ae1cae6b18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cde1196a3cb3b4d0acbb70ac333211099ec6f26a975e2de7c07d059983024368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1324bc3b6958e18a7c496fdb513726b921435f35dde2064fcaf9a6fca06eb8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://395c324d3a95c173c7dd0c9206831fe07c220fb6ef12f7b25444a120c5183202\\\",\\\"image\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6021787d6acdc5734a82ef2ed53add13275c3461c1ffd38219c264e7fca3fab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://363e33e0f6686c6a97437b91ff8d5be211717d527dac5d1bfad6613849c1a8fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://363e33e0f6686c6a97437b91ff8d5be211717d527dac5d1bfad6613849c1a8fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d2632c704cf56d6b416cdcc6213e3b60836d12cfae28af4bd06f99dff56cd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16d2632c704cf56d6b416cdcc6213e3b60836d12cfae28af4bd06f99dff56cd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91dcb1c5c16b4720f4c812b25a87ebd8a0f3a334cc714233b6852cd39322fbf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91dcb1c5c16b4720f4c8
12b25a87ebd8a0f3a334cc714233b6852cd39322fbf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:16Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:16 crc kubenswrapper[4727]: I1121 20:07:16.916490 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"080ce4fe-3e82-4e15-9340-4fa88a29da04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27c37b29dda7ae2cc145e0770879e433aa03e277121be3b7acfe22d41d27801f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cc2199f65adaa48cd1726ea812b332e4f40291e094ccfc215343bdb2f2bd2f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f3b9bf6cf2709d4559bf6ce2ddeed47f8ebacd7878509b0190ed24b2e55380e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6ac7a73d056ceeae89f31eed787e230193e82bad0a2c762fa35540edfaeee6e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd16de1da3c89b579d33c34b4dd24d8ae9df35e17819135165afc45db1297c61\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"file observer\\\\nW1121 20:06:54.183363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 20:06:54.183552 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 20:06:54.184374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-552169000/tls.crt::/tmp/serving-cert-552169000/tls.key\\\\\\\"\\\\nI1121 20:06:54.597449 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 20:06:54.600431 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 20:06:54.600448 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 20:06:54.600471 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 20:06:54.600476 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 20:06:54.608444 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 20:06:54.608470 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 20:06:54.608475 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 20:06:54.608479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 20:06:54.608482 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 20:06:54.608484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 20:06:54.608487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 20:06:54.608489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 20:06:54.610022 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ceae36169c61dcc484be2881c9562b4024866925aab81778ab9e171413f119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fc11f996bd226ec44a8afe092a0939a489f4fc8904c1532586d9e36b69817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91fc11f996bd226ec44a8afe092a0939a489f4fc8904c1532586d9e36b69817\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:16Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:16 crc kubenswrapper[4727]: I1121 20:07:16.919809 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:16 crc kubenswrapper[4727]: I1121 20:07:16.919849 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:16 crc kubenswrapper[4727]: I1121 20:07:16.919861 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:16 crc kubenswrapper[4727]: I1121 20:07:16.919877 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:16 crc kubenswrapper[4727]: I1121 20:07:16.919889 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:16Z","lastTransitionTime":"2025-11-21T20:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:16 crc kubenswrapper[4727]: I1121 20:07:16.934155 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a97e4b6438d588dce44178e2e245d1fc4fa954f6cd02b230ca5c34e3d32294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c90d2d9f22924396549d0d1263f8cb07b41a77f23f7bb48aa9749caabff6147\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:16Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:16 crc kubenswrapper[4727]: I1121 20:07:16.946618 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:16Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:16 crc kubenswrapper[4727]: I1121 20:07:16.960750 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:16Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:16 crc kubenswrapper[4727]: I1121 20:07:16.974420 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3ca80-2251-484d-9025-61aed82e16d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://539bc94dd8a3fd65aeae82170fe889bc42c364cfcf52005760432629911dfa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bd3db39641919d1b93ecc771bce800100e7a8b6200d6fc606dbdc50179b8255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08a09f48855f360ffe892f48d4efa156a6774a0be33504e0d664f6a90f1144d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7c202de2c7a6ec653f0c7f816dcbf64fbf6a5c9a779d29403a29a1003ad201\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:16Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:16 crc kubenswrapper[4727]: I1121 20:07:16.988451 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:16Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:17 crc kubenswrapper[4727]: I1121 20:07:17.022300 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:17 crc kubenswrapper[4727]: I1121 20:07:17.022336 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:17 crc kubenswrapper[4727]: I1121 20:07:17.022345 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:17 crc 
kubenswrapper[4727]: I1121 20:07:17.022359 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:17 crc kubenswrapper[4727]: I1121 20:07:17.022370 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:17Z","lastTransitionTime":"2025-11-21T20:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:17 crc kubenswrapper[4727]: I1121 20:07:17.124696 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:17 crc kubenswrapper[4727]: I1121 20:07:17.124740 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:17 crc kubenswrapper[4727]: I1121 20:07:17.124747 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:17 crc kubenswrapper[4727]: I1121 20:07:17.124762 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:17 crc kubenswrapper[4727]: I1121 20:07:17.124772 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:17Z","lastTransitionTime":"2025-11-21T20:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:17 crc kubenswrapper[4727]: I1121 20:07:17.227317 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:17 crc kubenswrapper[4727]: I1121 20:07:17.227375 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:17 crc kubenswrapper[4727]: I1121 20:07:17.227392 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:17 crc kubenswrapper[4727]: I1121 20:07:17.227413 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:17 crc kubenswrapper[4727]: I1121 20:07:17.227431 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:17Z","lastTransitionTime":"2025-11-21T20:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:17 crc kubenswrapper[4727]: I1121 20:07:17.330190 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:17 crc kubenswrapper[4727]: I1121 20:07:17.330241 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:17 crc kubenswrapper[4727]: I1121 20:07:17.330252 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:17 crc kubenswrapper[4727]: I1121 20:07:17.330273 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:17 crc kubenswrapper[4727]: I1121 20:07:17.330285 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:17Z","lastTransitionTime":"2025-11-21T20:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:17 crc kubenswrapper[4727]: I1121 20:07:17.433046 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:17 crc kubenswrapper[4727]: I1121 20:07:17.433074 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:17 crc kubenswrapper[4727]: I1121 20:07:17.433081 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:17 crc kubenswrapper[4727]: I1121 20:07:17.433095 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:17 crc kubenswrapper[4727]: I1121 20:07:17.433103 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:17Z","lastTransitionTime":"2025-11-21T20:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:17 crc kubenswrapper[4727]: I1121 20:07:17.498686 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 20:07:17 crc kubenswrapper[4727]: E1121 20:07:17.499183 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 20:07:17 crc kubenswrapper[4727]: I1121 20:07:17.499555 4727 scope.go:117] "RemoveContainer" containerID="0f79346c51ffca5d379c209b668becf14adfd042f222d7ec2c0cd80ac2755720" Nov 21 20:07:17 crc kubenswrapper[4727]: I1121 20:07:17.535267 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:17 crc kubenswrapper[4727]: I1121 20:07:17.535313 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:17 crc kubenswrapper[4727]: I1121 20:07:17.535331 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:17 crc kubenswrapper[4727]: I1121 20:07:17.535355 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:17 crc kubenswrapper[4727]: I1121 20:07:17.535372 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:17Z","lastTransitionTime":"2025-11-21T20:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:17 crc kubenswrapper[4727]: I1121 20:07:17.637388 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:17 crc kubenswrapper[4727]: I1121 20:07:17.637686 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:17 crc kubenswrapper[4727]: I1121 20:07:17.637698 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:17 crc kubenswrapper[4727]: I1121 20:07:17.637712 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:17 crc kubenswrapper[4727]: I1121 20:07:17.637722 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:17Z","lastTransitionTime":"2025-11-21T20:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:17 crc kubenswrapper[4727]: I1121 20:07:17.739352 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:17 crc kubenswrapper[4727]: I1121 20:07:17.739386 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:17 crc kubenswrapper[4727]: I1121 20:07:17.739394 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:17 crc kubenswrapper[4727]: I1121 20:07:17.739408 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:17 crc kubenswrapper[4727]: I1121 20:07:17.739417 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:17Z","lastTransitionTime":"2025-11-21T20:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:17 crc kubenswrapper[4727]: I1121 20:07:17.742244 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tfd4j_70d2ca13-a8f7-43dc-8ad0-142d99ccde18/ovnkube-controller/1.log" Nov 21 20:07:17 crc kubenswrapper[4727]: I1121 20:07:17.744919 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" event={"ID":"70d2ca13-a8f7-43dc-8ad0-142d99ccde18","Type":"ContainerStarted","Data":"6e073190a14764f06d4369511999e00a23a812b3103ad84d761c95dd836ecf04"} Nov 21 20:07:17 crc kubenswrapper[4727]: I1121 20:07:17.745530 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" Nov 21 20:07:17 crc kubenswrapper[4727]: I1121 20:07:17.760379 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3ca80-2251-484d-9025-61aed82e16d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://539bc94dd8a3fd65aeae82170fe889bc42c364cfcf52005760432629911dfa4f\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bd3db39641919d1b93ecc771bce800100e7a8b6200d6fc606dbdc50179b8255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08a09f48855f360ffe892f48d4efa156a6774a0be33504e0d664f6a90f1144d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert
-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7c202de2c7a6ec653f0c7f816dcbf64fbf6a5c9a779d29403a29a1003ad201\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:17Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:17 crc kubenswrapper[4727]: I1121 20:07:17.777320 4727 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:17Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:17 crc kubenswrapper[4727]: I1121 20:07:17.787804 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rs9rv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8318f96-4402-4567-a432-6cf3897e218d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8bcnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8bcnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rs9rv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:17Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:17 crc kubenswrapper[4727]: I1121 20:07:17.799809 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffd614574e076bedcf841406bd5accc4387a8f2dbc55cbfbb5a1064e69a861d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"m
etrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:17Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:17 crc kubenswrapper[4727]: I1121 20:07:17.811060 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b974a1dc77049e8c632b0acbe8dedf6d3f391811208d0d1084c75e4fdb0908dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\
\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:17Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:17 crc kubenswrapper[4727]: I1121 20:07:17.820518 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ccfbn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bffa327-bdf2-45d4-93ab-40152e82d177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18e9f8fd8f35e2f7821de2d9f9040dbc5160d8f114d1a54d8e0a0dabab520d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ccfbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:17Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:17 crc kubenswrapper[4727]: I1121 20:07:17.832944 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-74crp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db8a4e70-c074-4f90-aebe-444078f3337f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d8ec54f8edeaa80fb39b26f0cd9c0d4b2456c149245b1c6c1585bf28b08ec13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a2b3626be8b73b407cdeae484730c70f4caa2e66e0a815cb8c047f667254092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2b3626be8b73b407cdeae484730c70f4caa2e66e0a815cb8c047f667254092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4360080de9d788a9ac8bad5b4c679aef3367c8c4e06fd00a28ff4b23406cc5\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d4360080de9d788a9ac8bad5b4c679aef3367c8c4e06fd00a28ff4b23406cc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68b04ba74eb7a875559096df2051e2d99535444795021756d6072246fdf21ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68b04ba74eb7a875559096df2051e2d99535444795021756d6072246fdf21ee1\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4be59a65e9ffdf24c7d6dcaf699e0c86095ab2e7156771e4cca86c3fb264ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab4be59a65e9ffdf24c7d6dcaf699e0c86095ab2e7156771e4cca86c3fb264ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465a44322e7e3ff27479dabcdd1a2ba9
102e10c73b75d51d0639b4a8ebeda54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://465a44322e7e3ff27479dabcdd1a2ba9102e10c73b75d51d0639b4a8ebeda54e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea5b2f1f6cf3b490899d65086f99f448f2bdbfd859fa5b283af3fb108479902e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea5b2f1f6cf3b490899d65086f99f448f2bdbfd859fa5b283af3fb108479902e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-11-21T20:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-74crp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:17Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:17 crc kubenswrapper[4727]: I1121 20:07:17.841863 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:17 crc kubenswrapper[4727]: I1121 20:07:17.841902 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:17 crc kubenswrapper[4727]: I1121 20:07:17.841912 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:17 crc kubenswrapper[4727]: I1121 20:07:17.841927 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:17 crc kubenswrapper[4727]: I1121 20:07:17.841939 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:17Z","lastTransitionTime":"2025-11-21T20:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:17 crc kubenswrapper[4727]: I1121 20:07:17.851113 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fae7d81bbd4cc7c05885d66b30b4a4f5422c74aecc5439700c1fe7eb3c28a4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://060f5b533d83fc7cf02d24bf7b12e2a4972ce1d7ed4e62c4c7788467ae2229c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f04b4d4857a8309f30f22774c8a2faeb752998b094d70898a986e04038771b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f02a4cae48dc4ac9d7da3d367fe5137fb012ee8a28f45c9cf4af098f1d93ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf216dac56f09909db88c9ea0ef9d0d4fb8bc2e63a02fd5631c7d4369080f88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1909b9178d7fb508200d34ad33f7d2c2aa2ae9756b15c17fd7c49bb019cc65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e073190a14764f06d4369511999e00a23a812b3103ad84d761c95dd836ecf04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f79346c51ffca5d379c209b668becf14adfd042f222d7ec2c0cd80ac2755720\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T20:07:05Z\\\",\\\"message\\\":\\\")\\\\nI1121 20:07:05.650928 6127 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI1121 20:07:05.650932 6127 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1121 20:07:05.650760 6127 transact.go:42] Configuring 
OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-operator/metrics]} name:Service_openshift-ingress-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.244:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d8772e82-b0a4-4596-87d3-3d517c13344b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1121 20:07:05.650938 6127 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nF1121 20:07:05.650939 6127 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller 
initializa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T20:07:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\
"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68979c9b5ab69852e6cc4293ef0e77454dcbb7fdaeb61b7b6af5c1a3ac101e47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5fa8be5e6ea2c20dd597fb7ef2fdd3355045b310d1a99bb1a89a8207e505577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\
"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5fa8be5e6ea2c20dd597fb7ef2fdd3355045b310d1a99bb1a89a8207e505577\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tfd4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:17Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:17 crc kubenswrapper[4727]: I1121 20:07:17.859590 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7444p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75a43503-e538-4964-9789-322839cc4c48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d61ca28ee5e9b8e1b56afa45b32036c825b6299dc855cb7ce38dde4649c2371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9w57k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7444p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:17Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:17 crc kubenswrapper[4727]: I1121 20:07:17.868093 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-drnwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c073d13-aa4f-401c-9684-36980fe94cb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c68719b36c0e9c62dcfa24171d18f9ad5a578884b8ca0da8b5827d6656e890c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hssmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec82846684441a347ddef8d336b04c6a776dbe2a5ed7894b813378790e12f7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hssmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:07:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-drnwx\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:17Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:17 crc kubenswrapper[4727]: I1121 20:07:17.887334 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a1f6ee3-6ebf-4422-91ed-105ccb19af8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de2d5f0603d9b7d189293750a83d5a3957e98f0390659f8c88d35ae1cae6b18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
tc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cde1196a3cb3b4d0acbb70ac333211099ec6f26a975e2de7c07d059983024368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1324bc3b6958e18a7c496fdb513726b921435f35dde2064fcaf9a6fca06eb8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cr
i-o://395c324d3a95c173c7dd0c9206831fe07c220fb6ef12f7b25444a120c5183202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6021787d6acdc5734a82ef2ed53add13275c3461c1ffd38219c264e7fca3fab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://363e33e0f6686c6a97437b91ff8d5be211717d527dac5d1bfad6613849c1a8fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7
c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://363e33e0f6686c6a97437b91ff8d5be211717d527dac5d1bfad6613849c1a8fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d2632c704cf56d6b416cdcc6213e3b60836d12cfae28af4bd06f99dff56cd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16d2632c704cf56d6b416cdcc6213e3b60836d12cfae28af4bd06f99dff56cd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91dcb1c5c16b4720f4c812b25a87ebd8a0f3a334cc714233b6852cd39322fbf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91dcb1c5c16b4720f4c812b25a87ebd8a0f3a334cc714233b6852cd39322fbf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:17Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:17 crc kubenswrapper[4727]: I1121 20:07:17.900583 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"080ce4fe-3e82-4e15-9340-4fa88a29da04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27c37b29dda7ae2cc145e0770879e433aa03e277121be3b7acfe22d41d27801f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cc2199f65adaa48cd1726ea812b332e4f40291e094ccfc215343bdb2f2bd2f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f3b9bf6cf2709d4559bf6ce2ddeed47f8ebacd7878509b0190ed24b2e55380e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6ac7a73d056ceeae89f31eed787e230193e82bad0a2c762fa35540edfaeee6e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd16de1da3c89b579d33c34b4dd24d8ae9df35e17819135165afc45db1297c61\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"file observer\\\\nW1121 20:06:54.183363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 20:06:54.183552 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 20:06:54.184374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-552169000/tls.crt::/tmp/serving-cert-552169000/tls.key\\\\\\\"\\\\nI1121 20:06:54.597449 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 20:06:54.600431 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 20:06:54.600448 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 20:06:54.600471 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 20:06:54.600476 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 20:06:54.608444 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 20:06:54.608470 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 20:06:54.608475 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 20:06:54.608479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 20:06:54.608482 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 20:06:54.608484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 20:06:54.608487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 20:06:54.608489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 20:06:54.610022 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ceae36169c61dcc484be2881c9562b4024866925aab81778ab9e171413f119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fc11f996bd226ec44a8afe092a0939a489f4fc8904c1532586d9e36b69817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91fc11f996bd226ec44a8afe092a0939a489f4fc8904c1532586d9e36b69817\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:17Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:17 crc kubenswrapper[4727]: I1121 20:07:17.912021 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a97e4b6438d588dce44178e2e245d1fc4fa954f6cd02b230ca5c34e3d32294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c90d2d9f22924396549d0d1263f8cb07b41a77f23f7bb48aa9749caabff6147\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:17Z is after 2025-08-24T17:21:41Z" Nov 21 
20:07:17 crc kubenswrapper[4727]: I1121 20:07:17.926473 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:17Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:17 crc kubenswrapper[4727]: I1121 20:07:17.938073 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:17Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:17 crc kubenswrapper[4727]: I1121 20:07:17.943727 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:17 crc kubenswrapper[4727]: I1121 20:07:17.943761 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:17 crc 
kubenswrapper[4727]: I1121 20:07:17.943772 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:17 crc kubenswrapper[4727]: I1121 20:07:17.943787 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:17 crc kubenswrapper[4727]: I1121 20:07:17.943797 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:17Z","lastTransitionTime":"2025-11-21T20:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:17 crc kubenswrapper[4727]: I1121 20:07:17.949065 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b58aef8f-f223-47d8-a2e6-4a80aeeeec42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db38e6270d9dcedf99c669fa60a075a25d7fd9bb753702551fe5f4610ecaf815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443c4ad2ecea2e9d360b9300cb0c384fb5afc27e
6ac44394f985adbeab354a9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5k2kk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:17Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:17 crc kubenswrapper[4727]: I1121 20:07:17.960526 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7rvdc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07dba644-eb6f-45c3-b373-7a1610c569aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fc26f04e78b8405547a4fa225524347eabce94dfa5b4bf2448db66e36aaf006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7rvdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:17Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:18 crc kubenswrapper[4727]: I1121 20:07:18.046570 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:18 crc 
kubenswrapper[4727]: I1121 20:07:18.046617 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:18 crc kubenswrapper[4727]: I1121 20:07:18.046632 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:18 crc kubenswrapper[4727]: I1121 20:07:18.046653 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:18 crc kubenswrapper[4727]: I1121 20:07:18.046665 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:18Z","lastTransitionTime":"2025-11-21T20:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:18 crc kubenswrapper[4727]: I1121 20:07:18.148074 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:18 crc kubenswrapper[4727]: I1121 20:07:18.148106 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:18 crc kubenswrapper[4727]: I1121 20:07:18.148131 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:18 crc kubenswrapper[4727]: I1121 20:07:18.148147 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:18 crc kubenswrapper[4727]: I1121 20:07:18.148156 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:18Z","lastTransitionTime":"2025-11-21T20:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:18 crc kubenswrapper[4727]: I1121 20:07:18.249535 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:18 crc kubenswrapper[4727]: I1121 20:07:18.249576 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:18 crc kubenswrapper[4727]: I1121 20:07:18.249587 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:18 crc kubenswrapper[4727]: I1121 20:07:18.249606 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:18 crc kubenswrapper[4727]: I1121 20:07:18.249617 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:18Z","lastTransitionTime":"2025-11-21T20:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:18 crc kubenswrapper[4727]: I1121 20:07:18.352235 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:18 crc kubenswrapper[4727]: I1121 20:07:18.352272 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:18 crc kubenswrapper[4727]: I1121 20:07:18.352285 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:18 crc kubenswrapper[4727]: I1121 20:07:18.352303 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:18 crc kubenswrapper[4727]: I1121 20:07:18.352315 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:18Z","lastTransitionTime":"2025-11-21T20:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:18 crc kubenswrapper[4727]: I1121 20:07:18.454423 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:18 crc kubenswrapper[4727]: I1121 20:07:18.454458 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:18 crc kubenswrapper[4727]: I1121 20:07:18.454467 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:18 crc kubenswrapper[4727]: I1121 20:07:18.454482 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:18 crc kubenswrapper[4727]: I1121 20:07:18.454491 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:18Z","lastTransitionTime":"2025-11-21T20:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:18 crc kubenswrapper[4727]: I1121 20:07:18.498328 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 20:07:18 crc kubenswrapper[4727]: I1121 20:07:18.498409 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rs9rv" Nov 21 20:07:18 crc kubenswrapper[4727]: I1121 20:07:18.498438 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 20:07:18 crc kubenswrapper[4727]: E1121 20:07:18.498537 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 20:07:18 crc kubenswrapper[4727]: E1121 20:07:18.498758 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 20:07:18 crc kubenswrapper[4727]: E1121 20:07:18.498888 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rs9rv" podUID="f8318f96-4402-4567-a432-6cf3897e218d" Nov 21 20:07:18 crc kubenswrapper[4727]: I1121 20:07:18.557272 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:18 crc kubenswrapper[4727]: I1121 20:07:18.557303 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:18 crc kubenswrapper[4727]: I1121 20:07:18.557311 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:18 crc kubenswrapper[4727]: I1121 20:07:18.557324 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:18 crc kubenswrapper[4727]: I1121 20:07:18.557332 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:18Z","lastTransitionTime":"2025-11-21T20:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:18 crc kubenswrapper[4727]: I1121 20:07:18.659340 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:18 crc kubenswrapper[4727]: I1121 20:07:18.659371 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:18 crc kubenswrapper[4727]: I1121 20:07:18.659379 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:18 crc kubenswrapper[4727]: I1121 20:07:18.659392 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:18 crc kubenswrapper[4727]: I1121 20:07:18.659401 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:18Z","lastTransitionTime":"2025-11-21T20:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:18 crc kubenswrapper[4727]: I1121 20:07:18.750430 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tfd4j_70d2ca13-a8f7-43dc-8ad0-142d99ccde18/ovnkube-controller/2.log" Nov 21 20:07:18 crc kubenswrapper[4727]: I1121 20:07:18.751374 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tfd4j_70d2ca13-a8f7-43dc-8ad0-142d99ccde18/ovnkube-controller/1.log" Nov 21 20:07:18 crc kubenswrapper[4727]: I1121 20:07:18.754269 4727 generic.go:334] "Generic (PLEG): container finished" podID="70d2ca13-a8f7-43dc-8ad0-142d99ccde18" containerID="6e073190a14764f06d4369511999e00a23a812b3103ad84d761c95dd836ecf04" exitCode=1 Nov 21 20:07:18 crc kubenswrapper[4727]: I1121 20:07:18.754300 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" event={"ID":"70d2ca13-a8f7-43dc-8ad0-142d99ccde18","Type":"ContainerDied","Data":"6e073190a14764f06d4369511999e00a23a812b3103ad84d761c95dd836ecf04"} Nov 21 20:07:18 crc kubenswrapper[4727]: I1121 20:07:18.754333 4727 scope.go:117] "RemoveContainer" containerID="0f79346c51ffca5d379c209b668becf14adfd042f222d7ec2c0cd80ac2755720" Nov 21 20:07:18 crc kubenswrapper[4727]: I1121 20:07:18.755445 4727 scope.go:117] "RemoveContainer" containerID="6e073190a14764f06d4369511999e00a23a812b3103ad84d761c95dd836ecf04" Nov 21 20:07:18 crc kubenswrapper[4727]: E1121 20:07:18.755723 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-tfd4j_openshift-ovn-kubernetes(70d2ca13-a8f7-43dc-8ad0-142d99ccde18)\"" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" podUID="70d2ca13-a8f7-43dc-8ad0-142d99ccde18" Nov 21 20:07:18 crc kubenswrapper[4727]: I1121 20:07:18.764625 4727 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:18 crc kubenswrapper[4727]: I1121 20:07:18.764658 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:18 crc kubenswrapper[4727]: I1121 20:07:18.764667 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:18 crc kubenswrapper[4727]: I1121 20:07:18.764682 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:18 crc kubenswrapper[4727]: I1121 20:07:18.764692 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:18Z","lastTransitionTime":"2025-11-21T20:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:18 crc kubenswrapper[4727]: I1121 20:07:18.778510 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a1f6ee3-6ebf-4422-91ed-105ccb19af8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de2d5f0603d9b7d189293750a83d5a3957e98f0390659f8c88d35ae1cae6b18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cde1196a3cb3b4d0acbb70ac333211099ec6f26a975e2de7c07d059983024368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1324bc3b6958e18a7c496fdb513726b921435f35dde2064fcaf9a6fca06eb8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://395c324d3a95c173c7dd0c9206831fe07c220fb6ef12f7b25444a120c5183202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6021787d6acdc5734a82ef2ed53add13275c3461c1ffd38219c264e7fca3fab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://363e33e0f6686c6a97437b91ff8d5be211717d527dac5d1bfad6613849c1a8fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://363e33e0f6686c6a97437b91ff8d5be211717d527dac5d1bfad6613849c1a8fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d2632c704cf56d6b416cdcc6213e3b60836d12cfae28af4bd06f99dff56cd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16d2632c704cf56d6b416cdcc6213e3b60836d12cfae28af4bd06f99dff56cd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91dcb1c5c16b4720f4c812b25a87ebd8a0f3a334cc714233b6852cd39322fbf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91dcb1c5c16b4720f4c812b25a87ebd8a0f3a334cc714233b6852cd39322fbf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-11-21T20:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:18Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:18 crc kubenswrapper[4727]: I1121 20:07:18.789498 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"080ce4fe-3e82-4e15-9340-4fa88a29da04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27c37b29dda7ae2cc145e0770879e433aa03e277121be3b7acfe22d41d27801f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cc2199f65adaa48cd1726ea812b332e4f40291e094ccfc215343bdb2f2bd2f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://1f3b9bf6cf2709d4559bf6ce2ddeed47f8ebacd7878509b0190ed24b2e55380e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6ac7a73d056ceeae89f31eed787e230193e82bad0a2c762fa35540edfaeee6e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd16de1da3c89b579d33c34b4dd24d8ae9df35e17819135165afc45db1297c61\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"file observer\\\\nW1121 20:06:54.183363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 20:06:54.183552 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 20:06:54.184374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-552169000/tls.crt::/tmp/serving-cert-552169000/tls.key\\\\\\\"\\\\nI1121 20:06:54.597449 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 20:06:54.600431 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 20:06:54.600448 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 20:06:54.600471 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 20:06:54.600476 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 20:06:54.608444 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 20:06:54.608470 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 20:06:54.608475 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 20:06:54.608479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 20:06:54.608482 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 20:06:54.608484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 20:06:54.608487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 20:06:54.608489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 20:06:54.610022 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ceae36169c61dcc484be2881c9562b4024866925aab81778ab9e171413f119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fc11f996bd226ec44a8afe092a0939a489f4fc8904c1532586d9e36b69817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91fc11f996bd226ec44a8afe092a0939a489f4fc8904c1532586d9e36b69817\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:18Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:18 crc kubenswrapper[4727]: I1121 20:07:18.802919 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a97e4b6438d588dce44178e2e245d1fc4fa954f6cd02b230ca5c34e3d32294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c90d2d9f22924396549d0d1263f8cb07b41a77f23f7bb48aa9749caabff6147\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:18Z is after 2025-08-24T17:21:41Z" Nov 21 
20:07:18 crc kubenswrapper[4727]: I1121 20:07:18.816655 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:18Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:18 crc kubenswrapper[4727]: I1121 20:07:18.829149 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:18Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:18 crc kubenswrapper[4727]: I1121 20:07:18.838598 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b58aef8f-f223-47d8-a2e6-4a80aeeeec42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db38e6270d9dcedf99c669fa60a075a25d7fd9bb753702551fe5f4610ecaf815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443c4ad2ecea2e9d360b9300cb0c384fb5afc27e
6ac44394f985adbeab354a9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5k2kk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:18Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:18 crc kubenswrapper[4727]: I1121 20:07:18.849501 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7rvdc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07dba644-eb6f-45c3-b373-7a1610c569aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fc26f04e78b8405547a4fa225524347eabce94dfa5b4bf2448db66e36aaf006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7rvdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:18Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:18 crc kubenswrapper[4727]: I1121 20:07:18.862624 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3ca80-2251-484d-9025-61aed82e16d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://539bc94dd8a3fd65aeae82170fe889bc42c364cfcf52005760432629911dfa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bd3db39641919d1b93ecc771bce800100e7a8b6200d6fc606dbdc50179b8255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08a09f48855f360ffe892f48d4efa156a6774a0be33504e0d664f6a90f1144d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7c202de2c7a6ec653f0c7f816dcbf64fbf6a5c9a779d29403a29a1003ad201\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:18Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:18 crc kubenswrapper[4727]: I1121 20:07:18.866275 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:18 crc kubenswrapper[4727]: I1121 20:07:18.866319 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:18 crc kubenswrapper[4727]: I1121 20:07:18.866330 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:18 crc kubenswrapper[4727]: I1121 20:07:18.866345 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:18 crc kubenswrapper[4727]: I1121 20:07:18.866358 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:18Z","lastTransitionTime":"2025-11-21T20:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:18 crc kubenswrapper[4727]: I1121 20:07:18.875556 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:18Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:18 crc kubenswrapper[4727]: I1121 20:07:18.886565 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rs9rv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8318f96-4402-4567-a432-6cf3897e218d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8bcnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8bcnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rs9rv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:18Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:18 crc kubenswrapper[4727]: I1121 20:07:18.899444 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffd614574e076bedcf841406bd5accc4387a8f2dbc55cbfbb5a1064e69a861d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"m
etrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:18Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:18 crc kubenswrapper[4727]: I1121 20:07:18.954050 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b974a1dc77049e8c632b0acbe8dedf6d3f391811208d0d1084c75e4fdb0908dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\
\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:18Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:18 crc kubenswrapper[4727]: I1121 20:07:18.967476 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ccfbn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bffa327-bdf2-45d4-93ab-40152e82d177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18e9f8fd8f35e2f7821de2d9f9040dbc5160d8f114d1a54d8e0a0dabab520d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ccfbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:18Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:18 crc kubenswrapper[4727]: I1121 20:07:18.968463 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:18 crc kubenswrapper[4727]: I1121 20:07:18.968498 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:18 crc kubenswrapper[4727]: I1121 20:07:18.968510 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:18 crc kubenswrapper[4727]: I1121 20:07:18.968524 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:18 crc kubenswrapper[4727]: I1121 20:07:18.968533 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:18Z","lastTransitionTime":"2025-11-21T20:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:18 crc kubenswrapper[4727]: I1121 20:07:18.984262 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-74crp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db8a4e70-c074-4f90-aebe-444078f3337f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d8ec54f8edeaa80fb39b26f0cd9c0d4b2456c149245b1c6c1585bf28b08ec13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a2b3626be8b73b407cdeae484730c70f4caa2e66e0a815cb8c047f667254092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2b3626be8b73b407cdeae484730c70f4caa2e66e0a815cb8c047f667254092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4360080de9d788a9ac8bad5b4c679aef3367c8c4e06fd00a28ff4b23406cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://8d4360080de9d788a9ac8bad5b4c679aef3367c8c4e06fd00a28ff4b23406cc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68b04ba74eb7a875559096df2051e2d99535444795021756d6072246fdf21ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68b04ba74eb7a875559096df2051e2d99535444795021756d6072246fdf21ee1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4be59a65e9ffdf24c7d6dcaf699e0c86095ab2e7156771e4cca86c3fb264ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab4be59a65e9ffdf24c7d6dcaf699e0c86095ab2e7156771e4cca86c3fb264ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465a44322e7e3ff27479dabcdd1a2ba9102e10c73b75d51d0639b4a8ebeda54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://465a44322e7e3ff27479dabcdd1a2ba9102e10c73b75d51d0639b4a8ebeda54e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea5b2f1f6cf3b490899d65086f99f448f2bdbfd859fa5b283af3fb108479902e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea5b2f1f6cf3b490899d65086f99f448f2bdbfd859fa5b283af3fb108479902e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-74crp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:18Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:19 crc kubenswrapper[4727]: I1121 20:07:19.003512 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fae7d81bbd4cc7c05885d66b30b4a4f5422c74aecc5439700c1fe7eb3c28a4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://060f5b533d83fc7cf02d24bf7b12e2a4972ce1d7ed4e62c4c7788467ae2229c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f04b4d4857a8309f30f22774c8a2faeb752998b094d70898a986e04038771b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f02a4cae48dc4ac9d7da3d367fe5137fb012ee8a28f45c9cf4af098f1d93ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf216dac56f09909db88c9ea0ef9d0d4fb8bc2e63a02fd5631c7d4369080f88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1909b9178d7fb508200d34ad33f7d2c2aa2ae9756b15c17fd7c49bb019cc65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e073190a14764f06d4369511999e00a23a812b3103ad84d761c95dd836ecf04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f79346c51ffca5d379c209b668becf14adfd042f222d7ec2c0cd80ac2755720\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T20:07:05Z\\\",\\\"message\\\":\\\")\\\\nI1121 20:07:05.650928 6127 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI1121 20:07:05.650932 6127 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1121 20:07:05.650760 6127 transact.go:42] Configuring 
OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-operator/metrics]} name:Service_openshift-ingress-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.244:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d8772e82-b0a4-4596-87d3-3d517c13344b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1121 20:07:05.650938 6127 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nF1121 20:07:05.650939 6127 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initializa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T20:07:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e073190a14764f06d4369511999e00a23a812b3103ad84d761c95dd836ecf04\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T20:07:18Z\\\",\\\"message\\\":\\\"t default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook 
\\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:18Z is after 2025-08-24T17:21:41Z]\\\\nI1121 20:07:18.482383 6367 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI1121 20:07:18.482381 6367 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:fals\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T20:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\
\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68979c9b5ab69852e6cc4293ef0e77454dcbb7fdaeb61b7b6af5c1a3ac101e47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\
\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5fa8be5e6ea2c20dd597fb7ef2fdd3355045b310d1a99bb1a89a8207e505577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5fa8be5e6ea2c20dd597fb7ef2fdd3355045b310d1a99bb1a89a8207e505577\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tfd4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:19Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:19 crc kubenswrapper[4727]: I1121 20:07:19.013351 4727 
status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7444p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75a43503-e538-4964-9789-322839cc4c48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d61ca28ee5e9b8e1b56afa45b32036c825b6299dc855cb7ce38dde4649c2371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9w57k\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7444p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:19Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:19 crc kubenswrapper[4727]: I1121 20:07:19.024795 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-drnwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c073d13-aa4f-401c-9684-36980fe94cb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c68719b36c0e9c62dcfa24171d18f9ad5a578884b8ca0da8b5827d6656e890c\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hssmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec82846684441a347ddef8d336b04c6a776dbe2a5ed7894b813378790e12f7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hssmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\
\\":\\\"2025-11-21T20:07:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-drnwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:19Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:19 crc kubenswrapper[4727]: I1121 20:07:19.070773 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:19 crc kubenswrapper[4727]: I1121 20:07:19.070802 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:19 crc kubenswrapper[4727]: I1121 20:07:19.070812 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:19 crc kubenswrapper[4727]: I1121 20:07:19.070828 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:19 crc kubenswrapper[4727]: I1121 20:07:19.070841 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:19Z","lastTransitionTime":"2025-11-21T20:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:19 crc kubenswrapper[4727]: I1121 20:07:19.173470 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:19 crc kubenswrapper[4727]: I1121 20:07:19.173505 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:19 crc kubenswrapper[4727]: I1121 20:07:19.173514 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:19 crc kubenswrapper[4727]: I1121 20:07:19.173528 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:19 crc kubenswrapper[4727]: I1121 20:07:19.173536 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:19Z","lastTransitionTime":"2025-11-21T20:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:19 crc kubenswrapper[4727]: I1121 20:07:19.275756 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:19 crc kubenswrapper[4727]: I1121 20:07:19.275805 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:19 crc kubenswrapper[4727]: I1121 20:07:19.275817 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:19 crc kubenswrapper[4727]: I1121 20:07:19.275836 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:19 crc kubenswrapper[4727]: I1121 20:07:19.275847 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:19Z","lastTransitionTime":"2025-11-21T20:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:19 crc kubenswrapper[4727]: I1121 20:07:19.377803 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:19 crc kubenswrapper[4727]: I1121 20:07:19.377844 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:19 crc kubenswrapper[4727]: I1121 20:07:19.377852 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:19 crc kubenswrapper[4727]: I1121 20:07:19.377868 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:19 crc kubenswrapper[4727]: I1121 20:07:19.377877 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:19Z","lastTransitionTime":"2025-11-21T20:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:19 crc kubenswrapper[4727]: I1121 20:07:19.480334 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:19 crc kubenswrapper[4727]: I1121 20:07:19.480469 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:19 crc kubenswrapper[4727]: I1121 20:07:19.480483 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:19 crc kubenswrapper[4727]: I1121 20:07:19.480523 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:19 crc kubenswrapper[4727]: I1121 20:07:19.480536 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:19Z","lastTransitionTime":"2025-11-21T20:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:19 crc kubenswrapper[4727]: I1121 20:07:19.499122 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 20:07:19 crc kubenswrapper[4727]: E1121 20:07:19.499250 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 20:07:19 crc kubenswrapper[4727]: I1121 20:07:19.583369 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:19 crc kubenswrapper[4727]: I1121 20:07:19.583412 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:19 crc kubenswrapper[4727]: I1121 20:07:19.583426 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:19 crc kubenswrapper[4727]: I1121 20:07:19.583444 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:19 crc kubenswrapper[4727]: I1121 20:07:19.583456 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:19Z","lastTransitionTime":"2025-11-21T20:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:19 crc kubenswrapper[4727]: I1121 20:07:19.685715 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:19 crc kubenswrapper[4727]: I1121 20:07:19.685752 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:19 crc kubenswrapper[4727]: I1121 20:07:19.685763 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:19 crc kubenswrapper[4727]: I1121 20:07:19.685779 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:19 crc kubenswrapper[4727]: I1121 20:07:19.685791 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:19Z","lastTransitionTime":"2025-11-21T20:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:19 crc kubenswrapper[4727]: I1121 20:07:19.761187 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tfd4j_70d2ca13-a8f7-43dc-8ad0-142d99ccde18/ovnkube-controller/2.log" Nov 21 20:07:19 crc kubenswrapper[4727]: I1121 20:07:19.765447 4727 scope.go:117] "RemoveContainer" containerID="6e073190a14764f06d4369511999e00a23a812b3103ad84d761c95dd836ecf04" Nov 21 20:07:19 crc kubenswrapper[4727]: E1121 20:07:19.765674 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-tfd4j_openshift-ovn-kubernetes(70d2ca13-a8f7-43dc-8ad0-142d99ccde18)\"" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" podUID="70d2ca13-a8f7-43dc-8ad0-142d99ccde18" Nov 21 20:07:19 crc kubenswrapper[4727]: I1121 20:07:19.782438 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rs9rv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8318f96-4402-4567-a432-6cf3897e218d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8bcnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8bcnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rs9rv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:19Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:19 crc kubenswrapper[4727]: I1121 20:07:19.787537 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:19 crc kubenswrapper[4727]: I1121 20:07:19.787598 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:19 crc kubenswrapper[4727]: I1121 20:07:19.787615 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:19 crc kubenswrapper[4727]: I1121 20:07:19.787632 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:19 crc kubenswrapper[4727]: I1121 20:07:19.787644 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:19Z","lastTransitionTime":"2025-11-21T20:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:19 crc kubenswrapper[4727]: I1121 20:07:19.802421 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffd614574e076bedcf841406bd5accc4387a8f2dbc55cbfbb5a1064e69a861d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:19Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:19 crc kubenswrapper[4727]: I1121 20:07:19.817939 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b974a1dc77049e8c632b0acbe8dedf6d3f391811208d0d1084c75e4fdb0908dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:19Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:19 crc kubenswrapper[4727]: I1121 20:07:19.832800 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ccfbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bffa327-bdf2-45d4-93ab-40152e82d177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18e9f8fd8f35e2f7821de2d9f9040dbc5160d8f114d1a54d8e0a0dabab520d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ccfbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:19Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:19 crc kubenswrapper[4727]: I1121 20:07:19.850868 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-74crp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db8a4e70-c074-4f90-aebe-444078f3337f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d8ec54f8edeaa80fb39b26f0cd9c0d4b2456c149245b1c6c1585bf28b08ec13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a2b3626be8b73b407cdeae484730c70f4caa2e66e0a815cb8c047f667254092\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2b3626be8b73b407cdeae484730c70f4caa2e66e0a815cb8c047f667254092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4360080de9d788a9ac8bad5b4c679aef3367c8c4e06fd00a28ff4b23406cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d4360080de9d788a9ac8bad5b4c679aef3367c8c4e06fd00a28ff4b23406cc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68b04ba74eb7a875559096df2051e2d99535444795021756d6072246fdf21ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68b04ba74eb7a875559096df2051e2d99535444795021756d6072246fdf21ee1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4be
59a65e9ffdf24c7d6dcaf699e0c86095ab2e7156771e4cca86c3fb264ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab4be59a65e9ffdf24c7d6dcaf699e0c86095ab2e7156771e4cca86c3fb264ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465a44322e7e3ff27479dabcdd1a2ba9102e10c73b75d51d0639b4a8ebeda54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://465a44322e7e3ff27479dabcdd1a2ba9102e10c73b75d51d0639b4a8ebeda54e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:07:00Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea5b2f1f6cf3b490899d65086f99f448f2bdbfd859fa5b283af3fb108479902e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea5b2f1f6cf3b490899d65086f99f448f2bdbfd859fa5b283af3fb108479902e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-74crp\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:19Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:19 crc kubenswrapper[4727]: I1121 20:07:19.884275 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fae7d81bbd4cc7c05885d66b30b4a4f5422c74aecc5439700c1fe7eb3c28a4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://060f5b533d83fc7cf02d24bf7b12e2a4972ce1d7ed4e62c4c7788467ae2229c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f04b4d4857a8309f30f22774c8a2faeb752998b094d70898a986e04038771b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f02a4cae48dc4ac9d7da3d367fe5137fb012ee8a28f45c9cf4af098f1d93ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf216dac56f09909db88c9ea0ef9d0d4fb8bc2e63a02fd5631c7d4369080f88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1909b9178d7fb508200d34ad33f7d2c2aa2ae9756b15c17fd7c49bb019cc65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e073190a14764f06d4369511999e00a23a812b3103ad84d761c95dd836ecf04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e073190a14764f06d4369511999e00a23a812b3103ad84d761c95dd836ecf04\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T20:07:18Z\\\",\\\"message\\\":\\\"t default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start 
default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:18Z is after 2025-08-24T17:21:41Z]\\\\nI1121 20:07:18.482383 6367 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI1121 20:07:18.482381 6367 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:fals\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T20:07:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tfd4j_openshift-ovn-kubernetes(70d2ca13-a8f7-43dc-8ad0-142d99ccde18)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68979c9b5ab69852e6cc4293ef0e77454dcbb7fdaeb61b7b6af5c1a3ac101e47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5fa8be5e6ea2c20dd597fb7ef2fdd3355045b310d1a99bb1a89a8207e505577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5fa8be5e6ea2c20dd
597fb7ef2fdd3355045b310d1a99bb1a89a8207e505577\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tfd4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:19Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:19 crc kubenswrapper[4727]: I1121 20:07:19.890733 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:19 crc kubenswrapper[4727]: I1121 20:07:19.890792 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:19 crc kubenswrapper[4727]: I1121 20:07:19.890810 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:19 crc kubenswrapper[4727]: I1121 20:07:19.890834 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:19 crc kubenswrapper[4727]: I1121 20:07:19.890853 4727 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:19Z","lastTransitionTime":"2025-11-21T20:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:19 crc kubenswrapper[4727]: I1121 20:07:19.901806 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7444p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75a43503-e538-4964-9789-322839cc4c48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d61ca28ee5e9b8e1b56afa45b32036c825b6299dc855cb7ce38dde4649c2371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9w57k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7444p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:19Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:19 crc kubenswrapper[4727]: I1121 20:07:19.921290 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-drnwx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c073d13-aa4f-401c-9684-36980fe94cb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c68719b36c0e9c62dcfa24171d18f9ad5a578884b8ca0da8b5827d6656e890c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hssmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec82846684441a347ddef8d336b04c6a776db
e2a5ed7894b813378790e12f7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hssmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:07:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-drnwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:19Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:19 crc kubenswrapper[4727]: I1121 20:07:19.947843 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7rvdc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07dba644-eb6f-45c3-b373-7a1610c569aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fc26f04e78b8405547a4fa225524347eabce94dfa5b4bf2448db66e36aaf006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7rvdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:19Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:19 crc kubenswrapper[4727]: I1121 20:07:19.972162 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a1f6ee3-6ebf-4422-91ed-105ccb19af8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de2d5f0603d9b7d189293750a83d5a3957e98f0390659f8c88d35ae1cae6b18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cde1196a3cb3b4d0acbb70ac333211099ec6f26a975e2de7c07d059983024368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1324bc3b6958e18a7c496fdb513726b921435f35dde2064fcaf9a6fca06eb8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://395c324d3a95c173c7dd0c9206831fe07c220fb6ef12f7b25444a120c5183202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6021787d6acdc5734a82ef2ed53add13275c3461c1ffd38219c264e7fca3fab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://363e33e0f6686c6a97437b91ff8d5be211717d527dac5d1bfad6613849c1a8fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://363e33e0f6686c6a97437b91ff8d5be211717d527dac5d1bfad6613849c1a8fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d2632c704cf56d6b416cdcc6213e3b60836d12cfae28af4bd06f99dff56cd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16d2632c704cf56d6b416cdcc6213e3b60836d12cfae28af4bd06f99dff56cd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91dcb1c5c16b4720f4c812b25a87ebd8a0f3a334cc714233b6852cd39322fbf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91dcb1c5c16b4720f4c812b25a87ebd8a0f3a334cc714233b6852cd39322fbf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:19Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:19 crc kubenswrapper[4727]: I1121 20:07:19.988634 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"080ce4fe-3e82-4e15-9340-4fa88a29da04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27c37b29dda7ae2cc145e0770879e433aa03e277121be3b7acfe22d41d27801f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cc2199f65adaa48cd1726ea812b332e4f40291e094ccfc215343bdb2f2bd2f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://1f3b9bf6cf2709d4559bf6ce2ddeed47f8ebacd7878509b0190ed24b2e55380e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6ac7a73d056ceeae89f31eed787e230193e82bad0a2c762fa35540edfaeee6e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd16de1da3c89b579d33c34b4dd24d8ae9df35e17819135165afc45db1297c61\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"file observer\\\\nW1121 20:06:54.183363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 20:06:54.183552 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 20:06:54.184374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-552169000/tls.crt::/tmp/serving-cert-552169000/tls.key\\\\\\\"\\\\nI1121 20:06:54.597449 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 20:06:54.600431 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 20:06:54.600448 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 20:06:54.600471 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 20:06:54.600476 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 20:06:54.608444 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 20:06:54.608470 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 20:06:54.608475 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 20:06:54.608479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 20:06:54.608482 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 20:06:54.608484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 20:06:54.608487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 20:06:54.608489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 20:06:54.610022 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ceae36169c61dcc484be2881c9562b4024866925aab81778ab9e171413f119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fc11f996bd226ec44a8afe092a0939a489f4fc8904c1532586d9e36b69817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91fc11f996bd226ec44a8afe092a0939a489f4fc8904c1532586d9e36b69817\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:19Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:19 crc kubenswrapper[4727]: I1121 20:07:19.993570 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:19 crc kubenswrapper[4727]: I1121 20:07:19.993622 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:19 crc kubenswrapper[4727]: I1121 20:07:19.993646 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:19 crc kubenswrapper[4727]: I1121 20:07:19.993678 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:19 crc kubenswrapper[4727]: I1121 20:07:19.993702 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:19Z","lastTransitionTime":"2025-11-21T20:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:20 crc kubenswrapper[4727]: I1121 20:07:20.005864 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a97e4b6438d588dce44178e2e245d1fc4fa954f6cd02b230ca5c34e3d32294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c90d2d9f22924396549d0d1263f8cb07b41a77f23f7bb48aa9749caabff6147\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:20Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:20 crc kubenswrapper[4727]: I1121 20:07:20.023135 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:20Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:20 crc kubenswrapper[4727]: I1121 20:07:20.036654 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:20Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:20 crc kubenswrapper[4727]: I1121 20:07:20.049668 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b58aef8f-f223-47d8-a2e6-4a80aeeeec42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db38e6270d9dcedf99c669fa60a075a25d7fd9bb753702551fe5f4610ecaf815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443c4ad2ecea2e9d360b9300cb0c384fb5afc27e
6ac44394f985adbeab354a9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5k2kk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:20Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:20 crc kubenswrapper[4727]: I1121 20:07:20.068687 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3ca80-2251-484d-9025-61aed82e16d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://539bc94dd8a3fd65aeae82170fe889bc42c364cfcf52005760432629911dfa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bd3db39641919d1b93ecc771bce800100e7a8b6200d6fc606dbdc50179b8255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08a09f48855f360ffe892f48d4efa156a6774a0be33504e0d664f6a90f1144d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7c202de2c7a6ec653f0c7f816dcbf64fbf6a5c9a779d29403a29a1003ad201\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:20Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:20 crc kubenswrapper[4727]: I1121 20:07:20.084098 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:20Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:20 crc kubenswrapper[4727]: I1121 20:07:20.097204 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:20 crc kubenswrapper[4727]: I1121 20:07:20.097264 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:20 crc kubenswrapper[4727]: I1121 20:07:20.097281 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:20 crc 
kubenswrapper[4727]: I1121 20:07:20.097304 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:20 crc kubenswrapper[4727]: I1121 20:07:20.097323 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:20Z","lastTransitionTime":"2025-11-21T20:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:20 crc kubenswrapper[4727]: I1121 20:07:20.200472 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:20 crc kubenswrapper[4727]: I1121 20:07:20.200523 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:20 crc kubenswrapper[4727]: I1121 20:07:20.200537 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:20 crc kubenswrapper[4727]: I1121 20:07:20.200557 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:20 crc kubenswrapper[4727]: I1121 20:07:20.200574 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:20Z","lastTransitionTime":"2025-11-21T20:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:20 crc kubenswrapper[4727]: I1121 20:07:20.303827 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:20 crc kubenswrapper[4727]: I1121 20:07:20.303905 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:20 crc kubenswrapper[4727]: I1121 20:07:20.303930 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:20 crc kubenswrapper[4727]: I1121 20:07:20.304015 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:20 crc kubenswrapper[4727]: I1121 20:07:20.304047 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:20Z","lastTransitionTime":"2025-11-21T20:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:20 crc kubenswrapper[4727]: I1121 20:07:20.407317 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:20 crc kubenswrapper[4727]: I1121 20:07:20.407369 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:20 crc kubenswrapper[4727]: I1121 20:07:20.407381 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:20 crc kubenswrapper[4727]: I1121 20:07:20.407399 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:20 crc kubenswrapper[4727]: I1121 20:07:20.407412 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:20Z","lastTransitionTime":"2025-11-21T20:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:20 crc kubenswrapper[4727]: I1121 20:07:20.498463 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 20:07:20 crc kubenswrapper[4727]: I1121 20:07:20.498503 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 20:07:20 crc kubenswrapper[4727]: E1121 20:07:20.498680 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 20:07:20 crc kubenswrapper[4727]: I1121 20:07:20.498528 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rs9rv" Nov 21 20:07:20 crc kubenswrapper[4727]: E1121 20:07:20.498814 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 20:07:20 crc kubenswrapper[4727]: E1121 20:07:20.499178 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rs9rv" podUID="f8318f96-4402-4567-a432-6cf3897e218d" Nov 21 20:07:20 crc kubenswrapper[4727]: I1121 20:07:20.510705 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:20 crc kubenswrapper[4727]: I1121 20:07:20.510741 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:20 crc kubenswrapper[4727]: I1121 20:07:20.510754 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:20 crc kubenswrapper[4727]: I1121 20:07:20.510773 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:20 crc kubenswrapper[4727]: I1121 20:07:20.510787 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:20Z","lastTransitionTime":"2025-11-21T20:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:20 crc kubenswrapper[4727]: I1121 20:07:20.613703 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:20 crc kubenswrapper[4727]: I1121 20:07:20.613745 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:20 crc kubenswrapper[4727]: I1121 20:07:20.613756 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:20 crc kubenswrapper[4727]: I1121 20:07:20.613772 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:20 crc kubenswrapper[4727]: I1121 20:07:20.613784 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:20Z","lastTransitionTime":"2025-11-21T20:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:20 crc kubenswrapper[4727]: I1121 20:07:20.716539 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:20 crc kubenswrapper[4727]: I1121 20:07:20.716582 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:20 crc kubenswrapper[4727]: I1121 20:07:20.716591 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:20 crc kubenswrapper[4727]: I1121 20:07:20.716609 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:20 crc kubenswrapper[4727]: I1121 20:07:20.716618 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:20Z","lastTransitionTime":"2025-11-21T20:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:20 crc kubenswrapper[4727]: I1121 20:07:20.819227 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:20 crc kubenswrapper[4727]: I1121 20:07:20.819286 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:20 crc kubenswrapper[4727]: I1121 20:07:20.819305 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:20 crc kubenswrapper[4727]: I1121 20:07:20.819330 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:20 crc kubenswrapper[4727]: I1121 20:07:20.819345 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:20Z","lastTransitionTime":"2025-11-21T20:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:20 crc kubenswrapper[4727]: I1121 20:07:20.922049 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:20 crc kubenswrapper[4727]: I1121 20:07:20.922116 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:20 crc kubenswrapper[4727]: I1121 20:07:20.922165 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:20 crc kubenswrapper[4727]: I1121 20:07:20.922195 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:20 crc kubenswrapper[4727]: I1121 20:07:20.922216 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:20Z","lastTransitionTime":"2025-11-21T20:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:21 crc kubenswrapper[4727]: I1121 20:07:21.024980 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:21 crc kubenswrapper[4727]: I1121 20:07:21.025029 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:21 crc kubenswrapper[4727]: I1121 20:07:21.025038 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:21 crc kubenswrapper[4727]: I1121 20:07:21.025055 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:21 crc kubenswrapper[4727]: I1121 20:07:21.025067 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:21Z","lastTransitionTime":"2025-11-21T20:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:21 crc kubenswrapper[4727]: I1121 20:07:21.128068 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:21 crc kubenswrapper[4727]: I1121 20:07:21.128135 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:21 crc kubenswrapper[4727]: I1121 20:07:21.128152 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:21 crc kubenswrapper[4727]: I1121 20:07:21.128177 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:21 crc kubenswrapper[4727]: I1121 20:07:21.128261 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:21Z","lastTransitionTime":"2025-11-21T20:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:21 crc kubenswrapper[4727]: I1121 20:07:21.156682 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:21 crc kubenswrapper[4727]: I1121 20:07:21.156762 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:21 crc kubenswrapper[4727]: I1121 20:07:21.156784 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:21 crc kubenswrapper[4727]: I1121 20:07:21.156808 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:21 crc kubenswrapper[4727]: I1121 20:07:21.156825 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:21Z","lastTransitionTime":"2025-11-21T20:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:21 crc kubenswrapper[4727]: E1121 20:07:21.172616 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"32d79a59-978c-42a8-bb0b-33b3c3206f66\\\",\\\"systemUUID\\\":\\\"b99fd01f-0947-456f-ae40-db84d60b2190\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:21Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:21 crc kubenswrapper[4727]: I1121 20:07:21.177485 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:21 crc kubenswrapper[4727]: I1121 20:07:21.177526 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:21 crc kubenswrapper[4727]: I1121 20:07:21.177544 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:21 crc kubenswrapper[4727]: I1121 20:07:21.177568 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:21 crc kubenswrapper[4727]: I1121 20:07:21.177585 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:21Z","lastTransitionTime":"2025-11-21T20:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:21 crc kubenswrapper[4727]: E1121 20:07:21.195873 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"32d79a59-978c-42a8-bb0b-33b3c3206f66\\\",\\\"systemUUID\\\":\\\"b99fd01f-0947-456f-ae40-db84d60b2190\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:21Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:21 crc kubenswrapper[4727]: I1121 20:07:21.200671 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:21 crc kubenswrapper[4727]: I1121 20:07:21.200715 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:21 crc kubenswrapper[4727]: I1121 20:07:21.200730 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:21 crc kubenswrapper[4727]: I1121 20:07:21.200751 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:21 crc kubenswrapper[4727]: I1121 20:07:21.200769 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:21Z","lastTransitionTime":"2025-11-21T20:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:21 crc kubenswrapper[4727]: I1121 20:07:21.224564 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:21 crc kubenswrapper[4727]: I1121 20:07:21.224612 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:21 crc kubenswrapper[4727]: I1121 20:07:21.224628 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:21 crc kubenswrapper[4727]: I1121 20:07:21.224652 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:21 crc kubenswrapper[4727]: I1121 20:07:21.224673 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:21Z","lastTransitionTime":"2025-11-21T20:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:21 crc kubenswrapper[4727]: I1121 20:07:21.249364 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:21 crc kubenswrapper[4727]: I1121 20:07:21.249407 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:21 crc kubenswrapper[4727]: I1121 20:07:21.249424 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:21 crc kubenswrapper[4727]: I1121 20:07:21.249446 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:21 crc kubenswrapper[4727]: I1121 20:07:21.249467 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:21Z","lastTransitionTime":"2025-11-21T20:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:21 crc kubenswrapper[4727]: E1121 20:07:21.264690 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"32d79a59-978c-42a8-bb0b-33b3c3206f66\\\",\\\"systemUUID\\\":\\\"b99fd01f-0947-456f-ae40-db84d60b2190\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:21Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:21 crc kubenswrapper[4727]: E1121 20:07:21.264925 4727 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 21 20:07:21 crc kubenswrapper[4727]: I1121 20:07:21.266813 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:21 crc kubenswrapper[4727]: I1121 20:07:21.266850 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:21 crc kubenswrapper[4727]: I1121 20:07:21.266860 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:21 crc kubenswrapper[4727]: I1121 20:07:21.266875 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:21 crc kubenswrapper[4727]: I1121 20:07:21.266887 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:21Z","lastTransitionTime":"2025-11-21T20:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:21 crc kubenswrapper[4727]: I1121 20:07:21.370399 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:21 crc kubenswrapper[4727]: I1121 20:07:21.370451 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:21 crc kubenswrapper[4727]: I1121 20:07:21.370465 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:21 crc kubenswrapper[4727]: I1121 20:07:21.370481 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:21 crc kubenswrapper[4727]: I1121 20:07:21.370490 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:21Z","lastTransitionTime":"2025-11-21T20:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:21 crc kubenswrapper[4727]: I1121 20:07:21.473113 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:21 crc kubenswrapper[4727]: I1121 20:07:21.473151 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:21 crc kubenswrapper[4727]: I1121 20:07:21.473170 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:21 crc kubenswrapper[4727]: I1121 20:07:21.473188 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:21 crc kubenswrapper[4727]: I1121 20:07:21.473200 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:21Z","lastTransitionTime":"2025-11-21T20:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:21 crc kubenswrapper[4727]: I1121 20:07:21.498857 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 20:07:21 crc kubenswrapper[4727]: E1121 20:07:21.498994 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 20:07:21 crc kubenswrapper[4727]: I1121 20:07:21.575308 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:21 crc kubenswrapper[4727]: I1121 20:07:21.575360 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:21 crc kubenswrapper[4727]: I1121 20:07:21.575376 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:21 crc kubenswrapper[4727]: I1121 20:07:21.575396 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:21 crc kubenswrapper[4727]: I1121 20:07:21.575409 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:21Z","lastTransitionTime":"2025-11-21T20:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:21 crc kubenswrapper[4727]: I1121 20:07:21.677807 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:21 crc kubenswrapper[4727]: I1121 20:07:21.677861 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:21 crc kubenswrapper[4727]: I1121 20:07:21.677870 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:21 crc kubenswrapper[4727]: I1121 20:07:21.677884 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:21 crc kubenswrapper[4727]: I1121 20:07:21.677895 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:21Z","lastTransitionTime":"2025-11-21T20:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:21 crc kubenswrapper[4727]: I1121 20:07:21.779642 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:21 crc kubenswrapper[4727]: I1121 20:07:21.779688 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:21 crc kubenswrapper[4727]: I1121 20:07:21.779700 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:21 crc kubenswrapper[4727]: I1121 20:07:21.779718 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:21 crc kubenswrapper[4727]: I1121 20:07:21.779730 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:21Z","lastTransitionTime":"2025-11-21T20:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:21 crc kubenswrapper[4727]: I1121 20:07:21.882831 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:21 crc kubenswrapper[4727]: I1121 20:07:21.882884 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:21 crc kubenswrapper[4727]: I1121 20:07:21.882897 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:21 crc kubenswrapper[4727]: I1121 20:07:21.882918 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:21 crc kubenswrapper[4727]: I1121 20:07:21.882930 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:21Z","lastTransitionTime":"2025-11-21T20:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:21 crc kubenswrapper[4727]: I1121 20:07:21.985131 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:21 crc kubenswrapper[4727]: I1121 20:07:21.985221 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:21 crc kubenswrapper[4727]: I1121 20:07:21.985234 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:21 crc kubenswrapper[4727]: I1121 20:07:21.985250 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:21 crc kubenswrapper[4727]: I1121 20:07:21.985262 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:21Z","lastTransitionTime":"2025-11-21T20:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:22 crc kubenswrapper[4727]: I1121 20:07:22.087766 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:22 crc kubenswrapper[4727]: I1121 20:07:22.087816 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:22 crc kubenswrapper[4727]: I1121 20:07:22.087832 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:22 crc kubenswrapper[4727]: I1121 20:07:22.087855 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:22 crc kubenswrapper[4727]: I1121 20:07:22.087872 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:22Z","lastTransitionTime":"2025-11-21T20:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:22 crc kubenswrapper[4727]: I1121 20:07:22.190121 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:22 crc kubenswrapper[4727]: I1121 20:07:22.190169 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:22 crc kubenswrapper[4727]: I1121 20:07:22.190184 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:22 crc kubenswrapper[4727]: I1121 20:07:22.190204 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:22 crc kubenswrapper[4727]: I1121 20:07:22.190221 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:22Z","lastTransitionTime":"2025-11-21T20:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:22 crc kubenswrapper[4727]: I1121 20:07:22.293558 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:22 crc kubenswrapper[4727]: I1121 20:07:22.293645 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:22 crc kubenswrapper[4727]: I1121 20:07:22.293677 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:22 crc kubenswrapper[4727]: I1121 20:07:22.293707 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:22 crc kubenswrapper[4727]: I1121 20:07:22.293728 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:22Z","lastTransitionTime":"2025-11-21T20:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:22 crc kubenswrapper[4727]: I1121 20:07:22.396405 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:22 crc kubenswrapper[4727]: I1121 20:07:22.396472 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:22 crc kubenswrapper[4727]: I1121 20:07:22.396499 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:22 crc kubenswrapper[4727]: I1121 20:07:22.396528 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:22 crc kubenswrapper[4727]: I1121 20:07:22.396552 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:22Z","lastTransitionTime":"2025-11-21T20:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:22 crc kubenswrapper[4727]: I1121 20:07:22.498596 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rs9rv" Nov 21 20:07:22 crc kubenswrapper[4727]: I1121 20:07:22.498636 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 20:07:22 crc kubenswrapper[4727]: I1121 20:07:22.498665 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 20:07:22 crc kubenswrapper[4727]: E1121 20:07:22.498792 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 20:07:22 crc kubenswrapper[4727]: I1121 20:07:22.498876 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:22 crc kubenswrapper[4727]: I1121 20:07:22.498903 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:22 crc kubenswrapper[4727]: I1121 20:07:22.498919 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:22 crc kubenswrapper[4727]: I1121 20:07:22.498942 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:22 crc kubenswrapper[4727]: I1121 20:07:22.498989 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:22Z","lastTransitionTime":"2025-11-21T20:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:22 crc kubenswrapper[4727]: E1121 20:07:22.499170 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 20:07:22 crc kubenswrapper[4727]: E1121 20:07:22.499238 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rs9rv" podUID="f8318f96-4402-4567-a432-6cf3897e218d" Nov 21 20:07:22 crc kubenswrapper[4727]: I1121 20:07:22.601503 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:22 crc kubenswrapper[4727]: I1121 20:07:22.601580 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:22 crc kubenswrapper[4727]: I1121 20:07:22.601599 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:22 crc kubenswrapper[4727]: I1121 20:07:22.601624 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:22 crc kubenswrapper[4727]: I1121 20:07:22.601641 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:22Z","lastTransitionTime":"2025-11-21T20:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:22 crc kubenswrapper[4727]: I1121 20:07:22.704249 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:22 crc kubenswrapper[4727]: I1121 20:07:22.704298 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:22 crc kubenswrapper[4727]: I1121 20:07:22.704308 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:22 crc kubenswrapper[4727]: I1121 20:07:22.704322 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:22 crc kubenswrapper[4727]: I1121 20:07:22.704333 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:22Z","lastTransitionTime":"2025-11-21T20:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:22 crc kubenswrapper[4727]: I1121 20:07:22.807384 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:22 crc kubenswrapper[4727]: I1121 20:07:22.807429 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:22 crc kubenswrapper[4727]: I1121 20:07:22.807440 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:22 crc kubenswrapper[4727]: I1121 20:07:22.807456 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:22 crc kubenswrapper[4727]: I1121 20:07:22.807469 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:22Z","lastTransitionTime":"2025-11-21T20:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:22 crc kubenswrapper[4727]: I1121 20:07:22.910732 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:22 crc kubenswrapper[4727]: I1121 20:07:22.910873 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:22 crc kubenswrapper[4727]: I1121 20:07:22.910901 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:22 crc kubenswrapper[4727]: I1121 20:07:22.910930 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:22 crc kubenswrapper[4727]: I1121 20:07:22.910988 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:22Z","lastTransitionTime":"2025-11-21T20:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:23 crc kubenswrapper[4727]: I1121 20:07:23.014250 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:23 crc kubenswrapper[4727]: I1121 20:07:23.014291 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:23 crc kubenswrapper[4727]: I1121 20:07:23.014303 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:23 crc kubenswrapper[4727]: I1121 20:07:23.014320 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:23 crc kubenswrapper[4727]: I1121 20:07:23.014329 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:23Z","lastTransitionTime":"2025-11-21T20:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:23 crc kubenswrapper[4727]: I1121 20:07:23.117424 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:23 crc kubenswrapper[4727]: I1121 20:07:23.117471 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:23 crc kubenswrapper[4727]: I1121 20:07:23.117485 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:23 crc kubenswrapper[4727]: I1121 20:07:23.117506 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:23 crc kubenswrapper[4727]: I1121 20:07:23.117520 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:23Z","lastTransitionTime":"2025-11-21T20:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:23 crc kubenswrapper[4727]: I1121 20:07:23.220350 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:23 crc kubenswrapper[4727]: I1121 20:07:23.220415 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:23 crc kubenswrapper[4727]: I1121 20:07:23.220431 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:23 crc kubenswrapper[4727]: I1121 20:07:23.220453 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:23 crc kubenswrapper[4727]: I1121 20:07:23.220467 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:23Z","lastTransitionTime":"2025-11-21T20:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:23 crc kubenswrapper[4727]: I1121 20:07:23.322941 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:23 crc kubenswrapper[4727]: I1121 20:07:23.323032 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:23 crc kubenswrapper[4727]: I1121 20:07:23.323049 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:23 crc kubenswrapper[4727]: I1121 20:07:23.323078 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:23 crc kubenswrapper[4727]: I1121 20:07:23.323104 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:23Z","lastTransitionTime":"2025-11-21T20:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:23 crc kubenswrapper[4727]: I1121 20:07:23.426317 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:23 crc kubenswrapper[4727]: I1121 20:07:23.426363 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:23 crc kubenswrapper[4727]: I1121 20:07:23.426374 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:23 crc kubenswrapper[4727]: I1121 20:07:23.426389 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:23 crc kubenswrapper[4727]: I1121 20:07:23.426402 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:23Z","lastTransitionTime":"2025-11-21T20:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:23 crc kubenswrapper[4727]: I1121 20:07:23.498643 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 20:07:23 crc kubenswrapper[4727]: E1121 20:07:23.498825 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 20:07:23 crc kubenswrapper[4727]: I1121 20:07:23.528534 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:23 crc kubenswrapper[4727]: I1121 20:07:23.528573 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:23 crc kubenswrapper[4727]: I1121 20:07:23.528583 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:23 crc kubenswrapper[4727]: I1121 20:07:23.528600 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:23 crc kubenswrapper[4727]: I1121 20:07:23.528610 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:23Z","lastTransitionTime":"2025-11-21T20:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:23 crc kubenswrapper[4727]: I1121 20:07:23.630618 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:23 crc kubenswrapper[4727]: I1121 20:07:23.630658 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:23 crc kubenswrapper[4727]: I1121 20:07:23.630667 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:23 crc kubenswrapper[4727]: I1121 20:07:23.630682 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:23 crc kubenswrapper[4727]: I1121 20:07:23.630694 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:23Z","lastTransitionTime":"2025-11-21T20:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:23 crc kubenswrapper[4727]: I1121 20:07:23.732631 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:23 crc kubenswrapper[4727]: I1121 20:07:23.732691 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:23 crc kubenswrapper[4727]: I1121 20:07:23.732714 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:23 crc kubenswrapper[4727]: I1121 20:07:23.732738 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:23 crc kubenswrapper[4727]: I1121 20:07:23.732754 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:23Z","lastTransitionTime":"2025-11-21T20:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:23 crc kubenswrapper[4727]: I1121 20:07:23.836431 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:23 crc kubenswrapper[4727]: I1121 20:07:23.836533 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:23 crc kubenswrapper[4727]: I1121 20:07:23.836553 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:23 crc kubenswrapper[4727]: I1121 20:07:23.836629 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:23 crc kubenswrapper[4727]: I1121 20:07:23.836653 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:23Z","lastTransitionTime":"2025-11-21T20:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:23 crc kubenswrapper[4727]: I1121 20:07:23.939691 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:23 crc kubenswrapper[4727]: I1121 20:07:23.939742 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:23 crc kubenswrapper[4727]: I1121 20:07:23.939751 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:23 crc kubenswrapper[4727]: I1121 20:07:23.939764 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:23 crc kubenswrapper[4727]: I1121 20:07:23.939775 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:23Z","lastTransitionTime":"2025-11-21T20:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:24 crc kubenswrapper[4727]: I1121 20:07:24.043331 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:24 crc kubenswrapper[4727]: I1121 20:07:24.043427 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:24 crc kubenswrapper[4727]: I1121 20:07:24.043452 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:24 crc kubenswrapper[4727]: I1121 20:07:24.043493 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:24 crc kubenswrapper[4727]: I1121 20:07:24.043525 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:24Z","lastTransitionTime":"2025-11-21T20:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:24 crc kubenswrapper[4727]: I1121 20:07:24.147029 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:24 crc kubenswrapper[4727]: I1121 20:07:24.147121 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:24 crc kubenswrapper[4727]: I1121 20:07:24.147143 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:24 crc kubenswrapper[4727]: I1121 20:07:24.147173 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:24 crc kubenswrapper[4727]: I1121 20:07:24.147195 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:24Z","lastTransitionTime":"2025-11-21T20:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:24 crc kubenswrapper[4727]: I1121 20:07:24.250388 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:24 crc kubenswrapper[4727]: I1121 20:07:24.250451 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:24 crc kubenswrapper[4727]: I1121 20:07:24.250470 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:24 crc kubenswrapper[4727]: I1121 20:07:24.250496 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:24 crc kubenswrapper[4727]: I1121 20:07:24.250514 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:24Z","lastTransitionTime":"2025-11-21T20:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:24 crc kubenswrapper[4727]: I1121 20:07:24.354328 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:24 crc kubenswrapper[4727]: I1121 20:07:24.354385 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:24 crc kubenswrapper[4727]: I1121 20:07:24.354404 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:24 crc kubenswrapper[4727]: I1121 20:07:24.354430 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:24 crc kubenswrapper[4727]: I1121 20:07:24.354449 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:24Z","lastTransitionTime":"2025-11-21T20:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:24 crc kubenswrapper[4727]: I1121 20:07:24.411811 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f8318f96-4402-4567-a432-6cf3897e218d-metrics-certs\") pod \"network-metrics-daemon-rs9rv\" (UID: \"f8318f96-4402-4567-a432-6cf3897e218d\") " pod="openshift-multus/network-metrics-daemon-rs9rv" Nov 21 20:07:24 crc kubenswrapper[4727]: E1121 20:07:24.412169 4727 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 21 20:07:24 crc kubenswrapper[4727]: E1121 20:07:24.412299 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f8318f96-4402-4567-a432-6cf3897e218d-metrics-certs podName:f8318f96-4402-4567-a432-6cf3897e218d nodeName:}" failed. No retries permitted until 2025-11-21 20:07:40.412272736 +0000 UTC m=+65.598457820 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f8318f96-4402-4567-a432-6cf3897e218d-metrics-certs") pod "network-metrics-daemon-rs9rv" (UID: "f8318f96-4402-4567-a432-6cf3897e218d") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 21 20:07:24 crc kubenswrapper[4727]: I1121 20:07:24.458119 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:24 crc kubenswrapper[4727]: I1121 20:07:24.458177 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:24 crc kubenswrapper[4727]: I1121 20:07:24.458195 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:24 crc kubenswrapper[4727]: I1121 20:07:24.458230 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:24 crc kubenswrapper[4727]: I1121 20:07:24.458248 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:24Z","lastTransitionTime":"2025-11-21T20:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:24 crc kubenswrapper[4727]: I1121 20:07:24.498655 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rs9rv" Nov 21 20:07:24 crc kubenswrapper[4727]: I1121 20:07:24.498716 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 20:07:24 crc kubenswrapper[4727]: I1121 20:07:24.498686 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 20:07:24 crc kubenswrapper[4727]: E1121 20:07:24.498897 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rs9rv" podUID="f8318f96-4402-4567-a432-6cf3897e218d" Nov 21 20:07:24 crc kubenswrapper[4727]: E1121 20:07:24.499018 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 20:07:24 crc kubenswrapper[4727]: E1121 20:07:24.499190 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 20:07:24 crc kubenswrapper[4727]: I1121 20:07:24.562397 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:24 crc kubenswrapper[4727]: I1121 20:07:24.562466 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:24 crc kubenswrapper[4727]: I1121 20:07:24.562486 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:24 crc kubenswrapper[4727]: I1121 20:07:24.562514 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:24 crc kubenswrapper[4727]: I1121 20:07:24.562538 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:24Z","lastTransitionTime":"2025-11-21T20:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:24 crc kubenswrapper[4727]: I1121 20:07:24.664792 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:24 crc kubenswrapper[4727]: I1121 20:07:24.665023 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:24 crc kubenswrapper[4727]: I1121 20:07:24.665045 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:24 crc kubenswrapper[4727]: I1121 20:07:24.665127 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:24 crc kubenswrapper[4727]: I1121 20:07:24.665146 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:24Z","lastTransitionTime":"2025-11-21T20:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:24 crc kubenswrapper[4727]: I1121 20:07:24.768552 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:24 crc kubenswrapper[4727]: I1121 20:07:24.768652 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:24 crc kubenswrapper[4727]: I1121 20:07:24.768671 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:24 crc kubenswrapper[4727]: I1121 20:07:24.768735 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:24 crc kubenswrapper[4727]: I1121 20:07:24.768752 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:24Z","lastTransitionTime":"2025-11-21T20:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:24 crc kubenswrapper[4727]: I1121 20:07:24.872288 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:24 crc kubenswrapper[4727]: I1121 20:07:24.872343 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:24 crc kubenswrapper[4727]: I1121 20:07:24.872365 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:24 crc kubenswrapper[4727]: I1121 20:07:24.872390 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:24 crc kubenswrapper[4727]: I1121 20:07:24.872407 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:24Z","lastTransitionTime":"2025-11-21T20:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:24 crc kubenswrapper[4727]: I1121 20:07:24.975913 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:24 crc kubenswrapper[4727]: I1121 20:07:24.975986 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:24 crc kubenswrapper[4727]: I1121 20:07:24.976005 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:24 crc kubenswrapper[4727]: I1121 20:07:24.976029 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:24 crc kubenswrapper[4727]: I1121 20:07:24.976047 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:24Z","lastTransitionTime":"2025-11-21T20:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:25 crc kubenswrapper[4727]: I1121 20:07:25.078989 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:25 crc kubenswrapper[4727]: I1121 20:07:25.079366 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:25 crc kubenswrapper[4727]: I1121 20:07:25.079619 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:25 crc kubenswrapper[4727]: I1121 20:07:25.079783 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:25 crc kubenswrapper[4727]: I1121 20:07:25.079922 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:25Z","lastTransitionTime":"2025-11-21T20:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:25 crc kubenswrapper[4727]: I1121 20:07:25.182887 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:25 crc kubenswrapper[4727]: I1121 20:07:25.183007 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:25 crc kubenswrapper[4727]: I1121 20:07:25.183032 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:25 crc kubenswrapper[4727]: I1121 20:07:25.183065 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:25 crc kubenswrapper[4727]: I1121 20:07:25.183180 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:25Z","lastTransitionTime":"2025-11-21T20:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:25 crc kubenswrapper[4727]: I1121 20:07:25.286258 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:25 crc kubenswrapper[4727]: I1121 20:07:25.286309 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:25 crc kubenswrapper[4727]: I1121 20:07:25.286322 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:25 crc kubenswrapper[4727]: I1121 20:07:25.286340 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:25 crc kubenswrapper[4727]: I1121 20:07:25.286353 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:25Z","lastTransitionTime":"2025-11-21T20:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:25 crc kubenswrapper[4727]: I1121 20:07:25.389607 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:25 crc kubenswrapper[4727]: I1121 20:07:25.389670 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:25 crc kubenswrapper[4727]: I1121 20:07:25.389687 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:25 crc kubenswrapper[4727]: I1121 20:07:25.389718 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:25 crc kubenswrapper[4727]: I1121 20:07:25.389736 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:25Z","lastTransitionTime":"2025-11-21T20:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:25 crc kubenswrapper[4727]: I1121 20:07:25.492761 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:25 crc kubenswrapper[4727]: I1121 20:07:25.492817 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:25 crc kubenswrapper[4727]: I1121 20:07:25.492834 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:25 crc kubenswrapper[4727]: I1121 20:07:25.492859 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:25 crc kubenswrapper[4727]: I1121 20:07:25.492875 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:25Z","lastTransitionTime":"2025-11-21T20:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:25 crc kubenswrapper[4727]: I1121 20:07:25.498194 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 20:07:25 crc kubenswrapper[4727]: E1121 20:07:25.498348 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 20:07:25 crc kubenswrapper[4727]: I1121 20:07:25.518428 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b58aef8f-f223-47d8-a2e6-4a80aeeeec42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db38e6270d9dcedf99c669fa60a075a25d7fd9bb753702551fe5f4610ecaf815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mou
ntPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443c4ad2ecea2e9d360b9300cb0c384fb5afc27e6ac44394f985adbeab354a9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5k2kk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:25Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:25 crc kubenswrapper[4727]: I1121 20:07:25.542002 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7rvdc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07dba644-eb6f-45c3-b373-7a1610c569aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fc26f04e78b8405547a4fa225524347eabce94dfa5b4bf2448db66e36aaf006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7rvdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:25Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:25 crc kubenswrapper[4727]: I1121 20:07:25.584713 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a1f6ee3-6ebf-4422-91ed-105ccb19af8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de2d5f0603d9b7d189293750a83d5a3957e98f0390659f8c88d35ae1cae6b18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cde1196a3cb3b4d0acbb70ac333211099ec6f26a975e2de7c07d059983024368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1324bc3b6958e18a7c496fdb513726b921435f35dde2064fcaf9a6fca06eb8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://395c324d3a95c173c7dd0c9206831fe07c220fb6ef12f7b25444a120c5183202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6021787d6acdc5734a82ef2ed53add13275c3461c1ffd38219c264e7fca3fab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://363e33e0f6686c6a97437b91ff8d5be211717d527dac5d1bfad6613849c1a8fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://363e33e0f6686c6a97437b91ff8d5be211717d527dac5d1bfad6613849c1a8fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d2632c704cf56d6b416cdcc6213e3b60836d12cfae28af4bd06f99dff56cd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16d2632c704cf56d6b416cdcc6213e3b60836d12cfae28af4bd06f99dff56cd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91dcb1c5c16b4720f4c812b25a87ebd8a0f3a334cc714233b6852cd39322fbf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91dcb1c5c16b4720f4c812b25a87ebd8a0f3a334cc714233b6852cd39322fbf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:25Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:25 crc kubenswrapper[4727]: I1121 20:07:25.595596 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:25 crc kubenswrapper[4727]: I1121 20:07:25.595634 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:25 crc kubenswrapper[4727]: I1121 20:07:25.595642 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:25 crc kubenswrapper[4727]: I1121 20:07:25.595656 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:25 crc kubenswrapper[4727]: I1121 20:07:25.595666 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:25Z","lastTransitionTime":"2025-11-21T20:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:25 crc kubenswrapper[4727]: I1121 20:07:25.606734 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"080ce4fe-3e82-4e15-9340-4fa88a29da04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27c37b29dda7ae2cc145e0770879e433aa03e277121be3b7acfe22d41d27801f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cc2199f65adaa48cd1726ea812b332e4f40291e094ccfc215343bdb2f2bd2f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://1f3b9bf6cf2709d4559bf6ce2ddeed47f8ebacd7878509b0190ed24b2e55380e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6ac7a73d056ceeae89f31eed787e230193e82bad0a2c762fa35540edfaeee6e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd16de1da3c89b579d33c34b4dd24d8ae9df35e17819135165afc45db1297c61\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"file observer\\\\nW1121 20:06:54.183363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 20:06:54.183552 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 20:06:54.184374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-552169000/tls.crt::/tmp/serving-cert-552169000/tls.key\\\\\\\"\\\\nI1121 20:06:54.597449 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 20:06:54.600431 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 20:06:54.600448 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 20:06:54.600471 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 20:06:54.600476 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 20:06:54.608444 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 20:06:54.608470 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 20:06:54.608475 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 20:06:54.608479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 20:06:54.608482 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 20:06:54.608484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 20:06:54.608487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 20:06:54.608489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 20:06:54.610022 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ceae36169c61dcc484be2881c9562b4024866925aab81778ab9e171413f119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fc11f996bd226ec44a8afe092a0939a489f4fc8904c1532586d9e36b69817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91fc11f996bd226ec44a8afe092a0939a489f4fc8904c1532586d9e36b69817\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:25Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:25 crc kubenswrapper[4727]: I1121 20:07:25.630369 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a97e4b6438d588dce44178e2e245d1fc4fa954f6cd02b230ca5c34e3d32294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c90d2d9f22924396549d0d1263f8cb07b41a77f23f7bb48aa9749caabff6147\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:25Z is after 2025-08-24T17:21:41Z" Nov 21 
20:07:25 crc kubenswrapper[4727]: I1121 20:07:25.651048 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:25Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:25 crc kubenswrapper[4727]: I1121 20:07:25.664758 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:25Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:25 crc kubenswrapper[4727]: I1121 20:07:25.676919 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3ca80-2251-484d-9025-61aed82e16d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://539bc94dd8a3fd65aeae82170fe889bc42c364cfcf52005760432629911dfa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bd3db39641919d1b93ecc771bce800100e7a8b6200d6fc606dbdc50179b8255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08a09f48855f360ffe892f48d4efa156a6774a0be33504e0d664f6a90f1144d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7c202de2c7a6ec653f0c7f816dcbf64fbf6a5c9a779d29403a29a1003ad201\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:25Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:25 crc kubenswrapper[4727]: I1121 20:07:25.694374 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:25Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:25 crc kubenswrapper[4727]: I1121 20:07:25.698546 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:25 crc kubenswrapper[4727]: I1121 20:07:25.698576 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:25 crc kubenswrapper[4727]: I1121 20:07:25.698588 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:25 crc 
kubenswrapper[4727]: I1121 20:07:25.698604 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:25 crc kubenswrapper[4727]: I1121 20:07:25.698614 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:25Z","lastTransitionTime":"2025-11-21T20:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:25 crc kubenswrapper[4727]: I1121 20:07:25.707998 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rs9rv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8318f96-4402-4567-a432-6cf3897e218d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8bcnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8bcnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rs9rv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:25Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:25 crc 
kubenswrapper[4727]: I1121 20:07:25.725472 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-drnwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c073d13-aa4f-401c-9684-36980fe94cb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c68719b36c0e9c62dcfa24171d18f9ad5a578884b8ca0da8b5827d6656e890c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hssmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec82846684441a347ddef8d336b04c6a776dbe2a5ed7894b813378790e12f7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hssmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:07:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-drnwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:25Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:25 crc kubenswrapper[4727]: I1121 20:07:25.743342 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffd614574e076bedcf841406bd5accc4387a8f2dbc55cbfbb5a1064e69a861d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:25Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:25 crc kubenswrapper[4727]: I1121 20:07:25.754549 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b974a1dc77049e8c632b0acbe8dedf6d3f391811208d0d1084c75e4fdb0908dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:25Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:25 crc kubenswrapper[4727]: I1121 20:07:25.765235 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ccfbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bffa327-bdf2-45d4-93ab-40152e82d177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18e9f8fd8f35e2f7821de2d9f9040dbc5160d8f114d1a54d8e0a0dabab520d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-no
de-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ccfbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:25Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:25 crc kubenswrapper[4727]: I1121 20:07:25.779210 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-74crp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db8a4e70-c074-4f90-aebe-444078f3337f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d8ec54f8edeaa80fb39b26f0cd9c0d4b2456c149245b1c6c1585bf28b08ec13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a2b3626be8b73b407cdeae484730c70f4caa2e66e0a815cb8c047f667254092\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2b3626be8b73b407cdeae484730c70f4caa2e66e0a815cb8c047f667254092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4360080de9d788a9ac8bad5b4c679aef3367c8c4e06fd00a28ff4b23406cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d4360080de9d788a9ac8bad5b4c679aef3367c8c4e06fd00a28ff4b23406cc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68b04ba74eb7a875559096df2051e2d99535444795021756d6072246fdf21ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68b04ba74eb7a875559096df2051e2d99535444795021756d6072246fdf21ee1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4be
59a65e9ffdf24c7d6dcaf699e0c86095ab2e7156771e4cca86c3fb264ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab4be59a65e9ffdf24c7d6dcaf699e0c86095ab2e7156771e4cca86c3fb264ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465a44322e7e3ff27479dabcdd1a2ba9102e10c73b75d51d0639b4a8ebeda54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://465a44322e7e3ff27479dabcdd1a2ba9102e10c73b75d51d0639b4a8ebeda54e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:07:00Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea5b2f1f6cf3b490899d65086f99f448f2bdbfd859fa5b283af3fb108479902e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea5b2f1f6cf3b490899d65086f99f448f2bdbfd859fa5b283af3fb108479902e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-74crp\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:25Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:25 crc kubenswrapper[4727]: I1121 20:07:25.800501 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fae7d81bbd4cc7c05885d66b30b4a4f5422c74aecc5439700c1fe7eb3c28a4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://060f5b533d83fc7cf02d24bf7b12e2a4972ce1d7ed4e62c4c7788467ae2229c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f04b4d4857a8309f30f22774c8a2faeb752998b094d70898a986e04038771b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f02a4cae48dc4ac9d7da3d367fe5137fb012ee8a28f45c9cf4af098f1d93ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf216dac56f09909db88c9ea0ef9d0d4fb8bc2e63a02fd5631c7d4369080f88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1909b9178d7fb508200d34ad33f7d2c2aa2ae9756b15c17fd7c49bb019cc65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e073190a14764f06d4369511999e00a23a812b3103ad84d761c95dd836ecf04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e073190a14764f06d4369511999e00a23a812b3103ad84d761c95dd836ecf04\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T20:07:18Z\\\",\\\"message\\\":\\\"t default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start 
default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:18Z is after 2025-08-24T17:21:41Z]\\\\nI1121 20:07:18.482383 6367 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI1121 20:07:18.482381 6367 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:fals\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T20:07:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tfd4j_openshift-ovn-kubernetes(70d2ca13-a8f7-43dc-8ad0-142d99ccde18)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68979c9b5ab69852e6cc4293ef0e77454dcbb7fdaeb61b7b6af5c1a3ac101e47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5fa8be5e6ea2c20dd597fb7ef2fdd3355045b310d1a99bb1a89a8207e505577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5fa8be5e6ea2c20dd
597fb7ef2fdd3355045b310d1a99bb1a89a8207e505577\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tfd4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:25Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:25 crc kubenswrapper[4727]: I1121 20:07:25.800687 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:25 crc kubenswrapper[4727]: I1121 20:07:25.800739 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:25 crc kubenswrapper[4727]: I1121 20:07:25.800749 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:25 crc kubenswrapper[4727]: I1121 20:07:25.800764 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:25 crc kubenswrapper[4727]: I1121 20:07:25.800774 4727 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:25Z","lastTransitionTime":"2025-11-21T20:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:25 crc kubenswrapper[4727]: I1121 20:07:25.811215 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7444p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75a43503-e538-4964-9789-322839cc4c48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d61ca28ee5e9b8e1b56afa45b32036c825b6299dc855cb7ce38dde4649c2371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9w57k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7444p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:25Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:25 crc kubenswrapper[4727]: I1121 20:07:25.902674 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:25 crc kubenswrapper[4727]: I1121 20:07:25.902714 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:25 crc kubenswrapper[4727]: I1121 20:07:25.902724 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:25 crc kubenswrapper[4727]: I1121 20:07:25.902740 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:25 crc kubenswrapper[4727]: I1121 20:07:25.902752 4727 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:25Z","lastTransitionTime":"2025-11-21T20:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:26 crc kubenswrapper[4727]: I1121 20:07:26.004738 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:26 crc kubenswrapper[4727]: I1121 20:07:26.004778 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:26 crc kubenswrapper[4727]: I1121 20:07:26.004788 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:26 crc kubenswrapper[4727]: I1121 20:07:26.004802 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:26 crc kubenswrapper[4727]: I1121 20:07:26.004814 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:26Z","lastTransitionTime":"2025-11-21T20:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:26 crc kubenswrapper[4727]: I1121 20:07:26.107206 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:26 crc kubenswrapper[4727]: I1121 20:07:26.107269 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:26 crc kubenswrapper[4727]: I1121 20:07:26.107292 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:26 crc kubenswrapper[4727]: I1121 20:07:26.107322 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:26 crc kubenswrapper[4727]: I1121 20:07:26.107346 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:26Z","lastTransitionTime":"2025-11-21T20:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:26 crc kubenswrapper[4727]: I1121 20:07:26.209903 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:26 crc kubenswrapper[4727]: I1121 20:07:26.209988 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:26 crc kubenswrapper[4727]: I1121 20:07:26.210007 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:26 crc kubenswrapper[4727]: I1121 20:07:26.210033 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:26 crc kubenswrapper[4727]: I1121 20:07:26.210052 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:26Z","lastTransitionTime":"2025-11-21T20:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:26 crc kubenswrapper[4727]: I1121 20:07:26.312269 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:26 crc kubenswrapper[4727]: I1121 20:07:26.312296 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:26 crc kubenswrapper[4727]: I1121 20:07:26.312305 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:26 crc kubenswrapper[4727]: I1121 20:07:26.312334 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:26 crc kubenswrapper[4727]: I1121 20:07:26.312344 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:26Z","lastTransitionTime":"2025-11-21T20:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:26 crc kubenswrapper[4727]: I1121 20:07:26.329273 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 20:07:26 crc kubenswrapper[4727]: E1121 20:07:26.329523 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 21 20:07:26 crc kubenswrapper[4727]: E1121 20:07:26.329542 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 21 20:07:26 crc kubenswrapper[4727]: E1121 20:07:26.329552 4727 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 20:07:26 crc kubenswrapper[4727]: E1121 20:07:26.329834 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 20:07:58.329417465 +0000 UTC m=+83.515602509 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:07:26 crc kubenswrapper[4727]: E1121 20:07:26.329860 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-21 20:07:58.329852464 +0000 UTC m=+83.516037508 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 20:07:26 crc kubenswrapper[4727]: I1121 20:07:26.329353 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 20:07:26 crc kubenswrapper[4727]: I1121 20:07:26.329890 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: 
\"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 20:07:26 crc kubenswrapper[4727]: E1121 20:07:26.330023 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 21 20:07:26 crc kubenswrapper[4727]: E1121 20:07:26.330037 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 21 20:07:26 crc kubenswrapper[4727]: E1121 20:07:26.330045 4727 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 20:07:26 crc kubenswrapper[4727]: E1121 20:07:26.330067 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-21 20:07:58.330060808 +0000 UTC m=+83.516245852 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 20:07:26 crc kubenswrapper[4727]: I1121 20:07:26.329950 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 20:07:26 crc kubenswrapper[4727]: I1121 20:07:26.330113 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 20:07:26 crc kubenswrapper[4727]: E1121 20:07:26.330199 4727 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 21 20:07:26 crc kubenswrapper[4727]: E1121 20:07:26.330222 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-21 20:07:58.330216852 +0000 UTC m=+83.516401896 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 21 20:07:26 crc kubenswrapper[4727]: E1121 20:07:26.330323 4727 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 21 20:07:26 crc kubenswrapper[4727]: E1121 20:07:26.330346 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-21 20:07:58.330339924 +0000 UTC m=+83.516524968 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 21 20:07:26 crc kubenswrapper[4727]: I1121 20:07:26.415404 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:26 crc kubenswrapper[4727]: I1121 20:07:26.415442 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:26 crc kubenswrapper[4727]: I1121 20:07:26.415454 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:26 crc kubenswrapper[4727]: I1121 20:07:26.415470 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 
21 20:07:26 crc kubenswrapper[4727]: I1121 20:07:26.415483 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:26Z","lastTransitionTime":"2025-11-21T20:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:26 crc kubenswrapper[4727]: I1121 20:07:26.498425 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rs9rv" Nov 21 20:07:26 crc kubenswrapper[4727]: I1121 20:07:26.498511 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 20:07:26 crc kubenswrapper[4727]: I1121 20:07:26.498481 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 20:07:26 crc kubenswrapper[4727]: E1121 20:07:26.498648 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rs9rv" podUID="f8318f96-4402-4567-a432-6cf3897e218d" Nov 21 20:07:26 crc kubenswrapper[4727]: E1121 20:07:26.498757 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 20:07:26 crc kubenswrapper[4727]: E1121 20:07:26.498827 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 20:07:26 crc kubenswrapper[4727]: I1121 20:07:26.518261 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:26 crc kubenswrapper[4727]: I1121 20:07:26.518301 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:26 crc kubenswrapper[4727]: I1121 20:07:26.518309 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:26 crc kubenswrapper[4727]: I1121 20:07:26.518326 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:26 crc kubenswrapper[4727]: I1121 20:07:26.518336 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:26Z","lastTransitionTime":"2025-11-21T20:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:26 crc kubenswrapper[4727]: I1121 20:07:26.620773 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:26 crc kubenswrapper[4727]: I1121 20:07:26.620805 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:26 crc kubenswrapper[4727]: I1121 20:07:26.620814 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:26 crc kubenswrapper[4727]: I1121 20:07:26.620828 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:26 crc kubenswrapper[4727]: I1121 20:07:26.620838 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:26Z","lastTransitionTime":"2025-11-21T20:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:26 crc kubenswrapper[4727]: I1121 20:07:26.723487 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:26 crc kubenswrapper[4727]: I1121 20:07:26.723521 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:26 crc kubenswrapper[4727]: I1121 20:07:26.723530 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:26 crc kubenswrapper[4727]: I1121 20:07:26.723542 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:26 crc kubenswrapper[4727]: I1121 20:07:26.723550 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:26Z","lastTransitionTime":"2025-11-21T20:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:26 crc kubenswrapper[4727]: I1121 20:07:26.825620 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:26 crc kubenswrapper[4727]: I1121 20:07:26.825711 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:26 crc kubenswrapper[4727]: I1121 20:07:26.825723 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:26 crc kubenswrapper[4727]: I1121 20:07:26.825740 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:26 crc kubenswrapper[4727]: I1121 20:07:26.825751 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:26Z","lastTransitionTime":"2025-11-21T20:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:26 crc kubenswrapper[4727]: I1121 20:07:26.927603 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:26 crc kubenswrapper[4727]: I1121 20:07:26.927631 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:26 crc kubenswrapper[4727]: I1121 20:07:26.927639 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:26 crc kubenswrapper[4727]: I1121 20:07:26.927652 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:26 crc kubenswrapper[4727]: I1121 20:07:26.927660 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:26Z","lastTransitionTime":"2025-11-21T20:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:27 crc kubenswrapper[4727]: I1121 20:07:27.030244 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:27 crc kubenswrapper[4727]: I1121 20:07:27.030310 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:27 crc kubenswrapper[4727]: I1121 20:07:27.030336 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:27 crc kubenswrapper[4727]: I1121 20:07:27.030405 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:27 crc kubenswrapper[4727]: I1121 20:07:27.030427 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:27Z","lastTransitionTime":"2025-11-21T20:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:27 crc kubenswrapper[4727]: I1121 20:07:27.132564 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:27 crc kubenswrapper[4727]: I1121 20:07:27.132633 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:27 crc kubenswrapper[4727]: I1121 20:07:27.132648 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:27 crc kubenswrapper[4727]: I1121 20:07:27.132672 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:27 crc kubenswrapper[4727]: I1121 20:07:27.132690 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:27Z","lastTransitionTime":"2025-11-21T20:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:27 crc kubenswrapper[4727]: I1121 20:07:27.234762 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:27 crc kubenswrapper[4727]: I1121 20:07:27.234804 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:27 crc kubenswrapper[4727]: I1121 20:07:27.234815 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:27 crc kubenswrapper[4727]: I1121 20:07:27.234831 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:27 crc kubenswrapper[4727]: I1121 20:07:27.234842 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:27Z","lastTransitionTime":"2025-11-21T20:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:27 crc kubenswrapper[4727]: I1121 20:07:27.337535 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:27 crc kubenswrapper[4727]: I1121 20:07:27.337600 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:27 crc kubenswrapper[4727]: I1121 20:07:27.337617 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:27 crc kubenswrapper[4727]: I1121 20:07:27.337640 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:27 crc kubenswrapper[4727]: I1121 20:07:27.337656 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:27Z","lastTransitionTime":"2025-11-21T20:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:27 crc kubenswrapper[4727]: I1121 20:07:27.440607 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:27 crc kubenswrapper[4727]: I1121 20:07:27.440662 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:27 crc kubenswrapper[4727]: I1121 20:07:27.440680 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:27 crc kubenswrapper[4727]: I1121 20:07:27.440702 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:27 crc kubenswrapper[4727]: I1121 20:07:27.440719 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:27Z","lastTransitionTime":"2025-11-21T20:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:27 crc kubenswrapper[4727]: I1121 20:07:27.498893 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 20:07:27 crc kubenswrapper[4727]: E1121 20:07:27.499287 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 20:07:27 crc kubenswrapper[4727]: I1121 20:07:27.542608 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:27 crc kubenswrapper[4727]: I1121 20:07:27.542807 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:27 crc kubenswrapper[4727]: I1121 20:07:27.542905 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:27 crc kubenswrapper[4727]: I1121 20:07:27.542988 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:27 crc kubenswrapper[4727]: I1121 20:07:27.543049 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:27Z","lastTransitionTime":"2025-11-21T20:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:27 crc kubenswrapper[4727]: I1121 20:07:27.645474 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:27 crc kubenswrapper[4727]: I1121 20:07:27.645757 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:27 crc kubenswrapper[4727]: I1121 20:07:27.645866 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:27 crc kubenswrapper[4727]: I1121 20:07:27.646012 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:27 crc kubenswrapper[4727]: I1121 20:07:27.646150 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:27Z","lastTransitionTime":"2025-11-21T20:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:27 crc kubenswrapper[4727]: I1121 20:07:27.748624 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:27 crc kubenswrapper[4727]: I1121 20:07:27.748657 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:27 crc kubenswrapper[4727]: I1121 20:07:27.748698 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:27 crc kubenswrapper[4727]: I1121 20:07:27.748711 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:27 crc kubenswrapper[4727]: I1121 20:07:27.748719 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:27Z","lastTransitionTime":"2025-11-21T20:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:27 crc kubenswrapper[4727]: I1121 20:07:27.850676 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:27 crc kubenswrapper[4727]: I1121 20:07:27.850733 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:27 crc kubenswrapper[4727]: I1121 20:07:27.850745 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:27 crc kubenswrapper[4727]: I1121 20:07:27.850760 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:27 crc kubenswrapper[4727]: I1121 20:07:27.850770 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:27Z","lastTransitionTime":"2025-11-21T20:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:27 crc kubenswrapper[4727]: I1121 20:07:27.952915 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:27 crc kubenswrapper[4727]: I1121 20:07:27.952986 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:27 crc kubenswrapper[4727]: I1121 20:07:27.953002 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:27 crc kubenswrapper[4727]: I1121 20:07:27.953025 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:27 crc kubenswrapper[4727]: I1121 20:07:27.953040 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:27Z","lastTransitionTime":"2025-11-21T20:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:28 crc kubenswrapper[4727]: I1121 20:07:28.056073 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:28 crc kubenswrapper[4727]: I1121 20:07:28.056111 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:28 crc kubenswrapper[4727]: I1121 20:07:28.056122 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:28 crc kubenswrapper[4727]: I1121 20:07:28.056139 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:28 crc kubenswrapper[4727]: I1121 20:07:28.056151 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:28Z","lastTransitionTime":"2025-11-21T20:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:28 crc kubenswrapper[4727]: I1121 20:07:28.157774 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:28 crc kubenswrapper[4727]: I1121 20:07:28.157859 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:28 crc kubenswrapper[4727]: I1121 20:07:28.157875 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:28 crc kubenswrapper[4727]: I1121 20:07:28.157892 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:28 crc kubenswrapper[4727]: I1121 20:07:28.157902 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:28Z","lastTransitionTime":"2025-11-21T20:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:28 crc kubenswrapper[4727]: I1121 20:07:28.260267 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:28 crc kubenswrapper[4727]: I1121 20:07:28.260321 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:28 crc kubenswrapper[4727]: I1121 20:07:28.260337 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:28 crc kubenswrapper[4727]: I1121 20:07:28.260358 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:28 crc kubenswrapper[4727]: I1121 20:07:28.260373 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:28Z","lastTransitionTime":"2025-11-21T20:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:28 crc kubenswrapper[4727]: I1121 20:07:28.362611 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:28 crc kubenswrapper[4727]: I1121 20:07:28.362647 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:28 crc kubenswrapper[4727]: I1121 20:07:28.362658 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:28 crc kubenswrapper[4727]: I1121 20:07:28.362675 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:28 crc kubenswrapper[4727]: I1121 20:07:28.362686 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:28Z","lastTransitionTime":"2025-11-21T20:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:28 crc kubenswrapper[4727]: I1121 20:07:28.464842 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:28 crc kubenswrapper[4727]: I1121 20:07:28.464930 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:28 crc kubenswrapper[4727]: I1121 20:07:28.464950 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:28 crc kubenswrapper[4727]: I1121 20:07:28.465483 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:28 crc kubenswrapper[4727]: I1121 20:07:28.465722 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:28Z","lastTransitionTime":"2025-11-21T20:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:28 crc kubenswrapper[4727]: I1121 20:07:28.498298 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 20:07:28 crc kubenswrapper[4727]: I1121 20:07:28.498346 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 20:07:28 crc kubenswrapper[4727]: E1121 20:07:28.498671 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 20:07:28 crc kubenswrapper[4727]: E1121 20:07:28.498871 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 20:07:28 crc kubenswrapper[4727]: I1121 20:07:28.498930 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rs9rv" Nov 21 20:07:28 crc kubenswrapper[4727]: E1121 20:07:28.499165 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rs9rv" podUID="f8318f96-4402-4567-a432-6cf3897e218d" Nov 21 20:07:28 crc kubenswrapper[4727]: I1121 20:07:28.568776 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:28 crc kubenswrapper[4727]: I1121 20:07:28.568813 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:28 crc kubenswrapper[4727]: I1121 20:07:28.568823 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:28 crc kubenswrapper[4727]: I1121 20:07:28.568840 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:28 crc kubenswrapper[4727]: I1121 20:07:28.568849 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:28Z","lastTransitionTime":"2025-11-21T20:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:28 crc kubenswrapper[4727]: I1121 20:07:28.671112 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:28 crc kubenswrapper[4727]: I1121 20:07:28.671139 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:28 crc kubenswrapper[4727]: I1121 20:07:28.671149 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:28 crc kubenswrapper[4727]: I1121 20:07:28.671165 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:28 crc kubenswrapper[4727]: I1121 20:07:28.671176 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:28Z","lastTransitionTime":"2025-11-21T20:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:28 crc kubenswrapper[4727]: I1121 20:07:28.773672 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:28 crc kubenswrapper[4727]: I1121 20:07:28.773718 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:28 crc kubenswrapper[4727]: I1121 20:07:28.773727 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:28 crc kubenswrapper[4727]: I1121 20:07:28.773741 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:28 crc kubenswrapper[4727]: I1121 20:07:28.773750 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:28Z","lastTransitionTime":"2025-11-21T20:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:28 crc kubenswrapper[4727]: I1121 20:07:28.876411 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:28 crc kubenswrapper[4727]: I1121 20:07:28.876490 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:28 crc kubenswrapper[4727]: I1121 20:07:28.876509 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:28 crc kubenswrapper[4727]: I1121 20:07:28.876539 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:28 crc kubenswrapper[4727]: I1121 20:07:28.876558 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:28Z","lastTransitionTime":"2025-11-21T20:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:28 crc kubenswrapper[4727]: I1121 20:07:28.980010 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:28 crc kubenswrapper[4727]: I1121 20:07:28.980057 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:28 crc kubenswrapper[4727]: I1121 20:07:28.980068 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:28 crc kubenswrapper[4727]: I1121 20:07:28.980090 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:28 crc kubenswrapper[4727]: I1121 20:07:28.980101 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:28Z","lastTransitionTime":"2025-11-21T20:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:29 crc kubenswrapper[4727]: I1121 20:07:29.083275 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:29 crc kubenswrapper[4727]: I1121 20:07:29.083338 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:29 crc kubenswrapper[4727]: I1121 20:07:29.083359 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:29 crc kubenswrapper[4727]: I1121 20:07:29.083387 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:29 crc kubenswrapper[4727]: I1121 20:07:29.083406 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:29Z","lastTransitionTime":"2025-11-21T20:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:29 crc kubenswrapper[4727]: I1121 20:07:29.186902 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:29 crc kubenswrapper[4727]: I1121 20:07:29.187015 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:29 crc kubenswrapper[4727]: I1121 20:07:29.187035 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:29 crc kubenswrapper[4727]: I1121 20:07:29.187056 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:29 crc kubenswrapper[4727]: I1121 20:07:29.187090 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:29Z","lastTransitionTime":"2025-11-21T20:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:29 crc kubenswrapper[4727]: I1121 20:07:29.290580 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:29 crc kubenswrapper[4727]: I1121 20:07:29.290665 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:29 crc kubenswrapper[4727]: I1121 20:07:29.290688 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:29 crc kubenswrapper[4727]: I1121 20:07:29.290716 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:29 crc kubenswrapper[4727]: I1121 20:07:29.290736 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:29Z","lastTransitionTime":"2025-11-21T20:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:29 crc kubenswrapper[4727]: I1121 20:07:29.394167 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:29 crc kubenswrapper[4727]: I1121 20:07:29.394254 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:29 crc kubenswrapper[4727]: I1121 20:07:29.394284 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:29 crc kubenswrapper[4727]: I1121 20:07:29.394316 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:29 crc kubenswrapper[4727]: I1121 20:07:29.394338 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:29Z","lastTransitionTime":"2025-11-21T20:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:29 crc kubenswrapper[4727]: I1121 20:07:29.497387 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:29 crc kubenswrapper[4727]: I1121 20:07:29.497468 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:29 crc kubenswrapper[4727]: I1121 20:07:29.497487 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:29 crc kubenswrapper[4727]: I1121 20:07:29.497515 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:29 crc kubenswrapper[4727]: I1121 20:07:29.497534 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:29Z","lastTransitionTime":"2025-11-21T20:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:29 crc kubenswrapper[4727]: I1121 20:07:29.498343 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 20:07:29 crc kubenswrapper[4727]: E1121 20:07:29.498600 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 20:07:29 crc kubenswrapper[4727]: I1121 20:07:29.600355 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:29 crc kubenswrapper[4727]: I1121 20:07:29.600395 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:29 crc kubenswrapper[4727]: I1121 20:07:29.600403 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:29 crc kubenswrapper[4727]: I1121 20:07:29.600419 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:29 crc kubenswrapper[4727]: I1121 20:07:29.600429 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:29Z","lastTransitionTime":"2025-11-21T20:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:29 crc kubenswrapper[4727]: I1121 20:07:29.704724 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:29 crc kubenswrapper[4727]: I1121 20:07:29.704777 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:29 crc kubenswrapper[4727]: I1121 20:07:29.704787 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:29 crc kubenswrapper[4727]: I1121 20:07:29.704808 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:29 crc kubenswrapper[4727]: I1121 20:07:29.704822 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:29Z","lastTransitionTime":"2025-11-21T20:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:29 crc kubenswrapper[4727]: I1121 20:07:29.807720 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:29 crc kubenswrapper[4727]: I1121 20:07:29.807769 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:29 crc kubenswrapper[4727]: I1121 20:07:29.807782 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:29 crc kubenswrapper[4727]: I1121 20:07:29.807801 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:29 crc kubenswrapper[4727]: I1121 20:07:29.807813 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:29Z","lastTransitionTime":"2025-11-21T20:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:29 crc kubenswrapper[4727]: I1121 20:07:29.820787 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 21 20:07:29 crc kubenswrapper[4727]: I1121 20:07:29.832796 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 21 20:07:29 crc kubenswrapper[4727]: I1121 20:07:29.841319 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b58aef8f-f223-47d8-a2e6-4a80aeeeec42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db38e6270d9dcedf99c669fa60a075a25d7fd9bb753702551fe5f4610ecaf815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443c4ad2ecea2e9d360b9300cb0c384fb5afc27e6ac44394f985adbeab354a9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5k2kk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2025-11-21T20:07:29Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:29 crc kubenswrapper[4727]: I1121 20:07:29.864409 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7rvdc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07dba644-eb6f-45c3-b373-7a1610c569aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fc26f04e78b8405547a4fa225524347eabce94dfa5b4bf2448db66e36aaf006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"}
,{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7rvdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2025-11-21T20:07:29Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:29 crc kubenswrapper[4727]: I1121 20:07:29.899866 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a1f6ee3-6ebf-4422-91ed-105ccb19af8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de2d5f0603d9b7d189293750a83d5a3957e98f0390659f8c88d35ae1cae6b18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"nam
e\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cde1196a3cb3b4d0acbb70ac333211099ec6f26a975e2de7c07d059983024368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1324bc3b6958e18a7c496fdb513726b921435f35dde2064fcaf9a6fca06eb8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://395c324d3a95c173c7dd0c9206831fe07c220fb6ef12f7b25444a120c5183202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6021787d6acdc5734a82ef2ed53add13275c3461c1ffd38219c264e7fca3fab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://363e33e0f6686c6a97437b91ff8d5be211717d527dac5d1bfad6613849c1a8fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://363e33e0f6686c6a97437b91ff8d5be211717d527dac5d1bfad6613849c1a8fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d2632c704cf56d6b416cdcc6213e3b60836d12cfae28af4bd06f99dff56cd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16d2632c704cf56d6b416cdcc6213e3b60836d12cfae28af4bd06f99dff56cd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91dcb1c5c16b4720f4c812b25a87ebd8a0f3a334cc714233b6852cd39322fbf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91dcb1c5c16b4720f4c812b25a87ebd8a0f3a334cc714233b6852cd39322fbf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:38Z\\\",\\\"reason\\\":\\
\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:29Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:29 crc kubenswrapper[4727]: I1121 20:07:29.911636 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:29 crc kubenswrapper[4727]: I1121 20:07:29.911683 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:29 crc kubenswrapper[4727]: I1121 20:07:29.911701 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:29 crc kubenswrapper[4727]: I1121 20:07:29.911726 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:29 crc kubenswrapper[4727]: I1121 20:07:29.911743 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:29Z","lastTransitionTime":"2025-11-21T20:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:29 crc kubenswrapper[4727]: I1121 20:07:29.924593 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"080ce4fe-3e82-4e15-9340-4fa88a29da04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27c37b29dda7ae2cc145e0770879e433aa03e277121be3b7acfe22d41d27801f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cc2199f65adaa48cd1726ea812b332e4f40291e094ccfc215343bdb2f2bd2f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://1f3b9bf6cf2709d4559bf6ce2ddeed47f8ebacd7878509b0190ed24b2e55380e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6ac7a73d056ceeae89f31eed787e230193e82bad0a2c762fa35540edfaeee6e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd16de1da3c89b579d33c34b4dd24d8ae9df35e17819135165afc45db1297c61\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"file observer\\\\nW1121 20:06:54.183363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 20:06:54.183552 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 20:06:54.184374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-552169000/tls.crt::/tmp/serving-cert-552169000/tls.key\\\\\\\"\\\\nI1121 20:06:54.597449 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 20:06:54.600431 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 20:06:54.600448 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 20:06:54.600471 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 20:06:54.600476 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 20:06:54.608444 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 20:06:54.608470 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 20:06:54.608475 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 20:06:54.608479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 20:06:54.608482 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 20:06:54.608484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 20:06:54.608487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 20:06:54.608489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 20:06:54.610022 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ceae36169c61dcc484be2881c9562b4024866925aab81778ab9e171413f119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fc11f996bd226ec44a8afe092a0939a489f4fc8904c1532586d9e36b69817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91fc11f996bd226ec44a8afe092a0939a489f4fc8904c1532586d9e36b69817\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:29Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:29 crc kubenswrapper[4727]: I1121 20:07:29.946383 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a97e4b6438d588dce44178e2e245d1fc4fa954f6cd02b230ca5c34e3d32294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c90d2d9f22924396549d0d1263f8cb07b41a77f23f7bb48aa9749caabff6147\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:29Z is after 2025-08-24T17:21:41Z" Nov 21 
20:07:29 crc kubenswrapper[4727]: I1121 20:07:29.966430 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:29Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:29 crc kubenswrapper[4727]: I1121 20:07:29.986167 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:29Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:30 crc kubenswrapper[4727]: I1121 20:07:30.007528 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3ca80-2251-484d-9025-61aed82e16d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://539bc94dd8a3fd65aeae82170fe889bc42c364cfcf52005760432629911dfa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bd3db39641919d1b93ecc771bce800100e7a8b6200d6fc606dbdc50179b8255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08a09f48855f360ffe892f48d4efa156a6774a0be33504e0d664f6a90f1144d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7c202de2c7a6ec653f0c7f816dcbf64fbf6a5c9a779d29403a29a1003ad201\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:30Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:30 crc kubenswrapper[4727]: I1121 20:07:30.014870 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:30 crc kubenswrapper[4727]: I1121 20:07:30.014939 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:30 crc kubenswrapper[4727]: I1121 20:07:30.014981 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:30 crc kubenswrapper[4727]: I1121 20:07:30.015011 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:30 crc kubenswrapper[4727]: I1121 20:07:30.015031 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:30Z","lastTransitionTime":"2025-11-21T20:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:30 crc kubenswrapper[4727]: I1121 20:07:30.028010 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:30Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:30 crc kubenswrapper[4727]: I1121 20:07:30.047516 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rs9rv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8318f96-4402-4567-a432-6cf3897e218d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8bcnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8bcnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rs9rv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:30Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:30 crc kubenswrapper[4727]: I1121 20:07:30.067817 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-drnwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c073d13-aa4f-401c-9684-36980fe94cb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c68719b36c0e9c62dcfa24171d18f9ad5a578884b8ca0da8b5827d6656e890c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2025-11-21T20:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hssmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec82846684441a347ddef8d336b04c6a776dbe2a5ed7894b813378790e12f7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hssmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:07:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-drnwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-21T20:07:30Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:30 crc kubenswrapper[4727]: I1121 20:07:30.090130 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffd614574e076bedcf841406bd5accc4387a8f2dbc55cbfbb5a1064e69a861d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:30Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:30 crc kubenswrapper[4727]: I1121 20:07:30.106635 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b974a1dc77049e8c632b0acbe8dedf6d3f391811208d0d1084c75e4fdb0908dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:30Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:30 crc kubenswrapper[4727]: I1121 20:07:30.117811 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:30 crc kubenswrapper[4727]: I1121 20:07:30.117863 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:30 crc kubenswrapper[4727]: I1121 20:07:30.117874 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:30 crc kubenswrapper[4727]: I1121 20:07:30.117893 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:30 crc kubenswrapper[4727]: I1121 20:07:30.117906 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:30Z","lastTransitionTime":"2025-11-21T20:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:30 crc kubenswrapper[4727]: I1121 20:07:30.122917 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ccfbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bffa327-bdf2-45d4-93ab-40152e82d177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18e9f8fd8f35e2f7821de2d9f9040dbc5160d8f114d1a54d8e0a0dabab520d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk6qf\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ccfbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:30Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:30 crc kubenswrapper[4727]: I1121 20:07:30.148296 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-74crp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db8a4e70-c074-4f90-aebe-444078f3337f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d8ec54f8edeaa80fb39b26f0cd9c0d4b2456c149245b1c6c1585bf28b08ec13\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a2b3626be8b73b407cdeae484730c70f4caa2e66e0a815cb8c047f667254092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2b3626be8b73b407cdeae484730c70f4caa2e66e0a815cb8c047f667254092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\
",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4360080de9d788a9ac8bad5b4c679aef3367c8c4e06fd00a28ff4b23406cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d4360080de9d788a9ac8bad5b4c679aef3367c8c4e06fd00a28ff4b23406cc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68b04ba74eb7a875559096df2051e2d99535444795021756d6072246fdf21ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-
cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68b04ba74eb7a875559096df2051e2d99535444795021756d6072246fdf21ee1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4be59a65e9ffdf24c7d6dcaf699e0c86095ab2e7156771e4cca86c3fb264ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab4be59a65e9ffdf24c7d6dcaf699e0c86095ab2e7156771e4cca86c3fb264ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":
\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465a44322e7e3ff27479dabcdd1a2ba9102e10c73b75d51d0639b4a8ebeda54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://465a44322e7e3ff27479dabcdd1a2ba9102e10c73b75d51d0639b4a8ebeda54e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea5b2f1f6cf3b490899d65086f99f448f2bdbfd859fa5b283af3fb108479902e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"term
inated\\\":{\\\"containerID\\\":\\\"cri-o://ea5b2f1f6cf3b490899d65086f99f448f2bdbfd859fa5b283af3fb108479902e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-74crp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:30Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:30 crc kubenswrapper[4727]: I1121 20:07:30.166707 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fae7d81bbd4cc7c05885d66b30b4a4f5422c74aecc5439700c1fe7eb3c28a4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://060f5b533d83fc7cf02d24bf7b12e2a4972ce1d7ed4e62c4c7788467ae2229c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f04b4d4857a8309f30f22774c8a2faeb752998b094d70898a986e04038771b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f02a4cae48dc4ac9d7da3d367fe5137fb012ee8a28f45c9cf4af098f1d93ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf216dac56f09909db88c9ea0ef9d0d4fb8bc2e63a02fd5631c7d4369080f88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1909b9178d7fb508200d34ad33f7d2c2aa2ae9756b15c17fd7c49bb019cc65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e073190a14764f06d4369511999e00a23a812b3103ad84d761c95dd836ecf04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e073190a14764f06d4369511999e00a23a812b3103ad84d761c95dd836ecf04\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T20:07:18Z\\\",\\\"message\\\":\\\"t default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start 
default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:18Z is after 2025-08-24T17:21:41Z]\\\\nI1121 20:07:18.482383 6367 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI1121 20:07:18.482381 6367 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:fals\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T20:07:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tfd4j_openshift-ovn-kubernetes(70d2ca13-a8f7-43dc-8ad0-142d99ccde18)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68979c9b5ab69852e6cc4293ef0e77454dcbb7fdaeb61b7b6af5c1a3ac101e47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5fa8be5e6ea2c20dd597fb7ef2fdd3355045b310d1a99bb1a89a8207e505577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5fa8be5e6ea2c20dd
597fb7ef2fdd3355045b310d1a99bb1a89a8207e505577\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tfd4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:30Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:30 crc kubenswrapper[4727]: I1121 20:07:30.179051 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7444p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75a43503-e538-4964-9789-322839cc4c48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d61ca28ee5e9b8e1b56afa45b32036c825b6299dc855cb7ce38dde4649c2371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9w57k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7444p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:30Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:30 crc kubenswrapper[4727]: I1121 20:07:30.221838 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:30 crc kubenswrapper[4727]: I1121 20:07:30.221884 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:30 crc kubenswrapper[4727]: I1121 20:07:30.221895 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:30 crc kubenswrapper[4727]: I1121 20:07:30.221914 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:30 crc kubenswrapper[4727]: I1121 20:07:30.221926 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:30Z","lastTransitionTime":"2025-11-21T20:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:30 crc kubenswrapper[4727]: I1121 20:07:30.324677 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:30 crc kubenswrapper[4727]: I1121 20:07:30.324762 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:30 crc kubenswrapper[4727]: I1121 20:07:30.324836 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:30 crc kubenswrapper[4727]: I1121 20:07:30.324854 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:30 crc kubenswrapper[4727]: I1121 20:07:30.324865 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:30Z","lastTransitionTime":"2025-11-21T20:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:30 crc kubenswrapper[4727]: I1121 20:07:30.427307 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:30 crc kubenswrapper[4727]: I1121 20:07:30.427422 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:30 crc kubenswrapper[4727]: I1121 20:07:30.427443 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:30 crc kubenswrapper[4727]: I1121 20:07:30.427463 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:30 crc kubenswrapper[4727]: I1121 20:07:30.427477 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:30Z","lastTransitionTime":"2025-11-21T20:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:30 crc kubenswrapper[4727]: I1121 20:07:30.499043 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rs9rv" Nov 21 20:07:30 crc kubenswrapper[4727]: I1121 20:07:30.499123 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 20:07:30 crc kubenswrapper[4727]: E1121 20:07:30.499213 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rs9rv" podUID="f8318f96-4402-4567-a432-6cf3897e218d" Nov 21 20:07:30 crc kubenswrapper[4727]: I1121 20:07:30.499329 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 20:07:30 crc kubenswrapper[4727]: E1121 20:07:30.499449 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 20:07:30 crc kubenswrapper[4727]: E1121 20:07:30.499594 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 20:07:30 crc kubenswrapper[4727]: I1121 20:07:30.530410 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:30 crc kubenswrapper[4727]: I1121 20:07:30.530469 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:30 crc kubenswrapper[4727]: I1121 20:07:30.530482 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:30 crc kubenswrapper[4727]: I1121 20:07:30.530499 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:30 crc kubenswrapper[4727]: I1121 20:07:30.530511 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:30Z","lastTransitionTime":"2025-11-21T20:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:30 crc kubenswrapper[4727]: I1121 20:07:30.633176 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:30 crc kubenswrapper[4727]: I1121 20:07:30.633470 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:30 crc kubenswrapper[4727]: I1121 20:07:30.633480 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:30 crc kubenswrapper[4727]: I1121 20:07:30.633496 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:30 crc kubenswrapper[4727]: I1121 20:07:30.633504 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:30Z","lastTransitionTime":"2025-11-21T20:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:30 crc kubenswrapper[4727]: I1121 20:07:30.735142 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:30 crc kubenswrapper[4727]: I1121 20:07:30.735180 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:30 crc kubenswrapper[4727]: I1121 20:07:30.735190 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:30 crc kubenswrapper[4727]: I1121 20:07:30.735205 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:30 crc kubenswrapper[4727]: I1121 20:07:30.735218 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:30Z","lastTransitionTime":"2025-11-21T20:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:30 crc kubenswrapper[4727]: I1121 20:07:30.837424 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:30 crc kubenswrapper[4727]: I1121 20:07:30.837479 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:30 crc kubenswrapper[4727]: I1121 20:07:30.837496 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:30 crc kubenswrapper[4727]: I1121 20:07:30.837517 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:30 crc kubenswrapper[4727]: I1121 20:07:30.837535 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:30Z","lastTransitionTime":"2025-11-21T20:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:30 crc kubenswrapper[4727]: I1121 20:07:30.940056 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:30 crc kubenswrapper[4727]: I1121 20:07:30.940284 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:30 crc kubenswrapper[4727]: I1121 20:07:30.940545 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:30 crc kubenswrapper[4727]: I1121 20:07:30.940614 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:30 crc kubenswrapper[4727]: I1121 20:07:30.940680 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:30Z","lastTransitionTime":"2025-11-21T20:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.043192 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.043259 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.043285 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.043314 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.043341 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:31Z","lastTransitionTime":"2025-11-21T20:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.146131 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.146588 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.146700 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.146809 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.146898 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:31Z","lastTransitionTime":"2025-11-21T20:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.250234 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.250280 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.250291 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.250306 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.250321 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:31Z","lastTransitionTime":"2025-11-21T20:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.353138 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.353185 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.353197 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.353217 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.353230 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:31Z","lastTransitionTime":"2025-11-21T20:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.455242 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.455277 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.455288 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.455305 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.455315 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:31Z","lastTransitionTime":"2025-11-21T20:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.498429 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 20:07:31 crc kubenswrapper[4727]: E1121 20:07:31.498546 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.499279 4727 scope.go:117] "RemoveContainer" containerID="6e073190a14764f06d4369511999e00a23a812b3103ad84d761c95dd836ecf04" Nov 21 20:07:31 crc kubenswrapper[4727]: E1121 20:07:31.499455 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-tfd4j_openshift-ovn-kubernetes(70d2ca13-a8f7-43dc-8ad0-142d99ccde18)\"" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" podUID="70d2ca13-a8f7-43dc-8ad0-142d99ccde18" Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.557235 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.557277 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.557289 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.557305 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.557316 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:31Z","lastTransitionTime":"2025-11-21T20:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.582919 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.582954 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.583001 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.583017 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.583026 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:31Z","lastTransitionTime":"2025-11-21T20:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:31 crc kubenswrapper[4727]: E1121 20:07:31.599628 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"32d79a59-978c-42a8-bb0b-33b3c3206f66\\\",\\\"systemUUID\\\":\\\"b99fd01f-0947-456f-ae40-db84d60b2190\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:31Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.604698 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.604732 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.604742 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.604758 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.604769 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:31Z","lastTransitionTime":"2025-11-21T20:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:31 crc kubenswrapper[4727]: E1121 20:07:31.615157 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"32d79a59-978c-42a8-bb0b-33b3c3206f66\\\",\\\"systemUUID\\\":\\\"b99fd01f-0947-456f-ae40-db84d60b2190\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:31Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.618738 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.618782 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.618797 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.618820 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.618833 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:31Z","lastTransitionTime":"2025-11-21T20:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:31 crc kubenswrapper[4727]: E1121 20:07:31.636819 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"32d79a59-978c-42a8-bb0b-33b3c3206f66\\\",\\\"systemUUID\\\":\\\"b99fd01f-0947-456f-ae40-db84d60b2190\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:31Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.640800 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.640847 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.640864 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.640890 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.640907 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:31Z","lastTransitionTime":"2025-11-21T20:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:31 crc kubenswrapper[4727]: E1121 20:07:31.659562 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"32d79a59-978c-42a8-bb0b-33b3c3206f66\\\",\\\"systemUUID\\\":\\\"b99fd01f-0947-456f-ae40-db84d60b2190\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:31Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.663857 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.663918 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.663939 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.663997 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.664025 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:31Z","lastTransitionTime":"2025-11-21T20:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:31 crc kubenswrapper[4727]: E1121 20:07:31.680216 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"32d79a59-978c-42a8-bb0b-33b3c3206f66\\\",\\\"systemUUID\\\":\\\"b99fd01f-0947-456f-ae40-db84d60b2190\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:31Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:31 crc kubenswrapper[4727]: E1121 20:07:31.680436 4727 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.682076 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.682135 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.682158 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.682185 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.682204 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:31Z","lastTransitionTime":"2025-11-21T20:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.784946 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.785084 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.785104 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.785130 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.785149 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:31Z","lastTransitionTime":"2025-11-21T20:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.888154 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.888194 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.888203 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.888224 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.888234 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:31Z","lastTransitionTime":"2025-11-21T20:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.990849 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.990899 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.990917 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.990942 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:31 crc kubenswrapper[4727]: I1121 20:07:31.990995 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:31Z","lastTransitionTime":"2025-11-21T20:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:32 crc kubenswrapper[4727]: I1121 20:07:32.093672 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:32 crc kubenswrapper[4727]: I1121 20:07:32.094072 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:32 crc kubenswrapper[4727]: I1121 20:07:32.094273 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:32 crc kubenswrapper[4727]: I1121 20:07:32.094453 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:32 crc kubenswrapper[4727]: I1121 20:07:32.094597 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:32Z","lastTransitionTime":"2025-11-21T20:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:32 crc kubenswrapper[4727]: I1121 20:07:32.197802 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:32 crc kubenswrapper[4727]: I1121 20:07:32.197841 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:32 crc kubenswrapper[4727]: I1121 20:07:32.197851 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:32 crc kubenswrapper[4727]: I1121 20:07:32.197867 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:32 crc kubenswrapper[4727]: I1121 20:07:32.197877 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:32Z","lastTransitionTime":"2025-11-21T20:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:32 crc kubenswrapper[4727]: I1121 20:07:32.301339 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:32 crc kubenswrapper[4727]: I1121 20:07:32.301634 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:32 crc kubenswrapper[4727]: I1121 20:07:32.301762 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:32 crc kubenswrapper[4727]: I1121 20:07:32.301861 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:32 crc kubenswrapper[4727]: I1121 20:07:32.301947 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:32Z","lastTransitionTime":"2025-11-21T20:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:32 crc kubenswrapper[4727]: I1121 20:07:32.404937 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:32 crc kubenswrapper[4727]: I1121 20:07:32.405038 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:32 crc kubenswrapper[4727]: I1121 20:07:32.405049 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:32 crc kubenswrapper[4727]: I1121 20:07:32.405062 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:32 crc kubenswrapper[4727]: I1121 20:07:32.405073 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:32Z","lastTransitionTime":"2025-11-21T20:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:32 crc kubenswrapper[4727]: I1121 20:07:32.498006 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 20:07:32 crc kubenswrapper[4727]: I1121 20:07:32.498062 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 20:07:32 crc kubenswrapper[4727]: E1121 20:07:32.498118 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 20:07:32 crc kubenswrapper[4727]: I1121 20:07:32.498123 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rs9rv" Nov 21 20:07:32 crc kubenswrapper[4727]: E1121 20:07:32.498205 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 20:07:32 crc kubenswrapper[4727]: E1121 20:07:32.498459 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rs9rv" podUID="f8318f96-4402-4567-a432-6cf3897e218d" Nov 21 20:07:32 crc kubenswrapper[4727]: I1121 20:07:32.507582 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:32 crc kubenswrapper[4727]: I1121 20:07:32.507613 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:32 crc kubenswrapper[4727]: I1121 20:07:32.507621 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:32 crc kubenswrapper[4727]: I1121 20:07:32.507634 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:32 crc kubenswrapper[4727]: I1121 20:07:32.507645 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:32Z","lastTransitionTime":"2025-11-21T20:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:32 crc kubenswrapper[4727]: I1121 20:07:32.609692 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:32 crc kubenswrapper[4727]: I1121 20:07:32.609733 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:32 crc kubenswrapper[4727]: I1121 20:07:32.609742 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:32 crc kubenswrapper[4727]: I1121 20:07:32.609760 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:32 crc kubenswrapper[4727]: I1121 20:07:32.609770 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:32Z","lastTransitionTime":"2025-11-21T20:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:32 crc kubenswrapper[4727]: I1121 20:07:32.712915 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:32 crc kubenswrapper[4727]: I1121 20:07:32.712982 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:32 crc kubenswrapper[4727]: I1121 20:07:32.712992 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:32 crc kubenswrapper[4727]: I1121 20:07:32.713008 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:32 crc kubenswrapper[4727]: I1121 20:07:32.713020 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:32Z","lastTransitionTime":"2025-11-21T20:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:32 crc kubenswrapper[4727]: I1121 20:07:32.814911 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:32 crc kubenswrapper[4727]: I1121 20:07:32.815006 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:32 crc kubenswrapper[4727]: I1121 20:07:32.815019 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:32 crc kubenswrapper[4727]: I1121 20:07:32.815036 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:32 crc kubenswrapper[4727]: I1121 20:07:32.815049 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:32Z","lastTransitionTime":"2025-11-21T20:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:32 crc kubenswrapper[4727]: I1121 20:07:32.917411 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:32 crc kubenswrapper[4727]: I1121 20:07:32.917462 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:32 crc kubenswrapper[4727]: I1121 20:07:32.917470 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:32 crc kubenswrapper[4727]: I1121 20:07:32.917483 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:32 crc kubenswrapper[4727]: I1121 20:07:32.917491 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:32Z","lastTransitionTime":"2025-11-21T20:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:33 crc kubenswrapper[4727]: I1121 20:07:33.019909 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:33 crc kubenswrapper[4727]: I1121 20:07:33.019940 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:33 crc kubenswrapper[4727]: I1121 20:07:33.019949 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:33 crc kubenswrapper[4727]: I1121 20:07:33.019992 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:33 crc kubenswrapper[4727]: I1121 20:07:33.020004 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:33Z","lastTransitionTime":"2025-11-21T20:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:33 crc kubenswrapper[4727]: I1121 20:07:33.122209 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:33 crc kubenswrapper[4727]: I1121 20:07:33.122298 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:33 crc kubenswrapper[4727]: I1121 20:07:33.122311 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:33 crc kubenswrapper[4727]: I1121 20:07:33.122325 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:33 crc kubenswrapper[4727]: I1121 20:07:33.122335 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:33Z","lastTransitionTime":"2025-11-21T20:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:33 crc kubenswrapper[4727]: I1121 20:07:33.125614 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 21 20:07:33 crc kubenswrapper[4727]: I1121 20:07:33.141098 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffd614574e076bedcf841406bd5accc4387a8f2dbc55cbfbb5a1064e69a861d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/
run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:33Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:33 crc kubenswrapper[4727]: I1121 20:07:33.154154 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b974a1dc77049e8c632b0acbe8dedf6d3f391811208d0d1084c75e4fdb0908dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\
\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:33Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:33 crc kubenswrapper[4727]: I1121 20:07:33.163664 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ccfbn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bffa327-bdf2-45d4-93ab-40152e82d177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18e9f8fd8f35e2f7821de2d9f9040dbc5160d8f114d1a54d8e0a0dabab520d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ccfbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:33Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:33 crc kubenswrapper[4727]: I1121 20:07:33.179780 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-74crp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db8a4e70-c074-4f90-aebe-444078f3337f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d8ec54f8edeaa80fb39b26f0cd9c0d4b2456c149245b1c6c1585bf28b08ec13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a2b3626be8b73b407cdeae484730c70f4caa2e66e0a815cb8c047f667254092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2b3626be8b73b407cdeae484730c70f4caa2e66e0a815cb8c047f667254092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4360080de9d788a9ac8bad5b4c679aef3367c8c4e06fd00a28ff4b23406cc5\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d4360080de9d788a9ac8bad5b4c679aef3367c8c4e06fd00a28ff4b23406cc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68b04ba74eb7a875559096df2051e2d99535444795021756d6072246fdf21ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68b04ba74eb7a875559096df2051e2d99535444795021756d6072246fdf21ee1\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4be59a65e9ffdf24c7d6dcaf699e0c86095ab2e7156771e4cca86c3fb264ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab4be59a65e9ffdf24c7d6dcaf699e0c86095ab2e7156771e4cca86c3fb264ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465a44322e7e3ff27479dabcdd1a2ba9
102e10c73b75d51d0639b4a8ebeda54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://465a44322e7e3ff27479dabcdd1a2ba9102e10c73b75d51d0639b4a8ebeda54e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea5b2f1f6cf3b490899d65086f99f448f2bdbfd859fa5b283af3fb108479902e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea5b2f1f6cf3b490899d65086f99f448f2bdbfd859fa5b283af3fb108479902e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-11-21T20:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-74crp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:33Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:33 crc kubenswrapper[4727]: I1121 20:07:33.197760 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fae7d81bbd4cc7c05885d66b30b4a4f5422c74aecc5439700c1fe7eb3c28a4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://060f5b533d83fc7cf02d24bf7b12e2a4972ce1d7ed4e62c4c7788467ae2229c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f04b4d4857a8309f30f22774c8a2faeb752998b094d70898a986e04038771b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f02a4cae48dc4ac9d7da3d367fe5137fb012ee8a28f45c9cf4af098f1d93ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf216dac56f09909db88c9ea0ef9d0d4fb8bc2e63a02fd5631c7d4369080f88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1909b9178d7fb508200d34ad33f7d2c2aa2ae9756b15c17fd7c49bb019cc65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e073190a14764f06d4369511999e00a23a812b3103ad84d761c95dd836ecf04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e073190a14764f06d4369511999e00a23a812b3103ad84d761c95dd836ecf04\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T20:07:18Z\\\",\\\"message\\\":\\\"t default network controller: unable to create admin network policy controller, err: 
could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:18Z is after 2025-08-24T17:21:41Z]\\\\nI1121 20:07:18.482383 6367 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI1121 20:07:18.482381 6367 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:fals\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T20:07:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tfd4j_openshift-ovn-kubernetes(70d2ca13-a8f7-43dc-8ad0-142d99ccde18)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68979c9b5ab69852e6cc4293ef0e77454dcbb7fdaeb61b7b6af5c1a3ac101e47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5fa8be5e6ea2c20dd597fb7ef2fdd3355045b310d1a99bb1a89a8207e505577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5fa8be5e6ea2c20dd
597fb7ef2fdd3355045b310d1a99bb1a89a8207e505577\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tfd4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:33Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:33 crc kubenswrapper[4727]: I1121 20:07:33.208742 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7444p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75a43503-e538-4964-9789-322839cc4c48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d61ca28ee5e9b8e1b56afa45b32036c825b6299dc855cb7ce38dde4649c2371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9w57k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7444p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:33Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:33 crc kubenswrapper[4727]: I1121 20:07:33.223598 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-drnwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c073d13-aa4f-401c-9684-36980fe94cb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c68719b36c0e9c62dcfa24171d18f9ad5a578884b8ca0da8b5827d6656e890c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hssmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec82846684441a347ddef8d336b04c6a776dbe2a5ed7894b813378790e12f7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hssmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:07:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-drnwx\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:33Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:33 crc kubenswrapper[4727]: I1121 20:07:33.224766 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:33 crc kubenswrapper[4727]: I1121 20:07:33.224810 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:33 crc kubenswrapper[4727]: I1121 20:07:33.224822 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:33 crc kubenswrapper[4727]: I1121 20:07:33.224853 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:33 crc kubenswrapper[4727]: I1121 20:07:33.224864 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:33Z","lastTransitionTime":"2025-11-21T20:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:33 crc kubenswrapper[4727]: I1121 20:07:33.245150 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7rvdc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07dba644-eb6f-45c3-b373-7a1610c569aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fc26f04e78b8405547a4fa225524347eabce94dfa5b4bf2448db66e36aaf006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7rvdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:33Z 
is after 2025-08-24T17:21:41Z" Nov 21 20:07:33 crc kubenswrapper[4727]: I1121 20:07:33.272988 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a1f6ee3-6ebf-4422-91ed-105ccb19af8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de2d5f0603d9b7d189293750a83d5a3957e98f0390659f8c88d35ae1cae6b18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\
\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cde1196a3cb3b4d0acbb70ac333211099ec6f26a975e2de7c07d059983024368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1324bc3b6958e18a7c496fdb513726b921435f35dde2064fcaf9a6fca06eb8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://395c324d3a95c173c7dd0c9206831fe07c220fb6ef12f7b25444a120c5183202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6021787d6acdc5734a82ef2ed53add13275c3461c1ffd38219c264e7fca3fab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://363e33e0f6686c6a97437b91ff8d5be211717d527dac5d1bfad6613849c1a8fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://363e33e0f6686c6a97437b91ff8d5be211717d527dac5d1bfad6613849c1a8fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d2632c704cf56d6b416cdcc6213e3b60836d12cfae28af4bd06f99dff56cd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16d2632c704cf56d6b416cdcc6213e3b60836d12cfae28af4bd06f99dff56cd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91dcb1c5c16b4720f4c812b25a87ebd8a0f3a334cc714233b6852cd39322fbf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91dcb1c5c16b4720f4c812b25a87ebd8a0f3a334cc714233b6852cd39322fbf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11
-21T20:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:33Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:33 crc kubenswrapper[4727]: I1121 20:07:33.287787 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"080ce4fe-3e82-4e15-9340-4fa88a29da04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27c37b29dda7ae2cc145e0770879e433aa03e277121be3b7acfe22d41d27801f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cc2199f65adaa48cd1726ea812b332e4f40291e094ccfc215343bdb2f2bd2f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f3b9bf6cf2709d4559bf6ce2ddeed47f8ebacd7878509b0190ed24b2e55380e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6ac7a73d056ceeae89f31eed787e230193e82bad0a2c762fa35540edfaeee6e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd16de1da3c89b579d33c34b4dd24d8ae9df35e17819135165afc45db1297c61\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T20:06:54Z\\\"
,\\\"message\\\":\\\"file observer\\\\nW1121 20:06:54.183363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 20:06:54.183552 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 20:06:54.184374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-552169000/tls.crt::/tmp/serving-cert-552169000/tls.key\\\\\\\"\\\\nI1121 20:06:54.597449 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 20:06:54.600431 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 20:06:54.600448 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 20:06:54.600471 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 20:06:54.600476 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 20:06:54.608444 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 20:06:54.608470 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 20:06:54.608475 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 20:06:54.608479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 20:06:54.608482 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 20:06:54.608484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 20:06:54.608487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 20:06:54.608489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1121 20:06:54.610022 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ceae36169c61dcc484be2881c9562b4024866925aab81778ab9e171413f119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fc11f996bd226ec44a8afe092a0939a489f4fc8904c1532586d9e36b69817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91fc11f996bd226ec44a8afe092a0939
a489f4fc8904c1532586d9e36b69817\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:33Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:33 crc kubenswrapper[4727]: I1121 20:07:33.303844 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a97e4b6438d588dce44178e2e245d1fc4fa954f6cd02b230ca5c34e3d32294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c90d2d9f22924396549d0d1263f8cb07b41a77f23f7bb48aa9749caabff6147\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:33Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:33 crc kubenswrapper[4727]: I1121 20:07:33.316038 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:33Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:33 crc kubenswrapper[4727]: I1121 20:07:33.327148 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:33 crc kubenswrapper[4727]: I1121 
20:07:33.327179 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:33 crc kubenswrapper[4727]: I1121 20:07:33.327187 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:33 crc kubenswrapper[4727]: I1121 20:07:33.327219 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:33 crc kubenswrapper[4727]: I1121 20:07:33.327228 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:33Z","lastTransitionTime":"2025-11-21T20:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:33 crc kubenswrapper[4727]: I1121 20:07:33.327493 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:33Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:33 crc kubenswrapper[4727]: I1121 20:07:33.338793 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b58aef8f-f223-47d8-a2e6-4a80aeeeec42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db38e6270d9dcedf99c669fa60a075a25d7fd9bb753702551fe5f4610ecaf815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443c4ad2ecea2e9d360b9300cb0c384fb5afc27e
6ac44394f985adbeab354a9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5k2kk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:33Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:33 crc kubenswrapper[4727]: I1121 20:07:33.351274 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3ca80-2251-484d-9025-61aed82e16d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://539bc94dd8a3fd65aeae82170fe889bc42c364cfcf52005760432629911dfa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bd3db39641919d1b93ecc771bce800100e7a8b6200d6fc606dbdc50179b8255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08a09f48855f360ffe892f48d4efa156a6774a0be33504e0d664f6a90f1144d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7c202de2c7a6ec653f0c7f816dcbf64fbf6a5c9a779d29403a29a1003ad201\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:33Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:33 crc kubenswrapper[4727]: I1121 20:07:33.363095 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07b27c06-2d01-404f-9fb9-a72935b30494\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07188732e38e314d39b245e64c5ff2882e3f9622a4f4361b7b1b2730fa5693d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9541542fc2e4eda864eda99995554289d32061e0488f0bf4af04411d3fba3d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9032fe5d80139b864a703f431942616401793036b9960ff2c001b76a3d850b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://630bac7109e66956147024ba0b3c44511098cd3af401507803e4a31afaedbe22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://630bac7109e66956147024ba0b3c44511098cd3af401507803e4a31afaedbe22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:33Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:33 crc kubenswrapper[4727]: I1121 20:07:33.376996 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:33Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:33 crc kubenswrapper[4727]: I1121 20:07:33.385632 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rs9rv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8318f96-4402-4567-a432-6cf3897e218d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8bcnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8bcnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rs9rv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:33Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:33 crc 
kubenswrapper[4727]: I1121 20:07:33.430185 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:33 crc kubenswrapper[4727]: I1121 20:07:33.430229 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:33 crc kubenswrapper[4727]: I1121 20:07:33.430244 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:33 crc kubenswrapper[4727]: I1121 20:07:33.430263 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:33 crc kubenswrapper[4727]: I1121 20:07:33.430274 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:33Z","lastTransitionTime":"2025-11-21T20:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:33 crc kubenswrapper[4727]: I1121 20:07:33.498937 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 20:07:33 crc kubenswrapper[4727]: E1121 20:07:33.499087 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 20:07:33 crc kubenswrapper[4727]: I1121 20:07:33.532236 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:33 crc kubenswrapper[4727]: I1121 20:07:33.532289 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:33 crc kubenswrapper[4727]: I1121 20:07:33.532303 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:33 crc kubenswrapper[4727]: I1121 20:07:33.532321 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:33 crc kubenswrapper[4727]: I1121 20:07:33.532333 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:33Z","lastTransitionTime":"2025-11-21T20:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:33 crc kubenswrapper[4727]: I1121 20:07:33.634256 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:33 crc kubenswrapper[4727]: I1121 20:07:33.634291 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:33 crc kubenswrapper[4727]: I1121 20:07:33.634305 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:33 crc kubenswrapper[4727]: I1121 20:07:33.634322 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:33 crc kubenswrapper[4727]: I1121 20:07:33.634332 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:33Z","lastTransitionTime":"2025-11-21T20:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:33 crc kubenswrapper[4727]: I1121 20:07:33.736783 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:33 crc kubenswrapper[4727]: I1121 20:07:33.736817 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:33 crc kubenswrapper[4727]: I1121 20:07:33.736826 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:33 crc kubenswrapper[4727]: I1121 20:07:33.736838 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:33 crc kubenswrapper[4727]: I1121 20:07:33.736848 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:33Z","lastTransitionTime":"2025-11-21T20:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:33 crc kubenswrapper[4727]: I1121 20:07:33.838682 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:33 crc kubenswrapper[4727]: I1121 20:07:33.838752 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:33 crc kubenswrapper[4727]: I1121 20:07:33.838764 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:33 crc kubenswrapper[4727]: I1121 20:07:33.838776 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:33 crc kubenswrapper[4727]: I1121 20:07:33.838785 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:33Z","lastTransitionTime":"2025-11-21T20:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:33 crc kubenswrapper[4727]: I1121 20:07:33.941072 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:33 crc kubenswrapper[4727]: I1121 20:07:33.941107 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:33 crc kubenswrapper[4727]: I1121 20:07:33.941118 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:33 crc kubenswrapper[4727]: I1121 20:07:33.941135 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:33 crc kubenswrapper[4727]: I1121 20:07:33.941144 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:33Z","lastTransitionTime":"2025-11-21T20:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:34 crc kubenswrapper[4727]: I1121 20:07:34.043401 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:34 crc kubenswrapper[4727]: I1121 20:07:34.043457 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:34 crc kubenswrapper[4727]: I1121 20:07:34.043475 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:34 crc kubenswrapper[4727]: I1121 20:07:34.043500 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:34 crc kubenswrapper[4727]: I1121 20:07:34.043516 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:34Z","lastTransitionTime":"2025-11-21T20:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:34 crc kubenswrapper[4727]: I1121 20:07:34.146261 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:34 crc kubenswrapper[4727]: I1121 20:07:34.146309 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:34 crc kubenswrapper[4727]: I1121 20:07:34.146325 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:34 crc kubenswrapper[4727]: I1121 20:07:34.146346 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:34 crc kubenswrapper[4727]: I1121 20:07:34.146361 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:34Z","lastTransitionTime":"2025-11-21T20:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:34 crc kubenswrapper[4727]: I1121 20:07:34.249702 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:34 crc kubenswrapper[4727]: I1121 20:07:34.249767 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:34 crc kubenswrapper[4727]: I1121 20:07:34.249788 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:34 crc kubenswrapper[4727]: I1121 20:07:34.249811 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:34 crc kubenswrapper[4727]: I1121 20:07:34.249827 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:34Z","lastTransitionTime":"2025-11-21T20:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:34 crc kubenswrapper[4727]: I1121 20:07:34.352829 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:34 crc kubenswrapper[4727]: I1121 20:07:34.352887 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:34 crc kubenswrapper[4727]: I1121 20:07:34.352906 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:34 crc kubenswrapper[4727]: I1121 20:07:34.352931 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:34 crc kubenswrapper[4727]: I1121 20:07:34.352982 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:34Z","lastTransitionTime":"2025-11-21T20:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:34 crc kubenswrapper[4727]: I1121 20:07:34.455589 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:34 crc kubenswrapper[4727]: I1121 20:07:34.455654 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:34 crc kubenswrapper[4727]: I1121 20:07:34.455674 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:34 crc kubenswrapper[4727]: I1121 20:07:34.455703 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:34 crc kubenswrapper[4727]: I1121 20:07:34.455721 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:34Z","lastTransitionTime":"2025-11-21T20:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:34 crc kubenswrapper[4727]: I1121 20:07:34.498513 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 20:07:34 crc kubenswrapper[4727]: I1121 20:07:34.498565 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 20:07:34 crc kubenswrapper[4727]: I1121 20:07:34.498565 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rs9rv" Nov 21 20:07:34 crc kubenswrapper[4727]: E1121 20:07:34.498665 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 20:07:34 crc kubenswrapper[4727]: E1121 20:07:34.498869 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 20:07:34 crc kubenswrapper[4727]: E1121 20:07:34.499028 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rs9rv" podUID="f8318f96-4402-4567-a432-6cf3897e218d" Nov 21 20:07:34 crc kubenswrapper[4727]: I1121 20:07:34.558570 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:34 crc kubenswrapper[4727]: I1121 20:07:34.558617 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:34 crc kubenswrapper[4727]: I1121 20:07:34.558632 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:34 crc kubenswrapper[4727]: I1121 20:07:34.558653 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:34 crc kubenswrapper[4727]: I1121 20:07:34.558666 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:34Z","lastTransitionTime":"2025-11-21T20:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:34 crc kubenswrapper[4727]: I1121 20:07:34.661342 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:34 crc kubenswrapper[4727]: I1121 20:07:34.661383 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:34 crc kubenswrapper[4727]: I1121 20:07:34.661393 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:34 crc kubenswrapper[4727]: I1121 20:07:34.661408 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:34 crc kubenswrapper[4727]: I1121 20:07:34.661418 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:34Z","lastTransitionTime":"2025-11-21T20:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:34 crc kubenswrapper[4727]: I1121 20:07:34.764726 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:34 crc kubenswrapper[4727]: I1121 20:07:34.764811 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:34 crc kubenswrapper[4727]: I1121 20:07:34.764833 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:34 crc kubenswrapper[4727]: I1121 20:07:34.764863 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:34 crc kubenswrapper[4727]: I1121 20:07:34.764887 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:34Z","lastTransitionTime":"2025-11-21T20:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:34 crc kubenswrapper[4727]: I1121 20:07:34.868144 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:34 crc kubenswrapper[4727]: I1121 20:07:34.868226 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:34 crc kubenswrapper[4727]: I1121 20:07:34.868249 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:34 crc kubenswrapper[4727]: I1121 20:07:34.868272 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:34 crc kubenswrapper[4727]: I1121 20:07:34.868293 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:34Z","lastTransitionTime":"2025-11-21T20:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:34 crc kubenswrapper[4727]: I1121 20:07:34.970623 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:34 crc kubenswrapper[4727]: I1121 20:07:34.970663 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:34 crc kubenswrapper[4727]: I1121 20:07:34.970675 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:34 crc kubenswrapper[4727]: I1121 20:07:34.970692 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:34 crc kubenswrapper[4727]: I1121 20:07:34.970704 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:34Z","lastTransitionTime":"2025-11-21T20:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:35 crc kubenswrapper[4727]: I1121 20:07:35.073531 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:35 crc kubenswrapper[4727]: I1121 20:07:35.073584 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:35 crc kubenswrapper[4727]: I1121 20:07:35.073602 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:35 crc kubenswrapper[4727]: I1121 20:07:35.073626 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:35 crc kubenswrapper[4727]: I1121 20:07:35.073643 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:35Z","lastTransitionTime":"2025-11-21T20:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:35 crc kubenswrapper[4727]: I1121 20:07:35.176482 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:35 crc kubenswrapper[4727]: I1121 20:07:35.176534 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:35 crc kubenswrapper[4727]: I1121 20:07:35.176549 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:35 crc kubenswrapper[4727]: I1121 20:07:35.176568 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:35 crc kubenswrapper[4727]: I1121 20:07:35.176580 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:35Z","lastTransitionTime":"2025-11-21T20:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:35 crc kubenswrapper[4727]: I1121 20:07:35.279936 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:35 crc kubenswrapper[4727]: I1121 20:07:35.280001 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:35 crc kubenswrapper[4727]: I1121 20:07:35.280013 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:35 crc kubenswrapper[4727]: I1121 20:07:35.280031 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:35 crc kubenswrapper[4727]: I1121 20:07:35.280043 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:35Z","lastTransitionTime":"2025-11-21T20:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:35 crc kubenswrapper[4727]: I1121 20:07:35.382300 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:35 crc kubenswrapper[4727]: I1121 20:07:35.382376 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:35 crc kubenswrapper[4727]: I1121 20:07:35.382408 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:35 crc kubenswrapper[4727]: I1121 20:07:35.382427 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:35 crc kubenswrapper[4727]: I1121 20:07:35.382440 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:35Z","lastTransitionTime":"2025-11-21T20:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:35 crc kubenswrapper[4727]: I1121 20:07:35.484767 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:35 crc kubenswrapper[4727]: I1121 20:07:35.484842 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:35 crc kubenswrapper[4727]: I1121 20:07:35.484855 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:35 crc kubenswrapper[4727]: I1121 20:07:35.484874 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:35 crc kubenswrapper[4727]: I1121 20:07:35.485255 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:35Z","lastTransitionTime":"2025-11-21T20:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:35 crc kubenswrapper[4727]: I1121 20:07:35.498608 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 20:07:35 crc kubenswrapper[4727]: E1121 20:07:35.498759 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 20:07:35 crc kubenswrapper[4727]: I1121 20:07:35.510771 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rs9rv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8318f96-4402-4567-a432-6cf3897e218d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8bcnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8bcnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rs9rv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:35Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:35 crc 
kubenswrapper[4727]: I1121 20:07:35.531735 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fae7d81bbd4cc7c05885d66b30b4a4f5422c74aecc5439700c1fe7eb3c28a4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://060f5b533d83fc7cf02d24bf7b12e2a4972ce1d7ed4e62c4c7788467ae2229c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f04b4d4857a8309f30f22774c8a2faeb752998b094d70898a986e04038771b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f02a4cae48dc4ac9d7da3d367fe5137fb012ee8a28f45c9cf4af098f1d93ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf216dac56f09909db88c9ea0ef9d0d4fb8bc2e63a02fd5631c7d4369080f88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1909b9178d7fb508200d34ad33f7d2c2aa2ae9756b15c17fd7c49bb019cc65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e073190a14764f06d4369511999e00a23a812b3103ad84d761c95dd836ecf04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e073190a14764f06d4369511999e00a23a812b3103ad84d761c95dd836ecf04\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T20:07:18Z\\\",\\\"message\\\":\\\"t default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start 
default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:18Z is after 2025-08-24T17:21:41Z]\\\\nI1121 20:07:18.482383 6367 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI1121 20:07:18.482381 6367 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:fals\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T20:07:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tfd4j_openshift-ovn-kubernetes(70d2ca13-a8f7-43dc-8ad0-142d99ccde18)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68979c9b5ab69852e6cc4293ef0e77454dcbb7fdaeb61b7b6af5c1a3ac101e47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5fa8be5e6ea2c20dd597fb7ef2fdd3355045b310d1a99bb1a89a8207e505577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5fa8be5e6ea2c20dd
597fb7ef2fdd3355045b310d1a99bb1a89a8207e505577\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tfd4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:35Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:35 crc kubenswrapper[4727]: I1121 20:07:35.542900 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7444p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75a43503-e538-4964-9789-322839cc4c48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d61ca28ee5e9b8e1b56afa45b32036c825b6299dc855cb7ce38dde4649c2371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9w57k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7444p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:35Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:35 crc kubenswrapper[4727]: I1121 20:07:35.555657 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-drnwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c073d13-aa4f-401c-9684-36980fe94cb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c68719b36c0e9c62dcfa24171d18f9ad5a578884b8ca0da8b5827d6656e890c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hssmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec82846684441a347ddef8d336b04c6a776dbe2a5ed7894b813378790e12f7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hssmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:07:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-drnwx\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:35Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:35 crc kubenswrapper[4727]: I1121 20:07:35.570904 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffd614574e076bedcf841406bd5accc4387a8f2dbc55cbfbb5a1064e69a861d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secr
ets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:35Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:35 crc kubenswrapper[4727]: I1121 20:07:35.584979 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b974a1dc77049e8c632b0acbe8dedf6d3f391811208d0d1084c75e4fdb0908dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:35Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:35 crc kubenswrapper[4727]: I1121 20:07:35.590842 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:35 crc kubenswrapper[4727]: I1121 20:07:35.590893 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:35 crc kubenswrapper[4727]: I1121 20:07:35.590904 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:35 crc kubenswrapper[4727]: I1121 20:07:35.590920 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:35 crc kubenswrapper[4727]: I1121 20:07:35.590932 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:35Z","lastTransitionTime":"2025-11-21T20:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:35 crc kubenswrapper[4727]: I1121 20:07:35.592538 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ccfbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bffa327-bdf2-45d4-93ab-40152e82d177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18e9f8fd8f35e2f7821de2d9f9040dbc5160d8f114d1a54d8e0a0dabab520d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ccfbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:35Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:35 crc kubenswrapper[4727]: I1121 20:07:35.615472 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-74crp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db8a4e70-c074-4f90-aebe-444078f3337f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d
8ec54f8edeaa80fb39b26f0cd9c0d4b2456c149245b1c6c1585bf28b08ec13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a2b3626be8b73b407cdeae484730c70f4caa2e66e0a815cb8c047f667254092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2b3626be8b73b407cdeae484730c70f4caa2e66e0a815cb8c047f667254092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4360080de9d788a9ac8bad5b4c679aef3367c8c4e06fd00a28ff4b23406cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d4360080de9d788a9ac8bad5b4c679aef3367c8c4e06fd00a28ff4b23406cc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68b04ba74eb7a875559096df2051e2d99535444795021756d6072246fdf21ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203
bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68b04ba74eb7a875559096df2051e2d99535444795021756d6072246fdf21ee1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4be59a65e9ffdf24c7d6dcaf699e0c86095ab2e7156771e4cca86c3fb264ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab4be59a65e9ffdf24c7d6dcaf699e0c86095ab2e7156771e4cca86c3fb264ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465a44322e7e3ff27479dabcdd1a2ba9102e10c73b75d51d0639b4a8ebeda54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://465a44322e7e3ff27479dabcdd1a2ba9102e10c73b75d51d0639b4a8ebeda54e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea5b2f1f6cf3b490899d65086f99f448f2bdbfd859fa5b283af3fb108479902e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-
cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea5b2f1f6cf3b490899d65086f99f448f2bdbfd859fa5b283af3fb108479902e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-74crp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:35Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:35 crc kubenswrapper[4727]: I1121 20:07:35.628781 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:35Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:35 crc kubenswrapper[4727]: I1121 20:07:35.641357 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:35Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:35 crc kubenswrapper[4727]: I1121 20:07:35.652660 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b58aef8f-f223-47d8-a2e6-4a80aeeeec42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db38e6270d9dcedf99c669fa60a075a25d7fd9bb753702551fe5f4610ecaf815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443c4ad2ecea2e9d360b9300cb0c384fb5afc27e
6ac44394f985adbeab354a9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5k2kk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:35Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:35 crc kubenswrapper[4727]: I1121 20:07:35.669257 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7rvdc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07dba644-eb6f-45c3-b373-7a1610c569aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fc26f04e78b8405547a4fa225524347eabce94dfa5b4bf2448db66e36aaf006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7rvdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:35Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:35 crc kubenswrapper[4727]: I1121 20:07:35.690802 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a1f6ee3-6ebf-4422-91ed-105ccb19af8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de2d5f0603d9b7d189293750a83d5a3957e98f0390659f8c88d35ae1cae6b18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cde1196a3cb3b4d0acbb70ac333211099ec6f26a975e2de7c07d059983024368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1324bc3b6958e18a7c496fdb513726b921435f35dde2064fcaf9a6fca06eb8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://395c324d3a95c173c7dd0c9206831fe07c220fb6ef12f7b25444a120c5183202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6021787d6acdc5734a82ef2ed53add13275c3461c1ffd38219c264e7fca3fab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://363e33e0f6686c6a97437b91ff8d5be211717d527dac5d1bfad6613849c1a8fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://363e33e0f6686c6a97437b91ff8d5be211717d527dac5d1bfad6613849c1a8fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d2632c704cf56d6b416cdcc6213e3b60836d12cfae28af4bd06f99dff56cd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16d2632c704cf56d6b416cdcc6213e3b60836d12cfae28af4bd06f99dff56cd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91dcb1c5c16b4720f4c812b25a87ebd8a0f3a334cc714233b6852cd39322fbf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91dcb1c5c16b4720f4c812b25a87ebd8a0f3a334cc714233b6852cd39322fbf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:35Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:35 crc kubenswrapper[4727]: I1121 20:07:35.693557 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:35 crc kubenswrapper[4727]: I1121 20:07:35.693600 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:35 crc kubenswrapper[4727]: I1121 20:07:35.693617 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:35 crc kubenswrapper[4727]: I1121 20:07:35.693639 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:35 crc kubenswrapper[4727]: I1121 20:07:35.693655 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:35Z","lastTransitionTime":"2025-11-21T20:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:35 crc kubenswrapper[4727]: I1121 20:07:35.707322 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"080ce4fe-3e82-4e15-9340-4fa88a29da04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27c37b29dda7ae2cc145e0770879e433aa03e277121be3b7acfe22d41d27801f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cc2199f65adaa48cd1726ea812b332e4f40291e094ccfc215343bdb2f2bd2f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f3b9bf6cf2709d4559bf6ce2ddeed47f8ebacd7878509b0190ed24b2e55380e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6ac7a73d056ceeae89f31eed787e230193e82bad0a2c762fa35540edfaeee6e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd16de1da3c89b579d33c34b4dd24d8ae9df35e17819135165afc45db1297c61\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"file observer\\\\nW1121 20:06:54.183363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 20:06:54.183552 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 20:06:54.184374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-552169000/tls.crt::/tmp/serving-cert-552169000/tls.key\\\\\\\"\\\\nI1121 20:06:54.597449 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 20:06:54.600431 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 20:06:54.600448 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 20:06:54.600471 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 20:06:54.600476 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 20:06:54.608444 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 20:06:54.608470 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 20:06:54.608475 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 20:06:54.608479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 20:06:54.608482 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 20:06:54.608484 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 20:06:54.608487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 20:06:54.608489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 20:06:54.610022 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ceae36169c61dcc484be2881c9562b4024866925aab81778ab9e171413f119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fc11f996bd226ec44a8afe092a0939a489f4fc8904c1532586d9e36b69817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91fc11f996bd226ec44a8afe092a0939a489f4fc8904c1532586d9e36b69817\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:35Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:35 crc kubenswrapper[4727]: I1121 20:07:35.720604 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a97e4b6438d588dce44178e2e245d1fc4fa954f6cd02b230ca5c34e3d32294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c90d2d9f22924396549d0d1263f8cb07b41a77f23f7bb48aa9749caabff6147\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:35Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:35 crc kubenswrapper[4727]: I1121 20:07:35.733716 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3ca80-2251-484d-9025-61aed82e16d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://539bc94dd8a3fd65aeae82170fe889bc42c364cfcf52005760432629911dfa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bd3db39641919d1b93ecc771bce800100e7a8b6200d6fc606dbdc50179b8255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08a09f48855f360ffe892f48d4efa156a6774a0be33504e0d664f6a90f1144d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7c202de2c7a6ec653f0c7f816dcbf64fbf6a5c9a779d29403a29a1003ad201\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:35Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:35 crc kubenswrapper[4727]: I1121 20:07:35.745031 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07b27c06-2d01-404f-9fb9-a72935b30494\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07188732e38e314d39b245e64c5ff2882e3f9622a4f4361b7b1b2730fa5693d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9541542fc2e4eda864eda99995554289d32061e0488f0bf4af04411d3fba3d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9032fe5d80139b864a703f431942616401793036b9960ff2c001b76a3d850b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://630bac7109e66956147024ba0b3c44511098cd3af401507803e4a31afaedbe22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://630bac7109e66956147024ba0b3c44511098cd3af401507803e4a31afaedbe22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:35Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:35 crc kubenswrapper[4727]: I1121 20:07:35.758001 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:35Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:35 crc kubenswrapper[4727]: I1121 20:07:35.796529 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:35 crc kubenswrapper[4727]: I1121 20:07:35.796818 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:35 crc kubenswrapper[4727]: I1121 20:07:35.796921 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:35 crc 
kubenswrapper[4727]: I1121 20:07:35.797076 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:35 crc kubenswrapper[4727]: I1121 20:07:35.797211 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:35Z","lastTransitionTime":"2025-11-21T20:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:35 crc kubenswrapper[4727]: I1121 20:07:35.900133 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:35 crc kubenswrapper[4727]: I1121 20:07:35.900173 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:35 crc kubenswrapper[4727]: I1121 20:07:35.900183 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:35 crc kubenswrapper[4727]: I1121 20:07:35.900197 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:35 crc kubenswrapper[4727]: I1121 20:07:35.900208 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:35Z","lastTransitionTime":"2025-11-21T20:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:36 crc kubenswrapper[4727]: I1121 20:07:36.002555 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:36 crc kubenswrapper[4727]: I1121 20:07:36.002584 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:36 crc kubenswrapper[4727]: I1121 20:07:36.002594 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:36 crc kubenswrapper[4727]: I1121 20:07:36.002616 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:36 crc kubenswrapper[4727]: I1121 20:07:36.002626 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:36Z","lastTransitionTime":"2025-11-21T20:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:36 crc kubenswrapper[4727]: I1121 20:07:36.124911 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:36 crc kubenswrapper[4727]: I1121 20:07:36.124946 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:36 crc kubenswrapper[4727]: I1121 20:07:36.124971 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:36 crc kubenswrapper[4727]: I1121 20:07:36.124986 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:36 crc kubenswrapper[4727]: I1121 20:07:36.124996 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:36Z","lastTransitionTime":"2025-11-21T20:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:36 crc kubenswrapper[4727]: I1121 20:07:36.227328 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:36 crc kubenswrapper[4727]: I1121 20:07:36.227361 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:36 crc kubenswrapper[4727]: I1121 20:07:36.227372 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:36 crc kubenswrapper[4727]: I1121 20:07:36.227388 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:36 crc kubenswrapper[4727]: I1121 20:07:36.227400 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:36Z","lastTransitionTime":"2025-11-21T20:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:36 crc kubenswrapper[4727]: I1121 20:07:36.330189 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:36 crc kubenswrapper[4727]: I1121 20:07:36.330264 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:36 crc kubenswrapper[4727]: I1121 20:07:36.330304 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:36 crc kubenswrapper[4727]: I1121 20:07:36.330344 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:36 crc kubenswrapper[4727]: I1121 20:07:36.330386 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:36Z","lastTransitionTime":"2025-11-21T20:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:36 crc kubenswrapper[4727]: I1121 20:07:36.432835 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:36 crc kubenswrapper[4727]: I1121 20:07:36.432887 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:36 crc kubenswrapper[4727]: I1121 20:07:36.432903 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:36 crc kubenswrapper[4727]: I1121 20:07:36.432925 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:36 crc kubenswrapper[4727]: I1121 20:07:36.432940 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:36Z","lastTransitionTime":"2025-11-21T20:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:36 crc kubenswrapper[4727]: I1121 20:07:36.498392 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rs9rv" Nov 21 20:07:36 crc kubenswrapper[4727]: I1121 20:07:36.498529 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 20:07:36 crc kubenswrapper[4727]: E1121 20:07:36.498651 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rs9rv" podUID="f8318f96-4402-4567-a432-6cf3897e218d" Nov 21 20:07:36 crc kubenswrapper[4727]: I1121 20:07:36.498716 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 20:07:36 crc kubenswrapper[4727]: E1121 20:07:36.498820 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 20:07:36 crc kubenswrapper[4727]: E1121 20:07:36.498908 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 20:07:36 crc kubenswrapper[4727]: I1121 20:07:36.534557 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:36 crc kubenswrapper[4727]: I1121 20:07:36.534591 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:36 crc kubenswrapper[4727]: I1121 20:07:36.534599 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:36 crc kubenswrapper[4727]: I1121 20:07:36.534612 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:36 crc kubenswrapper[4727]: I1121 20:07:36.534622 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:36Z","lastTransitionTime":"2025-11-21T20:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:36 crc kubenswrapper[4727]: I1121 20:07:36.636718 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:36 crc kubenswrapper[4727]: I1121 20:07:36.636751 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:36 crc kubenswrapper[4727]: I1121 20:07:36.636761 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:36 crc kubenswrapper[4727]: I1121 20:07:36.636776 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:36 crc kubenswrapper[4727]: I1121 20:07:36.636788 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:36Z","lastTransitionTime":"2025-11-21T20:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:36 crc kubenswrapper[4727]: I1121 20:07:36.743262 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:36 crc kubenswrapper[4727]: I1121 20:07:36.743313 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:36 crc kubenswrapper[4727]: I1121 20:07:36.743323 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:36 crc kubenswrapper[4727]: I1121 20:07:36.743339 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:36 crc kubenswrapper[4727]: I1121 20:07:36.743351 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:36Z","lastTransitionTime":"2025-11-21T20:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:36 crc kubenswrapper[4727]: I1121 20:07:36.846646 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:36 crc kubenswrapper[4727]: I1121 20:07:36.846692 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:36 crc kubenswrapper[4727]: I1121 20:07:36.846708 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:36 crc kubenswrapper[4727]: I1121 20:07:36.846728 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:36 crc kubenswrapper[4727]: I1121 20:07:36.846744 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:36Z","lastTransitionTime":"2025-11-21T20:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:36 crc kubenswrapper[4727]: I1121 20:07:36.950085 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:36 crc kubenswrapper[4727]: I1121 20:07:36.950168 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:36 crc kubenswrapper[4727]: I1121 20:07:36.950187 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:36 crc kubenswrapper[4727]: I1121 20:07:36.950214 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:36 crc kubenswrapper[4727]: I1121 20:07:36.950231 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:36Z","lastTransitionTime":"2025-11-21T20:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:37 crc kubenswrapper[4727]: I1121 20:07:37.053506 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:37 crc kubenswrapper[4727]: I1121 20:07:37.053589 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:37 crc kubenswrapper[4727]: I1121 20:07:37.053602 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:37 crc kubenswrapper[4727]: I1121 20:07:37.053620 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:37 crc kubenswrapper[4727]: I1121 20:07:37.053633 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:37Z","lastTransitionTime":"2025-11-21T20:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:37 crc kubenswrapper[4727]: I1121 20:07:37.156430 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:37 crc kubenswrapper[4727]: I1121 20:07:37.156538 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:37 crc kubenswrapper[4727]: I1121 20:07:37.156563 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:37 crc kubenswrapper[4727]: I1121 20:07:37.156598 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:37 crc kubenswrapper[4727]: I1121 20:07:37.156622 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:37Z","lastTransitionTime":"2025-11-21T20:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:37 crc kubenswrapper[4727]: I1121 20:07:37.259308 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:37 crc kubenswrapper[4727]: I1121 20:07:37.259368 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:37 crc kubenswrapper[4727]: I1121 20:07:37.259388 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:37 crc kubenswrapper[4727]: I1121 20:07:37.259414 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:37 crc kubenswrapper[4727]: I1121 20:07:37.259431 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:37Z","lastTransitionTime":"2025-11-21T20:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:37 crc kubenswrapper[4727]: I1121 20:07:37.362037 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:37 crc kubenswrapper[4727]: I1121 20:07:37.362119 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:37 crc kubenswrapper[4727]: I1121 20:07:37.362141 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:37 crc kubenswrapper[4727]: I1121 20:07:37.362176 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:37 crc kubenswrapper[4727]: I1121 20:07:37.362196 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:37Z","lastTransitionTime":"2025-11-21T20:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:37 crc kubenswrapper[4727]: I1121 20:07:37.467414 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:37 crc kubenswrapper[4727]: I1121 20:07:37.467477 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:37 crc kubenswrapper[4727]: I1121 20:07:37.467489 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:37 crc kubenswrapper[4727]: I1121 20:07:37.467513 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:37 crc kubenswrapper[4727]: I1121 20:07:37.467525 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:37Z","lastTransitionTime":"2025-11-21T20:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:37 crc kubenswrapper[4727]: I1121 20:07:37.498115 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 20:07:37 crc kubenswrapper[4727]: E1121 20:07:37.498273 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 20:07:37 crc kubenswrapper[4727]: I1121 20:07:37.571519 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:37 crc kubenswrapper[4727]: I1121 20:07:37.571589 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:37 crc kubenswrapper[4727]: I1121 20:07:37.571609 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:37 crc kubenswrapper[4727]: I1121 20:07:37.571638 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:37 crc kubenswrapper[4727]: I1121 20:07:37.571663 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:37Z","lastTransitionTime":"2025-11-21T20:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:37 crc kubenswrapper[4727]: I1121 20:07:37.674263 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:37 crc kubenswrapper[4727]: I1121 20:07:37.674333 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:37 crc kubenswrapper[4727]: I1121 20:07:37.674356 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:37 crc kubenswrapper[4727]: I1121 20:07:37.674386 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:37 crc kubenswrapper[4727]: I1121 20:07:37.674408 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:37Z","lastTransitionTime":"2025-11-21T20:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:37 crc kubenswrapper[4727]: I1121 20:07:37.776932 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:37 crc kubenswrapper[4727]: I1121 20:07:37.776985 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:37 crc kubenswrapper[4727]: I1121 20:07:37.776994 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:37 crc kubenswrapper[4727]: I1121 20:07:37.777007 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:37 crc kubenswrapper[4727]: I1121 20:07:37.777016 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:37Z","lastTransitionTime":"2025-11-21T20:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:37 crc kubenswrapper[4727]: I1121 20:07:37.878883 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:37 crc kubenswrapper[4727]: I1121 20:07:37.878940 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:37 crc kubenswrapper[4727]: I1121 20:07:37.878971 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:37 crc kubenswrapper[4727]: I1121 20:07:37.878993 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:37 crc kubenswrapper[4727]: I1121 20:07:37.879008 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:37Z","lastTransitionTime":"2025-11-21T20:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:37 crc kubenswrapper[4727]: I1121 20:07:37.981778 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:37 crc kubenswrapper[4727]: I1121 20:07:37.981811 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:37 crc kubenswrapper[4727]: I1121 20:07:37.981823 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:37 crc kubenswrapper[4727]: I1121 20:07:37.981838 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:37 crc kubenswrapper[4727]: I1121 20:07:37.981850 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:37Z","lastTransitionTime":"2025-11-21T20:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:38 crc kubenswrapper[4727]: I1121 20:07:38.085211 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:38 crc kubenswrapper[4727]: I1121 20:07:38.085277 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:38 crc kubenswrapper[4727]: I1121 20:07:38.085297 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:38 crc kubenswrapper[4727]: I1121 20:07:38.085322 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:38 crc kubenswrapper[4727]: I1121 20:07:38.085340 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:38Z","lastTransitionTime":"2025-11-21T20:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:38 crc kubenswrapper[4727]: I1121 20:07:38.189255 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:38 crc kubenswrapper[4727]: I1121 20:07:38.189338 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:38 crc kubenswrapper[4727]: I1121 20:07:38.189360 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:38 crc kubenswrapper[4727]: I1121 20:07:38.189386 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:38 crc kubenswrapper[4727]: I1121 20:07:38.189408 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:38Z","lastTransitionTime":"2025-11-21T20:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:38 crc kubenswrapper[4727]: I1121 20:07:38.292998 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:38 crc kubenswrapper[4727]: I1121 20:07:38.293048 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:38 crc kubenswrapper[4727]: I1121 20:07:38.293063 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:38 crc kubenswrapper[4727]: I1121 20:07:38.293082 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:38 crc kubenswrapper[4727]: I1121 20:07:38.293095 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:38Z","lastTransitionTime":"2025-11-21T20:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:38 crc kubenswrapper[4727]: I1121 20:07:38.396859 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:38 crc kubenswrapper[4727]: I1121 20:07:38.396995 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:38 crc kubenswrapper[4727]: I1121 20:07:38.397020 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:38 crc kubenswrapper[4727]: I1121 20:07:38.397044 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:38 crc kubenswrapper[4727]: I1121 20:07:38.397062 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:38Z","lastTransitionTime":"2025-11-21T20:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:38 crc kubenswrapper[4727]: I1121 20:07:38.498141 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rs9rv" Nov 21 20:07:38 crc kubenswrapper[4727]: I1121 20:07:38.498197 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 20:07:38 crc kubenswrapper[4727]: E1121 20:07:38.498275 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rs9rv" podUID="f8318f96-4402-4567-a432-6cf3897e218d" Nov 21 20:07:38 crc kubenswrapper[4727]: I1121 20:07:38.498288 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 20:07:38 crc kubenswrapper[4727]: E1121 20:07:38.498442 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 20:07:38 crc kubenswrapper[4727]: E1121 20:07:38.498668 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 20:07:38 crc kubenswrapper[4727]: I1121 20:07:38.500296 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:38 crc kubenswrapper[4727]: I1121 20:07:38.500380 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:38 crc kubenswrapper[4727]: I1121 20:07:38.500399 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:38 crc kubenswrapper[4727]: I1121 20:07:38.500424 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:38 crc kubenswrapper[4727]: I1121 20:07:38.500445 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:38Z","lastTransitionTime":"2025-11-21T20:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:38 crc kubenswrapper[4727]: I1121 20:07:38.604076 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:38 crc kubenswrapper[4727]: I1121 20:07:38.604146 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:38 crc kubenswrapper[4727]: I1121 20:07:38.604165 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:38 crc kubenswrapper[4727]: I1121 20:07:38.604198 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:38 crc kubenswrapper[4727]: I1121 20:07:38.604217 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:38Z","lastTransitionTime":"2025-11-21T20:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:38 crc kubenswrapper[4727]: I1121 20:07:38.707649 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:38 crc kubenswrapper[4727]: I1121 20:07:38.707715 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:38 crc kubenswrapper[4727]: I1121 20:07:38.707738 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:38 crc kubenswrapper[4727]: I1121 20:07:38.707763 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:38 crc kubenswrapper[4727]: I1121 20:07:38.707780 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:38Z","lastTransitionTime":"2025-11-21T20:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:38 crc kubenswrapper[4727]: I1121 20:07:38.810550 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:38 crc kubenswrapper[4727]: I1121 20:07:38.810642 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:38 crc kubenswrapper[4727]: I1121 20:07:38.810664 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:38 crc kubenswrapper[4727]: I1121 20:07:38.810695 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:38 crc kubenswrapper[4727]: I1121 20:07:38.810719 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:38Z","lastTransitionTime":"2025-11-21T20:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:38 crc kubenswrapper[4727]: I1121 20:07:38.913776 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:38 crc kubenswrapper[4727]: I1121 20:07:38.913838 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:38 crc kubenswrapper[4727]: I1121 20:07:38.913856 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:38 crc kubenswrapper[4727]: I1121 20:07:38.913881 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:38 crc kubenswrapper[4727]: I1121 20:07:38.913903 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:38Z","lastTransitionTime":"2025-11-21T20:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:39 crc kubenswrapper[4727]: I1121 20:07:39.016435 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:39 crc kubenswrapper[4727]: I1121 20:07:39.016468 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:39 crc kubenswrapper[4727]: I1121 20:07:39.016478 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:39 crc kubenswrapper[4727]: I1121 20:07:39.016490 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:39 crc kubenswrapper[4727]: I1121 20:07:39.016500 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:39Z","lastTransitionTime":"2025-11-21T20:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:39 crc kubenswrapper[4727]: I1121 20:07:39.118553 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:39 crc kubenswrapper[4727]: I1121 20:07:39.118600 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:39 crc kubenswrapper[4727]: I1121 20:07:39.118612 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:39 crc kubenswrapper[4727]: I1121 20:07:39.118628 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:39 crc kubenswrapper[4727]: I1121 20:07:39.118641 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:39Z","lastTransitionTime":"2025-11-21T20:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:39 crc kubenswrapper[4727]: I1121 20:07:39.221310 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:39 crc kubenswrapper[4727]: I1121 20:07:39.221343 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:39 crc kubenswrapper[4727]: I1121 20:07:39.221355 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:39 crc kubenswrapper[4727]: I1121 20:07:39.221370 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:39 crc kubenswrapper[4727]: I1121 20:07:39.221382 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:39Z","lastTransitionTime":"2025-11-21T20:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:39 crc kubenswrapper[4727]: I1121 20:07:39.323753 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:39 crc kubenswrapper[4727]: I1121 20:07:39.323795 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:39 crc kubenswrapper[4727]: I1121 20:07:39.323806 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:39 crc kubenswrapper[4727]: I1121 20:07:39.323823 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:39 crc kubenswrapper[4727]: I1121 20:07:39.323837 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:39Z","lastTransitionTime":"2025-11-21T20:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:39 crc kubenswrapper[4727]: I1121 20:07:39.430859 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:39 crc kubenswrapper[4727]: I1121 20:07:39.430909 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:39 crc kubenswrapper[4727]: I1121 20:07:39.430925 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:39 crc kubenswrapper[4727]: I1121 20:07:39.430946 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:39 crc kubenswrapper[4727]: I1121 20:07:39.430992 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:39Z","lastTransitionTime":"2025-11-21T20:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:39 crc kubenswrapper[4727]: I1121 20:07:39.498784 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 20:07:39 crc kubenswrapper[4727]: E1121 20:07:39.498929 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 20:07:39 crc kubenswrapper[4727]: I1121 20:07:39.533005 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:39 crc kubenswrapper[4727]: I1121 20:07:39.533057 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:39 crc kubenswrapper[4727]: I1121 20:07:39.533069 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:39 crc kubenswrapper[4727]: I1121 20:07:39.533087 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:39 crc kubenswrapper[4727]: I1121 20:07:39.533099 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:39Z","lastTransitionTime":"2025-11-21T20:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:39 crc kubenswrapper[4727]: I1121 20:07:39.636226 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:39 crc kubenswrapper[4727]: I1121 20:07:39.636292 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:39 crc kubenswrapper[4727]: I1121 20:07:39.636337 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:39 crc kubenswrapper[4727]: I1121 20:07:39.636364 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:39 crc kubenswrapper[4727]: I1121 20:07:39.636382 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:39Z","lastTransitionTime":"2025-11-21T20:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:39 crc kubenswrapper[4727]: I1121 20:07:39.739829 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:39 crc kubenswrapper[4727]: I1121 20:07:39.739852 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:39 crc kubenswrapper[4727]: I1121 20:07:39.739860 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:39 crc kubenswrapper[4727]: I1121 20:07:39.739873 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:39 crc kubenswrapper[4727]: I1121 20:07:39.739882 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:39Z","lastTransitionTime":"2025-11-21T20:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:39 crc kubenswrapper[4727]: I1121 20:07:39.842628 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:39 crc kubenswrapper[4727]: I1121 20:07:39.842666 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:39 crc kubenswrapper[4727]: I1121 20:07:39.842674 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:39 crc kubenswrapper[4727]: I1121 20:07:39.842690 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:39 crc kubenswrapper[4727]: I1121 20:07:39.842699 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:39Z","lastTransitionTime":"2025-11-21T20:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:39 crc kubenswrapper[4727]: I1121 20:07:39.945572 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:39 crc kubenswrapper[4727]: I1121 20:07:39.945625 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:39 crc kubenswrapper[4727]: I1121 20:07:39.945637 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:39 crc kubenswrapper[4727]: I1121 20:07:39.945657 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:39 crc kubenswrapper[4727]: I1121 20:07:39.945670 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:39Z","lastTransitionTime":"2025-11-21T20:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:40 crc kubenswrapper[4727]: I1121 20:07:40.047861 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:40 crc kubenswrapper[4727]: I1121 20:07:40.047895 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:40 crc kubenswrapper[4727]: I1121 20:07:40.047905 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:40 crc kubenswrapper[4727]: I1121 20:07:40.047919 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:40 crc kubenswrapper[4727]: I1121 20:07:40.047931 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:40Z","lastTransitionTime":"2025-11-21T20:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:40 crc kubenswrapper[4727]: I1121 20:07:40.151121 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:40 crc kubenswrapper[4727]: I1121 20:07:40.151165 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:40 crc kubenswrapper[4727]: I1121 20:07:40.151179 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:40 crc kubenswrapper[4727]: I1121 20:07:40.151198 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:40 crc kubenswrapper[4727]: I1121 20:07:40.151212 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:40Z","lastTransitionTime":"2025-11-21T20:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:40 crc kubenswrapper[4727]: I1121 20:07:40.254845 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:40 crc kubenswrapper[4727]: I1121 20:07:40.254887 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:40 crc kubenswrapper[4727]: I1121 20:07:40.254899 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:40 crc kubenswrapper[4727]: I1121 20:07:40.254918 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:40 crc kubenswrapper[4727]: I1121 20:07:40.254930 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:40Z","lastTransitionTime":"2025-11-21T20:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:40 crc kubenswrapper[4727]: I1121 20:07:40.356916 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:40 crc kubenswrapper[4727]: I1121 20:07:40.356973 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:40 crc kubenswrapper[4727]: I1121 20:07:40.356987 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:40 crc kubenswrapper[4727]: I1121 20:07:40.357008 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:40 crc kubenswrapper[4727]: I1121 20:07:40.357020 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:40Z","lastTransitionTime":"2025-11-21T20:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:40 crc kubenswrapper[4727]: I1121 20:07:40.459772 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:40 crc kubenswrapper[4727]: I1121 20:07:40.459812 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:40 crc kubenswrapper[4727]: I1121 20:07:40.459820 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:40 crc kubenswrapper[4727]: I1121 20:07:40.459836 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:40 crc kubenswrapper[4727]: I1121 20:07:40.459847 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:40Z","lastTransitionTime":"2025-11-21T20:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:40 crc kubenswrapper[4727]: I1121 20:07:40.482305 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f8318f96-4402-4567-a432-6cf3897e218d-metrics-certs\") pod \"network-metrics-daemon-rs9rv\" (UID: \"f8318f96-4402-4567-a432-6cf3897e218d\") " pod="openshift-multus/network-metrics-daemon-rs9rv" Nov 21 20:07:40 crc kubenswrapper[4727]: E1121 20:07:40.482426 4727 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 21 20:07:40 crc kubenswrapper[4727]: E1121 20:07:40.482476 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f8318f96-4402-4567-a432-6cf3897e218d-metrics-certs podName:f8318f96-4402-4567-a432-6cf3897e218d nodeName:}" failed. No retries permitted until 2025-11-21 20:08:12.482462819 +0000 UTC m=+97.668647863 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f8318f96-4402-4567-a432-6cf3897e218d-metrics-certs") pod "network-metrics-daemon-rs9rv" (UID: "f8318f96-4402-4567-a432-6cf3897e218d") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 21 20:07:40 crc kubenswrapper[4727]: I1121 20:07:40.498304 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 20:07:40 crc kubenswrapper[4727]: I1121 20:07:40.498425 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rs9rv" Nov 21 20:07:40 crc kubenswrapper[4727]: E1121 20:07:40.498526 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 20:07:40 crc kubenswrapper[4727]: I1121 20:07:40.498415 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 20:07:40 crc kubenswrapper[4727]: E1121 20:07:40.498643 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rs9rv" podUID="f8318f96-4402-4567-a432-6cf3897e218d" Nov 21 20:07:40 crc kubenswrapper[4727]: E1121 20:07:40.498683 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 20:07:40 crc kubenswrapper[4727]: I1121 20:07:40.562102 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:40 crc kubenswrapper[4727]: I1121 20:07:40.562130 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:40 crc kubenswrapper[4727]: I1121 20:07:40.562149 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:40 crc kubenswrapper[4727]: I1121 20:07:40.562164 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:40 crc kubenswrapper[4727]: I1121 20:07:40.562172 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:40Z","lastTransitionTime":"2025-11-21T20:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:40 crc kubenswrapper[4727]: I1121 20:07:40.664572 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:40 crc kubenswrapper[4727]: I1121 20:07:40.664629 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:40 crc kubenswrapper[4727]: I1121 20:07:40.664641 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:40 crc kubenswrapper[4727]: I1121 20:07:40.664658 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:40 crc kubenswrapper[4727]: I1121 20:07:40.664672 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:40Z","lastTransitionTime":"2025-11-21T20:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:40 crc kubenswrapper[4727]: I1121 20:07:40.766611 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:40 crc kubenswrapper[4727]: I1121 20:07:40.766671 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:40 crc kubenswrapper[4727]: I1121 20:07:40.766682 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:40 crc kubenswrapper[4727]: I1121 20:07:40.766699 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:40 crc kubenswrapper[4727]: I1121 20:07:40.766711 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:40Z","lastTransitionTime":"2025-11-21T20:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:40 crc kubenswrapper[4727]: I1121 20:07:40.869974 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:40 crc kubenswrapper[4727]: I1121 20:07:40.870011 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:40 crc kubenswrapper[4727]: I1121 20:07:40.870023 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:40 crc kubenswrapper[4727]: I1121 20:07:40.870040 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:40 crc kubenswrapper[4727]: I1121 20:07:40.870052 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:40Z","lastTransitionTime":"2025-11-21T20:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:40 crc kubenswrapper[4727]: I1121 20:07:40.971985 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:40 crc kubenswrapper[4727]: I1121 20:07:40.972027 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:40 crc kubenswrapper[4727]: I1121 20:07:40.972042 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:40 crc kubenswrapper[4727]: I1121 20:07:40.972057 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:40 crc kubenswrapper[4727]: I1121 20:07:40.972069 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:40Z","lastTransitionTime":"2025-11-21T20:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:41 crc kubenswrapper[4727]: I1121 20:07:41.073998 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:41 crc kubenswrapper[4727]: I1121 20:07:41.074033 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:41 crc kubenswrapper[4727]: I1121 20:07:41.074041 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:41 crc kubenswrapper[4727]: I1121 20:07:41.074056 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:41 crc kubenswrapper[4727]: I1121 20:07:41.074064 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:41Z","lastTransitionTime":"2025-11-21T20:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:41 crc kubenswrapper[4727]: I1121 20:07:41.176332 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:41 crc kubenswrapper[4727]: I1121 20:07:41.176373 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:41 crc kubenswrapper[4727]: I1121 20:07:41.176385 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:41 crc kubenswrapper[4727]: I1121 20:07:41.176402 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:41 crc kubenswrapper[4727]: I1121 20:07:41.176414 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:41Z","lastTransitionTime":"2025-11-21T20:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:41 crc kubenswrapper[4727]: I1121 20:07:41.278469 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:41 crc kubenswrapper[4727]: I1121 20:07:41.278515 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:41 crc kubenswrapper[4727]: I1121 20:07:41.278528 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:41 crc kubenswrapper[4727]: I1121 20:07:41.278549 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:41 crc kubenswrapper[4727]: I1121 20:07:41.278562 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:41Z","lastTransitionTime":"2025-11-21T20:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:41 crc kubenswrapper[4727]: I1121 20:07:41.380848 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:41 crc kubenswrapper[4727]: I1121 20:07:41.380906 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:41 crc kubenswrapper[4727]: I1121 20:07:41.380933 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:41 crc kubenswrapper[4727]: I1121 20:07:41.380950 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:41 crc kubenswrapper[4727]: I1121 20:07:41.380973 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:41Z","lastTransitionTime":"2025-11-21T20:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:41 crc kubenswrapper[4727]: I1121 20:07:41.483161 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:41 crc kubenswrapper[4727]: I1121 20:07:41.483201 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:41 crc kubenswrapper[4727]: I1121 20:07:41.483211 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:41 crc kubenswrapper[4727]: I1121 20:07:41.483226 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:41 crc kubenswrapper[4727]: I1121 20:07:41.483237 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:41Z","lastTransitionTime":"2025-11-21T20:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:41 crc kubenswrapper[4727]: I1121 20:07:41.498721 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 20:07:41 crc kubenswrapper[4727]: E1121 20:07:41.498838 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 20:07:41 crc kubenswrapper[4727]: I1121 20:07:41.585175 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:41 crc kubenswrapper[4727]: I1121 20:07:41.585229 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:41 crc kubenswrapper[4727]: I1121 20:07:41.585240 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:41 crc kubenswrapper[4727]: I1121 20:07:41.585258 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:41 crc kubenswrapper[4727]: I1121 20:07:41.585270 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:41Z","lastTransitionTime":"2025-11-21T20:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:41 crc kubenswrapper[4727]: I1121 20:07:41.687675 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:41 crc kubenswrapper[4727]: I1121 20:07:41.687717 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:41 crc kubenswrapper[4727]: I1121 20:07:41.687725 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:41 crc kubenswrapper[4727]: I1121 20:07:41.687738 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:41 crc kubenswrapper[4727]: I1121 20:07:41.687751 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:41Z","lastTransitionTime":"2025-11-21T20:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:41 crc kubenswrapper[4727]: I1121 20:07:41.790235 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:41 crc kubenswrapper[4727]: I1121 20:07:41.790540 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:41 crc kubenswrapper[4727]: I1121 20:07:41.790549 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:41 crc kubenswrapper[4727]: I1121 20:07:41.790564 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:41 crc kubenswrapper[4727]: I1121 20:07:41.790573 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:41Z","lastTransitionTime":"2025-11-21T20:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:41 crc kubenswrapper[4727]: I1121 20:07:41.892459 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:41 crc kubenswrapper[4727]: I1121 20:07:41.892496 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:41 crc kubenswrapper[4727]: I1121 20:07:41.892504 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:41 crc kubenswrapper[4727]: I1121 20:07:41.892521 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:41 crc kubenswrapper[4727]: I1121 20:07:41.892530 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:41Z","lastTransitionTime":"2025-11-21T20:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:41 crc kubenswrapper[4727]: I1121 20:07:41.992341 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:41 crc kubenswrapper[4727]: I1121 20:07:41.992401 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:41 crc kubenswrapper[4727]: I1121 20:07:41.992441 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:41 crc kubenswrapper[4727]: I1121 20:07:41.992478 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:41 crc kubenswrapper[4727]: I1121 20:07:41.992502 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:41Z","lastTransitionTime":"2025-11-21T20:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:42 crc kubenswrapper[4727]: E1121 20:07:42.005736 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"32d79a59-978c-42a8-bb0b-33b3c3206f66\\\",\\\"systemUUID\\\":\\\"b99fd01f-0947-456f-ae40-db84d60b2190\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:42Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.010478 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.010512 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.010521 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.010536 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.010546 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:42Z","lastTransitionTime":"2025-11-21T20:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:42 crc kubenswrapper[4727]: E1121 20:07:42.027407 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"32d79a59-978c-42a8-bb0b-33b3c3206f66\\\",\\\"systemUUID\\\":\\\"b99fd01f-0947-456f-ae40-db84d60b2190\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:42Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.030612 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.030657 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.030667 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.030684 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.030694 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:42Z","lastTransitionTime":"2025-11-21T20:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:42 crc kubenswrapper[4727]: E1121 20:07:42.040587 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"32d79a59-978c-42a8-bb0b-33b3c3206f66\\\",\\\"systemUUID\\\":\\\"b99fd01f-0947-456f-ae40-db84d60b2190\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:42Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.044133 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.044169 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.044178 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.044192 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.044203 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:42Z","lastTransitionTime":"2025-11-21T20:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:42 crc kubenswrapper[4727]: E1121 20:07:42.055530 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"32d79a59-978c-42a8-bb0b-33b3c3206f66\\\",\\\"systemUUID\\\":\\\"b99fd01f-0947-456f-ae40-db84d60b2190\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:42Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.059199 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.059241 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.059251 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.059267 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.059279 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:42Z","lastTransitionTime":"2025-11-21T20:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:42 crc kubenswrapper[4727]: E1121 20:07:42.069618 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"32d79a59-978c-42a8-bb0b-33b3c3206f66\\\",\\\"systemUUID\\\":\\\"b99fd01f-0947-456f-ae40-db84d60b2190\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:42Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:42 crc kubenswrapper[4727]: E1121 20:07:42.069768 4727 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.071191 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.071229 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.071241 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.071261 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.071277 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:42Z","lastTransitionTime":"2025-11-21T20:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.173384 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.173426 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.173435 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.173461 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.173473 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:42Z","lastTransitionTime":"2025-11-21T20:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.275582 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.275624 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.275636 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.275656 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.275669 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:42Z","lastTransitionTime":"2025-11-21T20:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.378476 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.378518 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.378526 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.378558 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.378569 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:42Z","lastTransitionTime":"2025-11-21T20:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.481074 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.481114 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.481140 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.481156 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.481165 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:42Z","lastTransitionTime":"2025-11-21T20:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.498758 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rs9rv" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.498778 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.498838 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 20:07:42 crc kubenswrapper[4727]: E1121 20:07:42.498879 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rs9rv" podUID="f8318f96-4402-4567-a432-6cf3897e218d" Nov 21 20:07:42 crc kubenswrapper[4727]: E1121 20:07:42.499013 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 20:07:42 crc kubenswrapper[4727]: E1121 20:07:42.499127 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.584056 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.584093 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.584105 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.584119 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.584130 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:42Z","lastTransitionTime":"2025-11-21T20:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.686666 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.686708 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.686720 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.686738 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.686749 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:42Z","lastTransitionTime":"2025-11-21T20:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.788611 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.788654 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.788664 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.788682 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.788694 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:42Z","lastTransitionTime":"2025-11-21T20:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.833207 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7rvdc_07dba644-eb6f-45c3-b373-7a1610c569aa/kube-multus/0.log" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.833269 4727 generic.go:334] "Generic (PLEG): container finished" podID="07dba644-eb6f-45c3-b373-7a1610c569aa" containerID="7fc26f04e78b8405547a4fa225524347eabce94dfa5b4bf2448db66e36aaf006" exitCode=1 Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.833309 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7rvdc" event={"ID":"07dba644-eb6f-45c3-b373-7a1610c569aa","Type":"ContainerDied","Data":"7fc26f04e78b8405547a4fa225524347eabce94dfa5b4bf2448db66e36aaf006"} Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.833755 4727 scope.go:117] "RemoveContainer" containerID="7fc26f04e78b8405547a4fa225524347eabce94dfa5b4bf2448db66e36aaf006" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.845946 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-drnwx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c073d13-aa4f-401c-9684-36980fe94cb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c68719b36c0e9c62dcfa24171d18f9ad5a578884b8ca0da8b5827d6656e890c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hssmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec82846684441a347ddef8d336b04c6a776db
e2a5ed7894b813378790e12f7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hssmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:07:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-drnwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:42Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.863645 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffd614574e076bedcf841406bd5accc4387a8f2dbc55cbfbb5a1064e69a861d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:42Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.875179 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b974a1dc77049e8c632b0acbe8dedf6d3f391811208d0d1084c75e4fdb0908dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:42Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.884665 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ccfbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bffa327-bdf2-45d4-93ab-40152e82d177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18e9f8fd8f35e2f7821de2d9f9040dbc5160d8f114d1a54d8e0a0dabab520d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-no
de-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ccfbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:42Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.892910 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.893010 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.893059 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.893077 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.893090 4727 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:42Z","lastTransitionTime":"2025-11-21T20:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.900519 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-74crp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db8a4e70-c074-4f90-aebe-444078f3337f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d8ec54f8edeaa80fb39b26f0cd9c0d4b2456c149245b1c6c1585bf28b08ec13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a2b3626be8b73b407cdeae484730c70f4caa2e66e0a815cb8c047f667254092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2b3626be8b73b407cdeae484730c70f4caa2e66e0a815cb8c047f667254092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4360080de9d788a9ac8bad5b4c679aef3367c8c4e06fd00a28ff4b23406cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d4360080de9d788a9ac8bad5b4c679aef3367c8c4e06fd00a28ff4b23406cc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68b04ba74eb7a875559096df2051e2d99535444795021756d6072246fdf21ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68b04ba74eb7a875559096df2051e2d99535444795021756d6072246fdf21ee1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4be59a65e9ffdf24c7d6dcaf699e0c86095ab2e7156771e4cca86c3fb264ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab4be59a65e9ffdf24c7d6dcaf699e0c86095ab2e7156771e4cca86c3fb264ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465a44322e7e3ff27479dabcdd1a2ba9102e10c73b75d51d0639b4a8ebeda54e\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://465a44322e7e3ff27479dabcdd1a2ba9102e10c73b75d51d0639b4a8ebeda54e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea5b2f1f6cf3b490899d65086f99f448f2bdbfd859fa5b283af3fb108479902e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea5b2f1f6cf3b490899d65086f99f448f2bdbfd859fa5b283af3fb108479902e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-74crp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:42Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.919511 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fae7d81bbd4cc7c05885d66b30b4a4f5422c74aecc5439700c1fe7eb3c28a4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://060f5b533d83fc7cf02d24bf7b12e2a4972ce1d7ed4e62c4c7788467ae2229c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f04b4d4857a8309f30f22774c8a2faeb752998b094d70898a986e04038771b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f02a4cae48dc4ac9d7da3d367fe5137fb012ee8a28f45c9cf4af098f1d93ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf216dac56f09909db88c9ea0ef9d0d4fb8bc2e63a02fd5631c7d4369080f88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1909b9178d7fb508200d34ad33f7d2c2aa2ae9756b15c17fd7c49bb019cc65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e073190a14764f06d4369511999e00a23a812b3103ad84d761c95dd836ecf04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e073190a14764f06d4369511999e00a23a812b3103ad84d761c95dd836ecf04\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T20:07:18Z\\\",\\\"message\\\":\\\"t default network controller: unable to create admin network policy controller, err: 
could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:18Z is after 2025-08-24T17:21:41Z]\\\\nI1121 20:07:18.482383 6367 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI1121 20:07:18.482381 6367 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:fals\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T20:07:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tfd4j_openshift-ovn-kubernetes(70d2ca13-a8f7-43dc-8ad0-142d99ccde18)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68979c9b5ab69852e6cc4293ef0e77454dcbb7fdaeb61b7b6af5c1a3ac101e47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5fa8be5e6ea2c20dd597fb7ef2fdd3355045b310d1a99bb1a89a8207e505577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5fa8be5e6ea2c20dd
597fb7ef2fdd3355045b310d1a99bb1a89a8207e505577\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tfd4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:42Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.930890 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7444p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75a43503-e538-4964-9789-322839cc4c48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d61ca28ee5e9b8e1b56afa45b32036c825b6299dc855cb7ce38dde4649c2371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9w57k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7444p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:42Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.942411 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b58aef8f-f223-47d8-a2e6-4a80aeeeec42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db38e6270d9dcedf99c669fa60a075a25d7fd9bb753702551fe5f4610ecaf815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443c4ad2ecea2e9d360b9300cb0c384fb5afc27e6ac44394f985adbeab354a9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5k2kk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:42Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.955362 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7rvdc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07dba644-eb6f-45c3-b373-7a1610c569aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fc26f04e78b8405547a4fa225524347eabce94dfa5b4bf2448db66e36aaf006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fc26f04e78b8405547a4fa225524347eabce94dfa5b4bf2448db66e36aaf006\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T20:07:42Z\\\",\\\"message\\\":\\\"2025-11-21T20:06:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ba82f7f0-16e3-4136-b30a-f02115adb9bb\\\\n2025-11-21T20:06:57+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ba82f7f0-16e3-4136-b30a-f02115adb9bb to /host/opt/cni/bin/\\\\n2025-11-21T20:06:57Z [verbose] multus-daemon started\\\\n2025-11-21T20:06:57Z [verbose] Readiness Indicator file check\\\\n2025-11-21T20:07:42Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7rvdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:42Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.971525 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a1f6ee3-6ebf-4422-91ed-105ccb19af8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de2d5f0603d9b7d189293750a83d5a3957e98f0390659f8c88d35ae1cae6b18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"
startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cde1196a3cb3b4d0acbb70ac333211099ec6f26a975e2de7c07d059983024368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1324bc3b6958e18a7c496fdb513726b921435f35dde2064fcaf9a6fca06eb8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/
kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://395c324d3a95c173c7dd0c9206831fe07c220fb6ef12f7b25444a120c5183202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6021787d6acdc5734a82ef2ed53add13275c3461c1ffd38219c264e7fca3fab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://363e33e0f6686c6a97437b91ff8d5be211717d527dac5d1bfad66
13849c1a8fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://363e33e0f6686c6a97437b91ff8d5be211717d527dac5d1bfad6613849c1a8fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d2632c704cf56d6b416cdcc6213e3b60836d12cfae28af4bd06f99dff56cd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16d2632c704cf56d6b416cdcc6213e3b60836d12cfae28af4bd06f99dff56cd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91dcb1c5c16b4720f4c812b25a87ebd8a0f3a334cc714233b6852cd39322fbf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91dcb1c5c16b4720f4c812b25a87ebd8a0f3a334cc714233b6852cd39322fbf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:42Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.988717 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"080ce4fe-3e82-4e15-9340-4fa88a29da04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27c37b29dda7ae2cc145e0770879e433aa03e277121be3b7acfe22d41d27801f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cc2199f65adaa48cd1726ea812b332e4f40291e094ccfc215343bdb2f2bd2f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f3b9bf6cf2709d4559bf6ce2ddeed47f8ebacd7878509b0190ed24b2e55380e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6ac7a73d056ceeae89f31eed787e230193e82bad0a2c762fa35540edfaeee6e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd16de1da3c89b579d33c34b4dd24d8ae9df35e17819135165afc45db1297c61\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T20:06:54Z\\\"
,\\\"message\\\":\\\"file observer\\\\nW1121 20:06:54.183363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 20:06:54.183552 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 20:06:54.184374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-552169000/tls.crt::/tmp/serving-cert-552169000/tls.key\\\\\\\"\\\\nI1121 20:06:54.597449 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 20:06:54.600431 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 20:06:54.600448 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 20:06:54.600471 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 20:06:54.600476 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 20:06:54.608444 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 20:06:54.608470 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 20:06:54.608475 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 20:06:54.608479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 20:06:54.608482 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 20:06:54.608484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 20:06:54.608487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 20:06:54.608489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1121 20:06:54.610022 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ceae36169c61dcc484be2881c9562b4024866925aab81778ab9e171413f119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fc11f996bd226ec44a8afe092a0939a489f4fc8904c1532586d9e36b69817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91fc11f996bd226ec44a8afe092a0939
a489f4fc8904c1532586d9e36b69817\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:42Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.995336 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.995369 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.995377 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.995389 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.995398 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:42Z","lastTransitionTime":"2025-11-21T20:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:42 crc kubenswrapper[4727]: I1121 20:07:42.999582 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a97e4b6438d588dce44178e2e245d1fc4fa954f6cd02b230ca5c34e3d32294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c90d2d9f22924396549d0d1263f8cb07b41a77f23f7bb48aa9749caabff6147\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:42Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:43 crc kubenswrapper[4727]: I1121 20:07:43.010612 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:43Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:43 crc kubenswrapper[4727]: I1121 20:07:43.020528 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:43Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:43 crc kubenswrapper[4727]: I1121 20:07:43.029983 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3ca80-2251-484d-9025-61aed82e16d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://539bc94dd8a3fd65aeae82170fe889bc42c364cfcf52005760432629911dfa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bd3db39641919d1b93ecc771bce800100e7a8b6200d6fc606dbdc50179b8255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08a09f48855f360ffe892f48d4efa156a6774a0be33504e0d664f6a90f1144d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7c202de2c7a6ec653f0c7f816dcbf64fbf6a5c9a779d29403a29a1003ad201\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:43Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:43 crc kubenswrapper[4727]: I1121 20:07:43.040282 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07b27c06-2d01-404f-9fb9-a72935b30494\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07188732e38e314d39b245e64c5ff2882e3f9622a4f4361b7b1b2730fa5693d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9541542fc2e4eda864eda99995554289d32061e0488f0bf4af04411d3fba3d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9032fe5d80139b864a703f431942616401793036b9960ff2c001b76a3d850b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://630bac7109e66956147024ba0b3c44511098cd3af401507803e4a31afaedbe22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://630bac7109e66956147024ba0b3c44511098cd3af401507803e4a31afaedbe22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:43Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:43 crc kubenswrapper[4727]: I1121 20:07:43.050763 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:43Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:43 crc kubenswrapper[4727]: I1121 20:07:43.059204 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rs9rv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8318f96-4402-4567-a432-6cf3897e218d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8bcnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8bcnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rs9rv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:43Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:43 crc 
kubenswrapper[4727]: I1121 20:07:43.098407 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:43 crc kubenswrapper[4727]: I1121 20:07:43.098462 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:43 crc kubenswrapper[4727]: I1121 20:07:43.098474 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:43 crc kubenswrapper[4727]: I1121 20:07:43.098493 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:43 crc kubenswrapper[4727]: I1121 20:07:43.098506 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:43Z","lastTransitionTime":"2025-11-21T20:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:43 crc kubenswrapper[4727]: I1121 20:07:43.200421 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:43 crc kubenswrapper[4727]: I1121 20:07:43.200452 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:43 crc kubenswrapper[4727]: I1121 20:07:43.200460 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:43 crc kubenswrapper[4727]: I1121 20:07:43.200473 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:43 crc kubenswrapper[4727]: I1121 20:07:43.200482 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:43Z","lastTransitionTime":"2025-11-21T20:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:43 crc kubenswrapper[4727]: I1121 20:07:43.302423 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:43 crc kubenswrapper[4727]: I1121 20:07:43.302467 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:43 crc kubenswrapper[4727]: I1121 20:07:43.302476 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:43 crc kubenswrapper[4727]: I1121 20:07:43.302489 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:43 crc kubenswrapper[4727]: I1121 20:07:43.302501 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:43Z","lastTransitionTime":"2025-11-21T20:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:43 crc kubenswrapper[4727]: I1121 20:07:43.405518 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:43 crc kubenswrapper[4727]: I1121 20:07:43.405554 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:43 crc kubenswrapper[4727]: I1121 20:07:43.405563 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:43 crc kubenswrapper[4727]: I1121 20:07:43.405576 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:43 crc kubenswrapper[4727]: I1121 20:07:43.405585 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:43Z","lastTransitionTime":"2025-11-21T20:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:43 crc kubenswrapper[4727]: I1121 20:07:43.498591 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 20:07:43 crc kubenswrapper[4727]: E1121 20:07:43.498767 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 20:07:43 crc kubenswrapper[4727]: I1121 20:07:43.508173 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:43 crc kubenswrapper[4727]: I1121 20:07:43.508303 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:43 crc kubenswrapper[4727]: I1121 20:07:43.508403 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:43 crc kubenswrapper[4727]: I1121 20:07:43.508501 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:43 crc kubenswrapper[4727]: I1121 20:07:43.508593 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:43Z","lastTransitionTime":"2025-11-21T20:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:43 crc kubenswrapper[4727]: I1121 20:07:43.611687 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:43 crc kubenswrapper[4727]: I1121 20:07:43.611743 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:43 crc kubenswrapper[4727]: I1121 20:07:43.611760 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:43 crc kubenswrapper[4727]: I1121 20:07:43.611784 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:43 crc kubenswrapper[4727]: I1121 20:07:43.611801 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:43Z","lastTransitionTime":"2025-11-21T20:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:43 crc kubenswrapper[4727]: I1121 20:07:43.714741 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:43 crc kubenswrapper[4727]: I1121 20:07:43.714780 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:43 crc kubenswrapper[4727]: I1121 20:07:43.714798 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:43 crc kubenswrapper[4727]: I1121 20:07:43.714815 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:43 crc kubenswrapper[4727]: I1121 20:07:43.714825 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:43Z","lastTransitionTime":"2025-11-21T20:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:43 crc kubenswrapper[4727]: I1121 20:07:43.816822 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:43 crc kubenswrapper[4727]: I1121 20:07:43.816856 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:43 crc kubenswrapper[4727]: I1121 20:07:43.816864 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:43 crc kubenswrapper[4727]: I1121 20:07:43.816878 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:43 crc kubenswrapper[4727]: I1121 20:07:43.816887 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:43Z","lastTransitionTime":"2025-11-21T20:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:43 crc kubenswrapper[4727]: I1121 20:07:43.838284 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7rvdc_07dba644-eb6f-45c3-b373-7a1610c569aa/kube-multus/0.log" Nov 21 20:07:43 crc kubenswrapper[4727]: I1121 20:07:43.838433 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7rvdc" event={"ID":"07dba644-eb6f-45c3-b373-7a1610c569aa","Type":"ContainerStarted","Data":"aec58b6ba56dadda6cbc97446bc117681e597f80615120f259760193be269bfb"} Nov 21 20:07:43 crc kubenswrapper[4727]: I1121 20:07:43.851031 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-drnwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c073d13-aa4f-401c-9684-36980fe94cb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c68719b36c0e9c62dcfa24171d18f9ad5a578884b8ca0da8b5827d6656e890c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hssmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec82846684441a347ddef8d336b04c6a776dbe2a5ed7894b813378790e12f7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hssmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:07:07Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-drnwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:43Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:43 crc kubenswrapper[4727]: I1121 20:07:43.862571 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffd614574e076bedcf841406bd5accc4387a8f2dbc55cbfbb5a1064e69a861d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":tru
e,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:43Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:43 crc kubenswrapper[4727]: I1121 20:07:43.873282 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b974a1dc77049e8c632b0acbe8dedf6d3f391811208d0d1084c75e4fdb0908dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:43Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:43 crc kubenswrapper[4727]: I1121 20:07:43.883204 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ccfbn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bffa327-bdf2-45d4-93ab-40152e82d177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18e9f8fd8f35e2f7821de2d9f9040dbc5160d8f114d1a54d8e0a0dabab520d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ccfbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:43Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:43 crc kubenswrapper[4727]: I1121 20:07:43.899169 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-74crp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db8a4e70-c074-4f90-aebe-444078f3337f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d8ec54f8edeaa80fb39b26f0cd9c0d4b2456c149245b1c6c1585bf28b08ec13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a2b3626be8b73b407cdeae484730c70f4caa2e66e0a815cb8c047f667254092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2b3626be8b73b407cdeae484730c70f4caa2e66e0a815cb8c047f667254092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4360080de9d788a9ac8bad5b4c679aef3367c8c4e06fd00a28ff4b23406cc5\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d4360080de9d788a9ac8bad5b4c679aef3367c8c4e06fd00a28ff4b23406cc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68b04ba74eb7a875559096df2051e2d99535444795021756d6072246fdf21ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68b04ba74eb7a875559096df2051e2d99535444795021756d6072246fdf21ee1\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4be59a65e9ffdf24c7d6dcaf699e0c86095ab2e7156771e4cca86c3fb264ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab4be59a65e9ffdf24c7d6dcaf699e0c86095ab2e7156771e4cca86c3fb264ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465a44322e7e3ff27479dabcdd1a2ba9
102e10c73b75d51d0639b4a8ebeda54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://465a44322e7e3ff27479dabcdd1a2ba9102e10c73b75d51d0639b4a8ebeda54e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea5b2f1f6cf3b490899d65086f99f448f2bdbfd859fa5b283af3fb108479902e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea5b2f1f6cf3b490899d65086f99f448f2bdbfd859fa5b283af3fb108479902e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-11-21T20:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-74crp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:43Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:43 crc kubenswrapper[4727]: I1121 20:07:43.918889 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:43 crc kubenswrapper[4727]: I1121 20:07:43.918923 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:43 crc kubenswrapper[4727]: I1121 20:07:43.918932 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:43 crc kubenswrapper[4727]: I1121 20:07:43.918946 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:43 crc kubenswrapper[4727]: I1121 20:07:43.918971 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:43Z","lastTransitionTime":"2025-11-21T20:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:43 crc kubenswrapper[4727]: I1121 20:07:43.919472 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fae7d81bbd4cc7c05885d66b30b4a4f5422c74aecc5439700c1fe7eb3c28a4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://060f5b533d83fc7cf02d24bf7b12e2a4972ce1d7ed4e62c4c7788467ae2229c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f04b4d4857a8309f30f22774c8a2faeb752998b094d70898a986e04038771b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f02a4cae48dc4ac9d7da3d367fe5137fb012ee8a28f45c9cf4af098f1d93ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf216dac56f09909db88c9ea0ef9d0d4fb8bc2e63a02fd5631c7d4369080f88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1909b9178d7fb508200d34ad33f7d2c2aa2ae9756b15c17fd7c49bb019cc65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e073190a14764f06d4369511999e00a23a812b3103ad84d761c95dd836ecf04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e073190a14764f06d4369511999e00a23a812b3103ad84d761c95dd836ecf04\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T20:07:18Z\\\",\\\"message\\\":\\\"t default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start 
default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:18Z is after 2025-08-24T17:21:41Z]\\\\nI1121 20:07:18.482383 6367 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI1121 20:07:18.482381 6367 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:fals\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T20:07:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tfd4j_openshift-ovn-kubernetes(70d2ca13-a8f7-43dc-8ad0-142d99ccde18)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68979c9b5ab69852e6cc4293ef0e77454dcbb7fdaeb61b7b6af5c1a3ac101e47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5fa8be5e6ea2c20dd597fb7ef2fdd3355045b310d1a99bb1a89a8207e505577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5fa8be5e6ea2c20dd
597fb7ef2fdd3355045b310d1a99bb1a89a8207e505577\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tfd4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:43Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:43 crc kubenswrapper[4727]: I1121 20:07:43.931066 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7444p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75a43503-e538-4964-9789-322839cc4c48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d61ca28ee5e9b8e1b56afa45b32036c825b6299dc855cb7ce38dde4649c2371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9w57k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7444p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:43Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:43 crc kubenswrapper[4727]: I1121 20:07:43.942888 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b58aef8f-f223-47d8-a2e6-4a80aeeeec42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db38e6270d9dcedf99c669fa60a075a25d7fd9bb753702551fe5f4610ecaf815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443c4ad2ecea2e9d360b9300cb0c384fb5afc27e6ac44394f985adbeab354a9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5k2kk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:43Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:43 crc kubenswrapper[4727]: I1121 20:07:43.956052 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7rvdc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07dba644-eb6f-45c3-b373-7a1610c569aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aec58b6ba56dadda6cbc97446bc117681e597f80615120f259760193be269bfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fc26f04e78b8405547a4fa225524347eabce94dfa5b4bf2448db66e36aaf006\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T20:07:42Z\\\",\\\"message\\\":\\\"2025-11-21T20:06:57+00:0
0 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ba82f7f0-16e3-4136-b30a-f02115adb9bb\\\\n2025-11-21T20:06:57+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ba82f7f0-16e3-4136-b30a-f02115adb9bb to /host/opt/cni/bin/\\\\n2025-11-21T20:06:57Z [verbose] multus-daemon started\\\\n2025-11-21T20:06:57Z [verbose] Readiness Indicator file check\\\\n2025-11-21T20:07:42Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"n
ame\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7rvdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:43Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:43 crc kubenswrapper[4727]: I1121 20:07:43.977081 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a1f6ee3-6ebf-4422-91ed-105ccb19af8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de2d5f0603d9b7d189293750a83d5a3957e98f0390659f8c88d35ae1cae6b18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cde1196a3cb3b4d0acbb70ac333211099ec6f26a975e2de7c07d059983024368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1324bc3b6958e18a7c496fdb513726b921435f35dde2064fcaf9a6fca06eb8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://395c324d3a95c173c7dd0c9206831fe07c220fb6ef12f7b25444a120c5183202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6021787d6acdc5734a82ef2ed53add13275c3461c1ffd38219c264e7fca3fab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://363e33e0f6686c6a97437b91ff8d5be211717d527dac5d1bfad6613849c1a8fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://363e33e0f6686c6a97437b91ff8d5be211717d527dac5d1bfad6613849c1a8fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d2632c704cf56d6b416cdcc6213e3b60836d12cfae28af4bd06f99dff56cd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16d2632c704cf56d6b416cdcc6213e3b60836d12cfae28af4bd06f99dff56cd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91dcb1c5c16b4720f4c812b25a87ebd8a0f3a334cc714233b6852cd39322fbf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91dcb1c5c16b4720f4c812b25a87ebd8a0f3a334cc714233b6852cd39322fbf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:43Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:43 crc kubenswrapper[4727]: I1121 20:07:43.989618 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"080ce4fe-3e82-4e15-9340-4fa88a29da04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27c37b29dda7ae2cc145e0770879e433aa03e277121be3b7acfe22d41d27801f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cc2199f65adaa48cd1726ea812b332e4f40291e094ccfc215343bdb2f2bd2f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f3b9bf6cf2709d4559bf6ce2ddeed47f8ebacd7878509b0190ed24b2e55380e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6ac7a73d056ceeae89f31eed787e230193e82bad0a2c762fa35540edfaeee6e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd16de1da3c89b579d33c34b4dd24d8ae9df35e17819135165afc45db1297c61\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"file observer\\\\nW1121 20:06:54.183363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 20:06:54.183552 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 20:06:54.184374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-552169000/tls.crt::/tmp/serving-cert-552169000/tls.key\\\\\\\"\\\\nI1121 20:06:54.597449 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 20:06:54.600431 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 20:06:54.600448 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 20:06:54.600471 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 20:06:54.600476 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 20:06:54.608444 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 20:06:54.608470 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 20:06:54.608475 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 20:06:54.608479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 20:06:54.608482 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 20:06:54.608484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 20:06:54.608487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 20:06:54.608489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 20:06:54.610022 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ceae36169c61dcc484be2881c9562b4024866925aab81778ab9e171413f119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fc11f996bd226ec44a8afe092a0939a489f4fc8904c1532586d9e36b69817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91fc11f996bd226ec44a8afe092a0939a489f4fc8904c1532586d9e36b69817\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:43Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:44 crc kubenswrapper[4727]: I1121 20:07:44.001054 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a97e4b6438d588dce44178e2e245d1fc4fa954f6cd02b230ca5c34e3d32294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c90d2d9f22924396549d0d1263f8cb07b41a77f23f7bb48aa9749caabff6147\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:43Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:44 crc kubenswrapper[4727]: I1121 20:07:44.012554 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:44Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:44 crc kubenswrapper[4727]: I1121 20:07:44.021819 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:44 crc kubenswrapper[4727]: I1121 
20:07:44.021868 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:44 crc kubenswrapper[4727]: I1121 20:07:44.021883 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:44 crc kubenswrapper[4727]: I1121 20:07:44.021902 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:44 crc kubenswrapper[4727]: I1121 20:07:44.021916 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:44Z","lastTransitionTime":"2025-11-21T20:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:44 crc kubenswrapper[4727]: I1121 20:07:44.022949 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:44Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:44 crc kubenswrapper[4727]: I1121 20:07:44.035792 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3ca80-2251-484d-9025-61aed82e16d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://539bc94dd8a3fd65aeae82170fe889bc42c364cfcf52005760432629911dfa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bd3db39641919d1b93ecc771bce800100e7a8b6200d6fc606dbdc50179b8255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08a09f48855f360ffe892f48d4efa156a6774a0be33504e0d664f6a90f1144d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7c202de2c7a6ec653f0c7f816dcbf64fbf6a5c9a779d29403a29a1003ad201\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:44Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:44 crc kubenswrapper[4727]: I1121 20:07:44.048315 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07b27c06-2d01-404f-9fb9-a72935b30494\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07188732e38e314d39b245e64c5ff2882e3f9622a4f4361b7b1b2730fa5693d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9541542fc2e4eda864eda99995554289d32061e0488f0bf4af04411d3fba3d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9032fe5d80139b864a703f431942616401793036b9960ff2c001b76a3d850b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://630bac7109e66956147024ba0b3c44511098cd3af401507803e4a31afaedbe22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://630bac7109e66956147024ba0b3c44511098cd3af401507803e4a31afaedbe22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:44Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:44 crc kubenswrapper[4727]: I1121 20:07:44.061178 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:44Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:44 crc kubenswrapper[4727]: I1121 20:07:44.076228 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rs9rv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8318f96-4402-4567-a432-6cf3897e218d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8bcnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8bcnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rs9rv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:44Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:44 crc 
kubenswrapper[4727]: I1121 20:07:44.124670 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:44 crc kubenswrapper[4727]: I1121 20:07:44.124715 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:44 crc kubenswrapper[4727]: I1121 20:07:44.124802 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:44 crc kubenswrapper[4727]: I1121 20:07:44.124822 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:44 crc kubenswrapper[4727]: I1121 20:07:44.124831 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:44Z","lastTransitionTime":"2025-11-21T20:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:44 crc kubenswrapper[4727]: I1121 20:07:44.226774 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:44 crc kubenswrapper[4727]: I1121 20:07:44.226818 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:44 crc kubenswrapper[4727]: I1121 20:07:44.226828 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:44 crc kubenswrapper[4727]: I1121 20:07:44.226843 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:44 crc kubenswrapper[4727]: I1121 20:07:44.226854 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:44Z","lastTransitionTime":"2025-11-21T20:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:44 crc kubenswrapper[4727]: I1121 20:07:44.329356 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:44 crc kubenswrapper[4727]: I1121 20:07:44.329400 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:44 crc kubenswrapper[4727]: I1121 20:07:44.329411 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:44 crc kubenswrapper[4727]: I1121 20:07:44.329428 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:44 crc kubenswrapper[4727]: I1121 20:07:44.329440 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:44Z","lastTransitionTime":"2025-11-21T20:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:44 crc kubenswrapper[4727]: I1121 20:07:44.431769 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:44 crc kubenswrapper[4727]: I1121 20:07:44.431814 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:44 crc kubenswrapper[4727]: I1121 20:07:44.431824 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:44 crc kubenswrapper[4727]: I1121 20:07:44.431840 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:44 crc kubenswrapper[4727]: I1121 20:07:44.431851 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:44Z","lastTransitionTime":"2025-11-21T20:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:44 crc kubenswrapper[4727]: I1121 20:07:44.498526 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 20:07:44 crc kubenswrapper[4727]: I1121 20:07:44.498565 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 20:07:44 crc kubenswrapper[4727]: E1121 20:07:44.498752 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 20:07:44 crc kubenswrapper[4727]: I1121 20:07:44.498885 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rs9rv" Nov 21 20:07:44 crc kubenswrapper[4727]: E1121 20:07:44.499209 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rs9rv" podUID="f8318f96-4402-4567-a432-6cf3897e218d" Nov 21 20:07:44 crc kubenswrapper[4727]: I1121 20:07:44.499385 4727 scope.go:117] "RemoveContainer" containerID="6e073190a14764f06d4369511999e00a23a812b3103ad84d761c95dd836ecf04" Nov 21 20:07:44 crc kubenswrapper[4727]: E1121 20:07:44.499594 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 20:07:44 crc kubenswrapper[4727]: I1121 20:07:44.508634 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Nov 21 20:07:44 crc kubenswrapper[4727]: I1121 20:07:44.534882 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:44 crc kubenswrapper[4727]: I1121 20:07:44.534928 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:44 crc kubenswrapper[4727]: I1121 20:07:44.534936 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:44 crc kubenswrapper[4727]: I1121 20:07:44.534952 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:44 crc kubenswrapper[4727]: I1121 20:07:44.534975 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:44Z","lastTransitionTime":"2025-11-21T20:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:44 crc kubenswrapper[4727]: I1121 20:07:44.637222 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:44 crc kubenswrapper[4727]: I1121 20:07:44.637256 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:44 crc kubenswrapper[4727]: I1121 20:07:44.637265 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:44 crc kubenswrapper[4727]: I1121 20:07:44.637280 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:44 crc kubenswrapper[4727]: I1121 20:07:44.637290 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:44Z","lastTransitionTime":"2025-11-21T20:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:44 crc kubenswrapper[4727]: I1121 20:07:44.740095 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:44 crc kubenswrapper[4727]: I1121 20:07:44.740139 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:44 crc kubenswrapper[4727]: I1121 20:07:44.740150 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:44 crc kubenswrapper[4727]: I1121 20:07:44.740165 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:44 crc kubenswrapper[4727]: I1121 20:07:44.740177 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:44Z","lastTransitionTime":"2025-11-21T20:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:44 crc kubenswrapper[4727]: I1121 20:07:44.841815 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:44 crc kubenswrapper[4727]: I1121 20:07:44.841861 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:44 crc kubenswrapper[4727]: I1121 20:07:44.841869 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:44 crc kubenswrapper[4727]: I1121 20:07:44.841882 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:44 crc kubenswrapper[4727]: I1121 20:07:44.841890 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:44Z","lastTransitionTime":"2025-11-21T20:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:44 crc kubenswrapper[4727]: I1121 20:07:44.843443 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tfd4j_70d2ca13-a8f7-43dc-8ad0-142d99ccde18/ovnkube-controller/2.log" Nov 21 20:07:44 crc kubenswrapper[4727]: I1121 20:07:44.845596 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" event={"ID":"70d2ca13-a8f7-43dc-8ad0-142d99ccde18","Type":"ContainerStarted","Data":"1aea392a365aa66ac35b5cd36cd1aad398890771de41b8dc248e2f98a171f4ca"} Nov 21 20:07:44 crc kubenswrapper[4727]: I1121 20:07:44.846386 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" Nov 21 20:07:44 crc kubenswrapper[4727]: I1121 20:07:44.862468 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a97e4b6438d588dce44178e2e245d1fc4fa954f6cd02b230ca5c34e3d32294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd
47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c90d2d9f22924396549d0d1263f8cb07b41a77f23f7bb48aa9749caabff6147\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-11-21T20:07:44Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:44 crc kubenswrapper[4727]: I1121 20:07:44.882999 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:44Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:44 crc kubenswrapper[4727]: I1121 20:07:44.900842 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:44Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:44 crc kubenswrapper[4727]: I1121 20:07:44.913743 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b58aef8f-f223-47d8-a2e6-4a80aeeeec42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db38e6270d9dcedf99c669fa60a075a25d7fd9bb753702551fe5f4610ecaf815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443c4ad2ecea2e9d360b9300cb0c384fb5afc27e
6ac44394f985adbeab354a9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5k2kk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:44Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:44 crc kubenswrapper[4727]: I1121 20:07:44.924562 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7rvdc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07dba644-eb6f-45c3-b373-7a1610c569aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aec58b6ba56dadda6cbc97446bc117681e597f80615120f259760193be269bfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fc26f04e78b8405547a4fa225524347eabce94dfa5b4bf2448db66e36aaf006\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T20:07:42Z\\\",\\\"message\\\":\\\"2025-11-21T20:06:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ba82f7f0-16e3-4136-b30a-f02115adb9bb\\\\n2025-11-21T20:06:57+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ba82f7f0-16e3-4136-b30a-f02115adb9bb to /host/opt/cni/bin/\\\\n2025-11-21T20:06:57Z [verbose] multus-daemon started\\\\n2025-11-21T20:06:57Z [verbose] 
Readiness Indicator file check\\\\n2025-11-21T20:07:42Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7rvdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:44Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:44 crc kubenswrapper[4727]: I1121 20:07:44.933423 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9b31165-7a06-446a-8c9f-7aa7e3f4720e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://333e986e96f39eed917fea4527a9e2bc0dcdfe7824b75aefc2fdc8e747b49300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://138286bcd4ee791d6db84d43810f66356d58b95eab57e8b04c630269683243f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://138286bcd4ee791d6db84d43810f66356d58b95eab57e8b04c630269683243f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:44Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:44 crc kubenswrapper[4727]: I1121 20:07:44.943448 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:44 crc kubenswrapper[4727]: I1121 20:07:44.943496 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:44 crc kubenswrapper[4727]: I1121 20:07:44.943510 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:44 crc kubenswrapper[4727]: I1121 20:07:44.943529 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:44 crc kubenswrapper[4727]: I1121 20:07:44.943539 4727 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:44Z","lastTransitionTime":"2025-11-21T20:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:44 crc kubenswrapper[4727]: I1121 20:07:44.950840 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a1f6ee3-6ebf-4422-91ed-105ccb19af8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de2d5f0603d9b7d189293750a83d5a3957e98f0390659f8c88d35ae1cae6b18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cde1196a3cb3b4d0acbb70ac333211099ec6f26a975e2de7c07d059983024368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1324bc3b6958e18a7c496fdb513726b921435f35dde2064fcaf9a6fca06eb8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPat
h\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://395c324d3a95c173c7dd0c9206831fe07c220fb6ef12f7b25444a120c5183202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6021787d6acdc5734a82ef2ed53add13275c3461c1ffd38219c264e7fca3fab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://363e33e0f6686c6a97437b91ff8d5be211717d
527dac5d1bfad6613849c1a8fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://363e33e0f6686c6a97437b91ff8d5be211717d527dac5d1bfad6613849c1a8fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d2632c704cf56d6b416cdcc6213e3b60836d12cfae28af4bd06f99dff56cd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16d2632c704cf56d6b416cdcc6213e3b60836d12cfae28af4bd06f99dff56cd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91dcb1c5c16b4720f4c812b25a87ebd8a0f3a334cc714233b6852cd39322fbf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91dcb1c5c16b4720f4c812b25a87ebd8a0f3a334cc714233b6852cd39322fbf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:44Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:44 crc kubenswrapper[4727]: I1121 20:07:44.964145 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"080ce4fe-3e82-4e15-9340-4fa88a29da04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27c37b29dda7ae2cc145e0770879e433aa03e277121be3b7acfe22d41d27801f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cc2199f65adaa48cd1726ea812b332e4f40291e094ccfc215343bdb2f2bd2f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f3b9bf6cf2709d4559bf6ce2ddeed47f8ebacd7878509b0190ed24b2e55380e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6ac7a73d056ceeae89f31eed787e230193e82bad0a2c762fa35540edfaeee6e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd16de1da3c89b579d33c34b4dd24d8ae9df35e17819135165afc45db1297c61\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T20:06:54Z\\\"
,\\\"message\\\":\\\"file observer\\\\nW1121 20:06:54.183363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 20:06:54.183552 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 20:06:54.184374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-552169000/tls.crt::/tmp/serving-cert-552169000/tls.key\\\\\\\"\\\\nI1121 20:06:54.597449 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 20:06:54.600431 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 20:06:54.600448 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 20:06:54.600471 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 20:06:54.600476 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 20:06:54.608444 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 20:06:54.608470 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 20:06:54.608475 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 20:06:54.608479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 20:06:54.608482 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 20:06:54.608484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 20:06:54.608487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 20:06:54.608489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1121 20:06:54.610022 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ceae36169c61dcc484be2881c9562b4024866925aab81778ab9e171413f119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fc11f996bd226ec44a8afe092a0939a489f4fc8904c1532586d9e36b69817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91fc11f996bd226ec44a8afe092a0939
a489f4fc8904c1532586d9e36b69817\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:44Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:44 crc kubenswrapper[4727]: I1121 20:07:44.977466 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3ca80-2251-484d-9025-61aed82e16d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://539bc94dd8a3fd65aeae82170fe889bc42c364cfcf52005760432629911dfa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bd3db39641919d1b93ecc771bce800100e7a8b6200d6fc606dbdc50179b8255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08a09f48855f360ffe892f48d4efa156a6774a0be33504e0d664f6a90f1144d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7c202de2c7a6ec653f0c7f816dcbf64fbf6a5c9a779d29403a29a1003ad201\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:44Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:44 crc kubenswrapper[4727]: I1121 20:07:44.988537 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07b27c06-2d01-404f-9fb9-a72935b30494\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07188732e38e314d39b245e64c5ff2882e3f9622a4f4361b7b1b2730fa5693d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9541542fc2e4eda864eda99995554289d32061e0488f0bf4af04411d3fba3d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9032fe5d80139b864a703f431942616401793036b9960ff2c001b76a3d850b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://630bac7109e66956147024ba0b3c44511098cd3af401507803e4a31afaedbe22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://630bac7109e66956147024ba0b3c44511098cd3af401507803e4a31afaedbe22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:44Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.001315 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:44Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.010459 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rs9rv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8318f96-4402-4567-a432-6cf3897e218d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8bcnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8bcnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rs9rv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:45Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:45 crc 
kubenswrapper[4727]: I1121 20:07:45.024212 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-74crp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db8a4e70-c074-4f90-aebe-444078f3337f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d8ec54f8edeaa80fb39b26f0cd9c0d4b2456c149245b1c6c1585bf28b08ec13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a2b3626be8b73b407cdeae484730c70f4caa2e66e0a815cb8c047f667254092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2b3626be8b73b407cdeae484730c70f4caa2e66e0a815cb8c047f667254092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4360080de9d788a9ac8bad5b4c679aef3367c8c4e06fd00a28ff4b23406cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d4360080de9d788a9ac8bad5b4c6
79aef3367c8c4e06fd00a28ff4b23406cc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68b04ba74eb7a875559096df2051e2d99535444795021756d6072246fdf21ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68b04ba74eb7a875559096df2051e2d99535444795021756d6072246fdf21ee1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4be59a65e9ffdf24c7d6dcaf699e0c86095ab2e7156771e4cca86c3fb264ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab4be59a65e9ffdf24c7d6dcaf699e0c86095ab2e7156771e4cca86c3fb264ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465a44322e7e3ff27479dabcdd1a2ba9102e10c73b75d51d0639b4a8ebeda54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://465a44322e7e3ff27479dabcdd1a2ba9102e10c73b75d51d0639b4a8ebeda54e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea5b2f1f6cf3b490899d65086f99f448f2bdbfd859fa5b283af3fb108479902e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea5b2f1f6cf3b490899d65086f99f448f2bdbfd859fa5b283af3fb108479902e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-74crp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:45Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.046127 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.046166 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.046175 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.046189 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.046199 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:45Z","lastTransitionTime":"2025-11-21T20:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.046195 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fae7d81bbd4cc7c05885d66b30b4a4f5422c74aecc5439700c1fe7eb3c28a4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://060f5b533d83fc7cf02d24bf7b12e2a4972ce1d7ed4e62c4c7788467ae2229c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f04b4d4857a8309f30f22774c8a2faeb752998b094d70898a986e04038771b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f02a4cae48dc4ac9d7da3d367fe5137fb012ee8a28f45c9cf4af098f1d93ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf216dac56f09909db88c9ea0ef9d0d4fb8bc2e63a02fd5631c7d4369080f88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1909b9178d7fb508200d34ad33f7d2c2aa2ae9756b15c17fd7c49bb019cc65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1aea392a365aa66ac35b5cd36cd1aad398890771de41b8dc248e2f98a171f4ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e073190a14764f06d4369511999e00a23a812b3103ad84d761c95dd836ecf04\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T20:07:18Z\\\",\\\"message\\\":\\\"t default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start 
default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:18Z is after 2025-08-24T17:21:41Z]\\\\nI1121 20:07:18.482383 6367 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI1121 20:07:18.482381 6367 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true 
skip_snat:fals\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T20:07:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\"
,\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68979c9b5ab69852e6cc4293ef0e77454dcbb7fdaeb61b7b6af5c1a3ac101e47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5fa8be5e6ea2c20dd597fb7ef2fdd3355045b310d1a99bb1a89a8207e505577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5fa8be5e6ea2c20dd597fb7ef2fdd3355045b310d1a99bb1a89a8207e505577\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tfd4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:45Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.058847 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7444p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75a43503-e538-4964-9789-322839cc4c48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d61ca28ee5e9b8e1b56afa45b32036c825b6299dc855cb7ce38dde4649c2371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9w57k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7444p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:45Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.069381 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-drnwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c073d13-aa4f-401c-9684-36980fe94cb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c68719b36c0e9c62dcfa24171d18f9ad5a578884b8ca0da8b5827d6656e890c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hssmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec82846684441a347ddef8d336b04c6a776dbe2a5ed7894b813378790e12f7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hssmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:07:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-drnwx\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:45Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.082994 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffd614574e076bedcf841406bd5accc4387a8f2dbc55cbfbb5a1064e69a861d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secr
ets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:45Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.094760 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b974a1dc77049e8c632b0acbe8dedf6d3f391811208d0d1084c75e4fdb0908dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:45Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.104990 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ccfbn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bffa327-bdf2-45d4-93ab-40152e82d177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18e9f8fd8f35e2f7821de2d9f9040dbc5160d8f114d1a54d8e0a0dabab520d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ccfbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:45Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.148138 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.148189 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.148200 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.148216 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.148225 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:45Z","lastTransitionTime":"2025-11-21T20:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.251247 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.251316 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.251329 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.251343 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.251353 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:45Z","lastTransitionTime":"2025-11-21T20:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.353655 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.353690 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.353700 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.353715 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.353727 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:45Z","lastTransitionTime":"2025-11-21T20:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.455838 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.455880 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.455889 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.455905 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.455914 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:45Z","lastTransitionTime":"2025-11-21T20:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.498277 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 20:07:45 crc kubenswrapper[4727]: E1121 20:07:45.498415 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.511970 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3ca80-2251-484d-9025-61aed82e16d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://539bc94dd8a3fd65aeae82170fe889bc42c364cfcf52005760432629911dfa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-cert
s\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bd3db39641919d1b93ecc771bce800100e7a8b6200d6fc606dbdc50179b8255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08a09f48855f360ffe892f48d4efa156a6774a0be33504e0d664f6a90f1144d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7c202de2c7a6ec653f0c7f816dcbf64fbf6a5c9a779d29403a29a1003ad201\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\
\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:45Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.522254 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07b27c06-2d01-404f-9fb9-a72935b30494\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07188732e38e314d39b245e64c5ff2882e3f9622a4f4361b7b1b2730fa5693d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9541542fc2e4eda864eda99995554289d32061e0488f0bf4af04411d3fba3d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9032fe5d80139b864a703f431942616401793036b9960ff2c001b76a3d850b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://630bac7109e66956147024ba0b3c44511098cd3af401507803e4a31afaedbe22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://630bac7109e66956147024ba0b3c44511098cd3af401507803e4a31afaedbe22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:45Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.532858 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:45Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.543114 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rs9rv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8318f96-4402-4567-a432-6cf3897e218d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8bcnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8bcnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rs9rv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:45Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:45 crc 
kubenswrapper[4727]: I1121 20:07:45.553029 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-drnwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c073d13-aa4f-401c-9684-36980fe94cb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c68719b36c0e9c62dcfa24171d18f9ad5a578884b8ca0da8b5827d6656e890c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hssmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec82846684441a347ddef8d336b04c6a776dbe2a5ed7894b813378790e12f7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hssmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:07:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-drnwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:45Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.557723 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.557773 4727 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.557784 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.557804 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.557816 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:45Z","lastTransitionTime":"2025-11-21T20:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.565351 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffd614574e076bedcf841406bd5accc4387a8f2dbc55cbfbb5a1064e69a861d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:45Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.577267 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b974a1dc77049e8c632b0acbe8dedf6d3f391811208d0d1084c75e4fdb0908dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:45Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.589153 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ccfbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bffa327-bdf2-45d4-93ab-40152e82d177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18e9f8fd8f35e2f7821de2d9f9040dbc5160d8f114d1a54d8e0a0dabab520d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-no
de-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ccfbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:45Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.602450 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-74crp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db8a4e70-c074-4f90-aebe-444078f3337f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d8ec54f8edeaa80fb39b26f0cd9c0d4b2456c149245b1c6c1585bf28b08ec13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a2b3626be8b73b407cdeae484730c70f4caa2e66e0a815cb8c047f667254092\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2b3626be8b73b407cdeae484730c70f4caa2e66e0a815cb8c047f667254092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4360080de9d788a9ac8bad5b4c679aef3367c8c4e06fd00a28ff4b23406cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d4360080de9d788a9ac8bad5b4c679aef3367c8c4e06fd00a28ff4b23406cc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68b04ba74eb7a875559096df2051e2d99535444795021756d6072246fdf21ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68b04ba74eb7a875559096df2051e2d99535444795021756d6072246fdf21ee1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4be
59a65e9ffdf24c7d6dcaf699e0c86095ab2e7156771e4cca86c3fb264ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab4be59a65e9ffdf24c7d6dcaf699e0c86095ab2e7156771e4cca86c3fb264ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465a44322e7e3ff27479dabcdd1a2ba9102e10c73b75d51d0639b4a8ebeda54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://465a44322e7e3ff27479dabcdd1a2ba9102e10c73b75d51d0639b4a8ebeda54e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:07:00Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea5b2f1f6cf3b490899d65086f99f448f2bdbfd859fa5b283af3fb108479902e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea5b2f1f6cf3b490899d65086f99f448f2bdbfd859fa5b283af3fb108479902e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-74crp\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:45Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.618815 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fae7d81bbd4cc7c05885d66b30b4a4f5422c74aecc5439700c1fe7eb3c28a4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://060f5b533d83fc7cf02d24bf7b12e2a4972ce1d7ed4e62c4c7788467ae2229c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f04b4d4857a8309f30f22774c8a2faeb752998b094d70898a986e04038771b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f02a4cae48dc4ac9d7da3d367fe5137fb012ee8a28f45c9cf4af098f1d93ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf216dac56f09909db88c9ea0ef9d0d4fb8bc2e63a02fd5631c7d4369080f88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1909b9178d7fb508200d34ad33f7d2c2aa2ae9756b15c17fd7c49bb019cc65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1aea392a365aa66ac35b5cd36cd1aad398890771de41b8dc248e2f98a171f4ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e073190a14764f06d4369511999e00a23a812b3103ad84d761c95dd836ecf04\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T20:07:18Z\\\",\\\"message\\\":\\\"t default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start 
default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:18Z is after 2025-08-24T17:21:41Z]\\\\nI1121 20:07:18.482383 6367 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI1121 20:07:18.482381 6367 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true 
skip_snat:fals\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T20:07:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\"
,\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68979c9b5ab69852e6cc4293ef0e77454dcbb7fdaeb61b7b6af5c1a3ac101e47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5fa8be5e6ea2c20dd597fb7ef2fdd3355045b310d1a99bb1a89a8207e505577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5fa8be5e6ea2c20dd597fb7ef2fdd3355045b310d1a99bb1a89a8207e505577\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tfd4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:45Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.628096 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7444p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75a43503-e538-4964-9789-322839cc4c48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d61ca28ee5e9b8e1b56afa45b32036c825b6299dc855cb7ce38dde4649c2371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9w57k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7444p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:45Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.638097 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b58aef8f-f223-47d8-a2e6-4a80aeeeec42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db38e6270d9dcedf99c669fa60a075a25d7fd9bb753702551fe5f4610ecaf815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443c4ad2ecea2e9d360b9300cb0c384fb5afc27e6ac44394f985adbeab354a9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5k2kk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:45Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.650334 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7rvdc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07dba644-eb6f-45c3-b373-7a1610c569aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aec58b6ba56dadda6cbc97446bc117681e597f80615120f259760193be269bfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fc26f04e78b8405547a4fa225524347eabce94dfa5b4bf2448db66e36aaf006\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T20:07:42Z\\\",\\\"message\\\":\\\"2025-11-21T20:06:57+00:0
0 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ba82f7f0-16e3-4136-b30a-f02115adb9bb\\\\n2025-11-21T20:06:57+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ba82f7f0-16e3-4136-b30a-f02115adb9bb to /host/opt/cni/bin/\\\\n2025-11-21T20:06:57Z [verbose] multus-daemon started\\\\n2025-11-21T20:06:57Z [verbose] Readiness Indicator file check\\\\n2025-11-21T20:07:42Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"n
ame\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7rvdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:45Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.659851 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.659903 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.659917 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.659936 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.659969 4727 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:45Z","lastTransitionTime":"2025-11-21T20:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.663326 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9b31165-7a06-446a-8c9f-7aa7e3f4720e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://333e986e96f39eed917fea4527a9e2bc0dcdfe7824b75aefc2fdc8e747b49300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://138286bcd4ee791d6db84d43810f66356d58b95eab57e8b04c630269683243f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://138286bcd4ee791d6db84d43810f66356d58b95eab57e8b04c630269683243f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:45Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.683901 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a1f6ee3-6ebf-4422-91ed-105ccb19af8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de2d5f0603d9b7d189293750a83d5a3957e98f0390659f8c88d35ae1cae6b18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cde1196a3cb3b4d0acbb70ac333211099ec6f26a975e2de7c07d059983024368\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1324bc3b6958e18a7c496fdb513726b921435f35dde2064fcaf9a6fca06eb8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://395c324d3a95c173c7dd0c9206831fe07c220fb6ef12f7b25444a120c5183202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"
state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6021787d6acdc5734a82ef2ed53add13275c3461c1ffd38219c264e7fca3fab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://363e33e0f6686c6a97437b91ff8d5be211717d527dac5d1bfad6613849c1a8fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://363e33e0f6686c6a97437b91ff8d5be211717d527dac5d1bfad6613849c1a8fc\\\",\\\"exitCode\\\":0,\\
\"finishedAt\\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d2632c704cf56d6b416cdcc6213e3b60836d12cfae28af4bd06f99dff56cd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16d2632c704cf56d6b416cdcc6213e3b60836d12cfae28af4bd06f99dff56cd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91dcb1c5c16b4720f4c812b25a87ebd8a0f3a334cc714233b6852cd39322fbf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91dcb1c5c16b4720f4c812b25a87ebd8a0f3a334cc714233b6852cd39322fbf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:45Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.697241 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"080ce4fe-3e82-4e15-9340-4fa88a29da04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27c37b29dda7ae2cc145e0770879e433aa03e277121be3b7acfe22d41d27801f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cc2199f65adaa48cd1726ea812b332e4f40291e094ccfc215343bdb2f2bd2f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f3b9bf6cf2709d4559bf6ce2ddeed47f8ebacd7878509b0190ed24b2e55380e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6ac7a73d056ceeae89f31eed787e230193e82bad0a2c762fa35540edfaeee6e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd16de1da3c89b579d33c34b4dd24d8ae9df35e17819135165afc45db1297c61\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"file observer\\\\nW1121 20:06:54.183363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 20:06:54.183552 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 20:06:54.184374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-552169000/tls.crt::/tmp/serving-cert-552169000/tls.key\\\\\\\"\\\\nI1121 20:06:54.597449 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 20:06:54.600431 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 20:06:54.600448 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 20:06:54.600471 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 20:06:54.600476 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 20:06:54.608444 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 20:06:54.608470 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 20:06:54.608475 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 20:06:54.608479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 20:06:54.608482 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 20:06:54.608484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 20:06:54.608487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 20:06:54.608489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 20:06:54.610022 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ceae36169c61dcc484be2881c9562b4024866925aab81778ab9e171413f119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fc11f996bd226ec44a8afe092a0939a489f4fc8904c1532586d9e36b69817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91fc11f996bd226ec44a8afe092a0939a489f4fc8904c1532586d9e36b69817\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:45Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.710036 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a97e4b6438d588dce44178e2e245d1fc4fa954f6cd02b230ca5c34e3d32294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c90d2d9f22924396549d0d1263f8cb07b41a77f23f7bb48aa9749caabff6147\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:45Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.722003 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:45Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.735229 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:45Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.762772 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.762881 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.762903 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.762932 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.762951 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:45Z","lastTransitionTime":"2025-11-21T20:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.854204 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tfd4j_70d2ca13-a8f7-43dc-8ad0-142d99ccde18/ovnkube-controller/3.log" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.855289 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tfd4j_70d2ca13-a8f7-43dc-8ad0-142d99ccde18/ovnkube-controller/2.log" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.866211 4727 generic.go:334] "Generic (PLEG): container finished" podID="70d2ca13-a8f7-43dc-8ad0-142d99ccde18" containerID="1aea392a365aa66ac35b5cd36cd1aad398890771de41b8dc248e2f98a171f4ca" exitCode=1 Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.866277 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" event={"ID":"70d2ca13-a8f7-43dc-8ad0-142d99ccde18","Type":"ContainerDied","Data":"1aea392a365aa66ac35b5cd36cd1aad398890771de41b8dc248e2f98a171f4ca"} Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.866316 4727 scope.go:117] "RemoveContainer" containerID="6e073190a14764f06d4369511999e00a23a812b3103ad84d761c95dd836ecf04" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.866937 4727 scope.go:117] "RemoveContainer" containerID="1aea392a365aa66ac35b5cd36cd1aad398890771de41b8dc248e2f98a171f4ca" Nov 21 20:07:45 crc kubenswrapper[4727]: E1121 20:07:45.867115 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-tfd4j_openshift-ovn-kubernetes(70d2ca13-a8f7-43dc-8ad0-142d99ccde18)\"" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" podUID="70d2ca13-a8f7-43dc-8ad0-142d99ccde18" Nov 21 20:07:45 
crc kubenswrapper[4727]: I1121 20:07:45.869502 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.869535 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.869546 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.869562 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.869572 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:45Z","lastTransitionTime":"2025-11-21T20:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.880476 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-74crp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db8a4e70-c074-4f90-aebe-444078f3337f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d8ec54f8edeaa80fb39b26f0cd9c0d4b2456c149245b1c6c1585bf28b08ec13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a2b3626be8b73b407cdeae484730c70f4caa2e66e0a815cb8c047f667254092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2b3626be8b73b407cdeae484730c70f4caa2e66e0a815cb8c047f667254092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4360080de9d788a9ac8bad5b4c679aef3367c8c4e06fd00a28ff4b23406cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://8d4360080de9d788a9ac8bad5b4c679aef3367c8c4e06fd00a28ff4b23406cc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68b04ba74eb7a875559096df2051e2d99535444795021756d6072246fdf21ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68b04ba74eb7a875559096df2051e2d99535444795021756d6072246fdf21ee1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4be59a65e9ffdf24c7d6dcaf699e0c86095ab2e7156771e4cca86c3fb264ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab4be59a65e9ffdf24c7d6dcaf699e0c86095ab2e7156771e4cca86c3fb264ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465a44322e7e3ff27479dabcdd1a2ba9102e10c73b75d51d0639b4a8ebeda54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://465a44322e7e3ff27479dabcdd1a2ba9102e10c73b75d51d0639b4a8ebeda54e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea5b2f1f6cf3b490899d65086f99f448f2bdbfd859fa5b283af3fb108479902e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea5b2f1f6cf3b490899d65086f99f448f2bdbfd859fa5b283af3fb108479902e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-74crp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:45Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.898474 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fae7d81bbd4cc7c05885d66b30b4a4f5422c74aecc5439700c1fe7eb3c28a4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://060f5b533d83fc7cf02d24bf7b12e2a4972ce1d7ed4e62c4c7788467ae2229c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f04b4d4857a8309f30f22774c8a2faeb752998b094d70898a986e04038771b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f02a4cae48dc4ac9d7da3d367fe5137fb012ee8a28f45c9cf4af098f1d93ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf216dac56f09909db88c9ea0ef9d0d4fb8bc2e63a02fd5631c7d4369080f88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1909b9178d7fb508200d34ad33f7d2c2aa2ae9756b15c17fd7c49bb019cc65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1aea392a365aa66ac35b5cd36cd1aad398890771de41b8dc248e2f98a171f4ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e073190a14764f06d4369511999e00a23a812b3103ad84d761c95dd836ecf04\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T20:07:18Z\\\",\\\"message\\\":\\\"t default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start 
default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:18Z is after 2025-08-24T17:21:41Z]\\\\nI1121 20:07:18.482383 6367 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI1121 20:07:18.482381 6367 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:fals\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T20:07:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1aea392a365aa66ac35b5cd36cd1aad398890771de41b8dc248e2f98a171f4ca\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T20:07:45Z\\\",\\\"message\\\":\\\"eate admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:45Z is after 2025-08-24T17:21:41Z]\\\\nI1121 20:07:45.273202 6710 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-etcd/etcd_TCP_cluster\\\\\\\", UUID:\\\\\\\"de17f0de-cfb1-4534-bb42-c40f5e050c73\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd/etcd\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshif\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T20:07:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68979c9b5ab69852e6cc4293ef0e77454dcbb7fdaeb61b7b6af5c1a3ac101e47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\
\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5fa8be5e6ea2c20dd597fb7ef2fdd3355045b310d1a99bb1a89a8207e505577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5fa8be5e6ea2c20dd597fb7ef2fdd3355045b310d1a99bb1a89a8207e505577\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tfd4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:45Z is after 
2025-08-24T17:21:41Z" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.907297 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7444p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75a43503-e538-4964-9789-322839cc4c48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d61ca28ee5e9b8e1b56afa45b32036c825b6299dc855cb7ce38dde4649c2371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-9w57k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7444p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:45Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.916674 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-drnwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c073d13-aa4f-401c-9684-36980fe94cb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c68719b36c0e9c62dcfa241
71d18f9ad5a578884b8ca0da8b5827d6656e890c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hssmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec82846684441a347ddef8d336b04c6a776dbe2a5ed7894b813378790e12f7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hssmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":
\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:07:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-drnwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:45Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.928539 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffd614574e076bedcf841406bd5accc4387a8f2dbc55cbfbb5a1064e69a861d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T2
0:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:45Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.939421 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b974a1dc77049e8c632b0acbe8dedf6d3f391811208d0d1084c75e4fdb0908dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-21T20:07:45Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.947999 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ccfbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bffa327-bdf2-45d4-93ab-40152e82d177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18e9f8fd8f35e2f7821de2d9f9040dbc5160d8f114d1a54d8e0a0dabab520d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-kk6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ccfbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:45Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.958975 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a97e4b6438d588dce44178e2e245d1fc4fa954f6cd02b230ca5c34e3d32294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732
57453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c90d2d9f22924396549d0d1263f8cb07b41a77f23f7bb48aa9749caabff6147\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-21T20:07:45Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.970829 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:45Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.974092 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.974152 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.974162 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.974177 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.974186 4727 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:45Z","lastTransitionTime":"2025-11-21T20:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.983124 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:45Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:45 crc kubenswrapper[4727]: I1121 20:07:45.992865 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b58aef8f-f223-47d8-a2e6-4a80aeeeec42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db38e6270d9dcedf99c669fa60a075a25d7fd9bb753702551fe5f4610ecaf815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443c4ad2ecea2e9d360b9300cb0c384fb5afc27e
6ac44394f985adbeab354a9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5k2kk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:45Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:46 crc kubenswrapper[4727]: I1121 20:07:46.004028 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7rvdc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07dba644-eb6f-45c3-b373-7a1610c569aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aec58b6ba56dadda6cbc97446bc117681e597f80615120f259760193be269bfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fc26f04e78b8405547a4fa225524347eabce94dfa5b4bf2448db66e36aaf006\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T20:07:42Z\\\",\\\"message\\\":\\\"2025-11-21T20:06:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ba82f7f0-16e3-4136-b30a-f02115adb9bb\\\\n2025-11-21T20:06:57+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ba82f7f0-16e3-4136-b30a-f02115adb9bb to /host/opt/cni/bin/\\\\n2025-11-21T20:06:57Z [verbose] multus-daemon started\\\\n2025-11-21T20:06:57Z [verbose] 
Readiness Indicator file check\\\\n2025-11-21T20:07:42Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7rvdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:46Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:46 crc kubenswrapper[4727]: I1121 20:07:46.015200 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9b31165-7a06-446a-8c9f-7aa7e3f4720e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://333e986e96f39eed917fea4527a9e2bc0dcdfe7824b75aefc2fdc8e747b49300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://138286bcd4ee791d6db84d43810f66356d58b95eab57e8b04c630269683243f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://138286bcd4ee791d6db84d43810f66356d58b95eab57e8b04c630269683243f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:46Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:46 crc kubenswrapper[4727]: I1121 20:07:46.035203 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a1f6ee3-6ebf-4422-91ed-105ccb19af8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de2d5f0603d9b7d189293750a83d5a3957e98f0390659f8c88d35ae1cae6b18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cde1196a3cb3b4d0acbb70ac333211099ec6f26a975e2de7c07d059983024368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1324bc3b6958e18a7c496fdb513726b921435f35dde2064fcaf9a6fca06eb8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://395c324d3a95c173c7dd0c9206831fe07c220fb6ef12f7b25444a120c5183202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6021787d6acdc5734a82ef2ed53add13275c3461c1ffd38219c264e7fca3fab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://363e33e0f6686c6a97437b91ff8d5be211717d527dac5d1bfad6613849c1a8fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://363e33e0f6686c6a97437b91ff8d5be211717d527dac5d1bfad6613849c1a8fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d2632c704cf56d6b416cdcc6213e3b60836d12cfae28af4bd06f99dff56cd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16d2632c704cf56d6b416cdcc6213e3b60836d12cfae28af4bd06f99dff56cd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91dcb1c5c16b4720f4c812b25a87ebd8a0f3a334cc714233b6852cd39322fbf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91dcb1c5c16b4720f4c812b25a87ebd8a0f3a334cc714233b6852cd39322fbf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:46Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:46 crc kubenswrapper[4727]: I1121 20:07:46.048904 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"080ce4fe-3e82-4e15-9340-4fa88a29da04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27c37b29dda7ae2cc145e0770879e433aa03e277121be3b7acfe22d41d27801f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cc2199f65adaa48cd1726ea812b332e4f40291e094ccfc215343bdb2f2bd2f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f3b9bf6cf2709d4559bf6ce2ddeed47f8ebacd7878509b0190ed24b2e55380e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6ac7a73d056ceeae89f31eed787e230193e82bad0a2c762fa35540edfaeee6e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd16de1da3c89b579d33c34b4dd24d8ae9df35e17819135165afc45db1297c61\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"file observer\\\\nW1121 20:06:54.183363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 20:06:54.183552 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 20:06:54.184374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-552169000/tls.crt::/tmp/serving-cert-552169000/tls.key\\\\\\\"\\\\nI1121 20:06:54.597449 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 20:06:54.600431 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 20:06:54.600448 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 20:06:54.600471 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 20:06:54.600476 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 20:06:54.608444 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 20:06:54.608470 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 20:06:54.608475 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 20:06:54.608479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 20:06:54.608482 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 20:06:54.608484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 20:06:54.608487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 20:06:54.608489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 20:06:54.610022 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ceae36169c61dcc484be2881c9562b4024866925aab81778ab9e171413f119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fc11f996bd226ec44a8afe092a0939a489f4fc8904c1532586d9e36b69817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91fc11f996bd226ec44a8afe092a0939a489f4fc8904c1532586d9e36b69817\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:46Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:46 crc kubenswrapper[4727]: I1121 20:07:46.060254 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3ca80-2251-484d-9025-61aed82e16d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://539bc94dd8a3fd65aeae82170fe889bc42c364cfcf52005760432629911dfa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bd3db39641919d1b93ecc771bce800100e7a8b6200d6fc606dbdc50179b8255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08a09f48855f360ffe892f48d4efa156a6774a0be33504e0d664f6a90f1144d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7c202de2c7a6ec653f0c7f816dcbf64fbf6a5c9a779d29403a29a1003ad201\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:46Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:46 crc kubenswrapper[4727]: I1121 20:07:46.071762 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07b27c06-2d01-404f-9fb9-a72935b30494\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07188732e38e314d39b245e64c5ff2882e3f9622a4f4361b7b1b2730fa5693d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9541542fc2e4eda864eda99995554289d32061e0488f0bf4af04411d3fba3d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9032fe5d80139b864a703f431942616401793036b9960ff2c001b76a3d850b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://630bac7109e66956147024ba0b3c44511098cd3af401507803e4a31afaedbe22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://630bac7109e66956147024ba0b3c44511098cd3af401507803e4a31afaedbe22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:46Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:46 crc kubenswrapper[4727]: I1121 20:07:46.076304 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:46 crc kubenswrapper[4727]: I1121 20:07:46.076354 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:46 crc kubenswrapper[4727]: I1121 20:07:46.076366 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:46 crc kubenswrapper[4727]: I1121 20:07:46.076384 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:46 crc kubenswrapper[4727]: I1121 20:07:46.076398 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:46Z","lastTransitionTime":"2025-11-21T20:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:46 crc kubenswrapper[4727]: I1121 20:07:46.085469 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:46Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:46 crc kubenswrapper[4727]: I1121 20:07:46.095780 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rs9rv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8318f96-4402-4567-a432-6cf3897e218d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8bcnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8bcnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rs9rv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:46Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:46 crc kubenswrapper[4727]: I1121 20:07:46.179005 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:46 crc kubenswrapper[4727]: I1121 20:07:46.179047 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:46 crc kubenswrapper[4727]: I1121 20:07:46.179058 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:46 crc kubenswrapper[4727]: I1121 20:07:46.179074 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:46 crc kubenswrapper[4727]: I1121 20:07:46.179085 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:46Z","lastTransitionTime":"2025-11-21T20:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:46 crc kubenswrapper[4727]: I1121 20:07:46.281466 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:46 crc kubenswrapper[4727]: I1121 20:07:46.281506 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:46 crc kubenswrapper[4727]: I1121 20:07:46.281515 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:46 crc kubenswrapper[4727]: I1121 20:07:46.281530 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:46 crc kubenswrapper[4727]: I1121 20:07:46.281545 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:46Z","lastTransitionTime":"2025-11-21T20:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:46 crc kubenswrapper[4727]: I1121 20:07:46.383354 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:46 crc kubenswrapper[4727]: I1121 20:07:46.383419 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:46 crc kubenswrapper[4727]: I1121 20:07:46.383438 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:46 crc kubenswrapper[4727]: I1121 20:07:46.383463 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:46 crc kubenswrapper[4727]: I1121 20:07:46.383480 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:46Z","lastTransitionTime":"2025-11-21T20:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:46 crc kubenswrapper[4727]: I1121 20:07:46.486130 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:46 crc kubenswrapper[4727]: I1121 20:07:46.486193 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:46 crc kubenswrapper[4727]: I1121 20:07:46.486220 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:46 crc kubenswrapper[4727]: I1121 20:07:46.486251 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:46 crc kubenswrapper[4727]: I1121 20:07:46.486278 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:46Z","lastTransitionTime":"2025-11-21T20:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:46 crc kubenswrapper[4727]: I1121 20:07:46.498522 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rs9rv" Nov 21 20:07:46 crc kubenswrapper[4727]: I1121 20:07:46.498614 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 20:07:46 crc kubenswrapper[4727]: I1121 20:07:46.498521 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 20:07:46 crc kubenswrapper[4727]: E1121 20:07:46.498688 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rs9rv" podUID="f8318f96-4402-4567-a432-6cf3897e218d" Nov 21 20:07:46 crc kubenswrapper[4727]: E1121 20:07:46.498896 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 20:07:46 crc kubenswrapper[4727]: E1121 20:07:46.499067 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 20:07:46 crc kubenswrapper[4727]: I1121 20:07:46.589132 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:46 crc kubenswrapper[4727]: I1121 20:07:46.589186 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:46 crc kubenswrapper[4727]: I1121 20:07:46.589198 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:46 crc kubenswrapper[4727]: I1121 20:07:46.589215 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:46 crc kubenswrapper[4727]: I1121 20:07:46.589226 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:46Z","lastTransitionTime":"2025-11-21T20:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:46 crc kubenswrapper[4727]: I1121 20:07:46.691855 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:46 crc kubenswrapper[4727]: I1121 20:07:46.691902 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:46 crc kubenswrapper[4727]: I1121 20:07:46.691914 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:46 crc kubenswrapper[4727]: I1121 20:07:46.691934 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:46 crc kubenswrapper[4727]: I1121 20:07:46.691951 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:46Z","lastTransitionTime":"2025-11-21T20:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:46 crc kubenswrapper[4727]: I1121 20:07:46.794609 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:46 crc kubenswrapper[4727]: I1121 20:07:46.794655 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:46 crc kubenswrapper[4727]: I1121 20:07:46.794668 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:46 crc kubenswrapper[4727]: I1121 20:07:46.794694 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:46 crc kubenswrapper[4727]: I1121 20:07:46.794717 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:46Z","lastTransitionTime":"2025-11-21T20:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:46 crc kubenswrapper[4727]: I1121 20:07:46.870767 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tfd4j_70d2ca13-a8f7-43dc-8ad0-142d99ccde18/ovnkube-controller/3.log" Nov 21 20:07:46 crc kubenswrapper[4727]: I1121 20:07:46.874296 4727 scope.go:117] "RemoveContainer" containerID="1aea392a365aa66ac35b5cd36cd1aad398890771de41b8dc248e2f98a171f4ca" Nov 21 20:07:46 crc kubenswrapper[4727]: E1121 20:07:46.874525 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-tfd4j_openshift-ovn-kubernetes(70d2ca13-a8f7-43dc-8ad0-142d99ccde18)\"" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" podUID="70d2ca13-a8f7-43dc-8ad0-142d99ccde18" Nov 21 20:07:46 crc kubenswrapper[4727]: I1121 20:07:46.886181 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3ca80-2251-484d-9025-61aed82e16d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://539bc94dd8a3fd65aeae82170fe889bc42c364cfcf52005760432629911dfa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bd3db39641919d1b93ecc771bce800100e7a8b6200d6fc606dbdc50179b8255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08a09f48855f360ffe892f48d4efa156a6774a0be33504e0d664f6a90f1144d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7c202de2c7a6ec653f0c7f816dcbf64fbf6a5c9a779d29403a29a1003ad201\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:46Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:46 crc kubenswrapper[4727]: I1121 20:07:46.896884 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:46 crc kubenswrapper[4727]: I1121 20:07:46.896928 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:46 crc kubenswrapper[4727]: I1121 20:07:46.896943 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:46 crc kubenswrapper[4727]: I1121 20:07:46.896975 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:46 crc kubenswrapper[4727]: I1121 20:07:46.896988 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:46Z","lastTransitionTime":"2025-11-21T20:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:46 crc kubenswrapper[4727]: I1121 20:07:46.897657 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07b27c06-2d01-404f-9fb9-a72935b30494\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07188732e38e314d39b245e64c5ff2882e3f9622a4f4361b7b1b2730fa5693d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}
,{\\\"containerID\\\":\\\"cri-o://9541542fc2e4eda864eda99995554289d32061e0488f0bf4af04411d3fba3d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9032fe5d80139b864a703f431942616401793036b9960ff2c001b76a3d850b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://630bac7109e66956147024ba0b3c44511098cd3af401507803e4a31afaedbe22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\
",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://630bac7109e66956147024ba0b3c44511098cd3af401507803e4a31afaedbe22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:46Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:46 crc kubenswrapper[4727]: I1121 20:07:46.910551 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:46Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:46 crc kubenswrapper[4727]: I1121 20:07:46.923483 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rs9rv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8318f96-4402-4567-a432-6cf3897e218d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8bcnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8bcnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rs9rv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:46Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:46 crc 
kubenswrapper[4727]: I1121 20:07:46.934090 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7444p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75a43503-e538-4964-9789-322839cc4c48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d61ca28ee5e9b8e1b56afa45b32036c825b6299dc855cb7ce38dde4649c2371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9
w57k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7444p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:46Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:46 crc kubenswrapper[4727]: I1121 20:07:46.951931 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-drnwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c073d13-aa4f-401c-9684-36980fe94cb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c68719b36c0e9c62dcfa24171d18f9ad5a578884b8ca0da8b5827d6656e890c\\
\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hssmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec82846684441a347ddef8d336b04c6a776dbe2a5ed7894b813378790e12f7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hssmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:07:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-drnwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:46Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:46 crc kubenswrapper[4727]: I1121 20:07:46.964915 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffd614574e076bedcf841406bd5accc4387a8f2dbc55cbfbb5a1064e69a861d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:46Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:46 crc kubenswrapper[4727]: I1121 20:07:46.976233 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b974a1dc77049e8c632b0acbe8dedf6d3f391811208d0d1084c75e4fdb0908dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-21T20:07:46Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:46 crc kubenswrapper[4727]: I1121 20:07:46.985500 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ccfbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bffa327-bdf2-45d4-93ab-40152e82d177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18e9f8fd8f35e2f7821de2d9f9040dbc5160d8f114d1a54d8e0a0dabab520d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-kk6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ccfbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:46Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:46 crc kubenswrapper[4727]: I1121 20:07:46.999749 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:46 crc kubenswrapper[4727]: I1121 20:07:46.999805 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:46 crc kubenswrapper[4727]: I1121 20:07:46.999816 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:46 crc kubenswrapper[4727]: I1121 20:07:46.999833 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:46 crc kubenswrapper[4727]: I1121 20:07:46.999844 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:46Z","lastTransitionTime":"2025-11-21T20:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:47 crc kubenswrapper[4727]: I1121 20:07:47.000106 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-74crp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db8a4e70-c074-4f90-aebe-444078f3337f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d8ec54f8edeaa80fb39b26f0cd9c0d4b2456c149245b1c6c1585bf28b08ec13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a2b3626be8b73b407cdeae484730c70f4caa2e66e0a815cb8c047f667254092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2b3626be8b73b407cdeae484730c70f4caa2e66e0a815cb8c047f667254092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4360080de9d788a9ac8bad5b4c679aef3367c8c4e06fd00a28ff4b23406cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://8d4360080de9d788a9ac8bad5b4c679aef3367c8c4e06fd00a28ff4b23406cc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68b04ba74eb7a875559096df2051e2d99535444795021756d6072246fdf21ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68b04ba74eb7a875559096df2051e2d99535444795021756d6072246fdf21ee1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4be59a65e9ffdf24c7d6dcaf699e0c86095ab2e7156771e4cca86c3fb264ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab4be59a65e9ffdf24c7d6dcaf699e0c86095ab2e7156771e4cca86c3fb264ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465a44322e7e3ff27479dabcdd1a2ba9102e10c73b75d51d0639b4a8ebeda54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://465a44322e7e3ff27479dabcdd1a2ba9102e10c73b75d51d0639b4a8ebeda54e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea5b2f1f6cf3b490899d65086f99f448f2bdbfd859fa5b283af3fb108479902e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea5b2f1f6cf3b490899d65086f99f448f2bdbfd859fa5b283af3fb108479902e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-74crp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:46Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:47 crc kubenswrapper[4727]: I1121 20:07:47.016907 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fae7d81bbd4cc7c05885d66b30b4a4f5422c74aecc5439700c1fe7eb3c28a4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://060f5b533d83fc7cf02d24bf7b12e2a4972ce1d7ed4e62c4c7788467ae2229c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f04b4d4857a8309f30f22774c8a2faeb752998b094d70898a986e04038771b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f02a4cae48dc4ac9d7da3d367fe5137fb012ee8a28f45c9cf4af098f1d93ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf216dac56f09909db88c9ea0ef9d0d4fb8bc2e63a02fd5631c7d4369080f88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1909b9178d7fb508200d34ad33f7d2c2aa2ae9756b15c17fd7c49bb019cc65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1aea392a365aa66ac35b5cd36cd1aad398890771de41b8dc248e2f98a171f4ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1aea392a365aa66ac35b5cd36cd1aad398890771de41b8dc248e2f98a171f4ca\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T20:07:45Z\\\",\\\"message\\\":\\\"eate admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to 
set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:45Z is after 2025-08-24T17:21:41Z]\\\\nI1121 20:07:45.273202 6710 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-etcd/etcd_TCP_cluster\\\\\\\", UUID:\\\\\\\"de17f0de-cfb1-4534-bb42-c40f5e050c73\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd/etcd\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshif\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T20:07:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tfd4j_openshift-ovn-kubernetes(70d2ca13-a8f7-43dc-8ad0-142d99ccde18)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68979c9b5ab69852e6cc4293ef0e77454dcbb7fdaeb61b7b6af5c1a3ac101e47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5fa8be5e6ea2c20dd597fb7ef2fdd3355045b310d1a99bb1a89a8207e505577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5fa8be5e6ea2c20dd
597fb7ef2fdd3355045b310d1a99bb1a89a8207e505577\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tfd4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:47Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:47 crc kubenswrapper[4727]: I1121 20:07:47.028267 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:47Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:47 crc kubenswrapper[4727]: I1121 20:07:47.038196 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b58aef8f-f223-47d8-a2e6-4a80aeeeec42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db38e6270d9dcedf99c669fa60a075a25d7fd9bb753702551fe5f4610ecaf815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443c4ad2ecea2e9d360b9300cb0c384fb5afc27e
6ac44394f985adbeab354a9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5k2kk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:47Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:47 crc kubenswrapper[4727]: I1121 20:07:47.053135 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7rvdc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07dba644-eb6f-45c3-b373-7a1610c569aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aec58b6ba56dadda6cbc97446bc117681e597f80615120f259760193be269bfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fc26f04e78b8405547a4fa225524347eabce94dfa5b4bf2448db66e36aaf006\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T20:07:42Z\\\",\\\"message\\\":\\\"2025-11-21T20:06:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ba82f7f0-16e3-4136-b30a-f02115adb9bb\\\\n2025-11-21T20:06:57+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ba82f7f0-16e3-4136-b30a-f02115adb9bb to /host/opt/cni/bin/\\\\n2025-11-21T20:06:57Z [verbose] multus-daemon started\\\\n2025-11-21T20:06:57Z [verbose] 
Readiness Indicator file check\\\\n2025-11-21T20:07:42Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7rvdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:47Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:47 crc kubenswrapper[4727]: I1121 20:07:47.064011 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9b31165-7a06-446a-8c9f-7aa7e3f4720e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://333e986e96f39eed917fea4527a9e2bc0dcdfe7824b75aefc2fdc8e747b49300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://138286bcd4ee791d6db84d43810f66356d58b95eab57e8b04c630269683243f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://138286bcd4ee791d6db84d43810f66356d58b95eab57e8b04c630269683243f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:47Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:47 crc kubenswrapper[4727]: I1121 20:07:47.085054 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a1f6ee3-6ebf-4422-91ed-105ccb19af8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de2d5f0603d9b7d189293750a83d5a3957e98f0390659f8c88d35ae1cae6b18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cde1196a3cb3b4d0acbb70ac333211099ec6f26a975e2de7c07d059983024368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1324bc3b6958e18a7c496fdb513726b921435f35dde2064fcaf9a6fca06eb8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://395c324d3a95c173c7dd0c9206831fe07c220fb6ef12f7b25444a120c5183202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6021787d6acdc5734a82ef2ed53add13275c3461c1ffd38219c264e7fca3fab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://363e33e0f6686c6a97437b91ff8d5be211717d527dac5d1bfad6613849c1a8fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://363e33e0f6686c6a97437b91ff8d5be211717d527dac5d1bfad6613849c1a8fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d2632c704cf56d6b416cdcc6213e3b60836d12cfae28af4bd06f99dff56cd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16d2632c704cf56d6b416cdcc6213e3b60836d12cfae28af4bd06f99dff56cd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91dcb1c5c16b4720f4c812b25a87ebd8a0f3a334cc714233b6852cd39322fbf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91dcb1c5c16b4720f4c812b25a87ebd8a0f3a334cc714233b6852cd39322fbf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:47Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:47 crc kubenswrapper[4727]: I1121 20:07:47.101906 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:47 crc kubenswrapper[4727]: I1121 20:07:47.101942 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:47 crc kubenswrapper[4727]: I1121 20:07:47.101952 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:47 crc kubenswrapper[4727]: I1121 20:07:47.101987 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:47 crc kubenswrapper[4727]: I1121 20:07:47.102001 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:47Z","lastTransitionTime":"2025-11-21T20:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:47 crc kubenswrapper[4727]: I1121 20:07:47.105988 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"080ce4fe-3e82-4e15-9340-4fa88a29da04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27c37b29dda7ae2cc145e0770879e433aa03e277121be3b7acfe22d41d27801f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cc2199f65adaa48cd1726ea812b332e4f40291e094ccfc215343bdb2f2bd2f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f3b9bf6cf2709d4559bf6ce2ddeed47f8ebacd7878509b0190ed24b2e55380e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6ac7a73d056ceeae89f31eed787e230193e82bad0a2c762fa35540edfaeee6e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd16de1da3c89b579d33c34b4dd24d8ae9df35e17819135165afc45db1297c61\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"file observer\\\\nW1121 20:06:54.183363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 20:06:54.183552 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 20:06:54.184374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-552169000/tls.crt::/tmp/serving-cert-552169000/tls.key\\\\\\\"\\\\nI1121 20:06:54.597449 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 20:06:54.600431 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 20:06:54.600448 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 20:06:54.600471 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 20:06:54.600476 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 20:06:54.608444 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 20:06:54.608470 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 20:06:54.608475 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 20:06:54.608479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 20:06:54.608482 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 20:06:54.608484 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 20:06:54.608487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 20:06:54.608489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 20:06:54.610022 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ceae36169c61dcc484be2881c9562b4024866925aab81778ab9e171413f119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fc11f996bd226ec44a8afe092a0939a489f4fc8904c1532586d9e36b69817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91fc11f996bd226ec44a8afe092a0939a489f4fc8904c1532586d9e36b69817\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:47Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:47 crc kubenswrapper[4727]: I1121 20:07:47.121656 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a97e4b6438d588dce44178e2e245d1fc4fa954f6cd02b230ca5c34e3d32294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c90d2d9f22924396549d0d1263f8cb07b41a77f23f7bb48aa9749caabff6147\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:47Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:47 crc kubenswrapper[4727]: I1121 20:07:47.137116 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:47Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:47 crc kubenswrapper[4727]: I1121 20:07:47.204317 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:47 crc kubenswrapper[4727]: I1121 
20:07:47.204379 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:47 crc kubenswrapper[4727]: I1121 20:07:47.204394 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:47 crc kubenswrapper[4727]: I1121 20:07:47.204421 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:47 crc kubenswrapper[4727]: I1121 20:07:47.204436 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:47Z","lastTransitionTime":"2025-11-21T20:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:47 crc kubenswrapper[4727]: I1121 20:07:47.306309 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:47 crc kubenswrapper[4727]: I1121 20:07:47.306363 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:47 crc kubenswrapper[4727]: I1121 20:07:47.306375 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:47 crc kubenswrapper[4727]: I1121 20:07:47.306393 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:47 crc kubenswrapper[4727]: I1121 20:07:47.306406 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:47Z","lastTransitionTime":"2025-11-21T20:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:47 crc kubenswrapper[4727]: I1121 20:07:47.408344 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:47 crc kubenswrapper[4727]: I1121 20:07:47.408382 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:47 crc kubenswrapper[4727]: I1121 20:07:47.408390 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:47 crc kubenswrapper[4727]: I1121 20:07:47.408405 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:47 crc kubenswrapper[4727]: I1121 20:07:47.408415 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:47Z","lastTransitionTime":"2025-11-21T20:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:47 crc kubenswrapper[4727]: I1121 20:07:47.498766 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 20:07:47 crc kubenswrapper[4727]: E1121 20:07:47.498998 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 20:07:47 crc kubenswrapper[4727]: I1121 20:07:47.511008 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:47 crc kubenswrapper[4727]: I1121 20:07:47.511055 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:47 crc kubenswrapper[4727]: I1121 20:07:47.511069 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:47 crc kubenswrapper[4727]: I1121 20:07:47.511088 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:47 crc kubenswrapper[4727]: I1121 20:07:47.511101 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:47Z","lastTransitionTime":"2025-11-21T20:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:47 crc kubenswrapper[4727]: I1121 20:07:47.613947 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:47 crc kubenswrapper[4727]: I1121 20:07:47.613996 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:47 crc kubenswrapper[4727]: I1121 20:07:47.614005 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:47 crc kubenswrapper[4727]: I1121 20:07:47.614019 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:47 crc kubenswrapper[4727]: I1121 20:07:47.614028 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:47Z","lastTransitionTime":"2025-11-21T20:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:47 crc kubenswrapper[4727]: I1121 20:07:47.717754 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:47 crc kubenswrapper[4727]: I1121 20:07:47.717825 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:47 crc kubenswrapper[4727]: I1121 20:07:47.717844 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:47 crc kubenswrapper[4727]: I1121 20:07:47.717876 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:47 crc kubenswrapper[4727]: I1121 20:07:47.717900 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:47Z","lastTransitionTime":"2025-11-21T20:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:47 crc kubenswrapper[4727]: I1121 20:07:47.820216 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:47 crc kubenswrapper[4727]: I1121 20:07:47.820268 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:47 crc kubenswrapper[4727]: I1121 20:07:47.820279 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:47 crc kubenswrapper[4727]: I1121 20:07:47.820297 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:47 crc kubenswrapper[4727]: I1121 20:07:47.820310 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:47Z","lastTransitionTime":"2025-11-21T20:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:47 crc kubenswrapper[4727]: I1121 20:07:47.922384 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:47 crc kubenswrapper[4727]: I1121 20:07:47.922423 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:47 crc kubenswrapper[4727]: I1121 20:07:47.922432 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:47 crc kubenswrapper[4727]: I1121 20:07:47.922447 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:47 crc kubenswrapper[4727]: I1121 20:07:47.922457 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:47Z","lastTransitionTime":"2025-11-21T20:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:48 crc kubenswrapper[4727]: I1121 20:07:48.024805 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:48 crc kubenswrapper[4727]: I1121 20:07:48.024857 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:48 crc kubenswrapper[4727]: I1121 20:07:48.024869 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:48 crc kubenswrapper[4727]: I1121 20:07:48.024889 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:48 crc kubenswrapper[4727]: I1121 20:07:48.024902 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:48Z","lastTransitionTime":"2025-11-21T20:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:48 crc kubenswrapper[4727]: I1121 20:07:48.127822 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:48 crc kubenswrapper[4727]: I1121 20:07:48.127890 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:48 crc kubenswrapper[4727]: I1121 20:07:48.127912 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:48 crc kubenswrapper[4727]: I1121 20:07:48.127941 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:48 crc kubenswrapper[4727]: I1121 20:07:48.127998 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:48Z","lastTransitionTime":"2025-11-21T20:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:48 crc kubenswrapper[4727]: I1121 20:07:48.230514 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:48 crc kubenswrapper[4727]: I1121 20:07:48.230558 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:48 crc kubenswrapper[4727]: I1121 20:07:48.230569 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:48 crc kubenswrapper[4727]: I1121 20:07:48.230584 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:48 crc kubenswrapper[4727]: I1121 20:07:48.230596 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:48Z","lastTransitionTime":"2025-11-21T20:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:48 crc kubenswrapper[4727]: I1121 20:07:48.333003 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:48 crc kubenswrapper[4727]: I1121 20:07:48.333050 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:48 crc kubenswrapper[4727]: I1121 20:07:48.333061 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:48 crc kubenswrapper[4727]: I1121 20:07:48.333077 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:48 crc kubenswrapper[4727]: I1121 20:07:48.333089 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:48Z","lastTransitionTime":"2025-11-21T20:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:48 crc kubenswrapper[4727]: I1121 20:07:48.435019 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:48 crc kubenswrapper[4727]: I1121 20:07:48.435047 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:48 crc kubenswrapper[4727]: I1121 20:07:48.435055 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:48 crc kubenswrapper[4727]: I1121 20:07:48.435068 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:48 crc kubenswrapper[4727]: I1121 20:07:48.435077 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:48Z","lastTransitionTime":"2025-11-21T20:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:48 crc kubenswrapper[4727]: I1121 20:07:48.498659 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 20:07:48 crc kubenswrapper[4727]: I1121 20:07:48.498656 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rs9rv" Nov 21 20:07:48 crc kubenswrapper[4727]: I1121 20:07:48.498674 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 20:07:48 crc kubenswrapper[4727]: E1121 20:07:48.498791 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 20:07:48 crc kubenswrapper[4727]: E1121 20:07:48.499103 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rs9rv" podUID="f8318f96-4402-4567-a432-6cf3897e218d" Nov 21 20:07:48 crc kubenswrapper[4727]: E1121 20:07:48.499136 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 20:07:48 crc kubenswrapper[4727]: I1121 20:07:48.537776 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:48 crc kubenswrapper[4727]: I1121 20:07:48.537818 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:48 crc kubenswrapper[4727]: I1121 20:07:48.537827 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:48 crc kubenswrapper[4727]: I1121 20:07:48.537840 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:48 crc kubenswrapper[4727]: I1121 20:07:48.537850 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:48Z","lastTransitionTime":"2025-11-21T20:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:48 crc kubenswrapper[4727]: I1121 20:07:48.640724 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:48 crc kubenswrapper[4727]: I1121 20:07:48.640775 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:48 crc kubenswrapper[4727]: I1121 20:07:48.640787 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:48 crc kubenswrapper[4727]: I1121 20:07:48.640810 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:48 crc kubenswrapper[4727]: I1121 20:07:48.640824 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:48Z","lastTransitionTime":"2025-11-21T20:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:48 crc kubenswrapper[4727]: I1121 20:07:48.744542 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:48 crc kubenswrapper[4727]: I1121 20:07:48.744626 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:48 crc kubenswrapper[4727]: I1121 20:07:48.744654 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:48 crc kubenswrapper[4727]: I1121 20:07:48.744689 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:48 crc kubenswrapper[4727]: I1121 20:07:48.744708 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:48Z","lastTransitionTime":"2025-11-21T20:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:48 crc kubenswrapper[4727]: I1121 20:07:48.847889 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:48 crc kubenswrapper[4727]: I1121 20:07:48.848067 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:48 crc kubenswrapper[4727]: I1121 20:07:48.848101 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:48 crc kubenswrapper[4727]: I1121 20:07:48.848140 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:48 crc kubenswrapper[4727]: I1121 20:07:48.848164 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:48Z","lastTransitionTime":"2025-11-21T20:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:48 crc kubenswrapper[4727]: I1121 20:07:48.951436 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:48 crc kubenswrapper[4727]: I1121 20:07:48.951501 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:48 crc kubenswrapper[4727]: I1121 20:07:48.951518 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:48 crc kubenswrapper[4727]: I1121 20:07:48.951544 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:48 crc kubenswrapper[4727]: I1121 20:07:48.951565 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:48Z","lastTransitionTime":"2025-11-21T20:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:49 crc kubenswrapper[4727]: I1121 20:07:49.054913 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:49 crc kubenswrapper[4727]: I1121 20:07:49.055042 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:49 crc kubenswrapper[4727]: I1121 20:07:49.055069 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:49 crc kubenswrapper[4727]: I1121 20:07:49.055098 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:49 crc kubenswrapper[4727]: I1121 20:07:49.055116 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:49Z","lastTransitionTime":"2025-11-21T20:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:49 crc kubenswrapper[4727]: I1121 20:07:49.157209 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:49 crc kubenswrapper[4727]: I1121 20:07:49.157235 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:49 crc kubenswrapper[4727]: I1121 20:07:49.164509 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:49 crc kubenswrapper[4727]: I1121 20:07:49.164549 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:49 crc kubenswrapper[4727]: I1121 20:07:49.164563 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:49Z","lastTransitionTime":"2025-11-21T20:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:49 crc kubenswrapper[4727]: I1121 20:07:49.271709 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:49 crc kubenswrapper[4727]: I1121 20:07:49.271792 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:49 crc kubenswrapper[4727]: I1121 20:07:49.271803 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:49 crc kubenswrapper[4727]: I1121 20:07:49.271819 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:49 crc kubenswrapper[4727]: I1121 20:07:49.272166 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:49Z","lastTransitionTime":"2025-11-21T20:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:49 crc kubenswrapper[4727]: I1121 20:07:49.374327 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:49 crc kubenswrapper[4727]: I1121 20:07:49.374379 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:49 crc kubenswrapper[4727]: I1121 20:07:49.374391 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:49 crc kubenswrapper[4727]: I1121 20:07:49.374405 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:49 crc kubenswrapper[4727]: I1121 20:07:49.374416 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:49Z","lastTransitionTime":"2025-11-21T20:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:49 crc kubenswrapper[4727]: I1121 20:07:49.477117 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:49 crc kubenswrapper[4727]: I1121 20:07:49.477348 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:49 crc kubenswrapper[4727]: I1121 20:07:49.477379 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:49 crc kubenswrapper[4727]: I1121 20:07:49.477404 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:49 crc kubenswrapper[4727]: I1121 20:07:49.477419 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:49Z","lastTransitionTime":"2025-11-21T20:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:49 crc kubenswrapper[4727]: I1121 20:07:49.503380 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 20:07:49 crc kubenswrapper[4727]: E1121 20:07:49.503564 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 20:07:49 crc kubenswrapper[4727]: I1121 20:07:49.580178 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:49 crc kubenswrapper[4727]: I1121 20:07:49.580237 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:49 crc kubenswrapper[4727]: I1121 20:07:49.580262 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:49 crc kubenswrapper[4727]: I1121 20:07:49.580290 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:49 crc kubenswrapper[4727]: I1121 20:07:49.580310 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:49Z","lastTransitionTime":"2025-11-21T20:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:49 crc kubenswrapper[4727]: I1121 20:07:49.683567 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:49 crc kubenswrapper[4727]: I1121 20:07:49.683677 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:49 crc kubenswrapper[4727]: I1121 20:07:49.683742 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:49 crc kubenswrapper[4727]: I1121 20:07:49.683769 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:49 crc kubenswrapper[4727]: I1121 20:07:49.683787 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:49Z","lastTransitionTime":"2025-11-21T20:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:49 crc kubenswrapper[4727]: I1121 20:07:49.786153 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:49 crc kubenswrapper[4727]: I1121 20:07:49.786193 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:49 crc kubenswrapper[4727]: I1121 20:07:49.786206 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:49 crc kubenswrapper[4727]: I1121 20:07:49.786225 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:49 crc kubenswrapper[4727]: I1121 20:07:49.786239 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:49Z","lastTransitionTime":"2025-11-21T20:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:49 crc kubenswrapper[4727]: I1121 20:07:49.888696 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:49 crc kubenswrapper[4727]: I1121 20:07:49.888760 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:49 crc kubenswrapper[4727]: I1121 20:07:49.888771 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:49 crc kubenswrapper[4727]: I1121 20:07:49.888786 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:49 crc kubenswrapper[4727]: I1121 20:07:49.888799 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:49Z","lastTransitionTime":"2025-11-21T20:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:49 crc kubenswrapper[4727]: I1121 20:07:49.991786 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:49 crc kubenswrapper[4727]: I1121 20:07:49.991826 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:49 crc kubenswrapper[4727]: I1121 20:07:49.991836 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:49 crc kubenswrapper[4727]: I1121 20:07:49.991851 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:49 crc kubenswrapper[4727]: I1121 20:07:49.991862 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:49Z","lastTransitionTime":"2025-11-21T20:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:50 crc kubenswrapper[4727]: I1121 20:07:50.094646 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:50 crc kubenswrapper[4727]: I1121 20:07:50.094692 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:50 crc kubenswrapper[4727]: I1121 20:07:50.094703 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:50 crc kubenswrapper[4727]: I1121 20:07:50.094722 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:50 crc kubenswrapper[4727]: I1121 20:07:50.094733 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:50Z","lastTransitionTime":"2025-11-21T20:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:50 crc kubenswrapper[4727]: I1121 20:07:50.198219 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:50 crc kubenswrapper[4727]: I1121 20:07:50.198294 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:50 crc kubenswrapper[4727]: I1121 20:07:50.198321 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:50 crc kubenswrapper[4727]: I1121 20:07:50.198357 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:50 crc kubenswrapper[4727]: I1121 20:07:50.198383 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:50Z","lastTransitionTime":"2025-11-21T20:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:50 crc kubenswrapper[4727]: I1121 20:07:50.301303 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:50 crc kubenswrapper[4727]: I1121 20:07:50.301369 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:50 crc kubenswrapper[4727]: I1121 20:07:50.301393 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:50 crc kubenswrapper[4727]: I1121 20:07:50.301424 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:50 crc kubenswrapper[4727]: I1121 20:07:50.301445 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:50Z","lastTransitionTime":"2025-11-21T20:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:50 crc kubenswrapper[4727]: I1121 20:07:50.403927 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:50 crc kubenswrapper[4727]: I1121 20:07:50.404003 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:50 crc kubenswrapper[4727]: I1121 20:07:50.404015 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:50 crc kubenswrapper[4727]: I1121 20:07:50.404034 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:50 crc kubenswrapper[4727]: I1121 20:07:50.404048 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:50Z","lastTransitionTime":"2025-11-21T20:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:50 crc kubenswrapper[4727]: I1121 20:07:50.498159 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rs9rv" Nov 21 20:07:50 crc kubenswrapper[4727]: I1121 20:07:50.498210 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 20:07:50 crc kubenswrapper[4727]: I1121 20:07:50.498308 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 20:07:50 crc kubenswrapper[4727]: E1121 20:07:50.498540 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rs9rv" podUID="f8318f96-4402-4567-a432-6cf3897e218d" Nov 21 20:07:50 crc kubenswrapper[4727]: E1121 20:07:50.498638 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 20:07:50 crc kubenswrapper[4727]: E1121 20:07:50.498798 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 20:07:50 crc kubenswrapper[4727]: I1121 20:07:50.506257 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:50 crc kubenswrapper[4727]: I1121 20:07:50.506311 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:50 crc kubenswrapper[4727]: I1121 20:07:50.506330 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:50 crc kubenswrapper[4727]: I1121 20:07:50.506355 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:50 crc kubenswrapper[4727]: I1121 20:07:50.506373 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:50Z","lastTransitionTime":"2025-11-21T20:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:50 crc kubenswrapper[4727]: I1121 20:07:50.609335 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:50 crc kubenswrapper[4727]: I1121 20:07:50.609384 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:50 crc kubenswrapper[4727]: I1121 20:07:50.609399 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:50 crc kubenswrapper[4727]: I1121 20:07:50.609420 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:50 crc kubenswrapper[4727]: I1121 20:07:50.609432 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:50Z","lastTransitionTime":"2025-11-21T20:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:50 crc kubenswrapper[4727]: I1121 20:07:50.711287 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:50 crc kubenswrapper[4727]: I1121 20:07:50.711329 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:50 crc kubenswrapper[4727]: I1121 20:07:50.711338 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:50 crc kubenswrapper[4727]: I1121 20:07:50.711354 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:50 crc kubenswrapper[4727]: I1121 20:07:50.711363 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:50Z","lastTransitionTime":"2025-11-21T20:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:50 crc kubenswrapper[4727]: I1121 20:07:50.815151 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:50 crc kubenswrapper[4727]: I1121 20:07:50.815377 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:50 crc kubenswrapper[4727]: I1121 20:07:50.815421 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:50 crc kubenswrapper[4727]: I1121 20:07:50.815457 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:50 crc kubenswrapper[4727]: I1121 20:07:50.815506 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:50Z","lastTransitionTime":"2025-11-21T20:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:50 crc kubenswrapper[4727]: I1121 20:07:50.918640 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:50 crc kubenswrapper[4727]: I1121 20:07:50.918718 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:50 crc kubenswrapper[4727]: I1121 20:07:50.918731 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:50 crc kubenswrapper[4727]: I1121 20:07:50.918751 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:50 crc kubenswrapper[4727]: I1121 20:07:50.918764 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:50Z","lastTransitionTime":"2025-11-21T20:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:51 crc kubenswrapper[4727]: I1121 20:07:51.020997 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:51 crc kubenswrapper[4727]: I1121 20:07:51.021063 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:51 crc kubenswrapper[4727]: I1121 20:07:51.021085 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:51 crc kubenswrapper[4727]: I1121 20:07:51.021112 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:51 crc kubenswrapper[4727]: I1121 20:07:51.021130 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:51Z","lastTransitionTime":"2025-11-21T20:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:51 crc kubenswrapper[4727]: I1121 20:07:51.124218 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:51 crc kubenswrapper[4727]: I1121 20:07:51.124293 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:51 crc kubenswrapper[4727]: I1121 20:07:51.124320 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:51 crc kubenswrapper[4727]: I1121 20:07:51.124352 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:51 crc kubenswrapper[4727]: I1121 20:07:51.124374 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:51Z","lastTransitionTime":"2025-11-21T20:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:51 crc kubenswrapper[4727]: I1121 20:07:51.227666 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:51 crc kubenswrapper[4727]: I1121 20:07:51.227729 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:51 crc kubenswrapper[4727]: I1121 20:07:51.227748 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:51 crc kubenswrapper[4727]: I1121 20:07:51.227775 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:51 crc kubenswrapper[4727]: I1121 20:07:51.227795 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:51Z","lastTransitionTime":"2025-11-21T20:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:51 crc kubenswrapper[4727]: I1121 20:07:51.331130 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:51 crc kubenswrapper[4727]: I1121 20:07:51.331179 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:51 crc kubenswrapper[4727]: I1121 20:07:51.331189 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:51 crc kubenswrapper[4727]: I1121 20:07:51.331203 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:51 crc kubenswrapper[4727]: I1121 20:07:51.331213 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:51Z","lastTransitionTime":"2025-11-21T20:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:51 crc kubenswrapper[4727]: I1121 20:07:51.434581 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:51 crc kubenswrapper[4727]: I1121 20:07:51.434639 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:51 crc kubenswrapper[4727]: I1121 20:07:51.434656 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:51 crc kubenswrapper[4727]: I1121 20:07:51.434716 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:51 crc kubenswrapper[4727]: I1121 20:07:51.434736 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:51Z","lastTransitionTime":"2025-11-21T20:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:51 crc kubenswrapper[4727]: I1121 20:07:51.499169 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 20:07:51 crc kubenswrapper[4727]: E1121 20:07:51.499322 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 20:07:51 crc kubenswrapper[4727]: I1121 20:07:51.537527 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:51 crc kubenswrapper[4727]: I1121 20:07:51.537562 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:51 crc kubenswrapper[4727]: I1121 20:07:51.537571 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:51 crc kubenswrapper[4727]: I1121 20:07:51.537585 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:51 crc kubenswrapper[4727]: I1121 20:07:51.537594 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:51Z","lastTransitionTime":"2025-11-21T20:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:51 crc kubenswrapper[4727]: I1121 20:07:51.640324 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:51 crc kubenswrapper[4727]: I1121 20:07:51.640446 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:51 crc kubenswrapper[4727]: I1121 20:07:51.640468 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:51 crc kubenswrapper[4727]: I1121 20:07:51.640491 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:51 crc kubenswrapper[4727]: I1121 20:07:51.640508 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:51Z","lastTransitionTime":"2025-11-21T20:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:51 crc kubenswrapper[4727]: I1121 20:07:51.743756 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:51 crc kubenswrapper[4727]: I1121 20:07:51.743832 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:51 crc kubenswrapper[4727]: I1121 20:07:51.743857 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:51 crc kubenswrapper[4727]: I1121 20:07:51.743891 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:51 crc kubenswrapper[4727]: I1121 20:07:51.743914 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:51Z","lastTransitionTime":"2025-11-21T20:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:51 crc kubenswrapper[4727]: I1121 20:07:51.846824 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:51 crc kubenswrapper[4727]: I1121 20:07:51.846882 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:51 crc kubenswrapper[4727]: I1121 20:07:51.846900 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:51 crc kubenswrapper[4727]: I1121 20:07:51.846923 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:51 crc kubenswrapper[4727]: I1121 20:07:51.846939 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:51Z","lastTransitionTime":"2025-11-21T20:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:51 crc kubenswrapper[4727]: I1121 20:07:51.949485 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:51 crc kubenswrapper[4727]: I1121 20:07:51.949521 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:51 crc kubenswrapper[4727]: I1121 20:07:51.949529 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:51 crc kubenswrapper[4727]: I1121 20:07:51.949573 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:51 crc kubenswrapper[4727]: I1121 20:07:51.949583 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:51Z","lastTransitionTime":"2025-11-21T20:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.052902 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.052933 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.052942 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.052979 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.052988 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:52Z","lastTransitionTime":"2025-11-21T20:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.155499 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.155552 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.155571 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.155592 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.155605 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:52Z","lastTransitionTime":"2025-11-21T20:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.258838 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.258891 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.258906 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.258926 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.258940 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:52Z","lastTransitionTime":"2025-11-21T20:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.291924 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.291982 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.291991 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.292006 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.292016 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:52Z","lastTransitionTime":"2025-11-21T20:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:52 crc kubenswrapper[4727]: E1121 20:07:52.304567 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"32d79a59-978c-42a8-bb0b-33b3c3206f66\\\",\\\"systemUUID\\\":\\\"b99fd01f-0947-456f-ae40-db84d60b2190\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:52Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.308573 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.308616 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.308628 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.308644 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.308655 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:52Z","lastTransitionTime":"2025-11-21T20:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:52 crc kubenswrapper[4727]: E1121 20:07:52.322346 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"32d79a59-978c-42a8-bb0b-33b3c3206f66\\\",\\\"systemUUID\\\":\\\"b99fd01f-0947-456f-ae40-db84d60b2190\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:52Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.326164 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.326234 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.326414 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.326488 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.326566 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:52Z","lastTransitionTime":"2025-11-21T20:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:52 crc kubenswrapper[4727]: E1121 20:07:52.340189 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"32d79a59-978c-42a8-bb0b-33b3c3206f66\\\",\\\"systemUUID\\\":\\\"b99fd01f-0947-456f-ae40-db84d60b2190\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:52Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.344855 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.344918 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.344935 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.345422 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.345486 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:52Z","lastTransitionTime":"2025-11-21T20:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:52 crc kubenswrapper[4727]: E1121 20:07:52.357946 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"32d79a59-978c-42a8-bb0b-33b3c3206f66\\\",\\\"systemUUID\\\":\\\"b99fd01f-0947-456f-ae40-db84d60b2190\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:52Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.362136 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.362170 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.362177 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.362194 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.362203 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:52Z","lastTransitionTime":"2025-11-21T20:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:52 crc kubenswrapper[4727]: E1121 20:07:52.372630 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"32d79a59-978c-42a8-bb0b-33b3c3206f66\\\",\\\"systemUUID\\\":\\\"b99fd01f-0947-456f-ae40-db84d60b2190\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:52Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:52 crc kubenswrapper[4727]: E1121 20:07:52.372784 4727 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.374119 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.374154 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.374167 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.374185 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.374199 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:52Z","lastTransitionTime":"2025-11-21T20:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.475998 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.476038 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.476052 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.476068 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.476080 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:52Z","lastTransitionTime":"2025-11-21T20:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.498866 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.498873 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.498873 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rs9rv" Nov 21 20:07:52 crc kubenswrapper[4727]: E1121 20:07:52.499143 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 20:07:52 crc kubenswrapper[4727]: E1121 20:07:52.499175 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rs9rv" podUID="f8318f96-4402-4567-a432-6cf3897e218d" Nov 21 20:07:52 crc kubenswrapper[4727]: E1121 20:07:52.499092 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.578800 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.578865 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.578882 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.578907 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.578925 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:52Z","lastTransitionTime":"2025-11-21T20:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.681490 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.681530 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.681543 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.681588 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.681600 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:52Z","lastTransitionTime":"2025-11-21T20:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.784770 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.784811 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.784821 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.784836 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.784846 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:52Z","lastTransitionTime":"2025-11-21T20:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.887655 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.887702 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.887716 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.887732 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.887746 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:52Z","lastTransitionTime":"2025-11-21T20:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.990889 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.990998 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.991019 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.991043 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:52 crc kubenswrapper[4727]: I1121 20:07:52.991060 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:52Z","lastTransitionTime":"2025-11-21T20:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:53 crc kubenswrapper[4727]: I1121 20:07:53.092842 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:53 crc kubenswrapper[4727]: I1121 20:07:53.092882 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:53 crc kubenswrapper[4727]: I1121 20:07:53.092894 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:53 crc kubenswrapper[4727]: I1121 20:07:53.092910 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:53 crc kubenswrapper[4727]: I1121 20:07:53.092922 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:53Z","lastTransitionTime":"2025-11-21T20:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:53 crc kubenswrapper[4727]: I1121 20:07:53.195743 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:53 crc kubenswrapper[4727]: I1121 20:07:53.195789 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:53 crc kubenswrapper[4727]: I1121 20:07:53.195801 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:53 crc kubenswrapper[4727]: I1121 20:07:53.195822 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:53 crc kubenswrapper[4727]: I1121 20:07:53.195836 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:53Z","lastTransitionTime":"2025-11-21T20:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:53 crc kubenswrapper[4727]: I1121 20:07:53.297772 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:53 crc kubenswrapper[4727]: I1121 20:07:53.297829 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:53 crc kubenswrapper[4727]: I1121 20:07:53.297839 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:53 crc kubenswrapper[4727]: I1121 20:07:53.297855 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:53 crc kubenswrapper[4727]: I1121 20:07:53.297865 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:53Z","lastTransitionTime":"2025-11-21T20:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:53 crc kubenswrapper[4727]: I1121 20:07:53.400749 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:53 crc kubenswrapper[4727]: I1121 20:07:53.400800 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:53 crc kubenswrapper[4727]: I1121 20:07:53.400814 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:53 crc kubenswrapper[4727]: I1121 20:07:53.400830 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:53 crc kubenswrapper[4727]: I1121 20:07:53.400840 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:53Z","lastTransitionTime":"2025-11-21T20:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:53 crc kubenswrapper[4727]: I1121 20:07:53.498480 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 20:07:53 crc kubenswrapper[4727]: E1121 20:07:53.498666 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 20:07:53 crc kubenswrapper[4727]: I1121 20:07:53.502331 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:53 crc kubenswrapper[4727]: I1121 20:07:53.502380 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:53 crc kubenswrapper[4727]: I1121 20:07:53.502397 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:53 crc kubenswrapper[4727]: I1121 20:07:53.502420 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:53 crc kubenswrapper[4727]: I1121 20:07:53.502438 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:53Z","lastTransitionTime":"2025-11-21T20:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:53 crc kubenswrapper[4727]: I1121 20:07:53.605876 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:53 crc kubenswrapper[4727]: I1121 20:07:53.605953 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:53 crc kubenswrapper[4727]: I1121 20:07:53.606035 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:53 crc kubenswrapper[4727]: I1121 20:07:53.606059 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:53 crc kubenswrapper[4727]: I1121 20:07:53.606075 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:53Z","lastTransitionTime":"2025-11-21T20:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:53 crc kubenswrapper[4727]: I1121 20:07:53.707975 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:53 crc kubenswrapper[4727]: I1121 20:07:53.708005 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:53 crc kubenswrapper[4727]: I1121 20:07:53.708013 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:53 crc kubenswrapper[4727]: I1121 20:07:53.708028 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:53 crc kubenswrapper[4727]: I1121 20:07:53.708038 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:53Z","lastTransitionTime":"2025-11-21T20:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:53 crc kubenswrapper[4727]: I1121 20:07:53.810673 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:53 crc kubenswrapper[4727]: I1121 20:07:53.810711 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:53 crc kubenswrapper[4727]: I1121 20:07:53.810721 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:53 crc kubenswrapper[4727]: I1121 20:07:53.810734 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:53 crc kubenswrapper[4727]: I1121 20:07:53.810743 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:53Z","lastTransitionTime":"2025-11-21T20:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:53 crc kubenswrapper[4727]: I1121 20:07:53.912622 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:53 crc kubenswrapper[4727]: I1121 20:07:53.912666 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:53 crc kubenswrapper[4727]: I1121 20:07:53.912675 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:53 crc kubenswrapper[4727]: I1121 20:07:53.912688 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:53 crc kubenswrapper[4727]: I1121 20:07:53.912698 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:53Z","lastTransitionTime":"2025-11-21T20:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:54 crc kubenswrapper[4727]: I1121 20:07:54.014708 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:54 crc kubenswrapper[4727]: I1121 20:07:54.014761 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:54 crc kubenswrapper[4727]: I1121 20:07:54.014772 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:54 crc kubenswrapper[4727]: I1121 20:07:54.014788 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:54 crc kubenswrapper[4727]: I1121 20:07:54.014800 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:54Z","lastTransitionTime":"2025-11-21T20:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:54 crc kubenswrapper[4727]: I1121 20:07:54.117289 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:54 crc kubenswrapper[4727]: I1121 20:07:54.117327 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:54 crc kubenswrapper[4727]: I1121 20:07:54.117336 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:54 crc kubenswrapper[4727]: I1121 20:07:54.117351 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:54 crc kubenswrapper[4727]: I1121 20:07:54.117359 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:54Z","lastTransitionTime":"2025-11-21T20:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:54 crc kubenswrapper[4727]: I1121 20:07:54.219478 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:54 crc kubenswrapper[4727]: I1121 20:07:54.219518 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:54 crc kubenswrapper[4727]: I1121 20:07:54.219530 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:54 crc kubenswrapper[4727]: I1121 20:07:54.219545 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:54 crc kubenswrapper[4727]: I1121 20:07:54.219557 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:54Z","lastTransitionTime":"2025-11-21T20:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:54 crc kubenswrapper[4727]: I1121 20:07:54.321675 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:54 crc kubenswrapper[4727]: I1121 20:07:54.321711 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:54 crc kubenswrapper[4727]: I1121 20:07:54.321720 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:54 crc kubenswrapper[4727]: I1121 20:07:54.321734 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:54 crc kubenswrapper[4727]: I1121 20:07:54.321743 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:54Z","lastTransitionTime":"2025-11-21T20:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:54 crc kubenswrapper[4727]: I1121 20:07:54.424316 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:54 crc kubenswrapper[4727]: I1121 20:07:54.424359 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:54 crc kubenswrapper[4727]: I1121 20:07:54.424369 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:54 crc kubenswrapper[4727]: I1121 20:07:54.424385 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:54 crc kubenswrapper[4727]: I1121 20:07:54.424397 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:54Z","lastTransitionTime":"2025-11-21T20:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:54 crc kubenswrapper[4727]: I1121 20:07:54.498846 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rs9rv" Nov 21 20:07:54 crc kubenswrapper[4727]: E1121 20:07:54.499006 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rs9rv" podUID="f8318f96-4402-4567-a432-6cf3897e218d" Nov 21 20:07:54 crc kubenswrapper[4727]: I1121 20:07:54.499244 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 20:07:54 crc kubenswrapper[4727]: I1121 20:07:54.499252 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 20:07:54 crc kubenswrapper[4727]: E1121 20:07:54.499421 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 20:07:54 crc kubenswrapper[4727]: E1121 20:07:54.499529 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 20:07:54 crc kubenswrapper[4727]: I1121 20:07:54.527314 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:54 crc kubenswrapper[4727]: I1121 20:07:54.527357 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:54 crc kubenswrapper[4727]: I1121 20:07:54.527370 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:54 crc kubenswrapper[4727]: I1121 20:07:54.527387 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:54 crc kubenswrapper[4727]: I1121 20:07:54.527398 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:54Z","lastTransitionTime":"2025-11-21T20:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:54 crc kubenswrapper[4727]: I1121 20:07:54.629892 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:54 crc kubenswrapper[4727]: I1121 20:07:54.630011 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:54 crc kubenswrapper[4727]: I1121 20:07:54.630055 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:54 crc kubenswrapper[4727]: I1121 20:07:54.630079 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:54 crc kubenswrapper[4727]: I1121 20:07:54.630098 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:54Z","lastTransitionTime":"2025-11-21T20:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:54 crc kubenswrapper[4727]: I1121 20:07:54.732182 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:54 crc kubenswrapper[4727]: I1121 20:07:54.732218 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:54 crc kubenswrapper[4727]: I1121 20:07:54.732229 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:54 crc kubenswrapper[4727]: I1121 20:07:54.732243 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:54 crc kubenswrapper[4727]: I1121 20:07:54.732252 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:54Z","lastTransitionTime":"2025-11-21T20:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:54 crc kubenswrapper[4727]: I1121 20:07:54.835438 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:54 crc kubenswrapper[4727]: I1121 20:07:54.835503 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:54 crc kubenswrapper[4727]: I1121 20:07:54.835522 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:54 crc kubenswrapper[4727]: I1121 20:07:54.835556 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:54 crc kubenswrapper[4727]: I1121 20:07:54.835567 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:54Z","lastTransitionTime":"2025-11-21T20:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:54 crc kubenswrapper[4727]: I1121 20:07:54.945253 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:54 crc kubenswrapper[4727]: I1121 20:07:54.945288 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:54 crc kubenswrapper[4727]: I1121 20:07:54.945299 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:54 crc kubenswrapper[4727]: I1121 20:07:54.945314 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:54 crc kubenswrapper[4727]: I1121 20:07:54.945325 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:54Z","lastTransitionTime":"2025-11-21T20:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:55 crc kubenswrapper[4727]: I1121 20:07:55.048302 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:55 crc kubenswrapper[4727]: I1121 20:07:55.048369 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:55 crc kubenswrapper[4727]: I1121 20:07:55.048391 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:55 crc kubenswrapper[4727]: I1121 20:07:55.048420 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:55 crc kubenswrapper[4727]: I1121 20:07:55.048443 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:55Z","lastTransitionTime":"2025-11-21T20:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:55 crc kubenswrapper[4727]: I1121 20:07:55.151468 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:55 crc kubenswrapper[4727]: I1121 20:07:55.151518 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:55 crc kubenswrapper[4727]: I1121 20:07:55.151532 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:55 crc kubenswrapper[4727]: I1121 20:07:55.151549 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:55 crc kubenswrapper[4727]: I1121 20:07:55.151562 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:55Z","lastTransitionTime":"2025-11-21T20:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:55 crc kubenswrapper[4727]: I1121 20:07:55.255013 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:55 crc kubenswrapper[4727]: I1121 20:07:55.255087 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:55 crc kubenswrapper[4727]: I1121 20:07:55.255105 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:55 crc kubenswrapper[4727]: I1121 20:07:55.255129 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:55 crc kubenswrapper[4727]: I1121 20:07:55.255148 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:55Z","lastTransitionTime":"2025-11-21T20:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:55 crc kubenswrapper[4727]: I1121 20:07:55.357416 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:55 crc kubenswrapper[4727]: I1121 20:07:55.357460 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:55 crc kubenswrapper[4727]: I1121 20:07:55.357475 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:55 crc kubenswrapper[4727]: I1121 20:07:55.357496 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:55 crc kubenswrapper[4727]: I1121 20:07:55.357511 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:55Z","lastTransitionTime":"2025-11-21T20:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:55 crc kubenswrapper[4727]: I1121 20:07:55.460310 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:55 crc kubenswrapper[4727]: I1121 20:07:55.460351 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:55 crc kubenswrapper[4727]: I1121 20:07:55.460363 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:55 crc kubenswrapper[4727]: I1121 20:07:55.460378 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:55 crc kubenswrapper[4727]: I1121 20:07:55.460387 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:55Z","lastTransitionTime":"2025-11-21T20:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:55 crc kubenswrapper[4727]: I1121 20:07:55.498584 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 20:07:55 crc kubenswrapper[4727]: E1121 20:07:55.498774 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 20:07:55 crc kubenswrapper[4727]: I1121 20:07:55.521673 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3ca80-2251-484d-9025-61aed82e16d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://539bc94dd8a3fd65aeae82170fe889bc42c364cfcf52005760432629911dfa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-cert
s\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bd3db39641919d1b93ecc771bce800100e7a8b6200d6fc606dbdc50179b8255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08a09f48855f360ffe892f48d4efa156a6774a0be33504e0d664f6a90f1144d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7c202de2c7a6ec653f0c7f816dcbf64fbf6a5c9a779d29403a29a1003ad201\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\
\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:55Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:55 crc kubenswrapper[4727]: I1121 20:07:55.537483 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07b27c06-2d01-404f-9fb9-a72935b30494\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07188732e38e314d39b245e64c5ff2882e3f9622a4f4361b7b1b2730fa5693d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9541542fc2e4eda864eda99995554289d32061e0488f0bf4af04411d3fba3d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9032fe5d80139b864a703f431942616401793036b9960ff2c001b76a3d850b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://630bac7109e66956147024ba0b3c44511098cd3af401507803e4a31afaedbe22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://630bac7109e66956147024ba0b3c44511098cd3af401507803e4a31afaedbe22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:55Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:55 crc kubenswrapper[4727]: I1121 20:07:55.551448 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:55Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:55 crc kubenswrapper[4727]: I1121 20:07:55.562589 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:55 crc kubenswrapper[4727]: I1121 20:07:55.562628 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:55 crc kubenswrapper[4727]: I1121 20:07:55.562637 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:55 crc 
kubenswrapper[4727]: I1121 20:07:55.562652 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:55 crc kubenswrapper[4727]: I1121 20:07:55.562662 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:55Z","lastTransitionTime":"2025-11-21T20:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:55 crc kubenswrapper[4727]: I1121 20:07:55.563369 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rs9rv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8318f96-4402-4567-a432-6cf3897e218d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8bcnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8bcnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rs9rv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:55Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:55 crc 
kubenswrapper[4727]: I1121 20:07:55.582453 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fae7d81bbd4cc7c05885d66b30b4a4f5422c74aecc5439700c1fe7eb3c28a4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://060f5b533d83fc7cf02d24bf7b12e2a4972ce1d7ed4e62c4c7788467ae2229c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f04b4d4857a8309f30f22774c8a2faeb752998b094d70898a986e04038771b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f02a4cae48dc4ac9d7da3d367fe5137fb012ee8a28f45c9cf4af098f1d93ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf216dac56f09909db88c9ea0ef9d0d4fb8bc2e63a02fd5631c7d4369080f88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1909b9178d7fb508200d34ad33f7d2c2aa2ae9756b15c17fd7c49bb019cc65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1aea392a365aa66ac35b5cd36cd1aad398890771de41b8dc248e2f98a171f4ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1aea392a365aa66ac35b5cd36cd1aad398890771de41b8dc248e2f98a171f4ca\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T20:07:45Z\\\",\\\"message\\\":\\\"eate admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to 
set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:45Z is after 2025-08-24T17:21:41Z]\\\\nI1121 20:07:45.273202 6710 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-etcd/etcd_TCP_cluster\\\\\\\", UUID:\\\\\\\"de17f0de-cfb1-4534-bb42-c40f5e050c73\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd/etcd\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshif\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T20:07:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tfd4j_openshift-ovn-kubernetes(70d2ca13-a8f7-43dc-8ad0-142d99ccde18)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68979c9b5ab69852e6cc4293ef0e77454dcbb7fdaeb61b7b6af5c1a3ac101e47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5fa8be5e6ea2c20dd597fb7ef2fdd3355045b310d1a99bb1a89a8207e505577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5fa8be5e6ea2c20dd
597fb7ef2fdd3355045b310d1a99bb1a89a8207e505577\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tfd4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:55Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:55 crc kubenswrapper[4727]: I1121 20:07:55.599193 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7444p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75a43503-e538-4964-9789-322839cc4c48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d61ca28ee5e9b8e1b56afa45b32036c825b6299dc855cb7ce38dde4649c2371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9w57k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7444p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:55Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:55 crc kubenswrapper[4727]: I1121 20:07:55.615720 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-drnwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c073d13-aa4f-401c-9684-36980fe94cb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c68719b36c0e9c62dcfa24171d18f9ad5a578884b8ca0da8b5827d6656e890c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hssmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec82846684441a347ddef8d336b04c6a776dbe2a5ed7894b813378790e12f7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hssmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:07:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-drnwx\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:55Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:55 crc kubenswrapper[4727]: I1121 20:07:55.632495 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffd614574e076bedcf841406bd5accc4387a8f2dbc55cbfbb5a1064e69a861d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secr
ets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:55Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:55 crc kubenswrapper[4727]: I1121 20:07:55.644830 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b974a1dc77049e8c632b0acbe8dedf6d3f391811208d0d1084c75e4fdb0908dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:55Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:55 crc kubenswrapper[4727]: I1121 20:07:55.655439 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ccfbn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bffa327-bdf2-45d4-93ab-40152e82d177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18e9f8fd8f35e2f7821de2d9f9040dbc5160d8f114d1a54d8e0a0dabab520d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ccfbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:55Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:55 crc kubenswrapper[4727]: I1121 20:07:55.665717 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:55 crc kubenswrapper[4727]: I1121 20:07:55.665775 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:55 crc kubenswrapper[4727]: I1121 20:07:55.665790 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:55 crc kubenswrapper[4727]: I1121 20:07:55.665812 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:55 crc kubenswrapper[4727]: I1121 20:07:55.665829 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:55Z","lastTransitionTime":"2025-11-21T20:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:55 crc kubenswrapper[4727]: I1121 20:07:55.669234 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-74crp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db8a4e70-c074-4f90-aebe-444078f3337f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d8ec54f8edeaa80fb39b26f0cd9c0d4b2456c149245b1c6c1585bf28b08ec13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a2b3626be8b73b407cdeae484730c70f4caa2e66e0a815cb8c047f667254092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2b3626be8b73b407cdeae484730c70f4caa2e66e0a815cb8c047f667254092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4360080de9d788a9ac8bad5b4c679aef3367c8c4e06fd00a28ff4b23406cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://8d4360080de9d788a9ac8bad5b4c679aef3367c8c4e06fd00a28ff4b23406cc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68b04ba74eb7a875559096df2051e2d99535444795021756d6072246fdf21ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68b04ba74eb7a875559096df2051e2d99535444795021756d6072246fdf21ee1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4be59a65e9ffdf24c7d6dcaf699e0c86095ab2e7156771e4cca86c3fb264ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab4be59a65e9ffdf24c7d6dcaf699e0c86095ab2e7156771e4cca86c3fb264ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465a44322e7e3ff27479dabcdd1a2ba9102e10c73b75d51d0639b4a8ebeda54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://465a44322e7e3ff27479dabcdd1a2ba9102e10c73b75d51d0639b4a8ebeda54e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea5b2f1f6cf3b490899d65086f99f448f2bdbfd859fa5b283af3fb108479902e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea5b2f1f6cf3b490899d65086f99f448f2bdbfd859fa5b283af3fb108479902e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-74crp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:55Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:55 crc kubenswrapper[4727]: I1121 20:07:55.680176 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:55Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:55 crc kubenswrapper[4727]: I1121 20:07:55.691054 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:55Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:55 crc kubenswrapper[4727]: I1121 20:07:55.703855 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b58aef8f-f223-47d8-a2e6-4a80aeeeec42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db38e6270d9dcedf99c669fa60a075a25d7fd9bb753702551fe5f4610ecaf815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443c4ad2ecea2e9d360b9300cb0c384fb5afc27e
6ac44394f985adbeab354a9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5k2kk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:55Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:55 crc kubenswrapper[4727]: I1121 20:07:55.717184 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7rvdc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07dba644-eb6f-45c3-b373-7a1610c569aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aec58b6ba56dadda6cbc97446bc117681e597f80615120f259760193be269bfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fc26f04e78b8405547a4fa225524347eabce94dfa5b4bf2448db66e36aaf006\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T20:07:42Z\\\",\\\"message\\\":\\\"2025-11-21T20:06:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ba82f7f0-16e3-4136-b30a-f02115adb9bb\\\\n2025-11-21T20:06:57+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ba82f7f0-16e3-4136-b30a-f02115adb9bb to /host/opt/cni/bin/\\\\n2025-11-21T20:06:57Z [verbose] multus-daemon started\\\\n2025-11-21T20:06:57Z [verbose] 
Readiness Indicator file check\\\\n2025-11-21T20:07:42Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7rvdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:55Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:55 crc kubenswrapper[4727]: I1121 20:07:55.727688 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9b31165-7a06-446a-8c9f-7aa7e3f4720e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://333e986e96f39eed917fea4527a9e2bc0dcdfe7824b75aefc2fdc8e747b49300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://138286bcd4ee791d6db84d43810f66356d58b95eab57e8b04c630269683243f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://138286bcd4ee791d6db84d43810f66356d58b95eab57e8b04c630269683243f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:55Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:55 crc kubenswrapper[4727]: I1121 20:07:55.750134 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a1f6ee3-6ebf-4422-91ed-105ccb19af8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de2d5f0603d9b7d189293750a83d5a3957e98f0390659f8c88d35ae1cae6b18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cde1196a3cb3b4d0acbb70ac333211099ec6f26a975e2de7c07d059983024368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1324bc3b6958e18a7c496fdb513726b921435f35dde2064fcaf9a6fca06eb8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://395c324d3a95c173c7dd0c9206831fe07c220fb6ef12f7b25444a120c5183202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6021787d6acdc5734a82ef2ed53add13275c3461c1ffd38219c264e7fca3fab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://363e33e0f6686c6a97437b91ff8d5be211717d527dac5d1bfad6613849c1a8fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://363e33e0f6686c6a97437b91ff8d5be211717d527dac5d1bfad6613849c1a8fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d2632c704cf56d6b416cdcc6213e3b60836d12cfae28af4bd06f99dff56cd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16d2632c704cf56d6b416cdcc6213e3b60836d12cfae28af4bd06f99dff56cd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91dcb1c5c16b4720f4c812b25a87ebd8a0f3a334cc714233b6852cd39322fbf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91dcb1c5c16b4720f4c812b25a87ebd8a0f3a334cc714233b6852cd39322fbf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:55Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:55 crc kubenswrapper[4727]: I1121 20:07:55.765243 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"080ce4fe-3e82-4e15-9340-4fa88a29da04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27c37b29dda7ae2cc145e0770879e433aa03e277121be3b7acfe22d41d27801f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cc2199f65adaa48cd1726ea812b332e4f40291e094ccfc215343bdb2f2bd2f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f3b9bf6cf2709d4559bf6ce2ddeed47f8ebacd7878509b0190ed24b2e55380e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6ac7a73d056ceeae89f31eed787e230193e82bad0a2c762fa35540edfaeee6e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd16de1da3c89b579d33c34b4dd24d8ae9df35e17819135165afc45db1297c61\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"file observer\\\\nW1121 20:06:54.183363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 20:06:54.183552 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 20:06:54.184374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-552169000/tls.crt::/tmp/serving-cert-552169000/tls.key\\\\\\\"\\\\nI1121 20:06:54.597449 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 20:06:54.600431 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 20:06:54.600448 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 20:06:54.600471 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 20:06:54.600476 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 20:06:54.608444 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 20:06:54.608470 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 20:06:54.608475 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 20:06:54.608479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 20:06:54.608482 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 20:06:54.608484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 20:06:54.608487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 20:06:54.608489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 20:06:54.610022 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ceae36169c61dcc484be2881c9562b4024866925aab81778ab9e171413f119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fc11f996bd226ec44a8afe092a0939a489f4fc8904c1532586d9e36b69817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91fc11f996bd226ec44a8afe092a0939a489f4fc8904c1532586d9e36b69817\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:55Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:55 crc kubenswrapper[4727]: I1121 20:07:55.767688 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:55 crc kubenswrapper[4727]: I1121 20:07:55.767714 4727 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:55 crc kubenswrapper[4727]: I1121 20:07:55.767722 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:55 crc kubenswrapper[4727]: I1121 20:07:55.767736 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:55 crc kubenswrapper[4727]: I1121 20:07:55.767745 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:55Z","lastTransitionTime":"2025-11-21T20:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:55 crc kubenswrapper[4727]: I1121 20:07:55.776479 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a97e4b6438d588dce44178e2e245d1fc4fa954f6cd02b230ca5c34e3d32294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c90d2d9f22924396549d0d1263f8cb07b41a77f23f7bb48aa9749caabff6147\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:55Z is after 2025-08-24T17:21:41Z" Nov 21 20:07:55 crc kubenswrapper[4727]: I1121 20:07:55.869747 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:55 crc kubenswrapper[4727]: I1121 20:07:55.869835 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:55 crc kubenswrapper[4727]: I1121 20:07:55.869847 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:55 crc kubenswrapper[4727]: I1121 20:07:55.869875 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:55 crc kubenswrapper[4727]: I1121 20:07:55.869888 4727 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:55Z","lastTransitionTime":"2025-11-21T20:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:55 crc kubenswrapper[4727]: I1121 20:07:55.971918 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:55 crc kubenswrapper[4727]: I1121 20:07:55.971978 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:55 crc kubenswrapper[4727]: I1121 20:07:55.971989 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:55 crc kubenswrapper[4727]: I1121 20:07:55.972004 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:55 crc kubenswrapper[4727]: I1121 20:07:55.972015 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:55Z","lastTransitionTime":"2025-11-21T20:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:56 crc kubenswrapper[4727]: I1121 20:07:56.074345 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:56 crc kubenswrapper[4727]: I1121 20:07:56.074386 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:56 crc kubenswrapper[4727]: I1121 20:07:56.074406 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:56 crc kubenswrapper[4727]: I1121 20:07:56.074422 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:56 crc kubenswrapper[4727]: I1121 20:07:56.074434 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:56Z","lastTransitionTime":"2025-11-21T20:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:56 crc kubenswrapper[4727]: I1121 20:07:56.176468 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:56 crc kubenswrapper[4727]: I1121 20:07:56.176532 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:56 crc kubenswrapper[4727]: I1121 20:07:56.176549 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:56 crc kubenswrapper[4727]: I1121 20:07:56.176572 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:56 crc kubenswrapper[4727]: I1121 20:07:56.176585 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:56Z","lastTransitionTime":"2025-11-21T20:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:56 crc kubenswrapper[4727]: I1121 20:07:56.278891 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:56 crc kubenswrapper[4727]: I1121 20:07:56.278938 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:56 crc kubenswrapper[4727]: I1121 20:07:56.278949 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:56 crc kubenswrapper[4727]: I1121 20:07:56.278979 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:56 crc kubenswrapper[4727]: I1121 20:07:56.278992 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:56Z","lastTransitionTime":"2025-11-21T20:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:56 crc kubenswrapper[4727]: I1121 20:07:56.381456 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:56 crc kubenswrapper[4727]: I1121 20:07:56.381491 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:56 crc kubenswrapper[4727]: I1121 20:07:56.381503 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:56 crc kubenswrapper[4727]: I1121 20:07:56.381517 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:56 crc kubenswrapper[4727]: I1121 20:07:56.381526 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:56Z","lastTransitionTime":"2025-11-21T20:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:56 crc kubenswrapper[4727]: I1121 20:07:56.483517 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:56 crc kubenswrapper[4727]: I1121 20:07:56.483550 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:56 crc kubenswrapper[4727]: I1121 20:07:56.483559 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:56 crc kubenswrapper[4727]: I1121 20:07:56.483572 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:56 crc kubenswrapper[4727]: I1121 20:07:56.483580 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:56Z","lastTransitionTime":"2025-11-21T20:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:56 crc kubenswrapper[4727]: I1121 20:07:56.498158 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rs9rv" Nov 21 20:07:56 crc kubenswrapper[4727]: I1121 20:07:56.498216 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 20:07:56 crc kubenswrapper[4727]: E1121 20:07:56.498245 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rs9rv" podUID="f8318f96-4402-4567-a432-6cf3897e218d" Nov 21 20:07:56 crc kubenswrapper[4727]: I1121 20:07:56.498357 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 20:07:56 crc kubenswrapper[4727]: E1121 20:07:56.498408 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 20:07:56 crc kubenswrapper[4727]: E1121 20:07:56.498347 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 20:07:56 crc kubenswrapper[4727]: I1121 20:07:56.585547 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:56 crc kubenswrapper[4727]: I1121 20:07:56.585593 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:56 crc kubenswrapper[4727]: I1121 20:07:56.585604 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:56 crc kubenswrapper[4727]: I1121 20:07:56.585621 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:56 crc kubenswrapper[4727]: I1121 20:07:56.585633 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:56Z","lastTransitionTime":"2025-11-21T20:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:56 crc kubenswrapper[4727]: I1121 20:07:56.687944 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:56 crc kubenswrapper[4727]: I1121 20:07:56.687987 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:56 crc kubenswrapper[4727]: I1121 20:07:56.687997 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:56 crc kubenswrapper[4727]: I1121 20:07:56.688011 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:56 crc kubenswrapper[4727]: I1121 20:07:56.688021 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:56Z","lastTransitionTime":"2025-11-21T20:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:56 crc kubenswrapper[4727]: I1121 20:07:56.790513 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:56 crc kubenswrapper[4727]: I1121 20:07:56.790551 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:56 crc kubenswrapper[4727]: I1121 20:07:56.790561 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:56 crc kubenswrapper[4727]: I1121 20:07:56.790576 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:56 crc kubenswrapper[4727]: I1121 20:07:56.790587 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:56Z","lastTransitionTime":"2025-11-21T20:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:56 crc kubenswrapper[4727]: I1121 20:07:56.892724 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:56 crc kubenswrapper[4727]: I1121 20:07:56.892778 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:56 crc kubenswrapper[4727]: I1121 20:07:56.892794 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:56 crc kubenswrapper[4727]: I1121 20:07:56.892819 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:56 crc kubenswrapper[4727]: I1121 20:07:56.892835 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:56Z","lastTransitionTime":"2025-11-21T20:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:56 crc kubenswrapper[4727]: I1121 20:07:56.996435 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:56 crc kubenswrapper[4727]: I1121 20:07:56.996501 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:56 crc kubenswrapper[4727]: I1121 20:07:56.996523 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:56 crc kubenswrapper[4727]: I1121 20:07:56.996550 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:56 crc kubenswrapper[4727]: I1121 20:07:56.996571 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:56Z","lastTransitionTime":"2025-11-21T20:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:57 crc kubenswrapper[4727]: I1121 20:07:57.099259 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:57 crc kubenswrapper[4727]: I1121 20:07:57.099299 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:57 crc kubenswrapper[4727]: I1121 20:07:57.099309 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:57 crc kubenswrapper[4727]: I1121 20:07:57.099323 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:57 crc kubenswrapper[4727]: I1121 20:07:57.099333 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:57Z","lastTransitionTime":"2025-11-21T20:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:57 crc kubenswrapper[4727]: I1121 20:07:57.205258 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:57 crc kubenswrapper[4727]: I1121 20:07:57.205318 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:57 crc kubenswrapper[4727]: I1121 20:07:57.205336 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:57 crc kubenswrapper[4727]: I1121 20:07:57.205362 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:57 crc kubenswrapper[4727]: I1121 20:07:57.205380 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:57Z","lastTransitionTime":"2025-11-21T20:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:57 crc kubenswrapper[4727]: I1121 20:07:57.308851 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:57 crc kubenswrapper[4727]: I1121 20:07:57.308901 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:57 crc kubenswrapper[4727]: I1121 20:07:57.308914 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:57 crc kubenswrapper[4727]: I1121 20:07:57.308933 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:57 crc kubenswrapper[4727]: I1121 20:07:57.308949 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:57Z","lastTransitionTime":"2025-11-21T20:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:57 crc kubenswrapper[4727]: I1121 20:07:57.411479 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:57 crc kubenswrapper[4727]: I1121 20:07:57.411535 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:57 crc kubenswrapper[4727]: I1121 20:07:57.411552 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:57 crc kubenswrapper[4727]: I1121 20:07:57.411576 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:57 crc kubenswrapper[4727]: I1121 20:07:57.411593 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:57Z","lastTransitionTime":"2025-11-21T20:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:57 crc kubenswrapper[4727]: I1121 20:07:57.498712 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 20:07:57 crc kubenswrapper[4727]: E1121 20:07:57.499342 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 20:07:57 crc kubenswrapper[4727]: I1121 20:07:57.499929 4727 scope.go:117] "RemoveContainer" containerID="1aea392a365aa66ac35b5cd36cd1aad398890771de41b8dc248e2f98a171f4ca" Nov 21 20:07:57 crc kubenswrapper[4727]: E1121 20:07:57.500275 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-tfd4j_openshift-ovn-kubernetes(70d2ca13-a8f7-43dc-8ad0-142d99ccde18)\"" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" podUID="70d2ca13-a8f7-43dc-8ad0-142d99ccde18" Nov 21 20:07:57 crc kubenswrapper[4727]: I1121 20:07:57.514157 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:57 crc kubenswrapper[4727]: I1121 20:07:57.514196 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:57 crc kubenswrapper[4727]: I1121 20:07:57.514205 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:57 crc kubenswrapper[4727]: I1121 20:07:57.514220 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:57 crc kubenswrapper[4727]: I1121 20:07:57.514231 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:57Z","lastTransitionTime":"2025-11-21T20:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:57 crc kubenswrapper[4727]: I1121 20:07:57.617155 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:57 crc kubenswrapper[4727]: I1121 20:07:57.617195 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:57 crc kubenswrapper[4727]: I1121 20:07:57.617203 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:57 crc kubenswrapper[4727]: I1121 20:07:57.617219 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:57 crc kubenswrapper[4727]: I1121 20:07:57.617229 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:57Z","lastTransitionTime":"2025-11-21T20:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:57 crc kubenswrapper[4727]: I1121 20:07:57.719895 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:57 crc kubenswrapper[4727]: I1121 20:07:57.719944 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:57 crc kubenswrapper[4727]: I1121 20:07:57.719983 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:57 crc kubenswrapper[4727]: I1121 20:07:57.720001 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:57 crc kubenswrapper[4727]: I1121 20:07:57.720015 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:57Z","lastTransitionTime":"2025-11-21T20:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:57 crc kubenswrapper[4727]: I1121 20:07:57.822660 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:57 crc kubenswrapper[4727]: I1121 20:07:57.822700 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:57 crc kubenswrapper[4727]: I1121 20:07:57.822708 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:57 crc kubenswrapper[4727]: I1121 20:07:57.822723 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:57 crc kubenswrapper[4727]: I1121 20:07:57.822733 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:57Z","lastTransitionTime":"2025-11-21T20:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:57 crc kubenswrapper[4727]: I1121 20:07:57.924797 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:57 crc kubenswrapper[4727]: I1121 20:07:57.924840 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:57 crc kubenswrapper[4727]: I1121 20:07:57.924851 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:57 crc kubenswrapper[4727]: I1121 20:07:57.924866 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:57 crc kubenswrapper[4727]: I1121 20:07:57.924877 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:57Z","lastTransitionTime":"2025-11-21T20:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:58 crc kubenswrapper[4727]: I1121 20:07:58.028362 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:58 crc kubenswrapper[4727]: I1121 20:07:58.028452 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:58 crc kubenswrapper[4727]: I1121 20:07:58.028471 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:58 crc kubenswrapper[4727]: I1121 20:07:58.028501 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:58 crc kubenswrapper[4727]: I1121 20:07:58.028521 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:58Z","lastTransitionTime":"2025-11-21T20:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:58 crc kubenswrapper[4727]: I1121 20:07:58.131337 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:58 crc kubenswrapper[4727]: I1121 20:07:58.131387 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:58 crc kubenswrapper[4727]: I1121 20:07:58.131405 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:58 crc kubenswrapper[4727]: I1121 20:07:58.131429 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:58 crc kubenswrapper[4727]: I1121 20:07:58.131443 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:58Z","lastTransitionTime":"2025-11-21T20:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:58 crc kubenswrapper[4727]: I1121 20:07:58.234084 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:58 crc kubenswrapper[4727]: I1121 20:07:58.234334 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:58 crc kubenswrapper[4727]: I1121 20:07:58.234354 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:58 crc kubenswrapper[4727]: I1121 20:07:58.234377 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:58 crc kubenswrapper[4727]: I1121 20:07:58.234393 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:58Z","lastTransitionTime":"2025-11-21T20:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:58 crc kubenswrapper[4727]: I1121 20:07:58.337515 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:58 crc kubenswrapper[4727]: I1121 20:07:58.337568 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:58 crc kubenswrapper[4727]: I1121 20:07:58.337579 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:58 crc kubenswrapper[4727]: I1121 20:07:58.337598 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:58 crc kubenswrapper[4727]: I1121 20:07:58.337609 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:58Z","lastTransitionTime":"2025-11-21T20:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:58 crc kubenswrapper[4727]: I1121 20:07:58.364392 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 20:07:58 crc kubenswrapper[4727]: I1121 20:07:58.364563 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 20:07:58 crc kubenswrapper[4727]: I1121 20:07:58.364663 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 20:07:58 crc kubenswrapper[4727]: E1121 20:07:58.364716 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 20:09:02.364691433 +0000 UTC m=+147.550876477 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:07:58 crc kubenswrapper[4727]: I1121 20:07:58.364749 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 20:07:58 crc kubenswrapper[4727]: I1121 20:07:58.364780 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 20:07:58 crc kubenswrapper[4727]: E1121 20:07:58.364778 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 21 20:07:58 crc kubenswrapper[4727]: E1121 20:07:58.364832 4727 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 21 20:07:58 crc kubenswrapper[4727]: E1121 20:07:58.364916 4727 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-21 20:09:02.364896188 +0000 UTC m=+147.551081312 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 21 20:07:58 crc kubenswrapper[4727]: E1121 20:07:58.364836 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 21 20:07:58 crc kubenswrapper[4727]: E1121 20:07:58.365027 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 21 20:07:58 crc kubenswrapper[4727]: E1121 20:07:58.365044 4727 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 20:07:58 crc kubenswrapper[4727]: E1121 20:07:58.365100 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-21 20:09:02.365082172 +0000 UTC m=+147.551267206 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 20:07:58 crc kubenswrapper[4727]: E1121 20:07:58.364848 4727 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 21 20:07:58 crc kubenswrapper[4727]: E1121 20:07:58.365330 4727 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 20:07:58 crc kubenswrapper[4727]: E1121 20:07:58.364970 4727 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 21 20:07:58 crc kubenswrapper[4727]: E1121 20:07:58.365401 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-21 20:09:02.365379229 +0000 UTC m=+147.551564323 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 20:07:58 crc kubenswrapper[4727]: E1121 20:07:58.365444 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-21 20:09:02.365426741 +0000 UTC m=+147.551611875 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 21 20:07:58 crc kubenswrapper[4727]: I1121 20:07:58.441258 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:58 crc kubenswrapper[4727]: I1121 20:07:58.441299 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:58 crc kubenswrapper[4727]: I1121 20:07:58.441307 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:58 crc kubenswrapper[4727]: I1121 20:07:58.441324 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:58 crc kubenswrapper[4727]: I1121 20:07:58.441335 4727 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:58Z","lastTransitionTime":"2025-11-21T20:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:58 crc kubenswrapper[4727]: I1121 20:07:58.498475 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rs9rv" Nov 21 20:07:58 crc kubenswrapper[4727]: I1121 20:07:58.498510 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 20:07:58 crc kubenswrapper[4727]: E1121 20:07:58.498592 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rs9rv" podUID="f8318f96-4402-4567-a432-6cf3897e218d" Nov 21 20:07:58 crc kubenswrapper[4727]: I1121 20:07:58.498630 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 20:07:58 crc kubenswrapper[4727]: E1121 20:07:58.498757 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 20:07:58 crc kubenswrapper[4727]: E1121 20:07:58.499071 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 20:07:58 crc kubenswrapper[4727]: I1121 20:07:58.544232 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:58 crc kubenswrapper[4727]: I1121 20:07:58.544285 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:58 crc kubenswrapper[4727]: I1121 20:07:58.544297 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:58 crc kubenswrapper[4727]: I1121 20:07:58.544315 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:58 crc kubenswrapper[4727]: I1121 20:07:58.544332 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:58Z","lastTransitionTime":"2025-11-21T20:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:58 crc kubenswrapper[4727]: I1121 20:07:58.646950 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:58 crc kubenswrapper[4727]: I1121 20:07:58.647012 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:58 crc kubenswrapper[4727]: I1121 20:07:58.647025 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:58 crc kubenswrapper[4727]: I1121 20:07:58.647040 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:58 crc kubenswrapper[4727]: I1121 20:07:58.647053 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:58Z","lastTransitionTime":"2025-11-21T20:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:58 crc kubenswrapper[4727]: I1121 20:07:58.749797 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:58 crc kubenswrapper[4727]: I1121 20:07:58.749842 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:58 crc kubenswrapper[4727]: I1121 20:07:58.749856 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:58 crc kubenswrapper[4727]: I1121 20:07:58.749874 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:58 crc kubenswrapper[4727]: I1121 20:07:58.749886 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:58Z","lastTransitionTime":"2025-11-21T20:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:58 crc kubenswrapper[4727]: I1121 20:07:58.852687 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:58 crc kubenswrapper[4727]: I1121 20:07:58.852735 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:58 crc kubenswrapper[4727]: I1121 20:07:58.852746 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:58 crc kubenswrapper[4727]: I1121 20:07:58.852763 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:58 crc kubenswrapper[4727]: I1121 20:07:58.852775 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:58Z","lastTransitionTime":"2025-11-21T20:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:58 crc kubenswrapper[4727]: I1121 20:07:58.955460 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:58 crc kubenswrapper[4727]: I1121 20:07:58.955846 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:58 crc kubenswrapper[4727]: I1121 20:07:58.956009 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:58 crc kubenswrapper[4727]: I1121 20:07:58.956191 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:58 crc kubenswrapper[4727]: I1121 20:07:58.956340 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:58Z","lastTransitionTime":"2025-11-21T20:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:59 crc kubenswrapper[4727]: I1121 20:07:59.060916 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:59 crc kubenswrapper[4727]: I1121 20:07:59.061004 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:59 crc kubenswrapper[4727]: I1121 20:07:59.061016 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:59 crc kubenswrapper[4727]: I1121 20:07:59.061035 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:59 crc kubenswrapper[4727]: I1121 20:07:59.061049 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:59Z","lastTransitionTime":"2025-11-21T20:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:59 crc kubenswrapper[4727]: I1121 20:07:59.163329 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:59 crc kubenswrapper[4727]: I1121 20:07:59.163640 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:59 crc kubenswrapper[4727]: I1121 20:07:59.163827 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:59 crc kubenswrapper[4727]: I1121 20:07:59.164054 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:59 crc kubenswrapper[4727]: I1121 20:07:59.164216 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:59Z","lastTransitionTime":"2025-11-21T20:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:59 crc kubenswrapper[4727]: I1121 20:07:59.266647 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:59 crc kubenswrapper[4727]: I1121 20:07:59.266698 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:59 crc kubenswrapper[4727]: I1121 20:07:59.266712 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:59 crc kubenswrapper[4727]: I1121 20:07:59.266730 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:59 crc kubenswrapper[4727]: I1121 20:07:59.266743 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:59Z","lastTransitionTime":"2025-11-21T20:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:59 crc kubenswrapper[4727]: I1121 20:07:59.369173 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:59 crc kubenswrapper[4727]: I1121 20:07:59.369199 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:59 crc kubenswrapper[4727]: I1121 20:07:59.369207 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:59 crc kubenswrapper[4727]: I1121 20:07:59.369219 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:59 crc kubenswrapper[4727]: I1121 20:07:59.369229 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:59Z","lastTransitionTime":"2025-11-21T20:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:59 crc kubenswrapper[4727]: I1121 20:07:59.471721 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:59 crc kubenswrapper[4727]: I1121 20:07:59.471757 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:59 crc kubenswrapper[4727]: I1121 20:07:59.471765 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:59 crc kubenswrapper[4727]: I1121 20:07:59.471778 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:59 crc kubenswrapper[4727]: I1121 20:07:59.471846 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:59Z","lastTransitionTime":"2025-11-21T20:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:07:59 crc kubenswrapper[4727]: I1121 20:07:59.498586 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 20:07:59 crc kubenswrapper[4727]: E1121 20:07:59.498695 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 20:07:59 crc kubenswrapper[4727]: I1121 20:07:59.575378 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:59 crc kubenswrapper[4727]: I1121 20:07:59.575419 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:59 crc kubenswrapper[4727]: I1121 20:07:59.575428 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:59 crc kubenswrapper[4727]: I1121 20:07:59.575442 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:59 crc kubenswrapper[4727]: I1121 20:07:59.575454 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:59Z","lastTransitionTime":"2025-11-21T20:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:59 crc kubenswrapper[4727]: I1121 20:07:59.677593 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:59 crc kubenswrapper[4727]: I1121 20:07:59.677644 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:59 crc kubenswrapper[4727]: I1121 20:07:59.677659 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:59 crc kubenswrapper[4727]: I1121 20:07:59.677681 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:59 crc kubenswrapper[4727]: I1121 20:07:59.677698 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:59Z","lastTransitionTime":"2025-11-21T20:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:59 crc kubenswrapper[4727]: I1121 20:07:59.780104 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:59 crc kubenswrapper[4727]: I1121 20:07:59.780212 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:59 crc kubenswrapper[4727]: I1121 20:07:59.780224 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:59 crc kubenswrapper[4727]: I1121 20:07:59.780238 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:59 crc kubenswrapper[4727]: I1121 20:07:59.780248 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:59Z","lastTransitionTime":"2025-11-21T20:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:59 crc kubenswrapper[4727]: I1121 20:07:59.884641 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:59 crc kubenswrapper[4727]: I1121 20:07:59.884711 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:59 crc kubenswrapper[4727]: I1121 20:07:59.884737 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:59 crc kubenswrapper[4727]: I1121 20:07:59.884770 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:59 crc kubenswrapper[4727]: I1121 20:07:59.884794 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:59Z","lastTransitionTime":"2025-11-21T20:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:07:59 crc kubenswrapper[4727]: I1121 20:07:59.987359 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:07:59 crc kubenswrapper[4727]: I1121 20:07:59.987393 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:07:59 crc kubenswrapper[4727]: I1121 20:07:59.987402 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:07:59 crc kubenswrapper[4727]: I1121 20:07:59.987415 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:07:59 crc kubenswrapper[4727]: I1121 20:07:59.987424 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:07:59Z","lastTransitionTime":"2025-11-21T20:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:00 crc kubenswrapper[4727]: I1121 20:08:00.089823 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:00 crc kubenswrapper[4727]: I1121 20:08:00.089894 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:00 crc kubenswrapper[4727]: I1121 20:08:00.089905 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:00 crc kubenswrapper[4727]: I1121 20:08:00.089922 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:00 crc kubenswrapper[4727]: I1121 20:08:00.089932 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:00Z","lastTransitionTime":"2025-11-21T20:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:00 crc kubenswrapper[4727]: I1121 20:08:00.193986 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:00 crc kubenswrapper[4727]: I1121 20:08:00.194045 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:00 crc kubenswrapper[4727]: I1121 20:08:00.194056 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:00 crc kubenswrapper[4727]: I1121 20:08:00.194070 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:00 crc kubenswrapper[4727]: I1121 20:08:00.194079 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:00Z","lastTransitionTime":"2025-11-21T20:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:00 crc kubenswrapper[4727]: I1121 20:08:00.296524 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:00 crc kubenswrapper[4727]: I1121 20:08:00.296555 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:00 crc kubenswrapper[4727]: I1121 20:08:00.296564 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:00 crc kubenswrapper[4727]: I1121 20:08:00.296576 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:00 crc kubenswrapper[4727]: I1121 20:08:00.296585 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:00Z","lastTransitionTime":"2025-11-21T20:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:00 crc kubenswrapper[4727]: I1121 20:08:00.399645 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:00 crc kubenswrapper[4727]: I1121 20:08:00.399730 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:00 crc kubenswrapper[4727]: I1121 20:08:00.399748 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:00 crc kubenswrapper[4727]: I1121 20:08:00.399778 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:00 crc kubenswrapper[4727]: I1121 20:08:00.399801 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:00Z","lastTransitionTime":"2025-11-21T20:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:08:00 crc kubenswrapper[4727]: I1121 20:08:00.498738 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 20:08:00 crc kubenswrapper[4727]: I1121 20:08:00.498788 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rs9rv" Nov 21 20:08:00 crc kubenswrapper[4727]: E1121 20:08:00.498865 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 20:08:00 crc kubenswrapper[4727]: I1121 20:08:00.498992 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 20:08:00 crc kubenswrapper[4727]: E1121 20:08:00.499096 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rs9rv" podUID="f8318f96-4402-4567-a432-6cf3897e218d" Nov 21 20:08:00 crc kubenswrapper[4727]: E1121 20:08:00.499226 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 20:08:00 crc kubenswrapper[4727]: I1121 20:08:00.503045 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:00 crc kubenswrapper[4727]: I1121 20:08:00.503084 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:00 crc kubenswrapper[4727]: I1121 20:08:00.503100 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:00 crc kubenswrapper[4727]: I1121 20:08:00.503122 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:00 crc kubenswrapper[4727]: I1121 20:08:00.503136 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:00Z","lastTransitionTime":"2025-11-21T20:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:00 crc kubenswrapper[4727]: I1121 20:08:00.604865 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:00 crc kubenswrapper[4727]: I1121 20:08:00.604931 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:00 crc kubenswrapper[4727]: I1121 20:08:00.604944 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:00 crc kubenswrapper[4727]: I1121 20:08:00.605004 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:00 crc kubenswrapper[4727]: I1121 20:08:00.605021 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:00Z","lastTransitionTime":"2025-11-21T20:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:00 crc kubenswrapper[4727]: I1121 20:08:00.707478 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:00 crc kubenswrapper[4727]: I1121 20:08:00.707809 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:00 crc kubenswrapper[4727]: I1121 20:08:00.707891 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:00 crc kubenswrapper[4727]: I1121 20:08:00.707976 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:00 crc kubenswrapper[4727]: I1121 20:08:00.708061 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:00Z","lastTransitionTime":"2025-11-21T20:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:00 crc kubenswrapper[4727]: I1121 20:08:00.810999 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:00 crc kubenswrapper[4727]: I1121 20:08:00.811041 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:00 crc kubenswrapper[4727]: I1121 20:08:00.811053 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:00 crc kubenswrapper[4727]: I1121 20:08:00.811089 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:00 crc kubenswrapper[4727]: I1121 20:08:00.811101 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:00Z","lastTransitionTime":"2025-11-21T20:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:00 crc kubenswrapper[4727]: I1121 20:08:00.913227 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:00 crc kubenswrapper[4727]: I1121 20:08:00.913276 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:00 crc kubenswrapper[4727]: I1121 20:08:00.913288 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:00 crc kubenswrapper[4727]: I1121 20:08:00.913303 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:00 crc kubenswrapper[4727]: I1121 20:08:00.913312 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:00Z","lastTransitionTime":"2025-11-21T20:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:01 crc kubenswrapper[4727]: I1121 20:08:01.016033 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:01 crc kubenswrapper[4727]: I1121 20:08:01.016073 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:01 crc kubenswrapper[4727]: I1121 20:08:01.016081 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:01 crc kubenswrapper[4727]: I1121 20:08:01.016096 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:01 crc kubenswrapper[4727]: I1121 20:08:01.016107 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:01Z","lastTransitionTime":"2025-11-21T20:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:01 crc kubenswrapper[4727]: I1121 20:08:01.119068 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:01 crc kubenswrapper[4727]: I1121 20:08:01.119105 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:01 crc kubenswrapper[4727]: I1121 20:08:01.119116 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:01 crc kubenswrapper[4727]: I1121 20:08:01.119133 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:01 crc kubenswrapper[4727]: I1121 20:08:01.119144 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:01Z","lastTransitionTime":"2025-11-21T20:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:01 crc kubenswrapper[4727]: I1121 20:08:01.221450 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:01 crc kubenswrapper[4727]: I1121 20:08:01.221489 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:01 crc kubenswrapper[4727]: I1121 20:08:01.221501 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:01 crc kubenswrapper[4727]: I1121 20:08:01.221516 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:01 crc kubenswrapper[4727]: I1121 20:08:01.221527 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:01Z","lastTransitionTime":"2025-11-21T20:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:01 crc kubenswrapper[4727]: I1121 20:08:01.324920 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:01 crc kubenswrapper[4727]: I1121 20:08:01.324983 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:01 crc kubenswrapper[4727]: I1121 20:08:01.324993 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:01 crc kubenswrapper[4727]: I1121 20:08:01.325012 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:01 crc kubenswrapper[4727]: I1121 20:08:01.325023 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:01Z","lastTransitionTime":"2025-11-21T20:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:01 crc kubenswrapper[4727]: I1121 20:08:01.427928 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:01 crc kubenswrapper[4727]: I1121 20:08:01.427987 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:01 crc kubenswrapper[4727]: I1121 20:08:01.427997 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:01 crc kubenswrapper[4727]: I1121 20:08:01.428013 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:01 crc kubenswrapper[4727]: I1121 20:08:01.428026 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:01Z","lastTransitionTime":"2025-11-21T20:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:08:01 crc kubenswrapper[4727]: I1121 20:08:01.498324 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 20:08:01 crc kubenswrapper[4727]: E1121 20:08:01.498710 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 20:08:01 crc kubenswrapper[4727]: I1121 20:08:01.530592 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:01 crc kubenswrapper[4727]: I1121 20:08:01.530853 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:01 crc kubenswrapper[4727]: I1121 20:08:01.531147 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:01 crc kubenswrapper[4727]: I1121 20:08:01.531420 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:01 crc kubenswrapper[4727]: I1121 20:08:01.531649 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:01Z","lastTransitionTime":"2025-11-21T20:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:01 crc kubenswrapper[4727]: I1121 20:08:01.634578 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:01 crc kubenswrapper[4727]: I1121 20:08:01.634614 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:01 crc kubenswrapper[4727]: I1121 20:08:01.634625 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:01 crc kubenswrapper[4727]: I1121 20:08:01.634641 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:01 crc kubenswrapper[4727]: I1121 20:08:01.634651 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:01Z","lastTransitionTime":"2025-11-21T20:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:01 crc kubenswrapper[4727]: I1121 20:08:01.737364 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:01 crc kubenswrapper[4727]: I1121 20:08:01.737598 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:01 crc kubenswrapper[4727]: I1121 20:08:01.737719 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:01 crc kubenswrapper[4727]: I1121 20:08:01.737844 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:01 crc kubenswrapper[4727]: I1121 20:08:01.737930 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:01Z","lastTransitionTime":"2025-11-21T20:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:01 crc kubenswrapper[4727]: I1121 20:08:01.840813 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:01 crc kubenswrapper[4727]: I1121 20:08:01.840861 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:01 crc kubenswrapper[4727]: I1121 20:08:01.840873 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:01 crc kubenswrapper[4727]: I1121 20:08:01.840894 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:01 crc kubenswrapper[4727]: I1121 20:08:01.840907 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:01Z","lastTransitionTime":"2025-11-21T20:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:01 crc kubenswrapper[4727]: I1121 20:08:01.943945 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:01 crc kubenswrapper[4727]: I1121 20:08:01.944028 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:01 crc kubenswrapper[4727]: I1121 20:08:01.944045 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:01 crc kubenswrapper[4727]: I1121 20:08:01.944069 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:01 crc kubenswrapper[4727]: I1121 20:08:01.944085 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:01Z","lastTransitionTime":"2025-11-21T20:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.046741 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.046777 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.046785 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.046798 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.046807 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:02Z","lastTransitionTime":"2025-11-21T20:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.149892 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.150141 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.150240 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.150302 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.150367 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:02Z","lastTransitionTime":"2025-11-21T20:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.253271 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.253340 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.253352 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.253368 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.253876 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:02Z","lastTransitionTime":"2025-11-21T20:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.356880 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.356939 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.356950 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.356994 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.357006 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:02Z","lastTransitionTime":"2025-11-21T20:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.459849 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.459937 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.459988 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.460019 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.460038 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:02Z","lastTransitionTime":"2025-11-21T20:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.498267 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rs9rv" Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.498384 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 20:08:02 crc kubenswrapper[4727]: E1121 20:08:02.498405 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rs9rv" podUID="f8318f96-4402-4567-a432-6cf3897e218d" Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.498516 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 20:08:02 crc kubenswrapper[4727]: E1121 20:08:02.498528 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 20:08:02 crc kubenswrapper[4727]: E1121 20:08:02.498771 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.562725 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.562767 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.562776 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.562790 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.562800 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:02Z","lastTransitionTime":"2025-11-21T20:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.665832 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.665871 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.665882 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.665898 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.665911 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:02Z","lastTransitionTime":"2025-11-21T20:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.696135 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.696180 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.696191 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.696206 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.696217 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:02Z","lastTransitionTime":"2025-11-21T20:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:02 crc kubenswrapper[4727]: E1121 20:08:02.712848 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:08:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:08:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:08:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:08:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:08:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:08:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:08:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:08:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"32d79a59-978c-42a8-bb0b-33b3c3206f66\\\",\\\"systemUUID\\\":\\\"b99fd01f-0947-456f-ae40-db84d60b2190\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:08:02Z is after 2025-08-24T17:21:41Z" Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.716553 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.716608 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.716621 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.716641 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.716655 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:02Z","lastTransitionTime":"2025-11-21T20:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.732519 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.732555 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.732564 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.732616 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.732625 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:02Z","lastTransitionTime":"2025-11-21T20:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:02 crc kubenswrapper[4727]: E1121 20:08:02.745719 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:08:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:08:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:08:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:08:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:08:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:08:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:08:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:08:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"32d79a59-978c-42a8-bb0b-33b3c3206f66\\\",\\\"systemUUID\\\":\\\"b99fd01f-0947-456f-ae40-db84d60b2190\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:08:02Z is after 2025-08-24T17:21:41Z" Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.749084 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.749111 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.749120 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.749135 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.749145 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:02Z","lastTransitionTime":"2025-11-21T20:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:02 crc kubenswrapper[4727]: E1121 20:08:02.761363 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:08:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:08:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:08:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:08:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:08:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:08:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:08:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:08:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"32d79a59-978c-42a8-bb0b-33b3c3206f66\\\",\\\"systemUUID\\\":\\\"b99fd01f-0947-456f-ae40-db84d60b2190\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:08:02Z is after 2025-08-24T17:21:41Z" Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.764763 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.764823 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.764833 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.764846 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.764856 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:02Z","lastTransitionTime":"2025-11-21T20:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:02 crc kubenswrapper[4727]: E1121 20:08:02.776148 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:08:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:08:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:08:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:08:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:08:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:08:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:08:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:08:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"32d79a59-978c-42a8-bb0b-33b3c3206f66\\\",\\\"systemUUID\\\":\\\"b99fd01f-0947-456f-ae40-db84d60b2190\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:08:02Z is after 2025-08-24T17:21:41Z" Nov 21 20:08:02 crc kubenswrapper[4727]: E1121 20:08:02.776262 4727 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.777552 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.777581 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.777592 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.777607 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.777617 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:02Z","lastTransitionTime":"2025-11-21T20:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.879974 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.880002 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.880011 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.880024 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.880032 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:02Z","lastTransitionTime":"2025-11-21T20:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.982506 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.982549 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.982560 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.982577 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:02 crc kubenswrapper[4727]: I1121 20:08:02.982589 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:02Z","lastTransitionTime":"2025-11-21T20:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:03 crc kubenswrapper[4727]: I1121 20:08:03.084726 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:03 crc kubenswrapper[4727]: I1121 20:08:03.084764 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:03 crc kubenswrapper[4727]: I1121 20:08:03.084778 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:03 crc kubenswrapper[4727]: I1121 20:08:03.084793 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:03 crc kubenswrapper[4727]: I1121 20:08:03.084801 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:03Z","lastTransitionTime":"2025-11-21T20:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:03 crc kubenswrapper[4727]: I1121 20:08:03.187265 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:03 crc kubenswrapper[4727]: I1121 20:08:03.187526 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:03 crc kubenswrapper[4727]: I1121 20:08:03.187643 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:03 crc kubenswrapper[4727]: I1121 20:08:03.187728 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:03 crc kubenswrapper[4727]: I1121 20:08:03.187799 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:03Z","lastTransitionTime":"2025-11-21T20:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:03 crc kubenswrapper[4727]: I1121 20:08:03.290785 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:03 crc kubenswrapper[4727]: I1121 20:08:03.290853 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:03 crc kubenswrapper[4727]: I1121 20:08:03.290868 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:03 crc kubenswrapper[4727]: I1121 20:08:03.290886 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:03 crc kubenswrapper[4727]: I1121 20:08:03.290899 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:03Z","lastTransitionTime":"2025-11-21T20:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:03 crc kubenswrapper[4727]: I1121 20:08:03.392527 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:03 crc kubenswrapper[4727]: I1121 20:08:03.393059 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:03 crc kubenswrapper[4727]: I1121 20:08:03.393133 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:03 crc kubenswrapper[4727]: I1121 20:08:03.393226 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:03 crc kubenswrapper[4727]: I1121 20:08:03.393295 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:03Z","lastTransitionTime":"2025-11-21T20:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:03 crc kubenswrapper[4727]: I1121 20:08:03.495477 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:03 crc kubenswrapper[4727]: I1121 20:08:03.495514 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:03 crc kubenswrapper[4727]: I1121 20:08:03.495556 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:03 crc kubenswrapper[4727]: I1121 20:08:03.495596 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:03 crc kubenswrapper[4727]: I1121 20:08:03.495608 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:03Z","lastTransitionTime":"2025-11-21T20:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:08:03 crc kubenswrapper[4727]: I1121 20:08:03.498691 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 20:08:03 crc kubenswrapper[4727]: E1121 20:08:03.499001 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 20:08:03 crc kubenswrapper[4727]: I1121 20:08:03.598730 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:03 crc kubenswrapper[4727]: I1121 20:08:03.598788 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:03 crc kubenswrapper[4727]: I1121 20:08:03.598797 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:03 crc kubenswrapper[4727]: I1121 20:08:03.598811 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:03 crc kubenswrapper[4727]: I1121 20:08:03.598821 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:03Z","lastTransitionTime":"2025-11-21T20:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:03 crc kubenswrapper[4727]: I1121 20:08:03.701398 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:03 crc kubenswrapper[4727]: I1121 20:08:03.701447 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:03 crc kubenswrapper[4727]: I1121 20:08:03.701456 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:03 crc kubenswrapper[4727]: I1121 20:08:03.701473 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:03 crc kubenswrapper[4727]: I1121 20:08:03.701483 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:03Z","lastTransitionTime":"2025-11-21T20:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:03 crc kubenswrapper[4727]: I1121 20:08:03.804779 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:03 crc kubenswrapper[4727]: I1121 20:08:03.804892 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:03 crc kubenswrapper[4727]: I1121 20:08:03.804952 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:03 crc kubenswrapper[4727]: I1121 20:08:03.805010 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:03 crc kubenswrapper[4727]: I1121 20:08:03.805023 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:03Z","lastTransitionTime":"2025-11-21T20:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:03 crc kubenswrapper[4727]: I1121 20:08:03.908248 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:03 crc kubenswrapper[4727]: I1121 20:08:03.908313 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:03 crc kubenswrapper[4727]: I1121 20:08:03.908335 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:03 crc kubenswrapper[4727]: I1121 20:08:03.908366 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:03 crc kubenswrapper[4727]: I1121 20:08:03.908388 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:03Z","lastTransitionTime":"2025-11-21T20:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:04 crc kubenswrapper[4727]: I1121 20:08:04.011935 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:04 crc kubenswrapper[4727]: I1121 20:08:04.012004 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:04 crc kubenswrapper[4727]: I1121 20:08:04.012017 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:04 crc kubenswrapper[4727]: I1121 20:08:04.012033 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:04 crc kubenswrapper[4727]: I1121 20:08:04.012043 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:04Z","lastTransitionTime":"2025-11-21T20:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:04 crc kubenswrapper[4727]: I1121 20:08:04.114706 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:04 crc kubenswrapper[4727]: I1121 20:08:04.114758 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:04 crc kubenswrapper[4727]: I1121 20:08:04.114769 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:04 crc kubenswrapper[4727]: I1121 20:08:04.114785 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:04 crc kubenswrapper[4727]: I1121 20:08:04.114796 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:04Z","lastTransitionTime":"2025-11-21T20:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:04 crc kubenswrapper[4727]: I1121 20:08:04.217530 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:04 crc kubenswrapper[4727]: I1121 20:08:04.217595 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:04 crc kubenswrapper[4727]: I1121 20:08:04.217608 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:04 crc kubenswrapper[4727]: I1121 20:08:04.217623 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:04 crc kubenswrapper[4727]: I1121 20:08:04.217635 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:04Z","lastTransitionTime":"2025-11-21T20:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:04 crc kubenswrapper[4727]: I1121 20:08:04.320233 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:04 crc kubenswrapper[4727]: I1121 20:08:04.320269 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:04 crc kubenswrapper[4727]: I1121 20:08:04.320277 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:04 crc kubenswrapper[4727]: I1121 20:08:04.320309 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:04 crc kubenswrapper[4727]: I1121 20:08:04.320320 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:04Z","lastTransitionTime":"2025-11-21T20:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:04 crc kubenswrapper[4727]: I1121 20:08:04.422772 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:04 crc kubenswrapper[4727]: I1121 20:08:04.422821 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:04 crc kubenswrapper[4727]: I1121 20:08:04.422838 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:04 crc kubenswrapper[4727]: I1121 20:08:04.422860 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:04 crc kubenswrapper[4727]: I1121 20:08:04.422880 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:04Z","lastTransitionTime":"2025-11-21T20:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:08:04 crc kubenswrapper[4727]: I1121 20:08:04.498985 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 20:08:04 crc kubenswrapper[4727]: I1121 20:08:04.498989 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 20:08:04 crc kubenswrapper[4727]: I1121 20:08:04.499082 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rs9rv" Nov 21 20:08:04 crc kubenswrapper[4727]: E1121 20:08:04.499212 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 20:08:04 crc kubenswrapper[4727]: E1121 20:08:04.499286 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 20:08:04 crc kubenswrapper[4727]: E1121 20:08:04.499338 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rs9rv" podUID="f8318f96-4402-4567-a432-6cf3897e218d" Nov 21 20:08:04 crc kubenswrapper[4727]: I1121 20:08:04.525851 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:04 crc kubenswrapper[4727]: I1121 20:08:04.525883 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:04 crc kubenswrapper[4727]: I1121 20:08:04.525891 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:04 crc kubenswrapper[4727]: I1121 20:08:04.525903 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:04 crc kubenswrapper[4727]: I1121 20:08:04.525912 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:04Z","lastTransitionTime":"2025-11-21T20:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:04 crc kubenswrapper[4727]: I1121 20:08:04.629177 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:04 crc kubenswrapper[4727]: I1121 20:08:04.629434 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:04 crc kubenswrapper[4727]: I1121 20:08:04.629516 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:04 crc kubenswrapper[4727]: I1121 20:08:04.629635 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:04 crc kubenswrapper[4727]: I1121 20:08:04.629700 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:04Z","lastTransitionTime":"2025-11-21T20:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:04 crc kubenswrapper[4727]: I1121 20:08:04.733330 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:04 crc kubenswrapper[4727]: I1121 20:08:04.733380 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:04 crc kubenswrapper[4727]: I1121 20:08:04.733397 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:04 crc kubenswrapper[4727]: I1121 20:08:04.733419 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:04 crc kubenswrapper[4727]: I1121 20:08:04.733440 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:04Z","lastTransitionTime":"2025-11-21T20:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:04 crc kubenswrapper[4727]: I1121 20:08:04.838494 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:04 crc kubenswrapper[4727]: I1121 20:08:04.838843 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:04 crc kubenswrapper[4727]: I1121 20:08:04.839034 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:04 crc kubenswrapper[4727]: I1121 20:08:04.839171 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:04 crc kubenswrapper[4727]: I1121 20:08:04.839310 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:04Z","lastTransitionTime":"2025-11-21T20:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:04 crc kubenswrapper[4727]: I1121 20:08:04.941100 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:04 crc kubenswrapper[4727]: I1121 20:08:04.941190 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:04 crc kubenswrapper[4727]: I1121 20:08:04.941225 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:04 crc kubenswrapper[4727]: I1121 20:08:04.941258 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:04 crc kubenswrapper[4727]: I1121 20:08:04.941281 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:04Z","lastTransitionTime":"2025-11-21T20:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:05 crc kubenswrapper[4727]: I1121 20:08:05.044018 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:05 crc kubenswrapper[4727]: I1121 20:08:05.044066 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:05 crc kubenswrapper[4727]: I1121 20:08:05.044077 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:05 crc kubenswrapper[4727]: I1121 20:08:05.044096 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:05 crc kubenswrapper[4727]: I1121 20:08:05.044110 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:05Z","lastTransitionTime":"2025-11-21T20:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:05 crc kubenswrapper[4727]: I1121 20:08:05.146507 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:05 crc kubenswrapper[4727]: I1121 20:08:05.146564 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:05 crc kubenswrapper[4727]: I1121 20:08:05.146576 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:05 crc kubenswrapper[4727]: I1121 20:08:05.146645 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:05 crc kubenswrapper[4727]: I1121 20:08:05.146657 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:05Z","lastTransitionTime":"2025-11-21T20:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:05 crc kubenswrapper[4727]: I1121 20:08:05.250018 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:05 crc kubenswrapper[4727]: I1121 20:08:05.250083 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:05 crc kubenswrapper[4727]: I1121 20:08:05.250103 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:05 crc kubenswrapper[4727]: I1121 20:08:05.250132 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:05 crc kubenswrapper[4727]: I1121 20:08:05.250153 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:05Z","lastTransitionTime":"2025-11-21T20:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:05 crc kubenswrapper[4727]: I1121 20:08:05.353350 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:05 crc kubenswrapper[4727]: I1121 20:08:05.353396 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:05 crc kubenswrapper[4727]: I1121 20:08:05.353408 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:05 crc kubenswrapper[4727]: I1121 20:08:05.353427 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:05 crc kubenswrapper[4727]: I1121 20:08:05.353439 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:05Z","lastTransitionTime":"2025-11-21T20:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:05 crc kubenswrapper[4727]: I1121 20:08:05.456447 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:05 crc kubenswrapper[4727]: I1121 20:08:05.456484 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:05 crc kubenswrapper[4727]: I1121 20:08:05.456497 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:05 crc kubenswrapper[4727]: I1121 20:08:05.456514 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:05 crc kubenswrapper[4727]: I1121 20:08:05.456528 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:05Z","lastTransitionTime":"2025-11-21T20:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:08:05 crc kubenswrapper[4727]: I1121 20:08:05.498646 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 20:08:05 crc kubenswrapper[4727]: E1121 20:08:05.498868 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 20:08:05 crc kubenswrapper[4727]: I1121 20:08:05.523132 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-74crp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db8a4e70-c074-4f90-aebe-444078f3337f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d8ec54f8edeaa80fb39b26f0cd9c0d4b2456c149245b1c6c1585bf28b08ec13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a2b3626be8b73b407cdeae484730c70f4caa2e66e0a815cb8c047f667254092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a2b3626be8b73b407cdeae484730c70f4caa2e66e0a815cb8c047f667254092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d4360080de9d788a9ac8bad5b4c679aef3367c8c4e06fd00a28ff4b23406cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d4360080de9d788a9ac8bad5b4c679aef3367c8c4e06fd00a28ff4b23406cc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68b04ba74eb7a875559096df2051e2d99535444795021756d6072246fdf21ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68b04ba74eb7a875559096df2051e2d99535444795021756d6072246fdf21ee1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-
release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4be59a65e9ffdf24c7d6dcaf699e0c86095ab2e7156771e4cca86c3fb264ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab4be59a65e9ffdf24c7d6dcaf699e0c86095ab2e7156771e4cca86c3fb264ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465a44322e7e3ff27479dabcdd1a2ba9102e10c73b75d51d0639b4a8ebeda54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://465a44322e7e3ff27479dabcdd1a2ba9102e10c73b75d51d0639b4a8ebeda54e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:07:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea5b2f1f6cf3b490899d65086f99f448f2bdbfd859fa5b283af3fb108479902e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea5b2f1f6cf3b490899d65086f99f448f2bdbfd859fa5b283af3fb108479902e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brm4g\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-74crp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:08:05Z is after 2025-08-24T17:21:41Z" Nov 21 20:08:05 crc kubenswrapper[4727]: I1121 20:08:05.550391 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fae7d81bbd4cc7c05885d66b30b4a4f5422c74aecc5439700c1fe7eb3c28a4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://060f5b533d83fc7cf02d24bf7b12e2a4972ce1d7ed4e62c4c7788467ae2229c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f04b4d4857a8309f30f22774c8a2faeb752998b094d70898a986e04038771b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f02a4cae48dc4ac9d7da3d367fe5137fb012ee8a28f45c9cf4af098f1d93ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf216dac56f09909db88c9ea0ef9d0d4fb8bc2e63a02fd5631c7d4369080f88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1909b9178d7fb508200d34ad33f7d2c2aa2ae9756b15c17fd7c49bb019cc65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1aea392a365aa66ac35b5cd36cd1aad398890771de41b8dc248e2f98a171f4ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1aea392a365aa66ac35b5cd36cd1aad398890771de41b8dc248e2f98a171f4ca\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T20:07:45Z\\\",\\\"message\\\":\\\"eate admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to 
set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:07:45Z is after 2025-08-24T17:21:41Z]\\\\nI1121 20:07:45.273202 6710 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-etcd/etcd_TCP_cluster\\\\\\\", UUID:\\\\\\\"de17f0de-cfb1-4534-bb42-c40f5e050c73\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd/etcd\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshif\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T20:07:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tfd4j_openshift-ovn-kubernetes(70d2ca13-a8f7-43dc-8ad0-142d99ccde18)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68979c9b5ab69852e6cc4293ef0e77454dcbb7fdaeb61b7b6af5c1a3ac101e47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5fa8be5e6ea2c20dd597fb7ef2fdd3355045b310d1a99bb1a89a8207e505577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5fa8be5e6ea2c20dd
597fb7ef2fdd3355045b310d1a99bb1a89a8207e505577\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rx4mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tfd4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:08:05Z is after 2025-08-24T17:21:41Z" Nov 21 20:08:05 crc kubenswrapper[4727]: I1121 20:08:05.559847 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:05 crc kubenswrapper[4727]: I1121 20:08:05.559910 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:05 crc kubenswrapper[4727]: I1121 20:08:05.559928 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:05 crc kubenswrapper[4727]: I1121 20:08:05.560016 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:05 crc kubenswrapper[4727]: I1121 20:08:05.560046 4727 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:05Z","lastTransitionTime":"2025-11-21T20:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:08:05 crc kubenswrapper[4727]: I1121 20:08:05.566928 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7444p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75a43503-e538-4964-9789-322839cc4c48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d61ca28ee5e9b8e1b56afa45b32036c825b6299dc855cb7ce38dde4649c2371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9w57k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7444p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:08:05Z is after 2025-08-24T17:21:41Z" Nov 21 20:08:05 crc kubenswrapper[4727]: I1121 20:08:05.585580 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-drnwx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c073d13-aa4f-401c-9684-36980fe94cb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c68719b36c0e9c62dcfa24171d18f9ad5a578884b8ca0da8b5827d6656e890c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hssmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec82846684441a347ddef8d336b04c6a776db
e2a5ed7894b813378790e12f7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hssmd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:07:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-drnwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:08:05Z is after 2025-08-24T17:21:41Z" Nov 21 20:08:05 crc kubenswrapper[4727]: I1121 20:08:05.602772 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffd614574e076bedcf841406bd5accc4387a8f2dbc55cbfbb5a1064e69a861d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-21T20:08:05Z is after 2025-08-24T17:21:41Z" Nov 21 20:08:05 crc kubenswrapper[4727]: I1121 20:08:05.620148 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b974a1dc77049e8c632b0acbe8dedf6d3f391811208d0d1084c75e4fdb0908dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:08:05Z is after 2025-08-24T17:21:41Z" Nov 21 20:08:05 crc kubenswrapper[4727]: I1121 20:08:05.635017 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ccfbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bffa327-bdf2-45d4-93ab-40152e82d177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18e9f8fd8f35e2f7821de2d9f9040dbc5160d8f114d1a54d8e0a0dabab520d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-no
de-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk6qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ccfbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:08:05Z is after 2025-08-24T17:21:41Z" Nov 21 20:08:05 crc kubenswrapper[4727]: I1121 20:08:05.650107 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3a97e4b6438d588dce44178e2e245d1fc4fa954f6cd02b230ca5c34e3d32294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c90d2d9f22924396549d0d1263f8cb07b41a77f23f7bb48aa9749caabff6147\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:08:05Z is after 2025-08-24T17:21:41Z" Nov 21 20:08:05 crc kubenswrapper[4727]: I1121 20:08:05.663305 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:08:05Z is after 2025-08-24T17:21:41Z" Nov 21 20:08:05 crc kubenswrapper[4727]: I1121 20:08:05.663420 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:05 crc kubenswrapper[4727]: I1121 
20:08:05.663454 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:05 crc kubenswrapper[4727]: I1121 20:08:05.663465 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:05 crc kubenswrapper[4727]: I1121 20:08:05.663485 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:05 crc kubenswrapper[4727]: I1121 20:08:05.663498 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:05Z","lastTransitionTime":"2025-11-21T20:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:08:05 crc kubenswrapper[4727]: I1121 20:08:05.676486 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:08:05Z is after 2025-08-24T17:21:41Z" Nov 21 20:08:05 crc kubenswrapper[4727]: I1121 20:08:05.690326 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b58aef8f-f223-47d8-a2e6-4a80aeeeec42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db38e6270d9dcedf99c669fa60a075a25d7fd9bb753702551fe5f4610ecaf815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443c4ad2ecea2e9d360b9300cb0c384fb5afc27e
6ac44394f985adbeab354a9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zg9tm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5k2kk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:08:05Z is after 2025-08-24T17:21:41Z" Nov 21 20:08:05 crc kubenswrapper[4727]: I1121 20:08:05.705157 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7rvdc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07dba644-eb6f-45c3-b373-7a1610c569aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aec58b6ba56dadda6cbc97446bc117681e597f80615120f259760193be269bfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fc26f04e78b8405547a4fa225524347eabce94dfa5b4bf2448db66e36aaf006\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T20:07:42Z\\\",\\\"message\\\":\\\"2025-11-21T20:06:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ba82f7f0-16e3-4136-b30a-f02115adb9bb\\\\n2025-11-21T20:06:57+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ba82f7f0-16e3-4136-b30a-f02115adb9bb to /host/opt/cni/bin/\\\\n2025-11-21T20:06:57Z [verbose] multus-daemon started\\\\n2025-11-21T20:06:57Z [verbose] 
Readiness Indicator file check\\\\n2025-11-21T20:07:42Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:55Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7rvdc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:08:05Z is after 2025-08-24T17:21:41Z" Nov 21 20:08:05 crc kubenswrapper[4727]: I1121 20:08:05.716506 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9b31165-7a06-446a-8c9f-7aa7e3f4720e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://333e986e96f39eed917fea4527a9e2bc0dcdfe7824b75aefc2fdc8e747b49300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://138286bcd4ee791d6db84d43810f66356d58b95eab57e8b04c630269683243f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://138286bcd4ee791d6db84d43810f66356d58b95eab57e8b04c630269683243f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:08:05Z is after 2025-08-24T17:21:41Z" Nov 21 20:08:05 crc kubenswrapper[4727]: I1121 20:08:05.736338 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a1f6ee3-6ebf-4422-91ed-105ccb19af8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de2d5f0603d9b7d189293750a83d5a3957e98f0390659f8c88d35ae1cae6b18e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cde1196a3cb3b4d0acbb70ac333211099ec6f26a975e2de7c07d059983024368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1324bc3b6958e18a7c496fdb513726b921435f35dde2064fcaf9a6fca06eb8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://395c324d3a95c173c7dd0c9206831fe07c220fb6ef12f7b25444a120c5183202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6021787d6acdc5734a82ef2ed53add13275c3461c1ffd38219c264e7fca3fab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://363e33e0f6686c6a97437b91ff8d5be211717d527dac5d1bfad6613849c1a8fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://363e33e0f6686c6a97437b91ff8d5be211717d527dac5d1bfad6613849c1a8fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d2632c704cf56d6b416cdcc6213e3b60836d12cfae28af4bd06f99dff56cd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16d2632c704cf56d6b416cdcc6213e3b60836d12cfae28af4bd06f99dff56cd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://91dcb1c5c16b4720f4c812b25a87ebd8a0f3a334cc714233b6852cd39322fbf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91dcb1c5c16b4720f4c812b25a87ebd8a0f3a334cc714233b6852cd39322fbf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:08:05Z is after 2025-08-24T17:21:41Z" Nov 21 20:08:05 crc kubenswrapper[4727]: I1121 20:08:05.749293 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"080ce4fe-3e82-4e15-9340-4fa88a29da04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27c37b29dda7ae2cc145e0770879e433aa03e277121be3b7acfe22d41d27801f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cc2199f65adaa48cd1726ea812b332e4f40291e094ccfc215343bdb2f2bd2f1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f3b9bf6cf2709d4559bf6ce2ddeed47f8ebacd7878509b0190ed24b2e55380e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6ac7a73d056ceeae89f31eed787e230193e82bad0a2c762fa35540edfaeee6e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd16de1da3c89b579d33c34b4dd24d8ae9df35e17819135165afc45db1297c61\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"file observer\\\\nW1121 20:06:54.183363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1121 20:06:54.183552 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 20:06:54.184374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-552169000/tls.crt::/tmp/serving-cert-552169000/tls.key\\\\\\\"\\\\nI1121 20:06:54.597449 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 20:06:54.600431 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 20:06:54.600448 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 20:06:54.600471 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 20:06:54.600476 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 20:06:54.608444 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 20:06:54.608470 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 20:06:54.608475 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 20:06:54.608479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 20:06:54.608482 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 20:06:54.608484 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 20:06:54.608487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 20:06:54.608489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 20:06:54.610022 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ceae36169c61dcc484be2881c9562b4024866925aab81778ab9e171413f119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a91fc11f996bd226ec44a8afe092a0939a489f4fc8904c1532586d9e36b69817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a91fc11f996bd226ec44a8afe092a0939a489f4fc8904c1532586d9e36b69817\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:08:05Z is after 2025-08-24T17:21:41Z" Nov 21 20:08:05 crc kubenswrapper[4727]: I1121 20:08:05.760566 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39e3ca80-2251-484d-9025-61aed82e16d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://539bc94dd8a3fd65aeae82170fe889bc42c364cfcf52005760432629911dfa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bd3db39641919d1b93ecc771bce800100e7a8b6200d6fc606dbdc50179b8255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08a09f48855f360ffe892f48d4efa156a6774a0be33504e0d664f6a90f1144d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b7c202de2c7a6ec653f0c7f816dcbf64fbf6a5c9a779d29403a29a1003ad201\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:08:05Z is after 2025-08-24T17:21:41Z" Nov 21 20:08:05 crc kubenswrapper[4727]: I1121 20:08:05.765704 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:05 crc kubenswrapper[4727]: I1121 20:08:05.765735 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:05 crc kubenswrapper[4727]: I1121 20:08:05.765743 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:05 crc kubenswrapper[4727]: I1121 20:08:05.765756 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:05 crc kubenswrapper[4727]: I1121 20:08:05.765765 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:05Z","lastTransitionTime":"2025-11-21T20:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:08:05 crc kubenswrapper[4727]: I1121 20:08:05.773095 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07b27c06-2d01-404f-9fb9-a72935b30494\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07188732e38e314d39b245e64c5ff2882e3f9622a4f4361b7b1b2730fa5693d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}
,{\\\"containerID\\\":\\\"cri-o://9541542fc2e4eda864eda99995554289d32061e0488f0bf4af04411d3fba3d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9032fe5d80139b864a703f431942616401793036b9960ff2c001b76a3d850b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T20:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://630bac7109e66956147024ba0b3c44511098cd3af401507803e4a31afaedbe22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\
",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://630bac7109e66956147024ba0b3c44511098cd3af401507803e4a31afaedbe22\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T20:06:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T20:06:36Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:06:35Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:08:05Z is after 2025-08-24T17:21:41Z" Nov 21 20:08:05 crc kubenswrapper[4727]: I1121 20:08:05.784020 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T20:06:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:08:05Z is after 2025-08-24T17:21:41Z" Nov 21 20:08:05 crc kubenswrapper[4727]: I1121 20:08:05.794505 4727 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rs9rv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8318f96-4402-4567-a432-6cf3897e218d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T20:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8bcnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8bcnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T20:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rs9rv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:08:05Z is after 2025-08-24T17:21:41Z" Nov 21 20:08:05 crc 
kubenswrapper[4727]: I1121 20:08:05.868838 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:05 crc kubenswrapper[4727]: I1121 20:08:05.868879 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:05 crc kubenswrapper[4727]: I1121 20:08:05.868891 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:05 crc kubenswrapper[4727]: I1121 20:08:05.868909 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:05 crc kubenswrapper[4727]: I1121 20:08:05.868921 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:05Z","lastTransitionTime":"2025-11-21T20:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:05 crc kubenswrapper[4727]: I1121 20:08:05.970987 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:05 crc kubenswrapper[4727]: I1121 20:08:05.971042 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:05 crc kubenswrapper[4727]: I1121 20:08:05.971056 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:05 crc kubenswrapper[4727]: I1121 20:08:05.971077 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:05 crc kubenswrapper[4727]: I1121 20:08:05.971090 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:05Z","lastTransitionTime":"2025-11-21T20:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:06 crc kubenswrapper[4727]: I1121 20:08:06.073233 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:06 crc kubenswrapper[4727]: I1121 20:08:06.073275 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:06 crc kubenswrapper[4727]: I1121 20:08:06.073283 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:06 crc kubenswrapper[4727]: I1121 20:08:06.073296 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:06 crc kubenswrapper[4727]: I1121 20:08:06.073306 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:06Z","lastTransitionTime":"2025-11-21T20:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:06 crc kubenswrapper[4727]: I1121 20:08:06.175735 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:06 crc kubenswrapper[4727]: I1121 20:08:06.175776 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:06 crc kubenswrapper[4727]: I1121 20:08:06.175789 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:06 crc kubenswrapper[4727]: I1121 20:08:06.175811 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:06 crc kubenswrapper[4727]: I1121 20:08:06.175821 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:06Z","lastTransitionTime":"2025-11-21T20:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:06 crc kubenswrapper[4727]: I1121 20:08:06.279221 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:06 crc kubenswrapper[4727]: I1121 20:08:06.279278 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:06 crc kubenswrapper[4727]: I1121 20:08:06.279289 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:06 crc kubenswrapper[4727]: I1121 20:08:06.279312 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:06 crc kubenswrapper[4727]: I1121 20:08:06.279324 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:06Z","lastTransitionTime":"2025-11-21T20:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:06 crc kubenswrapper[4727]: I1121 20:08:06.381875 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:06 crc kubenswrapper[4727]: I1121 20:08:06.381918 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:06 crc kubenswrapper[4727]: I1121 20:08:06.381928 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:06 crc kubenswrapper[4727]: I1121 20:08:06.381946 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:06 crc kubenswrapper[4727]: I1121 20:08:06.381974 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:06Z","lastTransitionTime":"2025-11-21T20:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:06 crc kubenswrapper[4727]: I1121 20:08:06.485423 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:06 crc kubenswrapper[4727]: I1121 20:08:06.485471 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:06 crc kubenswrapper[4727]: I1121 20:08:06.485484 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:06 crc kubenswrapper[4727]: I1121 20:08:06.485502 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:06 crc kubenswrapper[4727]: I1121 20:08:06.485515 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:06Z","lastTransitionTime":"2025-11-21T20:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:08:06 crc kubenswrapper[4727]: I1121 20:08:06.498927 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rs9rv" Nov 21 20:08:06 crc kubenswrapper[4727]: I1121 20:08:06.499077 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 20:08:06 crc kubenswrapper[4727]: E1121 20:08:06.499221 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rs9rv" podUID="f8318f96-4402-4567-a432-6cf3897e218d" Nov 21 20:08:06 crc kubenswrapper[4727]: I1121 20:08:06.499270 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 20:08:06 crc kubenswrapper[4727]: E1121 20:08:06.499346 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 20:08:06 crc kubenswrapper[4727]: E1121 20:08:06.499651 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 20:08:06 crc kubenswrapper[4727]: I1121 20:08:06.587441 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:06 crc kubenswrapper[4727]: I1121 20:08:06.587504 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:06 crc kubenswrapper[4727]: I1121 20:08:06.587517 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:06 crc kubenswrapper[4727]: I1121 20:08:06.587533 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:06 crc kubenswrapper[4727]: I1121 20:08:06.587544 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:06Z","lastTransitionTime":"2025-11-21T20:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:06 crc kubenswrapper[4727]: I1121 20:08:06.689571 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:06 crc kubenswrapper[4727]: I1121 20:08:06.689599 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:06 crc kubenswrapper[4727]: I1121 20:08:06.689607 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:06 crc kubenswrapper[4727]: I1121 20:08:06.689619 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:06 crc kubenswrapper[4727]: I1121 20:08:06.689628 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:06Z","lastTransitionTime":"2025-11-21T20:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:06 crc kubenswrapper[4727]: I1121 20:08:06.791438 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:06 crc kubenswrapper[4727]: I1121 20:08:06.791483 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:06 crc kubenswrapper[4727]: I1121 20:08:06.791495 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:06 crc kubenswrapper[4727]: I1121 20:08:06.791511 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:06 crc kubenswrapper[4727]: I1121 20:08:06.791521 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:06Z","lastTransitionTime":"2025-11-21T20:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:06 crc kubenswrapper[4727]: I1121 20:08:06.894717 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:06 crc kubenswrapper[4727]: I1121 20:08:06.895021 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:06 crc kubenswrapper[4727]: I1121 20:08:06.895155 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:06 crc kubenswrapper[4727]: I1121 20:08:06.895274 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:06 crc kubenswrapper[4727]: I1121 20:08:06.895555 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:06Z","lastTransitionTime":"2025-11-21T20:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:06 crc kubenswrapper[4727]: I1121 20:08:06.998383 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:06 crc kubenswrapper[4727]: I1121 20:08:06.998461 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:06 crc kubenswrapper[4727]: I1121 20:08:06.998501 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:06 crc kubenswrapper[4727]: I1121 20:08:06.998524 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:06 crc kubenswrapper[4727]: I1121 20:08:06.998536 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:06Z","lastTransitionTime":"2025-11-21T20:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:07 crc kubenswrapper[4727]: I1121 20:08:07.101434 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:07 crc kubenswrapper[4727]: I1121 20:08:07.101472 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:07 crc kubenswrapper[4727]: I1121 20:08:07.101483 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:07 crc kubenswrapper[4727]: I1121 20:08:07.101497 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:07 crc kubenswrapper[4727]: I1121 20:08:07.101507 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:07Z","lastTransitionTime":"2025-11-21T20:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:07 crc kubenswrapper[4727]: I1121 20:08:07.204004 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:07 crc kubenswrapper[4727]: I1121 20:08:07.204042 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:07 crc kubenswrapper[4727]: I1121 20:08:07.204051 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:07 crc kubenswrapper[4727]: I1121 20:08:07.204064 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:07 crc kubenswrapper[4727]: I1121 20:08:07.204073 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:07Z","lastTransitionTime":"2025-11-21T20:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:07 crc kubenswrapper[4727]: I1121 20:08:07.306727 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:07 crc kubenswrapper[4727]: I1121 20:08:07.307055 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:07 crc kubenswrapper[4727]: I1121 20:08:07.307223 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:07 crc kubenswrapper[4727]: I1121 20:08:07.307402 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:07 crc kubenswrapper[4727]: I1121 20:08:07.307532 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:07Z","lastTransitionTime":"2025-11-21T20:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:07 crc kubenswrapper[4727]: I1121 20:08:07.410969 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:07 crc kubenswrapper[4727]: I1121 20:08:07.411240 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:07 crc kubenswrapper[4727]: I1121 20:08:07.411311 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:07 crc kubenswrapper[4727]: I1121 20:08:07.411373 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:07 crc kubenswrapper[4727]: I1121 20:08:07.411445 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:07Z","lastTransitionTime":"2025-11-21T20:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:08:07 crc kubenswrapper[4727]: I1121 20:08:07.498550 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 20:08:07 crc kubenswrapper[4727]: E1121 20:08:07.498811 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 20:08:07 crc kubenswrapper[4727]: I1121 20:08:07.513135 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:07 crc kubenswrapper[4727]: I1121 20:08:07.513169 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:07 crc kubenswrapper[4727]: I1121 20:08:07.513177 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:07 crc kubenswrapper[4727]: I1121 20:08:07.513189 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:07 crc kubenswrapper[4727]: I1121 20:08:07.513202 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:07Z","lastTransitionTime":"2025-11-21T20:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:07 crc kubenswrapper[4727]: I1121 20:08:07.615702 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:07 crc kubenswrapper[4727]: I1121 20:08:07.615945 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:07 crc kubenswrapper[4727]: I1121 20:08:07.616053 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:07 crc kubenswrapper[4727]: I1121 20:08:07.616148 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:07 crc kubenswrapper[4727]: I1121 20:08:07.616271 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:07Z","lastTransitionTime":"2025-11-21T20:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:07 crc kubenswrapper[4727]: I1121 20:08:07.719721 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:07 crc kubenswrapper[4727]: I1121 20:08:07.720056 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:07 crc kubenswrapper[4727]: I1121 20:08:07.720161 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:07 crc kubenswrapper[4727]: I1121 20:08:07.720253 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:07 crc kubenswrapper[4727]: I1121 20:08:07.720399 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:07Z","lastTransitionTime":"2025-11-21T20:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:07 crc kubenswrapper[4727]: I1121 20:08:07.824250 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:07 crc kubenswrapper[4727]: I1121 20:08:07.824326 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:07 crc kubenswrapper[4727]: I1121 20:08:07.824340 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:07 crc kubenswrapper[4727]: I1121 20:08:07.824358 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:07 crc kubenswrapper[4727]: I1121 20:08:07.824395 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:07Z","lastTransitionTime":"2025-11-21T20:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:07 crc kubenswrapper[4727]: I1121 20:08:07.928398 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:07 crc kubenswrapper[4727]: I1121 20:08:07.928477 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:07 crc kubenswrapper[4727]: I1121 20:08:07.928504 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:07 crc kubenswrapper[4727]: I1121 20:08:07.928543 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:07 crc kubenswrapper[4727]: I1121 20:08:07.928584 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:07Z","lastTransitionTime":"2025-11-21T20:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:08 crc kubenswrapper[4727]: I1121 20:08:08.032419 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:08 crc kubenswrapper[4727]: I1121 20:08:08.032478 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:08 crc kubenswrapper[4727]: I1121 20:08:08.032499 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:08 crc kubenswrapper[4727]: I1121 20:08:08.032529 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:08 crc kubenswrapper[4727]: I1121 20:08:08.032547 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:08Z","lastTransitionTime":"2025-11-21T20:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:08 crc kubenswrapper[4727]: I1121 20:08:08.136117 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:08 crc kubenswrapper[4727]: I1121 20:08:08.136190 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:08 crc kubenswrapper[4727]: I1121 20:08:08.136212 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:08 crc kubenswrapper[4727]: I1121 20:08:08.136246 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:08 crc kubenswrapper[4727]: I1121 20:08:08.136268 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:08Z","lastTransitionTime":"2025-11-21T20:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:08 crc kubenswrapper[4727]: I1121 20:08:08.239600 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:08 crc kubenswrapper[4727]: I1121 20:08:08.239666 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:08 crc kubenswrapper[4727]: I1121 20:08:08.239684 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:08 crc kubenswrapper[4727]: I1121 20:08:08.239714 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:08 crc kubenswrapper[4727]: I1121 20:08:08.239733 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:08Z","lastTransitionTime":"2025-11-21T20:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:08 crc kubenswrapper[4727]: I1121 20:08:08.342346 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:08 crc kubenswrapper[4727]: I1121 20:08:08.342444 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:08 crc kubenswrapper[4727]: I1121 20:08:08.342475 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:08 crc kubenswrapper[4727]: I1121 20:08:08.342512 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:08 crc kubenswrapper[4727]: I1121 20:08:08.342545 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:08Z","lastTransitionTime":"2025-11-21T20:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:08 crc kubenswrapper[4727]: I1121 20:08:08.446291 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:08 crc kubenswrapper[4727]: I1121 20:08:08.446359 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:08 crc kubenswrapper[4727]: I1121 20:08:08.446380 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:08 crc kubenswrapper[4727]: I1121 20:08:08.446411 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:08 crc kubenswrapper[4727]: I1121 20:08:08.446435 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:08Z","lastTransitionTime":"2025-11-21T20:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:08:08 crc kubenswrapper[4727]: I1121 20:08:08.498189 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 20:08:08 crc kubenswrapper[4727]: I1121 20:08:08.498223 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rs9rv" Nov 21 20:08:08 crc kubenswrapper[4727]: I1121 20:08:08.498250 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 20:08:08 crc kubenswrapper[4727]: E1121 20:08:08.498352 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 20:08:08 crc kubenswrapper[4727]: E1121 20:08:08.498510 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rs9rv" podUID="f8318f96-4402-4567-a432-6cf3897e218d" Nov 21 20:08:08 crc kubenswrapper[4727]: E1121 20:08:08.498581 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 20:08:08 crc kubenswrapper[4727]: I1121 20:08:08.499823 4727 scope.go:117] "RemoveContainer" containerID="1aea392a365aa66ac35b5cd36cd1aad398890771de41b8dc248e2f98a171f4ca" Nov 21 20:08:08 crc kubenswrapper[4727]: E1121 20:08:08.499980 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-tfd4j_openshift-ovn-kubernetes(70d2ca13-a8f7-43dc-8ad0-142d99ccde18)\"" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" podUID="70d2ca13-a8f7-43dc-8ad0-142d99ccde18" Nov 21 20:08:08 crc kubenswrapper[4727]: I1121 20:08:08.549792 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:08 crc kubenswrapper[4727]: I1121 20:08:08.549836 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:08 crc kubenswrapper[4727]: I1121 20:08:08.549849 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:08 crc kubenswrapper[4727]: I1121 20:08:08.549866 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:08 crc kubenswrapper[4727]: I1121 20:08:08.549877 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:08Z","lastTransitionTime":"2025-11-21T20:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:08 crc kubenswrapper[4727]: I1121 20:08:08.653376 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:08 crc kubenswrapper[4727]: I1121 20:08:08.653437 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:08 crc kubenswrapper[4727]: I1121 20:08:08.653456 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:08 crc kubenswrapper[4727]: I1121 20:08:08.653482 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:08 crc kubenswrapper[4727]: I1121 20:08:08.653500 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:08Z","lastTransitionTime":"2025-11-21T20:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:08 crc kubenswrapper[4727]: I1121 20:08:08.757169 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:08 crc kubenswrapper[4727]: I1121 20:08:08.757670 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:08 crc kubenswrapper[4727]: I1121 20:08:08.757809 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:08 crc kubenswrapper[4727]: I1121 20:08:08.758008 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:08 crc kubenswrapper[4727]: I1121 20:08:08.758172 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:08Z","lastTransitionTime":"2025-11-21T20:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:08 crc kubenswrapper[4727]: I1121 20:08:08.861843 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:08 crc kubenswrapper[4727]: I1121 20:08:08.862105 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:08 crc kubenswrapper[4727]: I1121 20:08:08.862202 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:08 crc kubenswrapper[4727]: I1121 20:08:08.862289 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:08 crc kubenswrapper[4727]: I1121 20:08:08.862412 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:08Z","lastTransitionTime":"2025-11-21T20:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:08 crc kubenswrapper[4727]: I1121 20:08:08.965897 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:08 crc kubenswrapper[4727]: I1121 20:08:08.966022 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:08 crc kubenswrapper[4727]: I1121 20:08:08.966044 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:08 crc kubenswrapper[4727]: I1121 20:08:08.966076 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:08 crc kubenswrapper[4727]: I1121 20:08:08.966098 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:08Z","lastTransitionTime":"2025-11-21T20:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:09 crc kubenswrapper[4727]: I1121 20:08:09.070192 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:09 crc kubenswrapper[4727]: I1121 20:08:09.070536 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:09 crc kubenswrapper[4727]: I1121 20:08:09.070614 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:09 crc kubenswrapper[4727]: I1121 20:08:09.070680 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:09 crc kubenswrapper[4727]: I1121 20:08:09.070738 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:09Z","lastTransitionTime":"2025-11-21T20:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:09 crc kubenswrapper[4727]: I1121 20:08:09.173634 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:09 crc kubenswrapper[4727]: I1121 20:08:09.173688 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:09 crc kubenswrapper[4727]: I1121 20:08:09.173703 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:09 crc kubenswrapper[4727]: I1121 20:08:09.173726 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:09 crc kubenswrapper[4727]: I1121 20:08:09.173745 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:09Z","lastTransitionTime":"2025-11-21T20:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:09 crc kubenswrapper[4727]: I1121 20:08:09.277323 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:09 crc kubenswrapper[4727]: I1121 20:08:09.277389 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:09 crc kubenswrapper[4727]: I1121 20:08:09.277409 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:09 crc kubenswrapper[4727]: I1121 20:08:09.277441 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:09 crc kubenswrapper[4727]: I1121 20:08:09.277466 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:09Z","lastTransitionTime":"2025-11-21T20:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:09 crc kubenswrapper[4727]: I1121 20:08:09.381876 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:09 crc kubenswrapper[4727]: I1121 20:08:09.382012 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:09 crc kubenswrapper[4727]: I1121 20:08:09.382033 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:09 crc kubenswrapper[4727]: I1121 20:08:09.382075 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:09 crc kubenswrapper[4727]: I1121 20:08:09.382106 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:09Z","lastTransitionTime":"2025-11-21T20:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:09 crc kubenswrapper[4727]: I1121 20:08:09.485452 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:09 crc kubenswrapper[4727]: I1121 20:08:09.485542 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:09 crc kubenswrapper[4727]: I1121 20:08:09.485561 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:09 crc kubenswrapper[4727]: I1121 20:08:09.485594 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:09 crc kubenswrapper[4727]: I1121 20:08:09.485615 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:09Z","lastTransitionTime":"2025-11-21T20:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:08:09 crc kubenswrapper[4727]: I1121 20:08:09.498295 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 20:08:09 crc kubenswrapper[4727]: E1121 20:08:09.498589 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 20:08:09 crc kubenswrapper[4727]: I1121 20:08:09.588711 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:09 crc kubenswrapper[4727]: I1121 20:08:09.589554 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:09 crc kubenswrapper[4727]: I1121 20:08:09.589669 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:09 crc kubenswrapper[4727]: I1121 20:08:09.589840 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:09 crc kubenswrapper[4727]: I1121 20:08:09.589993 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:09Z","lastTransitionTime":"2025-11-21T20:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:09 crc kubenswrapper[4727]: I1121 20:08:09.694315 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:09 crc kubenswrapper[4727]: I1121 20:08:09.694413 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:09 crc kubenswrapper[4727]: I1121 20:08:09.694435 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:09 crc kubenswrapper[4727]: I1121 20:08:09.694465 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:09 crc kubenswrapper[4727]: I1121 20:08:09.694484 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:09Z","lastTransitionTime":"2025-11-21T20:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:09 crc kubenswrapper[4727]: I1121 20:08:09.798772 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:09 crc kubenswrapper[4727]: I1121 20:08:09.799228 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:09 crc kubenswrapper[4727]: I1121 20:08:09.799419 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:09 crc kubenswrapper[4727]: I1121 20:08:09.799552 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:09 crc kubenswrapper[4727]: I1121 20:08:09.799872 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:09Z","lastTransitionTime":"2025-11-21T20:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:09 crc kubenswrapper[4727]: I1121 20:08:09.903550 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:09 crc kubenswrapper[4727]: I1121 20:08:09.903606 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:09 crc kubenswrapper[4727]: I1121 20:08:09.903626 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:09 crc kubenswrapper[4727]: I1121 20:08:09.903654 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:09 crc kubenswrapper[4727]: I1121 20:08:09.903672 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:09Z","lastTransitionTime":"2025-11-21T20:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:10 crc kubenswrapper[4727]: I1121 20:08:10.007262 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:10 crc kubenswrapper[4727]: I1121 20:08:10.007773 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:10 crc kubenswrapper[4727]: I1121 20:08:10.007927 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:10 crc kubenswrapper[4727]: I1121 20:08:10.008108 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:10 crc kubenswrapper[4727]: I1121 20:08:10.008234 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:10Z","lastTransitionTime":"2025-11-21T20:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:10 crc kubenswrapper[4727]: I1121 20:08:10.110876 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:10 crc kubenswrapper[4727]: I1121 20:08:10.110919 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:10 crc kubenswrapper[4727]: I1121 20:08:10.110935 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:10 crc kubenswrapper[4727]: I1121 20:08:10.110983 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:10 crc kubenswrapper[4727]: I1121 20:08:10.110997 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:10Z","lastTransitionTime":"2025-11-21T20:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:10 crc kubenswrapper[4727]: I1121 20:08:10.213968 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:10 crc kubenswrapper[4727]: I1121 20:08:10.214027 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:10 crc kubenswrapper[4727]: I1121 20:08:10.214038 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:10 crc kubenswrapper[4727]: I1121 20:08:10.214054 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:10 crc kubenswrapper[4727]: I1121 20:08:10.214062 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:10Z","lastTransitionTime":"2025-11-21T20:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:10 crc kubenswrapper[4727]: I1121 20:08:10.317657 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:10 crc kubenswrapper[4727]: I1121 20:08:10.317732 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:10 crc kubenswrapper[4727]: I1121 20:08:10.317745 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:10 crc kubenswrapper[4727]: I1121 20:08:10.317767 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:10 crc kubenswrapper[4727]: I1121 20:08:10.317781 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:10Z","lastTransitionTime":"2025-11-21T20:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:10 crc kubenswrapper[4727]: I1121 20:08:10.420710 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:10 crc kubenswrapper[4727]: I1121 20:08:10.420752 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:10 crc kubenswrapper[4727]: I1121 20:08:10.420761 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:10 crc kubenswrapper[4727]: I1121 20:08:10.420775 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:10 crc kubenswrapper[4727]: I1121 20:08:10.420785 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:10Z","lastTransitionTime":"2025-11-21T20:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:08:10 crc kubenswrapper[4727]: I1121 20:08:10.499018 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rs9rv" Nov 21 20:08:10 crc kubenswrapper[4727]: I1121 20:08:10.499060 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 20:08:10 crc kubenswrapper[4727]: I1121 20:08:10.499646 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 20:08:10 crc kubenswrapper[4727]: E1121 20:08:10.499810 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rs9rv" podUID="f8318f96-4402-4567-a432-6cf3897e218d" Nov 21 20:08:10 crc kubenswrapper[4727]: E1121 20:08:10.500083 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 20:08:10 crc kubenswrapper[4727]: E1121 20:08:10.500302 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 20:08:10 crc kubenswrapper[4727]: I1121 20:08:10.523132 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:10 crc kubenswrapper[4727]: I1121 20:08:10.523184 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:10 crc kubenswrapper[4727]: I1121 20:08:10.523192 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:10 crc kubenswrapper[4727]: I1121 20:08:10.523207 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:10 crc kubenswrapper[4727]: I1121 20:08:10.523216 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:10Z","lastTransitionTime":"2025-11-21T20:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:10 crc kubenswrapper[4727]: I1121 20:08:10.627042 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:10 crc kubenswrapper[4727]: I1121 20:08:10.627123 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:10 crc kubenswrapper[4727]: I1121 20:08:10.627146 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:10 crc kubenswrapper[4727]: I1121 20:08:10.627180 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:10 crc kubenswrapper[4727]: I1121 20:08:10.627206 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:10Z","lastTransitionTime":"2025-11-21T20:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:10 crc kubenswrapper[4727]: I1121 20:08:10.730722 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:10 crc kubenswrapper[4727]: I1121 20:08:10.730797 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:10 crc kubenswrapper[4727]: I1121 20:08:10.730815 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:10 crc kubenswrapper[4727]: I1121 20:08:10.730849 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:10 crc kubenswrapper[4727]: I1121 20:08:10.730870 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:10Z","lastTransitionTime":"2025-11-21T20:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:10 crc kubenswrapper[4727]: I1121 20:08:10.834940 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:10 crc kubenswrapper[4727]: I1121 20:08:10.835388 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:10 crc kubenswrapper[4727]: I1121 20:08:10.835487 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:10 crc kubenswrapper[4727]: I1121 20:08:10.835616 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:10 crc kubenswrapper[4727]: I1121 20:08:10.835720 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:10Z","lastTransitionTime":"2025-11-21T20:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:10 crc kubenswrapper[4727]: I1121 20:08:10.939000 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:10 crc kubenswrapper[4727]: I1121 20:08:10.939513 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:10 crc kubenswrapper[4727]: I1121 20:08:10.939823 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:10 crc kubenswrapper[4727]: I1121 20:08:10.939930 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:10 crc kubenswrapper[4727]: I1121 20:08:10.940091 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:10Z","lastTransitionTime":"2025-11-21T20:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:11 crc kubenswrapper[4727]: I1121 20:08:11.044314 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:11 crc kubenswrapper[4727]: I1121 20:08:11.044806 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:11 crc kubenswrapper[4727]: I1121 20:08:11.044951 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:11 crc kubenswrapper[4727]: I1121 20:08:11.045145 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:11 crc kubenswrapper[4727]: I1121 20:08:11.045283 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:11Z","lastTransitionTime":"2025-11-21T20:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:11 crc kubenswrapper[4727]: I1121 20:08:11.148289 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:11 crc kubenswrapper[4727]: I1121 20:08:11.148366 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:11 crc kubenswrapper[4727]: I1121 20:08:11.148391 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:11 crc kubenswrapper[4727]: I1121 20:08:11.148423 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:11 crc kubenswrapper[4727]: I1121 20:08:11.148453 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:11Z","lastTransitionTime":"2025-11-21T20:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:11 crc kubenswrapper[4727]: I1121 20:08:11.252173 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:11 crc kubenswrapper[4727]: I1121 20:08:11.252235 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:11 crc kubenswrapper[4727]: I1121 20:08:11.252257 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:11 crc kubenswrapper[4727]: I1121 20:08:11.252295 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:11 crc kubenswrapper[4727]: I1121 20:08:11.252321 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:11Z","lastTransitionTime":"2025-11-21T20:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:11 crc kubenswrapper[4727]: I1121 20:08:11.355718 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:11 crc kubenswrapper[4727]: I1121 20:08:11.356201 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:11 crc kubenswrapper[4727]: I1121 20:08:11.356334 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:11 crc kubenswrapper[4727]: I1121 20:08:11.356447 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:11 crc kubenswrapper[4727]: I1121 20:08:11.356762 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:11Z","lastTransitionTime":"2025-11-21T20:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:11 crc kubenswrapper[4727]: I1121 20:08:11.461100 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:11 crc kubenswrapper[4727]: I1121 20:08:11.461624 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:11 crc kubenswrapper[4727]: I1121 20:08:11.461733 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:11 crc kubenswrapper[4727]: I1121 20:08:11.461852 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:11 crc kubenswrapper[4727]: I1121 20:08:11.462003 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:11Z","lastTransitionTime":"2025-11-21T20:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:08:11 crc kubenswrapper[4727]: I1121 20:08:11.499394 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 20:08:11 crc kubenswrapper[4727]: E1121 20:08:11.500604 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 20:08:11 crc kubenswrapper[4727]: I1121 20:08:11.565515 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:11 crc kubenswrapper[4727]: I1121 20:08:11.566017 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:11 crc kubenswrapper[4727]: I1121 20:08:11.566111 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:11 crc kubenswrapper[4727]: I1121 20:08:11.566191 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:11 crc kubenswrapper[4727]: I1121 20:08:11.566262 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:11Z","lastTransitionTime":"2025-11-21T20:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:11 crc kubenswrapper[4727]: I1121 20:08:11.669493 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:11 crc kubenswrapper[4727]: I1121 20:08:11.669539 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:11 crc kubenswrapper[4727]: I1121 20:08:11.669552 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:11 crc kubenswrapper[4727]: I1121 20:08:11.669576 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:11 crc kubenswrapper[4727]: I1121 20:08:11.669590 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:11Z","lastTransitionTime":"2025-11-21T20:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:11 crc kubenswrapper[4727]: I1121 20:08:11.773396 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:11 crc kubenswrapper[4727]: I1121 20:08:11.773453 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:11 crc kubenswrapper[4727]: I1121 20:08:11.773469 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:11 crc kubenswrapper[4727]: I1121 20:08:11.773495 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:11 crc kubenswrapper[4727]: I1121 20:08:11.773511 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:11Z","lastTransitionTime":"2025-11-21T20:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:11 crc kubenswrapper[4727]: I1121 20:08:11.876447 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:11 crc kubenswrapper[4727]: I1121 20:08:11.876498 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:11 crc kubenswrapper[4727]: I1121 20:08:11.876508 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:11 crc kubenswrapper[4727]: I1121 20:08:11.876524 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:11 crc kubenswrapper[4727]: I1121 20:08:11.876534 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:11Z","lastTransitionTime":"2025-11-21T20:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:11 crc kubenswrapper[4727]: I1121 20:08:11.979520 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:11 crc kubenswrapper[4727]: I1121 20:08:11.980012 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:11 crc kubenswrapper[4727]: I1121 20:08:11.980232 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:11 crc kubenswrapper[4727]: I1121 20:08:11.980383 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:11 crc kubenswrapper[4727]: I1121 20:08:11.980699 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:11Z","lastTransitionTime":"2025-11-21T20:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:12 crc kubenswrapper[4727]: I1121 20:08:12.084565 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:12 crc kubenswrapper[4727]: I1121 20:08:12.085221 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:12 crc kubenswrapper[4727]: I1121 20:08:12.085262 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:12 crc kubenswrapper[4727]: I1121 20:08:12.085295 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:12 crc kubenswrapper[4727]: I1121 20:08:12.085323 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:12Z","lastTransitionTime":"2025-11-21T20:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:12 crc kubenswrapper[4727]: I1121 20:08:12.188181 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:12 crc kubenswrapper[4727]: I1121 20:08:12.188269 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:12 crc kubenswrapper[4727]: I1121 20:08:12.188290 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:12 crc kubenswrapper[4727]: I1121 20:08:12.188320 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:12 crc kubenswrapper[4727]: I1121 20:08:12.188341 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:12Z","lastTransitionTime":"2025-11-21T20:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:12 crc kubenswrapper[4727]: I1121 20:08:12.291623 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:12 crc kubenswrapper[4727]: I1121 20:08:12.291741 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:12 crc kubenswrapper[4727]: I1121 20:08:12.291763 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:12 crc kubenswrapper[4727]: I1121 20:08:12.291797 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:12 crc kubenswrapper[4727]: I1121 20:08:12.291821 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:12Z","lastTransitionTime":"2025-11-21T20:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:12 crc kubenswrapper[4727]: I1121 20:08:12.394984 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:12 crc kubenswrapper[4727]: I1121 20:08:12.395048 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:12 crc kubenswrapper[4727]: I1121 20:08:12.395063 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:12 crc kubenswrapper[4727]: I1121 20:08:12.395083 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:12 crc kubenswrapper[4727]: I1121 20:08:12.395098 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:12Z","lastTransitionTime":"2025-11-21T20:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:08:12 crc kubenswrapper[4727]: I1121 20:08:12.498113 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 20:08:12 crc kubenswrapper[4727]: I1121 20:08:12.498150 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 20:08:12 crc kubenswrapper[4727]: E1121 20:08:12.498645 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 20:08:12 crc kubenswrapper[4727]: I1121 20:08:12.498306 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:12 crc kubenswrapper[4727]: I1121 20:08:12.498894 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:12 crc kubenswrapper[4727]: I1121 20:08:12.498988 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:12 crc kubenswrapper[4727]: I1121 20:08:12.498184 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rs9rv" Nov 21 20:08:12 crc kubenswrapper[4727]: E1121 20:08:12.498743 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 20:08:12 crc kubenswrapper[4727]: I1121 20:08:12.499063 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:12 crc kubenswrapper[4727]: E1121 20:08:12.499235 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rs9rv" podUID="f8318f96-4402-4567-a432-6cf3897e218d" Nov 21 20:08:12 crc kubenswrapper[4727]: I1121 20:08:12.499231 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:12Z","lastTransitionTime":"2025-11-21T20:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:08:12 crc kubenswrapper[4727]: I1121 20:08:12.520445 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f8318f96-4402-4567-a432-6cf3897e218d-metrics-certs\") pod \"network-metrics-daemon-rs9rv\" (UID: \"f8318f96-4402-4567-a432-6cf3897e218d\") " pod="openshift-multus/network-metrics-daemon-rs9rv" Nov 21 20:08:12 crc kubenswrapper[4727]: E1121 20:08:12.520575 4727 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 21 20:08:12 crc kubenswrapper[4727]: E1121 20:08:12.520631 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f8318f96-4402-4567-a432-6cf3897e218d-metrics-certs podName:f8318f96-4402-4567-a432-6cf3897e218d nodeName:}" failed. No retries permitted until 2025-11-21 20:09:16.520611478 +0000 UTC m=+161.706796532 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f8318f96-4402-4567-a432-6cf3897e218d-metrics-certs") pod "network-metrics-daemon-rs9rv" (UID: "f8318f96-4402-4567-a432-6cf3897e218d") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 21 20:08:12 crc kubenswrapper[4727]: I1121 20:08:12.601357 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:12 crc kubenswrapper[4727]: I1121 20:08:12.601654 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:12 crc kubenswrapper[4727]: I1121 20:08:12.601769 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:12 crc kubenswrapper[4727]: I1121 20:08:12.601897 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:12 crc kubenswrapper[4727]: I1121 20:08:12.602054 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:12Z","lastTransitionTime":"2025-11-21T20:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:12 crc kubenswrapper[4727]: I1121 20:08:12.705179 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:12 crc kubenswrapper[4727]: I1121 20:08:12.705228 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:12 crc kubenswrapper[4727]: I1121 20:08:12.705240 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:12 crc kubenswrapper[4727]: I1121 20:08:12.705263 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:12 crc kubenswrapper[4727]: I1121 20:08:12.705276 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:12Z","lastTransitionTime":"2025-11-21T20:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:12 crc kubenswrapper[4727]: I1121 20:08:12.808270 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:12 crc kubenswrapper[4727]: I1121 20:08:12.808382 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:12 crc kubenswrapper[4727]: I1121 20:08:12.808396 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:12 crc kubenswrapper[4727]: I1121 20:08:12.808415 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:12 crc kubenswrapper[4727]: I1121 20:08:12.808429 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:12Z","lastTransitionTime":"2025-11-21T20:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:12 crc kubenswrapper[4727]: I1121 20:08:12.911397 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:12 crc kubenswrapper[4727]: I1121 20:08:12.911437 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:12 crc kubenswrapper[4727]: I1121 20:08:12.911448 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:12 crc kubenswrapper[4727]: I1121 20:08:12.911465 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:12 crc kubenswrapper[4727]: I1121 20:08:12.911478 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:12Z","lastTransitionTime":"2025-11-21T20:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:12 crc kubenswrapper[4727]: I1121 20:08:12.925045 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:12 crc kubenswrapper[4727]: I1121 20:08:12.925253 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:12 crc kubenswrapper[4727]: I1121 20:08:12.925338 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:12 crc kubenswrapper[4727]: I1121 20:08:12.925445 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:12 crc kubenswrapper[4727]: I1121 20:08:12.925515 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:12Z","lastTransitionTime":"2025-11-21T20:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:12 crc kubenswrapper[4727]: E1121 20:08:12.940013 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:08:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:08:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:08:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:08:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:08:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:08:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:08:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:08:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"32d79a59-978c-42a8-bb0b-33b3c3206f66\\\",\\\"systemUUID\\\":\\\"b99fd01f-0947-456f-ae40-db84d60b2190\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:08:12Z is after 2025-08-24T17:21:41Z" Nov 21 20:08:12 crc kubenswrapper[4727]: I1121 20:08:12.944824 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:12 crc kubenswrapper[4727]: I1121 20:08:12.944937 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:12 crc kubenswrapper[4727]: I1121 20:08:12.945035 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:12 crc kubenswrapper[4727]: I1121 20:08:12.945128 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:12 crc kubenswrapper[4727]: I1121 20:08:12.945220 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:12Z","lastTransitionTime":"2025-11-21T20:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:12 crc kubenswrapper[4727]: E1121 20:08:12.959013 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:08:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:08:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:08:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:08:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:08:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:08:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:08:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:08:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"32d79a59-978c-42a8-bb0b-33b3c3206f66\\\",\\\"systemUUID\\\":\\\"b99fd01f-0947-456f-ae40-db84d60b2190\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:08:12Z is after 2025-08-24T17:21:41Z" Nov 21 20:08:12 crc kubenswrapper[4727]: I1121 20:08:12.963777 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:12 crc kubenswrapper[4727]: I1121 20:08:12.963807 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:12 crc kubenswrapper[4727]: I1121 20:08:12.963822 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:12 crc kubenswrapper[4727]: I1121 20:08:12.963841 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:12 crc kubenswrapper[4727]: I1121 20:08:12.963854 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:12Z","lastTransitionTime":"2025-11-21T20:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:12 crc kubenswrapper[4727]: E1121 20:08:12.976614 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:08:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:08:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:08:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:08:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:08:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:08:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:08:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:08:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"32d79a59-978c-42a8-bb0b-33b3c3206f66\\\",\\\"systemUUID\\\":\\\"b99fd01f-0947-456f-ae40-db84d60b2190\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:08:12Z is after 2025-08-24T17:21:41Z" Nov 21 20:08:12 crc kubenswrapper[4727]: I1121 20:08:12.981100 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:12 crc kubenswrapper[4727]: I1121 20:08:12.981277 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:12 crc kubenswrapper[4727]: I1121 20:08:12.981533 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:12 crc kubenswrapper[4727]: I1121 20:08:12.981747 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:12 crc kubenswrapper[4727]: I1121 20:08:12.981989 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:12Z","lastTransitionTime":"2025-11-21T20:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:12 crc kubenswrapper[4727]: E1121 20:08:12.998892 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:08:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:08:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:08:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:08:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:08:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:08:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:08:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:08:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"32d79a59-978c-42a8-bb0b-33b3c3206f66\\\",\\\"systemUUID\\\":\\\"b99fd01f-0947-456f-ae40-db84d60b2190\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:08:12Z is after 2025-08-24T17:21:41Z" Nov 21 20:08:13 crc kubenswrapper[4727]: I1121 20:08:13.003418 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:13 crc kubenswrapper[4727]: I1121 20:08:13.003617 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:13 crc kubenswrapper[4727]: I1121 20:08:13.003704 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:13 crc kubenswrapper[4727]: I1121 20:08:13.003795 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:13 crc kubenswrapper[4727]: I1121 20:08:13.003880 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:13Z","lastTransitionTime":"2025-11-21T20:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:13 crc kubenswrapper[4727]: E1121 20:08:13.017487 4727 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:08:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:08:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:08:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:08:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:08:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:08:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T20:08:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T20:08:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"32d79a59-978c-42a8-bb0b-33b3c3206f66\\\",\\\"systemUUID\\\":\\\"b99fd01f-0947-456f-ae40-db84d60b2190\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T20:08:13Z is after 2025-08-24T17:21:41Z" Nov 21 20:08:13 crc kubenswrapper[4727]: E1121 20:08:13.017613 4727 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 21 20:08:13 crc kubenswrapper[4727]: I1121 20:08:13.019601 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:13 crc kubenswrapper[4727]: I1121 20:08:13.019641 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:13 crc kubenswrapper[4727]: I1121 20:08:13.019651 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:13 crc kubenswrapper[4727]: I1121 20:08:13.019668 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:13 crc kubenswrapper[4727]: I1121 20:08:13.019678 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:13Z","lastTransitionTime":"2025-11-21T20:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:13 crc kubenswrapper[4727]: I1121 20:08:13.123077 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:13 crc kubenswrapper[4727]: I1121 20:08:13.123155 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:13 crc kubenswrapper[4727]: I1121 20:08:13.123173 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:13 crc kubenswrapper[4727]: I1121 20:08:13.123204 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:13 crc kubenswrapper[4727]: I1121 20:08:13.123230 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:13Z","lastTransitionTime":"2025-11-21T20:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:13 crc kubenswrapper[4727]: I1121 20:08:13.226229 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:13 crc kubenswrapper[4727]: I1121 20:08:13.226420 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:13 crc kubenswrapper[4727]: I1121 20:08:13.226447 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:13 crc kubenswrapper[4727]: I1121 20:08:13.226528 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:13 crc kubenswrapper[4727]: I1121 20:08:13.226599 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:13Z","lastTransitionTime":"2025-11-21T20:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:13 crc kubenswrapper[4727]: I1121 20:08:13.330315 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:13 crc kubenswrapper[4727]: I1121 20:08:13.330352 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:13 crc kubenswrapper[4727]: I1121 20:08:13.330360 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:13 crc kubenswrapper[4727]: I1121 20:08:13.330375 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:13 crc kubenswrapper[4727]: I1121 20:08:13.330387 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:13Z","lastTransitionTime":"2025-11-21T20:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:13 crc kubenswrapper[4727]: I1121 20:08:13.432418 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:13 crc kubenswrapper[4727]: I1121 20:08:13.432466 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:13 crc kubenswrapper[4727]: I1121 20:08:13.432475 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:13 crc kubenswrapper[4727]: I1121 20:08:13.432489 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:13 crc kubenswrapper[4727]: I1121 20:08:13.432498 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:13Z","lastTransitionTime":"2025-11-21T20:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:08:13 crc kubenswrapper[4727]: I1121 20:08:13.498286 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 20:08:13 crc kubenswrapper[4727]: E1121 20:08:13.498656 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 20:08:13 crc kubenswrapper[4727]: I1121 20:08:13.534431 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:13 crc kubenswrapper[4727]: I1121 20:08:13.534467 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:13 crc kubenswrapper[4727]: I1121 20:08:13.534476 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:13 crc kubenswrapper[4727]: I1121 20:08:13.534491 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:13 crc kubenswrapper[4727]: I1121 20:08:13.534503 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:13Z","lastTransitionTime":"2025-11-21T20:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:13 crc kubenswrapper[4727]: I1121 20:08:13.637408 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:13 crc kubenswrapper[4727]: I1121 20:08:13.637456 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:13 crc kubenswrapper[4727]: I1121 20:08:13.637466 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:13 crc kubenswrapper[4727]: I1121 20:08:13.637480 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:13 crc kubenswrapper[4727]: I1121 20:08:13.637489 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:13Z","lastTransitionTime":"2025-11-21T20:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:13 crc kubenswrapper[4727]: I1121 20:08:13.739516 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:13 crc kubenswrapper[4727]: I1121 20:08:13.740177 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:13 crc kubenswrapper[4727]: I1121 20:08:13.740385 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:13 crc kubenswrapper[4727]: I1121 20:08:13.740554 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:13 crc kubenswrapper[4727]: I1121 20:08:13.740739 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:13Z","lastTransitionTime":"2025-11-21T20:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:13 crc kubenswrapper[4727]: I1121 20:08:13.843671 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:13 crc kubenswrapper[4727]: I1121 20:08:13.843741 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:13 crc kubenswrapper[4727]: I1121 20:08:13.843754 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:13 crc kubenswrapper[4727]: I1121 20:08:13.843776 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:13 crc kubenswrapper[4727]: I1121 20:08:13.843790 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:13Z","lastTransitionTime":"2025-11-21T20:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:13 crc kubenswrapper[4727]: I1121 20:08:13.946413 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:13 crc kubenswrapper[4727]: I1121 20:08:13.946761 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:13 crc kubenswrapper[4727]: I1121 20:08:13.946952 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:13 crc kubenswrapper[4727]: I1121 20:08:13.947169 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:13 crc kubenswrapper[4727]: I1121 20:08:13.947367 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:13Z","lastTransitionTime":"2025-11-21T20:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:14 crc kubenswrapper[4727]: I1121 20:08:14.051131 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:14 crc kubenswrapper[4727]: I1121 20:08:14.051195 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:14 crc kubenswrapper[4727]: I1121 20:08:14.051214 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:14 crc kubenswrapper[4727]: I1121 20:08:14.051242 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:14 crc kubenswrapper[4727]: I1121 20:08:14.051262 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:14Z","lastTransitionTime":"2025-11-21T20:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:14 crc kubenswrapper[4727]: I1121 20:08:14.153841 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:14 crc kubenswrapper[4727]: I1121 20:08:14.154228 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:14 crc kubenswrapper[4727]: I1121 20:08:14.154304 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:14 crc kubenswrapper[4727]: I1121 20:08:14.154503 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:14 crc kubenswrapper[4727]: I1121 20:08:14.154642 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:14Z","lastTransitionTime":"2025-11-21T20:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:14 crc kubenswrapper[4727]: I1121 20:08:14.257515 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:14 crc kubenswrapper[4727]: I1121 20:08:14.257606 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:14 crc kubenswrapper[4727]: I1121 20:08:14.257721 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:14 crc kubenswrapper[4727]: I1121 20:08:14.257791 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:14 crc kubenswrapper[4727]: I1121 20:08:14.257811 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:14Z","lastTransitionTime":"2025-11-21T20:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:14 crc kubenswrapper[4727]: I1121 20:08:14.360711 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:14 crc kubenswrapper[4727]: I1121 20:08:14.360776 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:14 crc kubenswrapper[4727]: I1121 20:08:14.360796 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:14 crc kubenswrapper[4727]: I1121 20:08:14.360823 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:14 crc kubenswrapper[4727]: I1121 20:08:14.360842 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:14Z","lastTransitionTime":"2025-11-21T20:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:14 crc kubenswrapper[4727]: I1121 20:08:14.464066 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:14 crc kubenswrapper[4727]: I1121 20:08:14.464152 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:14 crc kubenswrapper[4727]: I1121 20:08:14.464170 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:14 crc kubenswrapper[4727]: I1121 20:08:14.464201 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:14 crc kubenswrapper[4727]: I1121 20:08:14.464229 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:14Z","lastTransitionTime":"2025-11-21T20:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:08:14 crc kubenswrapper[4727]: I1121 20:08:14.498792 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 20:08:14 crc kubenswrapper[4727]: I1121 20:08:14.498821 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 20:08:14 crc kubenswrapper[4727]: I1121 20:08:14.498906 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rs9rv" Nov 21 20:08:14 crc kubenswrapper[4727]: E1121 20:08:14.500069 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 20:08:14 crc kubenswrapper[4727]: E1121 20:08:14.500270 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 20:08:14 crc kubenswrapper[4727]: E1121 20:08:14.500298 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rs9rv" podUID="f8318f96-4402-4567-a432-6cf3897e218d" Nov 21 20:08:14 crc kubenswrapper[4727]: I1121 20:08:14.567410 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:14 crc kubenswrapper[4727]: I1121 20:08:14.567857 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:14 crc kubenswrapper[4727]: I1121 20:08:14.568085 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:14 crc kubenswrapper[4727]: I1121 20:08:14.568220 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:14 crc kubenswrapper[4727]: I1121 20:08:14.568417 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:14Z","lastTransitionTime":"2025-11-21T20:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:14 crc kubenswrapper[4727]: I1121 20:08:14.672214 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:14 crc kubenswrapper[4727]: I1121 20:08:14.672281 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:14 crc kubenswrapper[4727]: I1121 20:08:14.672298 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:14 crc kubenswrapper[4727]: I1121 20:08:14.672327 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:14 crc kubenswrapper[4727]: I1121 20:08:14.672345 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:14Z","lastTransitionTime":"2025-11-21T20:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:14 crc kubenswrapper[4727]: I1121 20:08:14.775564 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:14 crc kubenswrapper[4727]: I1121 20:08:14.775695 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:14 crc kubenswrapper[4727]: I1121 20:08:14.775715 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:14 crc kubenswrapper[4727]: I1121 20:08:14.775741 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:14 crc kubenswrapper[4727]: I1121 20:08:14.775797 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:14Z","lastTransitionTime":"2025-11-21T20:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:14 crc kubenswrapper[4727]: I1121 20:08:14.879819 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:14 crc kubenswrapper[4727]: I1121 20:08:14.879890 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:14 crc kubenswrapper[4727]: I1121 20:08:14.879902 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:14 crc kubenswrapper[4727]: I1121 20:08:14.879927 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:14 crc kubenswrapper[4727]: I1121 20:08:14.879944 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:14Z","lastTransitionTime":"2025-11-21T20:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:14 crc kubenswrapper[4727]: I1121 20:08:14.983246 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:14 crc kubenswrapper[4727]: I1121 20:08:14.983791 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:14 crc kubenswrapper[4727]: I1121 20:08:14.983906 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:14 crc kubenswrapper[4727]: I1121 20:08:14.984369 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:14 crc kubenswrapper[4727]: I1121 20:08:14.984709 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:14Z","lastTransitionTime":"2025-11-21T20:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:15 crc kubenswrapper[4727]: I1121 20:08:15.087834 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:15 crc kubenswrapper[4727]: I1121 20:08:15.088391 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:15 crc kubenswrapper[4727]: I1121 20:08:15.088600 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:15 crc kubenswrapper[4727]: I1121 20:08:15.088839 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:15 crc kubenswrapper[4727]: I1121 20:08:15.089119 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:15Z","lastTransitionTime":"2025-11-21T20:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:15 crc kubenswrapper[4727]: I1121 20:08:15.193481 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:15 crc kubenswrapper[4727]: I1121 20:08:15.194047 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:15 crc kubenswrapper[4727]: I1121 20:08:15.194471 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:15 crc kubenswrapper[4727]: I1121 20:08:15.194832 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:15 crc kubenswrapper[4727]: I1121 20:08:15.195233 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:15Z","lastTransitionTime":"2025-11-21T20:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:15 crc kubenswrapper[4727]: I1121 20:08:15.298895 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:15 crc kubenswrapper[4727]: I1121 20:08:15.299451 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:15 crc kubenswrapper[4727]: I1121 20:08:15.299555 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:15 crc kubenswrapper[4727]: I1121 20:08:15.299665 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:15 crc kubenswrapper[4727]: I1121 20:08:15.299792 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:15Z","lastTransitionTime":"2025-11-21T20:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:15 crc kubenswrapper[4727]: I1121 20:08:15.405652 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:15 crc kubenswrapper[4727]: I1121 20:08:15.405717 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:15 crc kubenswrapper[4727]: I1121 20:08:15.405737 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:15 crc kubenswrapper[4727]: I1121 20:08:15.405761 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:15 crc kubenswrapper[4727]: I1121 20:08:15.405780 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:15Z","lastTransitionTime":"2025-11-21T20:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:08:15 crc kubenswrapper[4727]: I1121 20:08:15.498930 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 20:08:15 crc kubenswrapper[4727]: E1121 20:08:15.499420 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 20:08:15 crc kubenswrapper[4727]: I1121 20:08:15.509467 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:15 crc kubenswrapper[4727]: I1121 20:08:15.509536 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:15 crc kubenswrapper[4727]: I1121 20:08:15.509556 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:15 crc kubenswrapper[4727]: I1121 20:08:15.509599 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:15 crc kubenswrapper[4727]: I1121 20:08:15.509620 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:15Z","lastTransitionTime":"2025-11-21T20:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:15 crc kubenswrapper[4727]: I1121 20:08:15.557142 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-7444p" podStartSLOduration=81.557098287 podStartE2EDuration="1m21.557098287s" podCreationTimestamp="2025-11-21 20:06:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:08:15.542521354 +0000 UTC m=+100.728706438" watchObservedRunningTime="2025-11-21 20:08:15.557098287 +0000 UTC m=+100.743283371" Nov 21 20:08:15 crc kubenswrapper[4727]: I1121 20:08:15.576096 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-drnwx" podStartSLOduration=80.576069668 podStartE2EDuration="1m20.576069668s" podCreationTimestamp="2025-11-21 20:06:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:08:15.557881036 +0000 UTC m=+100.744066100" watchObservedRunningTime="2025-11-21 20:08:15.576069668 +0000 UTC m=+100.762254722" Nov 21 20:08:15 crc kubenswrapper[4727]: I1121 20:08:15.612892 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:15 crc kubenswrapper[4727]: I1121 20:08:15.612944 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:15 crc kubenswrapper[4727]: I1121 20:08:15.613014 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:15 crc kubenswrapper[4727]: I1121 20:08:15.613038 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:15 crc kubenswrapper[4727]: I1121 20:08:15.613050 4727 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:15Z","lastTransitionTime":"2025-11-21T20:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:08:15 crc kubenswrapper[4727]: I1121 20:08:15.614321 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-ccfbn" podStartSLOduration=81.614304588 podStartE2EDuration="1m21.614304588s" podCreationTimestamp="2025-11-21 20:06:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:08:15.613837446 +0000 UTC m=+100.800022500" watchObservedRunningTime="2025-11-21 20:08:15.614304588 +0000 UTC m=+100.800489632" Nov 21 20:08:15 crc kubenswrapper[4727]: I1121 20:08:15.668567 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-74crp" podStartSLOduration=81.668533895 podStartE2EDuration="1m21.668533895s" podCreationTimestamp="2025-11-21 20:06:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:08:15.637625617 +0000 UTC m=+100.823810671" watchObservedRunningTime="2025-11-21 20:08:15.668533895 +0000 UTC m=+100.854718939" Nov 21 20:08:15 crc kubenswrapper[4727]: I1121 20:08:15.697464 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podStartSLOduration=81.697429303 podStartE2EDuration="1m21.697429303s" podCreationTimestamp="2025-11-21 20:06:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-21 20:08:15.696949141 +0000 UTC m=+100.883134185" watchObservedRunningTime="2025-11-21 20:08:15.697429303 +0000 UTC m=+100.883614347" Nov 21 20:08:15 crc kubenswrapper[4727]: I1121 20:08:15.716665 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:15 crc kubenswrapper[4727]: I1121 20:08:15.716906 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:15 crc kubenswrapper[4727]: I1121 20:08:15.716919 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:15 crc kubenswrapper[4727]: I1121 20:08:15.716943 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:15 crc kubenswrapper[4727]: I1121 20:08:15.716980 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:15Z","lastTransitionTime":"2025-11-21T20:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:15 crc kubenswrapper[4727]: I1121 20:08:15.724906 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-7rvdc" podStartSLOduration=81.724885795 podStartE2EDuration="1m21.724885795s" podCreationTimestamp="2025-11-21 20:06:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:08:15.712439016 +0000 UTC m=+100.898624060" watchObservedRunningTime="2025-11-21 20:08:15.724885795 +0000 UTC m=+100.911070839" Nov 21 20:08:15 crc kubenswrapper[4727]: I1121 20:08:15.725327 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=31.725322236 podStartE2EDuration="31.725322236s" podCreationTimestamp="2025-11-21 20:07:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:08:15.725267574 +0000 UTC m=+100.911452618" watchObservedRunningTime="2025-11-21 20:08:15.725322236 +0000 UTC m=+100.911507270" Nov 21 20:08:15 crc kubenswrapper[4727]: I1121 20:08:15.749692 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=79.74967228 podStartE2EDuration="1m19.74967228s" podCreationTimestamp="2025-11-21 20:06:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:08:15.748999534 +0000 UTC m=+100.935184588" watchObservedRunningTime="2025-11-21 20:08:15.74967228 +0000 UTC m=+100.935857324" Nov 21 20:08:15 crc kubenswrapper[4727]: I1121 20:08:15.783639 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=80.783609493 
podStartE2EDuration="1m20.783609493s" podCreationTimestamp="2025-11-21 20:06:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:08:15.769361099 +0000 UTC m=+100.955546153" watchObservedRunningTime="2025-11-21 20:08:15.783609493 +0000 UTC m=+100.969794537" Nov 21 20:08:15 crc kubenswrapper[4727]: I1121 20:08:15.819791 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:15 crc kubenswrapper[4727]: I1121 20:08:15.819834 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:15 crc kubenswrapper[4727]: I1121 20:08:15.819845 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:15 crc kubenswrapper[4727]: I1121 20:08:15.819866 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:15 crc kubenswrapper[4727]: I1121 20:08:15.819879 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:15Z","lastTransitionTime":"2025-11-21T20:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:15 crc kubenswrapper[4727]: I1121 20:08:15.834449 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=79.834427886 podStartE2EDuration="1m19.834427886s" podCreationTimestamp="2025-11-21 20:06:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:08:15.833588515 +0000 UTC m=+101.019773559" watchObservedRunningTime="2025-11-21 20:08:15.834427886 +0000 UTC m=+101.020612930" Nov 21 20:08:15 crc kubenswrapper[4727]: I1121 20:08:15.851513 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=46.85149259 podStartE2EDuration="46.85149259s" podCreationTimestamp="2025-11-21 20:07:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:08:15.851347816 +0000 UTC m=+101.037532860" watchObservedRunningTime="2025-11-21 20:08:15.85149259 +0000 UTC m=+101.037677634" Nov 21 20:08:15 crc kubenswrapper[4727]: I1121 20:08:15.922036 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:15 crc kubenswrapper[4727]: I1121 20:08:15.922074 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:15 crc kubenswrapper[4727]: I1121 20:08:15.922083 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:15 crc kubenswrapper[4727]: I1121 20:08:15.922114 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:15 crc kubenswrapper[4727]: I1121 20:08:15.922124 4727 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:15Z","lastTransitionTime":"2025-11-21T20:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:08:16 crc kubenswrapper[4727]: I1121 20:08:16.025563 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:16 crc kubenswrapper[4727]: I1121 20:08:16.026003 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:16 crc kubenswrapper[4727]: I1121 20:08:16.026097 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:16 crc kubenswrapper[4727]: I1121 20:08:16.026209 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:16 crc kubenswrapper[4727]: I1121 20:08:16.026295 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:16Z","lastTransitionTime":"2025-11-21T20:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:16 crc kubenswrapper[4727]: I1121 20:08:16.129427 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:16 crc kubenswrapper[4727]: I1121 20:08:16.129945 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:16 crc kubenswrapper[4727]: I1121 20:08:16.130072 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:16 crc kubenswrapper[4727]: I1121 20:08:16.130163 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:16 crc kubenswrapper[4727]: I1121 20:08:16.130261 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:16Z","lastTransitionTime":"2025-11-21T20:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:16 crc kubenswrapper[4727]: I1121 20:08:16.238176 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:16 crc kubenswrapper[4727]: I1121 20:08:16.238789 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:16 crc kubenswrapper[4727]: I1121 20:08:16.238902 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:16 crc kubenswrapper[4727]: I1121 20:08:16.239019 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:16 crc kubenswrapper[4727]: I1121 20:08:16.239109 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:16Z","lastTransitionTime":"2025-11-21T20:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:16 crc kubenswrapper[4727]: I1121 20:08:16.342262 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:16 crc kubenswrapper[4727]: I1121 20:08:16.342551 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:16 crc kubenswrapper[4727]: I1121 20:08:16.342663 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:16 crc kubenswrapper[4727]: I1121 20:08:16.342763 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:16 crc kubenswrapper[4727]: I1121 20:08:16.342904 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:16Z","lastTransitionTime":"2025-11-21T20:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:16 crc kubenswrapper[4727]: I1121 20:08:16.445849 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:16 crc kubenswrapper[4727]: I1121 20:08:16.446301 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:16 crc kubenswrapper[4727]: I1121 20:08:16.446396 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:16 crc kubenswrapper[4727]: I1121 20:08:16.446494 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:16 crc kubenswrapper[4727]: I1121 20:08:16.446588 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:16Z","lastTransitionTime":"2025-11-21T20:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:08:16 crc kubenswrapper[4727]: I1121 20:08:16.498697 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 20:08:16 crc kubenswrapper[4727]: I1121 20:08:16.498735 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 20:08:16 crc kubenswrapper[4727]: E1121 20:08:16.498897 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 20:08:16 crc kubenswrapper[4727]: I1121 20:08:16.498735 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rs9rv" Nov 21 20:08:16 crc kubenswrapper[4727]: E1121 20:08:16.499101 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 20:08:16 crc kubenswrapper[4727]: E1121 20:08:16.499290 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rs9rv" podUID="f8318f96-4402-4567-a432-6cf3897e218d" Nov 21 20:08:16 crc kubenswrapper[4727]: I1121 20:08:16.549888 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:16 crc kubenswrapper[4727]: I1121 20:08:16.549936 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:16 crc kubenswrapper[4727]: I1121 20:08:16.549952 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:16 crc kubenswrapper[4727]: I1121 20:08:16.550021 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:16 crc kubenswrapper[4727]: I1121 20:08:16.550043 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:16Z","lastTransitionTime":"2025-11-21T20:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:16 crc kubenswrapper[4727]: I1121 20:08:16.653570 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:16 crc kubenswrapper[4727]: I1121 20:08:16.653642 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:16 crc kubenswrapper[4727]: I1121 20:08:16.653654 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:16 crc kubenswrapper[4727]: I1121 20:08:16.653688 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:16 crc kubenswrapper[4727]: I1121 20:08:16.653697 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:16Z","lastTransitionTime":"2025-11-21T20:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:16 crc kubenswrapper[4727]: I1121 20:08:16.756751 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:16 crc kubenswrapper[4727]: I1121 20:08:16.756811 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:16 crc kubenswrapper[4727]: I1121 20:08:16.756826 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:16 crc kubenswrapper[4727]: I1121 20:08:16.756851 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:16 crc kubenswrapper[4727]: I1121 20:08:16.756865 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:16Z","lastTransitionTime":"2025-11-21T20:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:16 crc kubenswrapper[4727]: I1121 20:08:16.860569 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:16 crc kubenswrapper[4727]: I1121 20:08:16.860667 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:16 crc kubenswrapper[4727]: I1121 20:08:16.860690 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:16 crc kubenswrapper[4727]: I1121 20:08:16.860719 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:16 crc kubenswrapper[4727]: I1121 20:08:16.860740 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:16Z","lastTransitionTime":"2025-11-21T20:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:16 crc kubenswrapper[4727]: I1121 20:08:16.963695 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:16 crc kubenswrapper[4727]: I1121 20:08:16.963770 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:16 crc kubenswrapper[4727]: I1121 20:08:16.963798 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:16 crc kubenswrapper[4727]: I1121 20:08:16.963831 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:16 crc kubenswrapper[4727]: I1121 20:08:16.963851 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:16Z","lastTransitionTime":"2025-11-21T20:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:17 crc kubenswrapper[4727]: I1121 20:08:17.067118 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:17 crc kubenswrapper[4727]: I1121 20:08:17.067178 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:17 crc kubenswrapper[4727]: I1121 20:08:17.067193 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:17 crc kubenswrapper[4727]: I1121 20:08:17.067217 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:17 crc kubenswrapper[4727]: I1121 20:08:17.067230 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:17Z","lastTransitionTime":"2025-11-21T20:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:17 crc kubenswrapper[4727]: I1121 20:08:17.170145 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:17 crc kubenswrapper[4727]: I1121 20:08:17.170209 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:17 crc kubenswrapper[4727]: I1121 20:08:17.170223 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:17 crc kubenswrapper[4727]: I1121 20:08:17.170245 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:17 crc kubenswrapper[4727]: I1121 20:08:17.170258 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:17Z","lastTransitionTime":"2025-11-21T20:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:17 crc kubenswrapper[4727]: I1121 20:08:17.273623 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:17 crc kubenswrapper[4727]: I1121 20:08:17.273680 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:17 crc kubenswrapper[4727]: I1121 20:08:17.273691 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:17 crc kubenswrapper[4727]: I1121 20:08:17.273713 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:17 crc kubenswrapper[4727]: I1121 20:08:17.273730 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:17Z","lastTransitionTime":"2025-11-21T20:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:17 crc kubenswrapper[4727]: I1121 20:08:17.376577 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:17 crc kubenswrapper[4727]: I1121 20:08:17.376656 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:17 crc kubenswrapper[4727]: I1121 20:08:17.376682 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:17 crc kubenswrapper[4727]: I1121 20:08:17.376717 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:17 crc kubenswrapper[4727]: I1121 20:08:17.376743 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:17Z","lastTransitionTime":"2025-11-21T20:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:17 crc kubenswrapper[4727]: I1121 20:08:17.480473 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:17 crc kubenswrapper[4727]: I1121 20:08:17.480540 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:17 crc kubenswrapper[4727]: I1121 20:08:17.480560 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:17 crc kubenswrapper[4727]: I1121 20:08:17.480586 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:17 crc kubenswrapper[4727]: I1121 20:08:17.480606 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:17Z","lastTransitionTime":"2025-11-21T20:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:08:17 crc kubenswrapper[4727]: I1121 20:08:17.498760 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 20:08:17 crc kubenswrapper[4727]: E1121 20:08:17.499994 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 20:08:17 crc kubenswrapper[4727]: I1121 20:08:17.584128 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:17 crc kubenswrapper[4727]: I1121 20:08:17.584166 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:17 crc kubenswrapper[4727]: I1121 20:08:17.584174 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:17 crc kubenswrapper[4727]: I1121 20:08:17.584188 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:17 crc kubenswrapper[4727]: I1121 20:08:17.584200 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:17Z","lastTransitionTime":"2025-11-21T20:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:17 crc kubenswrapper[4727]: I1121 20:08:17.687463 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:17 crc kubenswrapper[4727]: I1121 20:08:17.687511 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:17 crc kubenswrapper[4727]: I1121 20:08:17.687522 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:17 crc kubenswrapper[4727]: I1121 20:08:17.687540 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:17 crc kubenswrapper[4727]: I1121 20:08:17.687552 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:17Z","lastTransitionTime":"2025-11-21T20:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:17 crc kubenswrapper[4727]: I1121 20:08:17.789654 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:17 crc kubenswrapper[4727]: I1121 20:08:17.789711 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:17 crc kubenswrapper[4727]: I1121 20:08:17.789723 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:17 crc kubenswrapper[4727]: I1121 20:08:17.789739 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:17 crc kubenswrapper[4727]: I1121 20:08:17.789750 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:17Z","lastTransitionTime":"2025-11-21T20:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:17 crc kubenswrapper[4727]: I1121 20:08:17.893460 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:17 crc kubenswrapper[4727]: I1121 20:08:17.893521 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:17 crc kubenswrapper[4727]: I1121 20:08:17.893546 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:17 crc kubenswrapper[4727]: I1121 20:08:17.893577 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:17 crc kubenswrapper[4727]: I1121 20:08:17.893601 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:17Z","lastTransitionTime":"2025-11-21T20:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:17 crc kubenswrapper[4727]: I1121 20:08:17.996714 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:17 crc kubenswrapper[4727]: I1121 20:08:17.996770 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:17 crc kubenswrapper[4727]: I1121 20:08:17.996782 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:17 crc kubenswrapper[4727]: I1121 20:08:17.996804 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:17 crc kubenswrapper[4727]: I1121 20:08:17.996820 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:17Z","lastTransitionTime":"2025-11-21T20:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:18 crc kubenswrapper[4727]: I1121 20:08:18.099837 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:18 crc kubenswrapper[4727]: I1121 20:08:18.099911 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:18 crc kubenswrapper[4727]: I1121 20:08:18.099930 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:18 crc kubenswrapper[4727]: I1121 20:08:18.099988 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:18 crc kubenswrapper[4727]: I1121 20:08:18.100013 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:18Z","lastTransitionTime":"2025-11-21T20:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:18 crc kubenswrapper[4727]: I1121 20:08:18.203844 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:18 crc kubenswrapper[4727]: I1121 20:08:18.203943 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:18 crc kubenswrapper[4727]: I1121 20:08:18.204008 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:18 crc kubenswrapper[4727]: I1121 20:08:18.204049 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:18 crc kubenswrapper[4727]: I1121 20:08:18.204075 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:18Z","lastTransitionTime":"2025-11-21T20:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:18 crc kubenswrapper[4727]: I1121 20:08:18.307775 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:18 crc kubenswrapper[4727]: I1121 20:08:18.307845 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:18 crc kubenswrapper[4727]: I1121 20:08:18.307863 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:18 crc kubenswrapper[4727]: I1121 20:08:18.307893 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:18 crc kubenswrapper[4727]: I1121 20:08:18.307911 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:18Z","lastTransitionTime":"2025-11-21T20:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:18 crc kubenswrapper[4727]: I1121 20:08:18.410721 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:18 crc kubenswrapper[4727]: I1121 20:08:18.410768 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:18 crc kubenswrapper[4727]: I1121 20:08:18.410781 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:18 crc kubenswrapper[4727]: I1121 20:08:18.410804 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:18 crc kubenswrapper[4727]: I1121 20:08:18.410820 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:18Z","lastTransitionTime":"2025-11-21T20:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:08:18 crc kubenswrapper[4727]: I1121 20:08:18.499077 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 20:08:18 crc kubenswrapper[4727]: I1121 20:08:18.499181 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rs9rv" Nov 21 20:08:18 crc kubenswrapper[4727]: E1121 20:08:18.499388 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 20:08:18 crc kubenswrapper[4727]: I1121 20:08:18.499491 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 20:08:18 crc kubenswrapper[4727]: E1121 20:08:18.499574 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rs9rv" podUID="f8318f96-4402-4567-a432-6cf3897e218d" Nov 21 20:08:18 crc kubenswrapper[4727]: E1121 20:08:18.499813 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 20:08:18 crc kubenswrapper[4727]: I1121 20:08:18.514098 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:18 crc kubenswrapper[4727]: I1121 20:08:18.514155 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:18 crc kubenswrapper[4727]: I1121 20:08:18.514178 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:18 crc kubenswrapper[4727]: I1121 20:08:18.514211 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:18 crc kubenswrapper[4727]: I1121 20:08:18.514238 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:18Z","lastTransitionTime":"2025-11-21T20:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:18 crc kubenswrapper[4727]: I1121 20:08:18.619091 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:18 crc kubenswrapper[4727]: I1121 20:08:18.619171 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:18 crc kubenswrapper[4727]: I1121 20:08:18.619198 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:18 crc kubenswrapper[4727]: I1121 20:08:18.619230 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:18 crc kubenswrapper[4727]: I1121 20:08:18.619251 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:18Z","lastTransitionTime":"2025-11-21T20:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:18 crc kubenswrapper[4727]: I1121 20:08:18.723457 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:18 crc kubenswrapper[4727]: I1121 20:08:18.723516 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:18 crc kubenswrapper[4727]: I1121 20:08:18.723535 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:18 crc kubenswrapper[4727]: I1121 20:08:18.723561 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:18 crc kubenswrapper[4727]: I1121 20:08:18.723628 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:18Z","lastTransitionTime":"2025-11-21T20:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:18 crc kubenswrapper[4727]: I1121 20:08:18.827885 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:18 crc kubenswrapper[4727]: I1121 20:08:18.827946 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:18 crc kubenswrapper[4727]: I1121 20:08:18.828003 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:18 crc kubenswrapper[4727]: I1121 20:08:18.828031 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:18 crc kubenswrapper[4727]: I1121 20:08:18.828045 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:18Z","lastTransitionTime":"2025-11-21T20:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:18 crc kubenswrapper[4727]: I1121 20:08:18.931332 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:18 crc kubenswrapper[4727]: I1121 20:08:18.931392 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:18 crc kubenswrapper[4727]: I1121 20:08:18.931406 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:18 crc kubenswrapper[4727]: I1121 20:08:18.931433 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:18 crc kubenswrapper[4727]: I1121 20:08:18.931450 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:18Z","lastTransitionTime":"2025-11-21T20:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:19 crc kubenswrapper[4727]: I1121 20:08:19.034564 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:19 crc kubenswrapper[4727]: I1121 20:08:19.034637 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:19 crc kubenswrapper[4727]: I1121 20:08:19.034664 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:19 crc kubenswrapper[4727]: I1121 20:08:19.034699 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:19 crc kubenswrapper[4727]: I1121 20:08:19.034725 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:19Z","lastTransitionTime":"2025-11-21T20:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:19 crc kubenswrapper[4727]: I1121 20:08:19.138385 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:19 crc kubenswrapper[4727]: I1121 20:08:19.138430 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:19 crc kubenswrapper[4727]: I1121 20:08:19.138445 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:19 crc kubenswrapper[4727]: I1121 20:08:19.138476 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:19 crc kubenswrapper[4727]: I1121 20:08:19.138491 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:19Z","lastTransitionTime":"2025-11-21T20:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:19 crc kubenswrapper[4727]: I1121 20:08:19.241237 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:19 crc kubenswrapper[4727]: I1121 20:08:19.241281 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:19 crc kubenswrapper[4727]: I1121 20:08:19.241295 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:19 crc kubenswrapper[4727]: I1121 20:08:19.241319 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:19 crc kubenswrapper[4727]: I1121 20:08:19.241334 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:19Z","lastTransitionTime":"2025-11-21T20:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:19 crc kubenswrapper[4727]: I1121 20:08:19.345363 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:19 crc kubenswrapper[4727]: I1121 20:08:19.345474 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:19 crc kubenswrapper[4727]: I1121 20:08:19.345498 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:19 crc kubenswrapper[4727]: I1121 20:08:19.345533 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:19 crc kubenswrapper[4727]: I1121 20:08:19.345556 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:19Z","lastTransitionTime":"2025-11-21T20:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:19 crc kubenswrapper[4727]: I1121 20:08:19.456165 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:19 crc kubenswrapper[4727]: I1121 20:08:19.456236 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:19 crc kubenswrapper[4727]: I1121 20:08:19.456257 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:19 crc kubenswrapper[4727]: I1121 20:08:19.456288 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:19 crc kubenswrapper[4727]: I1121 20:08:19.456310 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:19Z","lastTransitionTime":"2025-11-21T20:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:08:19 crc kubenswrapper[4727]: I1121 20:08:19.498798 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 20:08:19 crc kubenswrapper[4727]: E1121 20:08:19.499064 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 20:08:19 crc kubenswrapper[4727]: I1121 20:08:19.559611 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:19 crc kubenswrapper[4727]: I1121 20:08:19.559668 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:19 crc kubenswrapper[4727]: I1121 20:08:19.559691 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:19 crc kubenswrapper[4727]: I1121 20:08:19.559717 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:19 crc kubenswrapper[4727]: I1121 20:08:19.559739 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:19Z","lastTransitionTime":"2025-11-21T20:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:19 crc kubenswrapper[4727]: I1121 20:08:19.663354 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:19 crc kubenswrapper[4727]: I1121 20:08:19.663433 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:19 crc kubenswrapper[4727]: I1121 20:08:19.663452 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:19 crc kubenswrapper[4727]: I1121 20:08:19.663480 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:19 crc kubenswrapper[4727]: I1121 20:08:19.663503 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:19Z","lastTransitionTime":"2025-11-21T20:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:19 crc kubenswrapper[4727]: I1121 20:08:19.767167 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:19 crc kubenswrapper[4727]: I1121 20:08:19.767243 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:19 crc kubenswrapper[4727]: I1121 20:08:19.767260 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:19 crc kubenswrapper[4727]: I1121 20:08:19.767286 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:19 crc kubenswrapper[4727]: I1121 20:08:19.767303 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:19Z","lastTransitionTime":"2025-11-21T20:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:19 crc kubenswrapper[4727]: I1121 20:08:19.871366 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:19 crc kubenswrapper[4727]: I1121 20:08:19.871431 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:19 crc kubenswrapper[4727]: I1121 20:08:19.871451 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:19 crc kubenswrapper[4727]: I1121 20:08:19.871479 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:19 crc kubenswrapper[4727]: I1121 20:08:19.871498 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:19Z","lastTransitionTime":"2025-11-21T20:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:19 crc kubenswrapper[4727]: I1121 20:08:19.974884 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:19 crc kubenswrapper[4727]: I1121 20:08:19.974936 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:19 crc kubenswrapper[4727]: I1121 20:08:19.974945 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:19 crc kubenswrapper[4727]: I1121 20:08:19.974994 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:19 crc kubenswrapper[4727]: I1121 20:08:19.975008 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:19Z","lastTransitionTime":"2025-11-21T20:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:20 crc kubenswrapper[4727]: I1121 20:08:20.078834 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:20 crc kubenswrapper[4727]: I1121 20:08:20.078888 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:20 crc kubenswrapper[4727]: I1121 20:08:20.078897 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:20 crc kubenswrapper[4727]: I1121 20:08:20.078917 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:20 crc kubenswrapper[4727]: I1121 20:08:20.078929 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:20Z","lastTransitionTime":"2025-11-21T20:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:20 crc kubenswrapper[4727]: I1121 20:08:20.182569 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:20 crc kubenswrapper[4727]: I1121 20:08:20.182628 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:20 crc kubenswrapper[4727]: I1121 20:08:20.182649 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:20 crc kubenswrapper[4727]: I1121 20:08:20.182676 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:20 crc kubenswrapper[4727]: I1121 20:08:20.182694 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:20Z","lastTransitionTime":"2025-11-21T20:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:20 crc kubenswrapper[4727]: I1121 20:08:20.286667 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:20 crc kubenswrapper[4727]: I1121 20:08:20.286800 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:20 crc kubenswrapper[4727]: I1121 20:08:20.286836 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:20 crc kubenswrapper[4727]: I1121 20:08:20.286870 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:20 crc kubenswrapper[4727]: I1121 20:08:20.286896 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:20Z","lastTransitionTime":"2025-11-21T20:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:20 crc kubenswrapper[4727]: I1121 20:08:20.390443 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:20 crc kubenswrapper[4727]: I1121 20:08:20.390507 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:20 crc kubenswrapper[4727]: I1121 20:08:20.390545 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:20 crc kubenswrapper[4727]: I1121 20:08:20.390577 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:20 crc kubenswrapper[4727]: I1121 20:08:20.390599 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:20Z","lastTransitionTime":"2025-11-21T20:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:20 crc kubenswrapper[4727]: I1121 20:08:20.495383 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:20 crc kubenswrapper[4727]: I1121 20:08:20.495510 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:20 crc kubenswrapper[4727]: I1121 20:08:20.495544 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:20 crc kubenswrapper[4727]: I1121 20:08:20.495582 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:20 crc kubenswrapper[4727]: I1121 20:08:20.495609 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:20Z","lastTransitionTime":"2025-11-21T20:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:08:20 crc kubenswrapper[4727]: I1121 20:08:20.499034 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rs9rv" Nov 21 20:08:20 crc kubenswrapper[4727]: I1121 20:08:20.499120 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 20:08:20 crc kubenswrapper[4727]: I1121 20:08:20.499073 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 20:08:20 crc kubenswrapper[4727]: E1121 20:08:20.499296 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rs9rv" podUID="f8318f96-4402-4567-a432-6cf3897e218d" Nov 21 20:08:20 crc kubenswrapper[4727]: E1121 20:08:20.499421 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 20:08:20 crc kubenswrapper[4727]: E1121 20:08:20.499629 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 20:08:20 crc kubenswrapper[4727]: I1121 20:08:20.599568 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:20 crc kubenswrapper[4727]: I1121 20:08:20.599637 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:20 crc kubenswrapper[4727]: I1121 20:08:20.599654 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:20 crc kubenswrapper[4727]: I1121 20:08:20.599690 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:20 crc kubenswrapper[4727]: I1121 20:08:20.599711 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:20Z","lastTransitionTime":"2025-11-21T20:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:20 crc kubenswrapper[4727]: I1121 20:08:20.703451 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:20 crc kubenswrapper[4727]: I1121 20:08:20.703522 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:20 crc kubenswrapper[4727]: I1121 20:08:20.703543 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:20 crc kubenswrapper[4727]: I1121 20:08:20.703576 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:20 crc kubenswrapper[4727]: I1121 20:08:20.703598 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:20Z","lastTransitionTime":"2025-11-21T20:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:20 crc kubenswrapper[4727]: I1121 20:08:20.807367 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:20 crc kubenswrapper[4727]: I1121 20:08:20.807451 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:20 crc kubenswrapper[4727]: I1121 20:08:20.807475 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:20 crc kubenswrapper[4727]: I1121 20:08:20.807512 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:20 crc kubenswrapper[4727]: I1121 20:08:20.807541 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:20Z","lastTransitionTime":"2025-11-21T20:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:20 crc kubenswrapper[4727]: I1121 20:08:20.911629 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:20 crc kubenswrapper[4727]: I1121 20:08:20.911695 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:20 crc kubenswrapper[4727]: I1121 20:08:20.911712 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:20 crc kubenswrapper[4727]: I1121 20:08:20.911740 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:20 crc kubenswrapper[4727]: I1121 20:08:20.911762 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:20Z","lastTransitionTime":"2025-11-21T20:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:21 crc kubenswrapper[4727]: I1121 20:08:21.014882 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:21 crc kubenswrapper[4727]: I1121 20:08:21.014989 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:21 crc kubenswrapper[4727]: I1121 20:08:21.015010 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:21 crc kubenswrapper[4727]: I1121 20:08:21.015041 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:21 crc kubenswrapper[4727]: I1121 20:08:21.015061 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:21Z","lastTransitionTime":"2025-11-21T20:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:21 crc kubenswrapper[4727]: I1121 20:08:21.116949 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:21 crc kubenswrapper[4727]: I1121 20:08:21.117055 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:21 crc kubenswrapper[4727]: I1121 20:08:21.117074 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:21 crc kubenswrapper[4727]: I1121 20:08:21.117097 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:21 crc kubenswrapper[4727]: I1121 20:08:21.117114 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:21Z","lastTransitionTime":"2025-11-21T20:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:21 crc kubenswrapper[4727]: I1121 20:08:21.220238 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:21 crc kubenswrapper[4727]: I1121 20:08:21.220316 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:21 crc kubenswrapper[4727]: I1121 20:08:21.220334 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:21 crc kubenswrapper[4727]: I1121 20:08:21.220363 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:21 crc kubenswrapper[4727]: I1121 20:08:21.220384 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:21Z","lastTransitionTime":"2025-11-21T20:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:21 crc kubenswrapper[4727]: I1121 20:08:21.323482 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:21 crc kubenswrapper[4727]: I1121 20:08:21.323588 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:21 crc kubenswrapper[4727]: I1121 20:08:21.323609 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:21 crc kubenswrapper[4727]: I1121 20:08:21.323630 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:21 crc kubenswrapper[4727]: I1121 20:08:21.323644 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:21Z","lastTransitionTime":"2025-11-21T20:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:21 crc kubenswrapper[4727]: I1121 20:08:21.426386 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:21 crc kubenswrapper[4727]: I1121 20:08:21.426447 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:21 crc kubenswrapper[4727]: I1121 20:08:21.426467 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:21 crc kubenswrapper[4727]: I1121 20:08:21.426498 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:21 crc kubenswrapper[4727]: I1121 20:08:21.426520 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:21Z","lastTransitionTime":"2025-11-21T20:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:08:21 crc kubenswrapper[4727]: I1121 20:08:21.499064 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 20:08:21 crc kubenswrapper[4727]: E1121 20:08:21.500029 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 20:08:21 crc kubenswrapper[4727]: I1121 20:08:21.534161 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:21 crc kubenswrapper[4727]: I1121 20:08:21.534227 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:21 crc kubenswrapper[4727]: I1121 20:08:21.534239 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:21 crc kubenswrapper[4727]: I1121 20:08:21.534260 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:21 crc kubenswrapper[4727]: I1121 20:08:21.534273 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:21Z","lastTransitionTime":"2025-11-21T20:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:21 crc kubenswrapper[4727]: I1121 20:08:21.638220 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:21 crc kubenswrapper[4727]: I1121 20:08:21.638291 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:21 crc kubenswrapper[4727]: I1121 20:08:21.638318 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:21 crc kubenswrapper[4727]: I1121 20:08:21.638357 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:21 crc kubenswrapper[4727]: I1121 20:08:21.638383 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:21Z","lastTransitionTime":"2025-11-21T20:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:21 crc kubenswrapper[4727]: I1121 20:08:21.741738 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:21 crc kubenswrapper[4727]: I1121 20:08:21.741800 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:21 crc kubenswrapper[4727]: I1121 20:08:21.741822 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:21 crc kubenswrapper[4727]: I1121 20:08:21.741850 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:21 crc kubenswrapper[4727]: I1121 20:08:21.741869 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:21Z","lastTransitionTime":"2025-11-21T20:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:21 crc kubenswrapper[4727]: I1121 20:08:21.845207 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:21 crc kubenswrapper[4727]: I1121 20:08:21.845259 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:21 crc kubenswrapper[4727]: I1121 20:08:21.845269 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:21 crc kubenswrapper[4727]: I1121 20:08:21.845287 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:21 crc kubenswrapper[4727]: I1121 20:08:21.845298 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:21Z","lastTransitionTime":"2025-11-21T20:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:21 crc kubenswrapper[4727]: I1121 20:08:21.948135 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:21 crc kubenswrapper[4727]: I1121 20:08:21.948233 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:21 crc kubenswrapper[4727]: I1121 20:08:21.948248 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:21 crc kubenswrapper[4727]: I1121 20:08:21.948264 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:21 crc kubenswrapper[4727]: I1121 20:08:21.948307 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:21Z","lastTransitionTime":"2025-11-21T20:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:22 crc kubenswrapper[4727]: I1121 20:08:22.050534 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:22 crc kubenswrapper[4727]: I1121 20:08:22.050569 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:22 crc kubenswrapper[4727]: I1121 20:08:22.050578 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:22 crc kubenswrapper[4727]: I1121 20:08:22.050591 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:22 crc kubenswrapper[4727]: I1121 20:08:22.050600 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:22Z","lastTransitionTime":"2025-11-21T20:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:22 crc kubenswrapper[4727]: I1121 20:08:22.152975 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:22 crc kubenswrapper[4727]: I1121 20:08:22.153014 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:22 crc kubenswrapper[4727]: I1121 20:08:22.153030 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:22 crc kubenswrapper[4727]: I1121 20:08:22.153045 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:22 crc kubenswrapper[4727]: I1121 20:08:22.153056 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:22Z","lastTransitionTime":"2025-11-21T20:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:22 crc kubenswrapper[4727]: I1121 20:08:22.256100 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:22 crc kubenswrapper[4727]: I1121 20:08:22.256166 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:22 crc kubenswrapper[4727]: I1121 20:08:22.256179 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:22 crc kubenswrapper[4727]: I1121 20:08:22.256202 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:22 crc kubenswrapper[4727]: I1121 20:08:22.256216 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:22Z","lastTransitionTime":"2025-11-21T20:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:22 crc kubenswrapper[4727]: I1121 20:08:22.359289 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:22 crc kubenswrapper[4727]: I1121 20:08:22.359378 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:22 crc kubenswrapper[4727]: I1121 20:08:22.359400 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:22 crc kubenswrapper[4727]: I1121 20:08:22.359431 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:22 crc kubenswrapper[4727]: I1121 20:08:22.359452 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:22Z","lastTransitionTime":"2025-11-21T20:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:22 crc kubenswrapper[4727]: I1121 20:08:22.469631 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:22 crc kubenswrapper[4727]: I1121 20:08:22.469681 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:22 crc kubenswrapper[4727]: I1121 20:08:22.469689 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:22 crc kubenswrapper[4727]: I1121 20:08:22.469704 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:22 crc kubenswrapper[4727]: I1121 20:08:22.469712 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:22Z","lastTransitionTime":"2025-11-21T20:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:08:22 crc kubenswrapper[4727]: I1121 20:08:22.499192 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 20:08:22 crc kubenswrapper[4727]: I1121 20:08:22.499236 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rs9rv" Nov 21 20:08:22 crc kubenswrapper[4727]: I1121 20:08:22.499222 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 20:08:22 crc kubenswrapper[4727]: E1121 20:08:22.500020 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 20:08:22 crc kubenswrapper[4727]: E1121 20:08:22.500128 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rs9rv" podUID="f8318f96-4402-4567-a432-6cf3897e218d" Nov 21 20:08:22 crc kubenswrapper[4727]: E1121 20:08:22.500535 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 20:08:22 crc kubenswrapper[4727]: I1121 20:08:22.500843 4727 scope.go:117] "RemoveContainer" containerID="1aea392a365aa66ac35b5cd36cd1aad398890771de41b8dc248e2f98a171f4ca" Nov 21 20:08:22 crc kubenswrapper[4727]: E1121 20:08:22.501521 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-tfd4j_openshift-ovn-kubernetes(70d2ca13-a8f7-43dc-8ad0-142d99ccde18)\"" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" podUID="70d2ca13-a8f7-43dc-8ad0-142d99ccde18" Nov 21 20:08:22 crc kubenswrapper[4727]: I1121 20:08:22.573564 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:22 crc kubenswrapper[4727]: I1121 20:08:22.573615 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:22 crc kubenswrapper[4727]: I1121 20:08:22.573625 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:22 crc kubenswrapper[4727]: I1121 20:08:22.573646 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:22 crc kubenswrapper[4727]: I1121 20:08:22.573656 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:22Z","lastTransitionTime":"2025-11-21T20:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:22 crc kubenswrapper[4727]: I1121 20:08:22.677002 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:22 crc kubenswrapper[4727]: I1121 20:08:22.677057 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:22 crc kubenswrapper[4727]: I1121 20:08:22.677067 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:22 crc kubenswrapper[4727]: I1121 20:08:22.677083 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:22 crc kubenswrapper[4727]: I1121 20:08:22.677094 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:22Z","lastTransitionTime":"2025-11-21T20:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:22 crc kubenswrapper[4727]: I1121 20:08:22.780927 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:22 crc kubenswrapper[4727]: I1121 20:08:22.781010 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:22 crc kubenswrapper[4727]: I1121 20:08:22.781023 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:22 crc kubenswrapper[4727]: I1121 20:08:22.781043 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:22 crc kubenswrapper[4727]: I1121 20:08:22.781059 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:22Z","lastTransitionTime":"2025-11-21T20:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:22 crc kubenswrapper[4727]: I1121 20:08:22.884254 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:22 crc kubenswrapper[4727]: I1121 20:08:22.884313 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:22 crc kubenswrapper[4727]: I1121 20:08:22.884332 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:22 crc kubenswrapper[4727]: I1121 20:08:22.884356 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:22 crc kubenswrapper[4727]: I1121 20:08:22.884372 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:22Z","lastTransitionTime":"2025-11-21T20:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:22 crc kubenswrapper[4727]: I1121 20:08:22.988565 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:22 crc kubenswrapper[4727]: I1121 20:08:22.988622 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:22 crc kubenswrapper[4727]: I1121 20:08:22.988640 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:22 crc kubenswrapper[4727]: I1121 20:08:22.988669 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:22 crc kubenswrapper[4727]: I1121 20:08:22.988688 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:22Z","lastTransitionTime":"2025-11-21T20:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:23 crc kubenswrapper[4727]: I1121 20:08:23.094859 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:23 crc kubenswrapper[4727]: I1121 20:08:23.094933 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:23 crc kubenswrapper[4727]: I1121 20:08:23.094994 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:23 crc kubenswrapper[4727]: I1121 20:08:23.095029 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:23 crc kubenswrapper[4727]: I1121 20:08:23.095056 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:23Z","lastTransitionTime":"2025-11-21T20:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 20:08:23 crc kubenswrapper[4727]: I1121 20:08:23.141551 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 20:08:23 crc kubenswrapper[4727]: I1121 20:08:23.141617 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 20:08:23 crc kubenswrapper[4727]: I1121 20:08:23.141628 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 20:08:23 crc kubenswrapper[4727]: I1121 20:08:23.141653 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 20:08:23 crc kubenswrapper[4727]: I1121 20:08:23.141664 4727 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T20:08:23Z","lastTransitionTime":"2025-11-21T20:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 20:08:23 crc kubenswrapper[4727]: I1121 20:08:23.191196 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-sqng2"] Nov 21 20:08:23 crc kubenswrapper[4727]: I1121 20:08:23.191632 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sqng2" Nov 21 20:08:23 crc kubenswrapper[4727]: I1121 20:08:23.194210 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Nov 21 20:08:23 crc kubenswrapper[4727]: I1121 20:08:23.194677 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Nov 21 20:08:23 crc kubenswrapper[4727]: I1121 20:08:23.195246 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Nov 21 20:08:23 crc kubenswrapper[4727]: I1121 20:08:23.195531 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Nov 21 20:08:23 crc kubenswrapper[4727]: I1121 20:08:23.241116 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/cce667d7-a003-46df-b56c-e493a6ee80b8-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-sqng2\" (UID: \"cce667d7-a003-46df-b56c-e493a6ee80b8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sqng2" Nov 21 20:08:23 crc kubenswrapper[4727]: I1121 20:08:23.241409 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cce667d7-a003-46df-b56c-e493a6ee80b8-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-sqng2\" (UID: \"cce667d7-a003-46df-b56c-e493a6ee80b8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sqng2" Nov 21 20:08:23 crc kubenswrapper[4727]: I1121 20:08:23.241482 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/cce667d7-a003-46df-b56c-e493a6ee80b8-service-ca\") pod \"cluster-version-operator-5c965bbfc6-sqng2\" (UID: \"cce667d7-a003-46df-b56c-e493a6ee80b8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sqng2" Nov 21 20:08:23 crc kubenswrapper[4727]: I1121 20:08:23.241536 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/cce667d7-a003-46df-b56c-e493a6ee80b8-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-sqng2\" (UID: \"cce667d7-a003-46df-b56c-e493a6ee80b8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sqng2" Nov 21 20:08:23 crc kubenswrapper[4727]: I1121 20:08:23.241604 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cce667d7-a003-46df-b56c-e493a6ee80b8-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-sqng2\" (UID: \"cce667d7-a003-46df-b56c-e493a6ee80b8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sqng2" Nov 21 20:08:23 crc kubenswrapper[4727]: I1121 20:08:23.342844 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/cce667d7-a003-46df-b56c-e493a6ee80b8-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-sqng2\" (UID: \"cce667d7-a003-46df-b56c-e493a6ee80b8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sqng2" Nov 21 20:08:23 crc kubenswrapper[4727]: I1121 20:08:23.343113 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cce667d7-a003-46df-b56c-e493a6ee80b8-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-sqng2\" (UID: \"cce667d7-a003-46df-b56c-e493a6ee80b8\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sqng2" Nov 21 20:08:23 crc kubenswrapper[4727]: I1121 20:08:23.343038 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/cce667d7-a003-46df-b56c-e493a6ee80b8-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-sqng2\" (UID: \"cce667d7-a003-46df-b56c-e493a6ee80b8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sqng2" Nov 21 20:08:23 crc kubenswrapper[4727]: I1121 20:08:23.343257 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cce667d7-a003-46df-b56c-e493a6ee80b8-service-ca\") pod \"cluster-version-operator-5c965bbfc6-sqng2\" (UID: \"cce667d7-a003-46df-b56c-e493a6ee80b8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sqng2" Nov 21 20:08:23 crc kubenswrapper[4727]: I1121 20:08:23.343308 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/cce667d7-a003-46df-b56c-e493a6ee80b8-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-sqng2\" (UID: \"cce667d7-a003-46df-b56c-e493a6ee80b8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sqng2" Nov 21 20:08:23 crc kubenswrapper[4727]: I1121 20:08:23.343379 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cce667d7-a003-46df-b56c-e493a6ee80b8-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-sqng2\" (UID: \"cce667d7-a003-46df-b56c-e493a6ee80b8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sqng2" Nov 21 20:08:23 crc kubenswrapper[4727]: I1121 20:08:23.343537 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/cce667d7-a003-46df-b56c-e493a6ee80b8-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-sqng2\" (UID: \"cce667d7-a003-46df-b56c-e493a6ee80b8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sqng2" Nov 21 20:08:23 crc kubenswrapper[4727]: I1121 20:08:23.344411 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cce667d7-a003-46df-b56c-e493a6ee80b8-service-ca\") pod \"cluster-version-operator-5c965bbfc6-sqng2\" (UID: \"cce667d7-a003-46df-b56c-e493a6ee80b8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sqng2" Nov 21 20:08:23 crc kubenswrapper[4727]: I1121 20:08:23.351351 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cce667d7-a003-46df-b56c-e493a6ee80b8-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-sqng2\" (UID: \"cce667d7-a003-46df-b56c-e493a6ee80b8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sqng2" Nov 21 20:08:23 crc kubenswrapper[4727]: I1121 20:08:23.366783 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cce667d7-a003-46df-b56c-e493a6ee80b8-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-sqng2\" (UID: \"cce667d7-a003-46df-b56c-e493a6ee80b8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sqng2" Nov 21 20:08:23 crc kubenswrapper[4727]: I1121 20:08:23.498670 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 20:08:23 crc kubenswrapper[4727]: E1121 20:08:23.498928 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 20:08:23 crc kubenswrapper[4727]: I1121 20:08:23.506837 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sqng2" Nov 21 20:08:24 crc kubenswrapper[4727]: I1121 20:08:24.004765 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sqng2" event={"ID":"cce667d7-a003-46df-b56c-e493a6ee80b8","Type":"ContainerStarted","Data":"39b103b3c556e5511211d49ec3f26d98c2489e0ab3509a95ab210035d68da833"} Nov 21 20:08:24 crc kubenswrapper[4727]: I1121 20:08:24.004831 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sqng2" event={"ID":"cce667d7-a003-46df-b56c-e493a6ee80b8","Type":"ContainerStarted","Data":"435ab0476d543af2663d118b6a34efc1211f2a193852b7b587c7e047e1e80612"} Nov 21 20:08:24 crc kubenswrapper[4727]: I1121 20:08:24.023775 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sqng2" podStartSLOduration=90.023750559 podStartE2EDuration="1m30.023750559s" podCreationTimestamp="2025-11-21 20:06:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:08:24.023062461 +0000 UTC m=+109.209247525" 
watchObservedRunningTime="2025-11-21 20:08:24.023750559 +0000 UTC m=+109.209935603" Nov 21 20:08:24 crc kubenswrapper[4727]: I1121 20:08:24.498733 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rs9rv" Nov 21 20:08:24 crc kubenswrapper[4727]: E1121 20:08:24.499019 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rs9rv" podUID="f8318f96-4402-4567-a432-6cf3897e218d" Nov 21 20:08:24 crc kubenswrapper[4727]: I1121 20:08:24.499395 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 20:08:24 crc kubenswrapper[4727]: I1121 20:08:24.499501 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 20:08:24 crc kubenswrapper[4727]: E1121 20:08:24.499757 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 20:08:24 crc kubenswrapper[4727]: E1121 20:08:24.499531 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 20:08:25 crc kubenswrapper[4727]: I1121 20:08:25.498767 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 20:08:25 crc kubenswrapper[4727]: E1121 20:08:25.499724 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 20:08:26 crc kubenswrapper[4727]: I1121 20:08:26.498773 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 20:08:26 crc kubenswrapper[4727]: I1121 20:08:26.498780 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 20:08:26 crc kubenswrapper[4727]: E1121 20:08:26.499376 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 20:08:26 crc kubenswrapper[4727]: E1121 20:08:26.499525 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 20:08:26 crc kubenswrapper[4727]: I1121 20:08:26.499700 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rs9rv" Nov 21 20:08:26 crc kubenswrapper[4727]: E1121 20:08:26.499861 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rs9rv" podUID="f8318f96-4402-4567-a432-6cf3897e218d" Nov 21 20:08:27 crc kubenswrapper[4727]: I1121 20:08:27.499036 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 20:08:27 crc kubenswrapper[4727]: E1121 20:08:27.499647 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 20:08:28 crc kubenswrapper[4727]: I1121 20:08:28.499224 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rs9rv" Nov 21 20:08:28 crc kubenswrapper[4727]: I1121 20:08:28.499354 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 20:08:28 crc kubenswrapper[4727]: I1121 20:08:28.499483 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 20:08:28 crc kubenswrapper[4727]: E1121 20:08:28.499556 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 20:08:28 crc kubenswrapper[4727]: E1121 20:08:28.499476 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rs9rv" podUID="f8318f96-4402-4567-a432-6cf3897e218d" Nov 21 20:08:28 crc kubenswrapper[4727]: E1121 20:08:28.499618 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 20:08:29 crc kubenswrapper[4727]: I1121 20:08:29.027240 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7rvdc_07dba644-eb6f-45c3-b373-7a1610c569aa/kube-multus/1.log" Nov 21 20:08:29 crc kubenswrapper[4727]: I1121 20:08:29.028189 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7rvdc_07dba644-eb6f-45c3-b373-7a1610c569aa/kube-multus/0.log" Nov 21 20:08:29 crc kubenswrapper[4727]: I1121 20:08:29.028294 4727 generic.go:334] "Generic (PLEG): container finished" podID="07dba644-eb6f-45c3-b373-7a1610c569aa" containerID="aec58b6ba56dadda6cbc97446bc117681e597f80615120f259760193be269bfb" exitCode=1 Nov 21 20:08:29 crc kubenswrapper[4727]: I1121 20:08:29.028347 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7rvdc" event={"ID":"07dba644-eb6f-45c3-b373-7a1610c569aa","Type":"ContainerDied","Data":"aec58b6ba56dadda6cbc97446bc117681e597f80615120f259760193be269bfb"} Nov 21 20:08:29 crc kubenswrapper[4727]: I1121 20:08:29.028400 4727 scope.go:117] "RemoveContainer" containerID="7fc26f04e78b8405547a4fa225524347eabce94dfa5b4bf2448db66e36aaf006" Nov 21 20:08:29 crc kubenswrapper[4727]: I1121 20:08:29.029476 4727 scope.go:117] "RemoveContainer" containerID="aec58b6ba56dadda6cbc97446bc117681e597f80615120f259760193be269bfb" Nov 21 20:08:29 crc kubenswrapper[4727]: E1121 20:08:29.030930 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-7rvdc_openshift-multus(07dba644-eb6f-45c3-b373-7a1610c569aa)\"" pod="openshift-multus/multus-7rvdc" podUID="07dba644-eb6f-45c3-b373-7a1610c569aa" Nov 21 20:08:29 crc kubenswrapper[4727]: I1121 20:08:29.498414 4727 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 20:08:29 crc kubenswrapper[4727]: E1121 20:08:29.498621 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 20:08:30 crc kubenswrapper[4727]: I1121 20:08:30.034117 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7rvdc_07dba644-eb6f-45c3-b373-7a1610c569aa/kube-multus/1.log" Nov 21 20:08:30 crc kubenswrapper[4727]: I1121 20:08:30.498601 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rs9rv" Nov 21 20:08:30 crc kubenswrapper[4727]: I1121 20:08:30.498665 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 20:08:30 crc kubenswrapper[4727]: I1121 20:08:30.498598 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 20:08:30 crc kubenswrapper[4727]: E1121 20:08:30.498763 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rs9rv" podUID="f8318f96-4402-4567-a432-6cf3897e218d" Nov 21 20:08:30 crc kubenswrapper[4727]: E1121 20:08:30.498829 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 20:08:30 crc kubenswrapper[4727]: E1121 20:08:30.499003 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 20:08:31 crc kubenswrapper[4727]: I1121 20:08:31.499109 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 20:08:31 crc kubenswrapper[4727]: E1121 20:08:31.499906 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 20:08:32 crc kubenswrapper[4727]: I1121 20:08:32.498672 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rs9rv" Nov 21 20:08:32 crc kubenswrapper[4727]: I1121 20:08:32.498716 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 20:08:32 crc kubenswrapper[4727]: E1121 20:08:32.498774 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rs9rv" podUID="f8318f96-4402-4567-a432-6cf3897e218d" Nov 21 20:08:32 crc kubenswrapper[4727]: E1121 20:08:32.498894 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 20:08:32 crc kubenswrapper[4727]: I1121 20:08:32.498672 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 20:08:32 crc kubenswrapper[4727]: E1121 20:08:32.498991 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 20:08:33 crc kubenswrapper[4727]: I1121 20:08:33.499118 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 20:08:33 crc kubenswrapper[4727]: E1121 20:08:33.499385 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 20:08:34 crc kubenswrapper[4727]: I1121 20:08:34.498698 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rs9rv" Nov 21 20:08:34 crc kubenswrapper[4727]: I1121 20:08:34.498824 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 20:08:34 crc kubenswrapper[4727]: E1121 20:08:34.498885 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rs9rv" podUID="f8318f96-4402-4567-a432-6cf3897e218d" Nov 21 20:08:34 crc kubenswrapper[4727]: I1121 20:08:34.499038 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 20:08:34 crc kubenswrapper[4727]: E1121 20:08:34.499241 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 20:08:34 crc kubenswrapper[4727]: E1121 20:08:34.499368 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 20:08:35 crc kubenswrapper[4727]: E1121 20:08:35.476377 4727 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Nov 21 20:08:35 crc kubenswrapper[4727]: I1121 20:08:35.498750 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 20:08:35 crc kubenswrapper[4727]: E1121 20:08:35.501167 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 20:08:35 crc kubenswrapper[4727]: E1121 20:08:35.592799 4727 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 21 20:08:36 crc kubenswrapper[4727]: I1121 20:08:36.498267 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 20:08:36 crc kubenswrapper[4727]: E1121 20:08:36.498388 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 20:08:36 crc kubenswrapper[4727]: I1121 20:08:36.498596 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 20:08:36 crc kubenswrapper[4727]: E1121 20:08:36.498656 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 20:08:36 crc kubenswrapper[4727]: I1121 20:08:36.498798 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rs9rv"
Nov 21 20:08:36 crc kubenswrapper[4727]: E1121 20:08:36.499250 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rs9rv" podUID="f8318f96-4402-4567-a432-6cf3897e218d"
Nov 21 20:08:36 crc kubenswrapper[4727]: I1121 20:08:36.499517 4727 scope.go:117] "RemoveContainer" containerID="1aea392a365aa66ac35b5cd36cd1aad398890771de41b8dc248e2f98a171f4ca"
Nov 21 20:08:37 crc kubenswrapper[4727]: I1121 20:08:37.061839 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tfd4j_70d2ca13-a8f7-43dc-8ad0-142d99ccde18/ovnkube-controller/3.log"
Nov 21 20:08:37 crc kubenswrapper[4727]: I1121 20:08:37.065197 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" event={"ID":"70d2ca13-a8f7-43dc-8ad0-142d99ccde18","Type":"ContainerStarted","Data":"debe3de5c2229e322e3a60e6669b1769dcb37d47d93bdd8da29c3c465418f88c"}
Nov 21 20:08:37 crc kubenswrapper[4727]: I1121 20:08:37.065735 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j"
Nov 21 20:08:37 crc kubenswrapper[4727]: I1121 20:08:37.092621 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" podStartSLOduration=103.092603252 podStartE2EDuration="1m43.092603252s" podCreationTimestamp="2025-11-21 20:06:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:08:37.092006107 +0000 UTC m=+122.278191151" watchObservedRunningTime="2025-11-21 20:08:37.092603252 +0000 UTC m=+122.278788296"
Nov 21 20:08:37 crc kubenswrapper[4727]: I1121 20:08:37.387741 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-rs9rv"]
Nov 21 20:08:37 crc kubenswrapper[4727]: I1121 20:08:37.387836 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rs9rv"
Nov 21 20:08:37 crc kubenswrapper[4727]: E1121 20:08:37.387914 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rs9rv" podUID="f8318f96-4402-4567-a432-6cf3897e218d"
Nov 21 20:08:37 crc kubenswrapper[4727]: I1121 20:08:37.499096 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 21 20:08:37 crc kubenswrapper[4727]: E1121 20:08:37.499218 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 21 20:08:38 crc kubenswrapper[4727]: I1121 20:08:38.498787 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rs9rv"
Nov 21 20:08:38 crc kubenswrapper[4727]: I1121 20:08:38.498864 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 21 20:08:38 crc kubenswrapper[4727]: I1121 20:08:38.498851 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 21 20:08:38 crc kubenswrapper[4727]: E1121 20:08:38.499089 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 21 20:08:38 crc kubenswrapper[4727]: E1121 20:08:38.499258 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rs9rv" podUID="f8318f96-4402-4567-a432-6cf3897e218d"
Nov 21 20:08:38 crc kubenswrapper[4727]: E1121 20:08:38.499406 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 21 20:08:39 crc kubenswrapper[4727]: I1121 20:08:39.498599 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 21 20:08:39 crc kubenswrapper[4727]: E1121 20:08:39.498762 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 21 20:08:40 crc kubenswrapper[4727]: I1121 20:08:40.498159 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 21 20:08:40 crc kubenswrapper[4727]: I1121 20:08:40.498205 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rs9rv"
Nov 21 20:08:40 crc kubenswrapper[4727]: I1121 20:08:40.498290 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 21 20:08:40 crc kubenswrapper[4727]: I1121 20:08:40.498548 4727 scope.go:117] "RemoveContainer" containerID="aec58b6ba56dadda6cbc97446bc117681e597f80615120f259760193be269bfb"
Nov 21 20:08:40 crc kubenswrapper[4727]: E1121 20:08:40.498576 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rs9rv" podUID="f8318f96-4402-4567-a432-6cf3897e218d"
Nov 21 20:08:40 crc kubenswrapper[4727]: E1121 20:08:40.498738 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 21 20:08:40 crc kubenswrapper[4727]: E1121 20:08:40.498851 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 21 20:08:40 crc kubenswrapper[4727]: E1121 20:08:40.604171 4727 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Nov 21 20:08:41 crc kubenswrapper[4727]: I1121 20:08:41.080715 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7rvdc_07dba644-eb6f-45c3-b373-7a1610c569aa/kube-multus/1.log"
Nov 21 20:08:41 crc kubenswrapper[4727]: I1121 20:08:41.081060 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7rvdc" event={"ID":"07dba644-eb6f-45c3-b373-7a1610c569aa","Type":"ContainerStarted","Data":"c0d56f4e8a0bf7ace78cc9404a73f24132ec2d0b20654b1aa5db4cab0db74936"}
Nov 21 20:08:41 crc kubenswrapper[4727]: I1121 20:08:41.498418 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 21 20:08:41 crc kubenswrapper[4727]: E1121 20:08:41.498571 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 21 20:08:42 crc kubenswrapper[4727]: I1121 20:08:42.499303 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 21 20:08:42 crc kubenswrapper[4727]: I1121 20:08:42.499346 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rs9rv"
Nov 21 20:08:42 crc kubenswrapper[4727]: I1121 20:08:42.499346 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 21 20:08:42 crc kubenswrapper[4727]: E1121 20:08:42.499616 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 21 20:08:42 crc kubenswrapper[4727]: E1121 20:08:42.499766 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rs9rv" podUID="f8318f96-4402-4567-a432-6cf3897e218d"
Nov 21 20:08:42 crc kubenswrapper[4727]: E1121 20:08:42.499861 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 21 20:08:43 crc kubenswrapper[4727]: I1121 20:08:43.498876 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 21 20:08:43 crc kubenswrapper[4727]: E1121 20:08:43.499112 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 21 20:08:44 crc kubenswrapper[4727]: I1121 20:08:44.499034 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rs9rv"
Nov 21 20:08:44 crc kubenswrapper[4727]: I1121 20:08:44.499132 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 21 20:08:44 crc kubenswrapper[4727]: I1121 20:08:44.499054 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 21 20:08:44 crc kubenswrapper[4727]: E1121 20:08:44.499265 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rs9rv" podUID="f8318f96-4402-4567-a432-6cf3897e218d"
Nov 21 20:08:44 crc kubenswrapper[4727]: E1121 20:08:44.499389 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 21 20:08:44 crc kubenswrapper[4727]: E1121 20:08:44.499460 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 21 20:08:45 crc kubenswrapper[4727]: I1121 20:08:45.498257 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 21 20:08:45 crc kubenswrapper[4727]: E1121 20:08:45.499346 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 21 20:08:46 crc kubenswrapper[4727]: I1121 20:08:46.498225 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 21 20:08:46 crc kubenswrapper[4727]: I1121 20:08:46.498206 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rs9rv"
Nov 21 20:08:46 crc kubenswrapper[4727]: I1121 20:08:46.498276 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 21 20:08:46 crc kubenswrapper[4727]: I1121 20:08:46.501128 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Nov 21 20:08:46 crc kubenswrapper[4727]: I1121 20:08:46.501470 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Nov 21 20:08:46 crc kubenswrapper[4727]: I1121 20:08:46.502727 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Nov 21 20:08:46 crc kubenswrapper[4727]: I1121 20:08:46.502845 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Nov 21 20:08:46 crc kubenswrapper[4727]: I1121 20:08:46.503001 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Nov 21 20:08:46 crc kubenswrapper[4727]: I1121 20:08:46.503151 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Nov 21 20:08:47 crc kubenswrapper[4727]: I1121 20:08:47.499092 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.786772 4727 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.828710 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-4855z"]
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.829224 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-4855z"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.833552 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.834013 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.834133 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.834321 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.834444 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.834521 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.834728 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.836582 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-fwqq7"]
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.843499 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-p2c9z"]
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.844335 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-fwqq7"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.844318 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.853769 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.854058 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.854170 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-hjqgw"]
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.855422 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p2c9z"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.855524 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-hjqgw"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.856114 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-fj2k4"]
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.861846 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.862171 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qjc79"]
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.862289 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.862410 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.862452 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.862419 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.862566 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.862704 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.862814 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.862886 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-fj2k4"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.862919 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qjc79"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.862813 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lqm45"]
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.863205 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.863331 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.863402 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.863434 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lqm45"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.863548 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.863365 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-zqbhr"]
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.863654 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.863690 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.864542 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-vbzh7"]
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.865114 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vbzh7"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.865416 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-zqbhr"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.868130 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hx72f"]
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.868165 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.868558 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pv5wp"]
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.869043 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pv5wp"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.869100 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-hx72f"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.870287 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-nzpzh"]
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.870877 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.870925 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-nzpzh"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.870946 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-jbvs8"]
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.872263 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.872442 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.872625 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.872663 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.872671 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.872701 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.872639 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.872766 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.872850 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.872888 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.872921 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.872977 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.873063 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.873087 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.873103 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.873181 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.873203 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.873308 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.873377 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.873308 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.874081 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.879934 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.880302 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l4m75"]
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.880709 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qrsz4"]
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.881119 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jbvs8"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.881124 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qwplk"]
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.881221 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qrsz4"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.881902 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l4m75"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.894691 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-dw7dg"]
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.895484 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vtwcr"]
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.896397 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4tx6m"]
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.899749 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pcvjk"]
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.901099 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wmvqq"]
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.902452 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-dw7dg"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.902651 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-ggxjh"]
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.902782 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qwplk"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.903365 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vtwcr"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.902780 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4tx6m"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.910380 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wmvqq"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.910878 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pcvjk"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.912246 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fn2j6"]
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.912641 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-lmjlx"]
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.913308 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lmjlx"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.913598 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ggxjh"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.914501 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fn2j6"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.914691 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.914855 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.914947 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.916537 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.916681 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.916764 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.916846 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.917044 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.917245 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.917355 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.917490 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.917757 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.918045 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.918230 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.918428 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.918590 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.918747 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.918923 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.919732 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.919798 4727 reflector.go:368] Caches populated for *v1.Secret from
object-"openshift-console"/"default-dockercfg-chnjx" Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.920373 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.919830 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.919935 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.920240 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.920752 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.920490 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.920584 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.920698 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.920972 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.921003 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 21 20:08:53 crc kubenswrapper[4727]: 
I1121 20:08:53.921494 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-98qwb"] Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.922331 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-98qwb" Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.922770 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.922925 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.923065 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.923076 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.923164 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.923218 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.923313 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.923437 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.923566 4727 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.923676 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.924082 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.924442 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.924518 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-tlqhq"] Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.924851 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.925044 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-jq87g"] Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.925687 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-jq87g" Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.925904 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-tlqhq" Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.931388 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-42c64"] Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.932705 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cpgqd"] Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.933466 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cpgqd" Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.933890 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-42c64" Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.936546 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.936932 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.938168 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.938326 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.938860 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.939210 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Nov 
21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.939292 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.941399 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.941551 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-8vpg5"] Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.944916 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-25c8d"] Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.955791 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-25c8d" Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.956276 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8vpg5" Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.963720 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.963755 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.969292 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-97ctf"] Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.971306 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-97ctf" Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.974816 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qms5m"] Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.977930 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.978720 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.978923 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.979217 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.984585 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-l7mfn"] Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.984785 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qms5m" Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.994229 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vhwbd"] Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.994685 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1599f256-87d8-47f4-a6fb-6cea3b58b242-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-zqbhr\" (UID: \"1599f256-87d8-47f4-a6fb-6cea3b58b242\") " pod="openshift-authentication/oauth-openshift-558db77b4-zqbhr" Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.994734 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58a72b98-691c-4da3-a1df-5cdc793b9ff5-config\") pod \"kube-controller-manager-operator-78b949d7b-vtwcr\" (UID: \"58a72b98-691c-4da3-a1df-5cdc793b9ff5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vtwcr" Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.994765 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/fd5128ab-6daa-4e73-b4ab-7a1ab98962fa-profile-collector-cert\") pod \"olm-operator-6b444d44fb-4tx6m\" (UID: \"fd5128ab-6daa-4e73-b4ab-7a1ab98962fa\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4tx6m" Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.994782 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vhwbd" Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.994708 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395920-d94jg"] Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.997052 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-7hlm6"] Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.997449 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-xpjqs"] Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.997903 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-4855z"] Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.998021 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-xpjqs" Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.998620 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395920-d94jg" Nov 21 20:08:53 crc kubenswrapper[4727]: I1121 20:08:53.994788 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/385471ac-e2f4-478f-a64e-8cb60d37cdd7-srv-cert\") pod \"catalog-operator-68c6474976-pcvjk\" (UID: \"385471ac-e2f4-478f-a64e-8cb60d37cdd7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pcvjk" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:53.999035 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-7hlm6" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:53.999129 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-l7mfn" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:53.998997 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hz24f\" (UniqueName: \"kubernetes.io/projected/72783796-a3ab-4a34-9b9e-b4df16dd1cc2-kube-api-access-hz24f\") pod \"controller-manager-879f6c89f-hx72f\" (UID: \"72783796-a3ab-4a34-9b9e-b4df16dd1cc2\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hx72f" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.000510 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59rqv\" (UniqueName: \"kubernetes.io/projected/385471ac-e2f4-478f-a64e-8cb60d37cdd7-kube-api-access-59rqv\") pod \"catalog-operator-68c6474976-pcvjk\" (UID: \"385471ac-e2f4-478f-a64e-8cb60d37cdd7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pcvjk" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.000539 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1599f256-87d8-47f4-a6fb-6cea3b58b242-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-zqbhr\" (UID: \"1599f256-87d8-47f4-a6fb-6cea3b58b242\") " pod="openshift-authentication/oauth-openshift-558db77b4-zqbhr" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.000562 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/fd5128ab-6daa-4e73-b4ab-7a1ab98962fa-srv-cert\") pod \"olm-operator-6b444d44fb-4tx6m\" (UID: \"fd5128ab-6daa-4e73-b4ab-7a1ab98962fa\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4tx6m" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.000583 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c900b720-71c1-41ac-bf6d-f554450c4d44-metrics-tls\") pod \"ingress-operator-5b745b69d9-ggxjh\" (UID: \"c900b720-71c1-41ac-bf6d-f554450c4d44\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ggxjh" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.000601 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/dacc1ea4-7062-46ad-a784-70537e92dc51-oauth-serving-cert\") pod \"console-f9d7485db-fj2k4\" (UID: \"dacc1ea4-7062-46ad-a784-70537e92dc51\") " pod="openshift-console/console-f9d7485db-fj2k4" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.000631 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e951776-970a-49b9-8f34-b2fd129bbc39-config\") pod \"route-controller-manager-6576b87f9c-qjc79\" (UID: \"4e951776-970a-49b9-8f34-b2fd129bbc39\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qjc79" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.000651 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b49f037a-e7ec-45ef-846b-79ab549adb90-audit\") pod \"apiserver-76f77b778f-4855z\" (UID: \"b49f037a-e7ec-45ef-846b-79ab549adb90\") " pod="openshift-apiserver/apiserver-76f77b778f-4855z" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.000678 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/dacc1ea4-7062-46ad-a784-70537e92dc51-console-config\") pod \"console-f9d7485db-fj2k4\" (UID: \"dacc1ea4-7062-46ad-a784-70537e92dc51\") " pod="openshift-console/console-f9d7485db-fj2k4" Nov 21 
20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.000696 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9b15f63-53a9-40d3-940a-fe8640ebecab-config\") pod \"kube-apiserver-operator-766d6c64bb-qrsz4\" (UID: \"b9b15f63-53a9-40d3-940a-fe8640ebecab\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qrsz4" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.000720 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a175fd16-a4b3-4df9-88f7-3110c4d6c40f-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-wmvqq\" (UID: \"a175fd16-a4b3-4df9-88f7-3110c4d6c40f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wmvqq" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.000740 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v89fw\" (UniqueName: \"kubernetes.io/projected/36ff6a87-1405-42da-9fd2-5bf32fa6578d-kube-api-access-v89fw\") pod \"machine-approver-56656f9798-p2c9z\" (UID: \"36ff6a87-1405-42da-9fd2-5bf32fa6578d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p2c9z" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.000764 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1599f256-87d8-47f4-a6fb-6cea3b58b242-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-zqbhr\" (UID: \"1599f256-87d8-47f4-a6fb-6cea3b58b242\") " pod="openshift-authentication/oauth-openshift-558db77b4-zqbhr" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.000781 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dacc1ea4-7062-46ad-a784-70537e92dc51-service-ca\") pod \"console-f9d7485db-fj2k4\" (UID: \"dacc1ea4-7062-46ad-a784-70537e92dc51\") " pod="openshift-console/console-f9d7485db-fj2k4" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.000804 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9b15f63-53a9-40d3-940a-fe8640ebecab-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-qrsz4\" (UID: \"b9b15f63-53a9-40d3-940a-fe8640ebecab\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qrsz4" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.000826 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/200ac8c1-4bf1-4356-8091-9279fc08523f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-pv5wp\" (UID: \"200ac8c1-4bf1-4356-8091-9279fc08523f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pv5wp" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.000847 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkd2q\" (UniqueName: \"kubernetes.io/projected/b49f037a-e7ec-45ef-846b-79ab549adb90-kube-api-access-hkd2q\") pod \"apiserver-76f77b778f-4855z\" (UID: \"b49f037a-e7ec-45ef-846b-79ab549adb90\") " pod="openshift-apiserver/apiserver-76f77b778f-4855z" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.000868 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xg5n\" (UniqueName: \"kubernetes.io/projected/208f96f5-b245-4b8d-96d0-5210189d0f13-kube-api-access-7xg5n\") pod \"authentication-operator-69f744f599-hjqgw\" (UID: \"208f96f5-b245-4b8d-96d0-5210189d0f13\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-hjqgw" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.000889 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l724\" (UniqueName: \"kubernetes.io/projected/1599f256-87d8-47f4-a6fb-6cea3b58b242-kube-api-access-5l724\") pod \"oauth-openshift-558db77b4-zqbhr\" (UID: \"1599f256-87d8-47f4-a6fb-6cea3b58b242\") " pod="openshift-authentication/oauth-openshift-558db77b4-zqbhr" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.000910 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fd2bb95c-fa68-47b4-bb37-1f1724773a74-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-qwplk\" (UID: \"fd2bb95c-fa68-47b4-bb37-1f1724773a74\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qwplk" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.000933 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/900a1b4b-fc56-4edd-b115-bbd76db83b12-config\") pod \"openshift-apiserver-operator-796bbdcf4f-lqm45\" (UID: \"900a1b4b-fc56-4edd-b115-bbd76db83b12\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lqm45" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.000952 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/385471ac-e2f4-478f-a64e-8cb60d37cdd7-profile-collector-cert\") pod \"catalog-operator-68c6474976-pcvjk\" (UID: \"385471ac-e2f4-478f-a64e-8cb60d37cdd7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pcvjk" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.001007 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b49f037a-e7ec-45ef-846b-79ab549adb90-config\") pod \"apiserver-76f77b778f-4855z\" (UID: \"b49f037a-e7ec-45ef-846b-79ab549adb90\") " pod="openshift-apiserver/apiserver-76f77b778f-4855z"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.001054 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26865f5b-6d04-418c-9092-6e1853bc9c88-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-l4m75\" (UID: \"26865f5b-6d04-418c-9092-6e1853bc9c88\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l4m75"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.001082 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58dc7f0b-2626-478d-a541-511adc47db56-config\") pod \"machine-api-operator-5694c8668f-fwqq7\" (UID: \"58dc7f0b-2626-478d-a541-511adc47db56\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fwqq7"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.001106 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xktc\" (UniqueName: \"kubernetes.io/projected/cafc19d0-a511-4cea-bd92-c28a18224e9f-kube-api-access-9xktc\") pod \"openshift-config-operator-7777fb866f-vbzh7\" (UID: \"cafc19d0-a511-4cea-bd92-c28a18224e9f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vbzh7"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.001127 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/208f96f5-b245-4b8d-96d0-5210189d0f13-serving-cert\") pod \"authentication-operator-69f744f599-hjqgw\" (UID: \"208f96f5-b245-4b8d-96d0-5210189d0f13\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hjqgw"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.001140 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1599f256-87d8-47f4-a6fb-6cea3b58b242-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-zqbhr\" (UID: \"1599f256-87d8-47f4-a6fb-6cea3b58b242\") " pod="openshift-authentication/oauth-openshift-558db77b4-zqbhr"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.001157 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsc2p\" (UniqueName: \"kubernetes.io/projected/26865f5b-6d04-418c-9092-6e1853bc9c88-kube-api-access-tsc2p\") pod \"openshift-controller-manager-operator-756b6f6bc6-l4m75\" (UID: \"26865f5b-6d04-418c-9092-6e1853bc9c88\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l4m75"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.001183 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4ndk\" (UniqueName: \"kubernetes.io/projected/e3860251-af35-4f12-81ce-91855c94d8c8-kube-api-access-q4ndk\") pod \"router-default-5444994796-dw7dg\" (UID: \"e3860251-af35-4f12-81ce-91855c94d8c8\") " pod="openshift-ingress/router-default-5444994796-dw7dg"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.001228 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b49f037a-e7ec-45ef-846b-79ab549adb90-trusted-ca-bundle\") pod \"apiserver-76f77b778f-4855z\" (UID: \"b49f037a-e7ec-45ef-846b-79ab549adb90\") " pod="openshift-apiserver/apiserver-76f77b778f-4855z"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.001251 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1599f256-87d8-47f4-a6fb-6cea3b58b242-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-zqbhr\" (UID: \"1599f256-87d8-47f4-a6fb-6cea3b58b242\") " pod="openshift-authentication/oauth-openshift-558db77b4-zqbhr"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.001273 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1599f256-87d8-47f4-a6fb-6cea3b58b242-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-zqbhr\" (UID: \"1599f256-87d8-47f4-a6fb-6cea3b58b242\") " pod="openshift-authentication/oauth-openshift-558db77b4-zqbhr"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.001298 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smqdc\" (UniqueName: \"kubernetes.io/projected/900a1b4b-fc56-4edd-b115-bbd76db83b12-kube-api-access-smqdc\") pod \"openshift-apiserver-operator-796bbdcf4f-lqm45\" (UID: \"900a1b4b-fc56-4edd-b115-bbd76db83b12\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lqm45"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.001326 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5nst\" (UniqueName: \"kubernetes.io/projected/200ac8c1-4bf1-4356-8091-9279fc08523f-kube-api-access-g5nst\") pod \"cluster-samples-operator-665b6dd947-pv5wp\" (UID: \"200ac8c1-4bf1-4356-8091-9279fc08523f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pv5wp"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.001345 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1599f256-87d8-47f4-a6fb-6cea3b58b242-audit-policies\") pod \"oauth-openshift-558db77b4-zqbhr\" (UID: \"1599f256-87d8-47f4-a6fb-6cea3b58b242\") " pod="openshift-authentication/oauth-openshift-558db77b4-zqbhr"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.001363 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1599f256-87d8-47f4-a6fb-6cea3b58b242-audit-dir\") pod \"oauth-openshift-558db77b4-zqbhr\" (UID: \"1599f256-87d8-47f4-a6fb-6cea3b58b242\") " pod="openshift-authentication/oauth-openshift-558db77b4-zqbhr"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.001385 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7jqr\" (UniqueName: \"kubernetes.io/projected/fd5128ab-6daa-4e73-b4ab-7a1ab98962fa-kube-api-access-l7jqr\") pod \"olm-operator-6b444d44fb-4tx6m\" (UID: \"fd5128ab-6daa-4e73-b4ab-7a1ab98962fa\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4tx6m"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.001415 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c900b720-71c1-41ac-bf6d-f554450c4d44-bound-sa-token\") pod \"ingress-operator-5b745b69d9-ggxjh\" (UID: \"c900b720-71c1-41ac-bf6d-f554450c4d44\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ggxjh"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.001458 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/72783796-a3ab-4a34-9b9e-b4df16dd1cc2-client-ca\") pod \"controller-manager-879f6c89f-hx72f\" (UID: \"72783796-a3ab-4a34-9b9e-b4df16dd1cc2\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hx72f"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.001485 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e951776-970a-49b9-8f34-b2fd129bbc39-serving-cert\") pod \"route-controller-manager-6576b87f9c-qjc79\" (UID: \"4e951776-970a-49b9-8f34-b2fd129bbc39\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qjc79"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.001512 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/58dc7f0b-2626-478d-a541-511adc47db56-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-fwqq7\" (UID: \"58dc7f0b-2626-478d-a541-511adc47db56\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fwqq7"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.001528 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e3860251-af35-4f12-81ce-91855c94d8c8-service-ca-bundle\") pod \"router-default-5444994796-dw7dg\" (UID: \"e3860251-af35-4f12-81ce-91855c94d8c8\") " pod="openshift-ingress/router-default-5444994796-dw7dg"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.001544 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b49f037a-e7ec-45ef-846b-79ab549adb90-image-import-ca\") pod \"apiserver-76f77b778f-4855z\" (UID: \"b49f037a-e7ec-45ef-846b-79ab549adb90\") " pod="openshift-apiserver/apiserver-76f77b778f-4855z"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.001561 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b49f037a-e7ec-45ef-846b-79ab549adb90-serving-cert\") pod \"apiserver-76f77b778f-4855z\" (UID: \"b49f037a-e7ec-45ef-846b-79ab549adb90\") " pod="openshift-apiserver/apiserver-76f77b778f-4855z"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.001622 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b49f037a-e7ec-45ef-846b-79ab549adb90-node-pullsecrets\") pod \"apiserver-76f77b778f-4855z\" (UID: \"b49f037a-e7ec-45ef-846b-79ab549adb90\") " pod="openshift-apiserver/apiserver-76f77b778f-4855z"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.001641 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54882e90-f639-4a95-ae1b-80f6fdbdaf5f-serving-cert\") pod \"apiserver-7bbb656c7d-jbvs8\" (UID: \"54882e90-f639-4a95-ae1b-80f6fdbdaf5f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jbvs8"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.001670 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/cafc19d0-a511-4cea-bd92-c28a18224e9f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-vbzh7\" (UID: \"cafc19d0-a511-4cea-bd92-c28a18224e9f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vbzh7"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.001685 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b49f037a-e7ec-45ef-846b-79ab549adb90-audit-dir\") pod \"apiserver-76f77b778f-4855z\" (UID: \"b49f037a-e7ec-45ef-846b-79ab549adb90\") " pod="openshift-apiserver/apiserver-76f77b778f-4855z"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.001701 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1599f256-87d8-47f4-a6fb-6cea3b58b242-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-zqbhr\" (UID: \"1599f256-87d8-47f4-a6fb-6cea3b58b242\") " pod="openshift-authentication/oauth-openshift-558db77b4-zqbhr"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.001720 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjzjb\" (UniqueName: \"kubernetes.io/projected/c900b720-71c1-41ac-bf6d-f554450c4d44-kube-api-access-tjzjb\") pod \"ingress-operator-5b745b69d9-ggxjh\" (UID: \"c900b720-71c1-41ac-bf6d-f554450c4d44\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ggxjh"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.001735 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs7fm\" (UniqueName: \"kubernetes.io/projected/4e951776-970a-49b9-8f34-b2fd129bbc39-kube-api-access-bs7fm\") pod \"route-controller-manager-6576b87f9c-qjc79\" (UID: \"4e951776-970a-49b9-8f34-b2fd129bbc39\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qjc79"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.001760 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/208f96f5-b245-4b8d-96d0-5210189d0f13-service-ca-bundle\") pod \"authentication-operator-69f744f599-hjqgw\" (UID: \"208f96f5-b245-4b8d-96d0-5210189d0f13\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hjqgw"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.001779 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b9b15f63-53a9-40d3-940a-fe8640ebecab-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-qrsz4\" (UID: \"b9b15f63-53a9-40d3-940a-fe8640ebecab\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qrsz4"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.001795 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/54882e90-f639-4a95-ae1b-80f6fdbdaf5f-audit-dir\") pod \"apiserver-7bbb656c7d-jbvs8\" (UID: \"54882e90-f639-4a95-ae1b-80f6fdbdaf5f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jbvs8"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.001809 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72783796-a3ab-4a34-9b9e-b4df16dd1cc2-serving-cert\") pod \"controller-manager-879f6c89f-hx72f\" (UID: \"72783796-a3ab-4a34-9b9e-b4df16dd1cc2\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hx72f"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.001835 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/36ff6a87-1405-42da-9fd2-5bf32fa6578d-auth-proxy-config\") pod \"machine-approver-56656f9798-p2c9z\" (UID: \"36ff6a87-1405-42da-9fd2-5bf32fa6578d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p2c9z"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.001856 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b49f037a-e7ec-45ef-846b-79ab549adb90-etcd-serving-ca\") pod \"apiserver-76f77b778f-4855z\" (UID: \"b49f037a-e7ec-45ef-846b-79ab549adb90\") " pod="openshift-apiserver/apiserver-76f77b778f-4855z"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.001906 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/36ff6a87-1405-42da-9fd2-5bf32fa6578d-machine-approver-tls\") pod \"machine-approver-56656f9798-p2c9z\" (UID: \"36ff6a87-1405-42da-9fd2-5bf32fa6578d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p2c9z"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.001978 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/208f96f5-b245-4b8d-96d0-5210189d0f13-config\") pod \"authentication-operator-69f744f599-hjqgw\" (UID: \"208f96f5-b245-4b8d-96d0-5210189d0f13\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hjqgw"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.001997 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvkqk\" (UniqueName: \"kubernetes.io/projected/58dc7f0b-2626-478d-a541-511adc47db56-kube-api-access-zvkqk\") pod \"machine-api-operator-5694c8668f-fwqq7\" (UID: \"58dc7f0b-2626-478d-a541-511adc47db56\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fwqq7"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.002015 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5w8xp\" (UniqueName: \"kubernetes.io/projected/3f3303e9-97ef-4e9b-9dc0-076066682c43-kube-api-access-5w8xp\") pod \"downloads-7954f5f757-nzpzh\" (UID: \"3f3303e9-97ef-4e9b-9dc0-076066682c43\") " pod="openshift-console/downloads-7954f5f757-nzpzh"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.002038 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/fd2bb95c-fa68-47b4-bb37-1f1724773a74-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-qwplk\" (UID: \"fd2bb95c-fa68-47b4-bb37-1f1724773a74\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qwplk"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.002056 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dacc1ea4-7062-46ad-a784-70537e92dc51-trusted-ca-bundle\") pod \"console-f9d7485db-fj2k4\" (UID: \"dacc1ea4-7062-46ad-a784-70537e92dc51\") " pod="openshift-console/console-f9d7485db-fj2k4"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.002071 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/58dc7f0b-2626-478d-a541-511adc47db56-images\") pod \"machine-api-operator-5694c8668f-fwqq7\" (UID: \"58dc7f0b-2626-478d-a541-511adc47db56\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fwqq7"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.002106 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/72783796-a3ab-4a34-9b9e-b4df16dd1cc2-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-hx72f\" (UID: \"72783796-a3ab-4a34-9b9e-b4df16dd1cc2\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hx72f"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.002325 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/900a1b4b-fc56-4edd-b115-bbd76db83b12-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-lqm45\" (UID: \"900a1b4b-fc56-4edd-b115-bbd76db83b12\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lqm45"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.002355 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/54882e90-f639-4a95-ae1b-80f6fdbdaf5f-etcd-client\") pod \"apiserver-7bbb656c7d-jbvs8\" (UID: \"54882e90-f639-4a95-ae1b-80f6fdbdaf5f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jbvs8"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.002373 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crm4z\" (UniqueName: \"kubernetes.io/projected/fd2bb95c-fa68-47b4-bb37-1f1724773a74-kube-api-access-crm4z\") pod \"cluster-image-registry-operator-dc59b4c8b-qwplk\" (UID: \"fd2bb95c-fa68-47b4-bb37-1f1724773a74\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qwplk"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.002389 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmsg4\" (UniqueName: \"kubernetes.io/projected/a175fd16-a4b3-4df9-88f7-3110c4d6c40f-kube-api-access-zmsg4\") pod \"package-server-manager-789f6589d5-wmvqq\" (UID: \"a175fd16-a4b3-4df9-88f7-3110c4d6c40f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wmvqq"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.002408 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b49f037a-e7ec-45ef-846b-79ab549adb90-encryption-config\") pod \"apiserver-76f77b778f-4855z\" (UID: \"b49f037a-e7ec-45ef-846b-79ab549adb90\") " pod="openshift-apiserver/apiserver-76f77b778f-4855z"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.002422 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/dacc1ea4-7062-46ad-a784-70537e92dc51-console-serving-cert\") pod \"console-f9d7485db-fj2k4\" (UID: \"dacc1ea4-7062-46ad-a784-70537e92dc51\") " pod="openshift-console/console-f9d7485db-fj2k4"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.002442 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36ff6a87-1405-42da-9fd2-5bf32fa6578d-config\") pod \"machine-approver-56656f9798-p2c9z\" (UID: \"36ff6a87-1405-42da-9fd2-5bf32fa6578d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p2c9z"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.002481 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/208f96f5-b245-4b8d-96d0-5210189d0f13-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-hjqgw\" (UID: \"208f96f5-b245-4b8d-96d0-5210189d0f13\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hjqgw"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.002585 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/54882e90-f639-4a95-ae1b-80f6fdbdaf5f-encryption-config\") pod \"apiserver-7bbb656c7d-jbvs8\" (UID: \"54882e90-f639-4a95-ae1b-80f6fdbdaf5f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jbvs8"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.002601 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/dacc1ea4-7062-46ad-a784-70537e92dc51-console-oauth-config\") pod \"console-f9d7485db-fj2k4\" (UID: \"dacc1ea4-7062-46ad-a784-70537e92dc51\") " pod="openshift-console/console-f9d7485db-fj2k4"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.002626 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1599f256-87d8-47f4-a6fb-6cea3b58b242-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-zqbhr\" (UID: \"1599f256-87d8-47f4-a6fb-6cea3b58b242\") " pod="openshift-authentication/oauth-openshift-558db77b4-zqbhr"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.002663 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26865f5b-6d04-418c-9092-6e1853bc9c88-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-l4m75\" (UID: \"26865f5b-6d04-418c-9092-6e1853bc9c88\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l4m75"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.002684 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1599f256-87d8-47f4-a6fb-6cea3b58b242-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-zqbhr\" (UID: \"1599f256-87d8-47f4-a6fb-6cea3b58b242\") " pod="openshift-authentication/oauth-openshift-558db77b4-zqbhr"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.002715 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/e3860251-af35-4f12-81ce-91855c94d8c8-default-certificate\") pod \"router-default-5444994796-dw7dg\" (UID: \"e3860251-af35-4f12-81ce-91855c94d8c8\") " pod="openshift-ingress/router-default-5444994796-dw7dg"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.002730 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e3860251-af35-4f12-81ce-91855c94d8c8-metrics-certs\") pod \"router-default-5444994796-dw7dg\" (UID: \"e3860251-af35-4f12-81ce-91855c94d8c8\") " pod="openshift-ingress/router-default-5444994796-dw7dg"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.002745 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b49f037a-e7ec-45ef-846b-79ab549adb90-etcd-client\") pod \"apiserver-76f77b778f-4855z\" (UID: \"b49f037a-e7ec-45ef-846b-79ab549adb90\") " pod="openshift-apiserver/apiserver-76f77b778f-4855z"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.002770 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cafc19d0-a511-4cea-bd92-c28a18224e9f-serving-cert\") pod \"openshift-config-operator-7777fb866f-vbzh7\" (UID: \"cafc19d0-a511-4cea-bd92-c28a18224e9f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vbzh7"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.002785 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1599f256-87d8-47f4-a6fb-6cea3b58b242-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-zqbhr\" (UID: \"1599f256-87d8-47f4-a6fb-6cea3b58b242\") " pod="openshift-authentication/oauth-openshift-558db77b4-zqbhr"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.002800 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svdnq\" (UniqueName: \"kubernetes.io/projected/54882e90-f639-4a95-ae1b-80f6fdbdaf5f-kube-api-access-svdnq\") pod \"apiserver-7bbb656c7d-jbvs8\" (UID: \"54882e90-f639-4a95-ae1b-80f6fdbdaf5f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jbvs8"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.002832 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4e951776-970a-49b9-8f34-b2fd129bbc39-client-ca\") pod \"route-controller-manager-6576b87f9c-qjc79\" (UID: \"4e951776-970a-49b9-8f34-b2fd129bbc39\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qjc79"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.002850 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58a72b98-691c-4da3-a1df-5cdc793b9ff5-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-vtwcr\" (UID: \"58a72b98-691c-4da3-a1df-5cdc793b9ff5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vtwcr"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.002865 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72783796-a3ab-4a34-9b9e-b4df16dd1cc2-config\") pod \"controller-manager-879f6c89f-hx72f\" (UID: \"72783796-a3ab-4a34-9b9e-b4df16dd1cc2\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hx72f"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.002882 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1599f256-87d8-47f4-a6fb-6cea3b58b242-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-zqbhr\" (UID: \"1599f256-87d8-47f4-a6fb-6cea3b58b242\") " pod="openshift-authentication/oauth-openshift-558db77b4-zqbhr"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.002895 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/54882e90-f639-4a95-ae1b-80f6fdbdaf5f-audit-policies\") pod \"apiserver-7bbb656c7d-jbvs8\" (UID: \"54882e90-f639-4a95-ae1b-80f6fdbdaf5f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jbvs8"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.002911 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fd2bb95c-fa68-47b4-bb37-1f1724773a74-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-qwplk\" (UID: \"fd2bb95c-fa68-47b4-bb37-1f1724773a74\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qwplk"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.002924 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c900b720-71c1-41ac-bf6d-f554450c4d44-trusted-ca\") pod \"ingress-operator-5b745b69d9-ggxjh\" (UID: \"c900b720-71c1-41ac-bf6d-f554450c4d44\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ggxjh"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.002938 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/e3860251-af35-4f12-81ce-91855c94d8c8-stats-auth\") pod \"router-default-5444994796-dw7dg\" (UID: \"e3860251-af35-4f12-81ce-91855c94d8c8\") " pod="openshift-ingress/router-default-5444994796-dw7dg"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.003033 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54882e90-f639-4a95-ae1b-80f6fdbdaf5f-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-jbvs8\" (UID: \"54882e90-f639-4a95-ae1b-80f6fdbdaf5f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jbvs8"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.003057 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9fkv\" (UniqueName: \"kubernetes.io/projected/dacc1ea4-7062-46ad-a784-70537e92dc51-kube-api-access-r9fkv\") pod \"console-f9d7485db-fj2k4\" (UID: \"dacc1ea4-7062-46ad-a784-70537e92dc51\") " pod="openshift-console/console-f9d7485db-fj2k4"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.003072 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/58a72b98-691c-4da3-a1df-5cdc793b9ff5-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-vtwcr\" (UID: \"58a72b98-691c-4da3-a1df-5cdc793b9ff5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vtwcr"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.003098 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/54882e90-f639-4a95-ae1b-80f6fdbdaf5f-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-jbvs8\" (UID: \"54882e90-f639-4a95-ae1b-80f6fdbdaf5f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jbvs8"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.003283 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-hjqgw"]
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.016643 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.016787 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hx72f"]
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.016857 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-fj2k4"]
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.018816 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.029374 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-vbzh7"]
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.029660 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-zqbhr"]
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.032567 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-fwqq7"]
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.034617 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qjc79"]
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.034943 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.038121 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-lvhwh"]
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.039484 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pv5wp"]
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.041339 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l4m75"]
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.042824 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qrsz4"]
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.043175 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-lvhwh"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.045912 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4tx6m"]
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.046007 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-jbvs8"]
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.046023 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-lmjlx"]
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.049947 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lqm45"]
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.050011 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vtwcr"]
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.050026 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-nzpzh"]
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.052444 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qwplk"]
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.053683 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-tlqhq"]
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.060630 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-97ctf"]
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.061719 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qms5m"]
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.063760 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.067013 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-ggxjh"]
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.069326 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fn2j6"]
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.072073 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.075990 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cpgqd"]
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.076446 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-7hlm6"]
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.081695 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-g4rwm"]
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.082326 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-98qwb"]
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.082361 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wmvqq"]
Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.082447 4727 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-g4rwm" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.084944 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-8vpg5"] Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.095846 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.099092 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-jq87g"] Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.100931 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-42c64"] Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.103950 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b49f037a-e7ec-45ef-846b-79ab549adb90-config\") pod \"apiserver-76f77b778f-4855z\" (UID: \"b49f037a-e7ec-45ef-846b-79ab549adb90\") " pod="openshift-apiserver/apiserver-76f77b778f-4855z" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.104009 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26865f5b-6d04-418c-9092-6e1853bc9c88-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-l4m75\" (UID: \"26865f5b-6d04-418c-9092-6e1853bc9c88\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l4m75" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.104039 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b0c5ddc-0891-4a5f-bf5d-6ab1d9dc0fbd-serving-cert\") pod 
\"openshift-kube-scheduler-operator-5fdd9b5758-98qwb\" (UID: \"0b0c5ddc-0891-4a5f-bf5d-6ab1d9dc0fbd\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-98qwb" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.104115 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xktc\" (UniqueName: \"kubernetes.io/projected/cafc19d0-a511-4cea-bd92-c28a18224e9f-kube-api-access-9xktc\") pod \"openshift-config-operator-7777fb866f-vbzh7\" (UID: \"cafc19d0-a511-4cea-bd92-c28a18224e9f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vbzh7" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.104143 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1599f256-87d8-47f4-a6fb-6cea3b58b242-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-zqbhr\" (UID: \"1599f256-87d8-47f4-a6fb-6cea3b58b242\") " pod="openshift-authentication/oauth-openshift-558db77b4-zqbhr" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.104163 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4ndk\" (UniqueName: \"kubernetes.io/projected/e3860251-af35-4f12-81ce-91855c94d8c8-kube-api-access-q4ndk\") pod \"router-default-5444994796-dw7dg\" (UID: \"e3860251-af35-4f12-81ce-91855c94d8c8\") " pod="openshift-ingress/router-default-5444994796-dw7dg" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.104211 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1599f256-87d8-47f4-a6fb-6cea3b58b242-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-zqbhr\" (UID: \"1599f256-87d8-47f4-a6fb-6cea3b58b242\") " pod="openshift-authentication/oauth-openshift-558db77b4-zqbhr" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 
20:08:54.104238 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1599f256-87d8-47f4-a6fb-6cea3b58b242-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-zqbhr\" (UID: \"1599f256-87d8-47f4-a6fb-6cea3b58b242\") " pod="openshift-authentication/oauth-openshift-558db77b4-zqbhr" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.104262 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a606ebf1-1f2a-4c31-b2f8-43c13ef31c50-etcd-ca\") pod \"etcd-operator-b45778765-7hlm6\" (UID: \"a606ebf1-1f2a-4c31-b2f8-43c13ef31c50\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7hlm6" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.104292 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1599f256-87d8-47f4-a6fb-6cea3b58b242-audit-policies\") pod \"oauth-openshift-558db77b4-zqbhr\" (UID: \"1599f256-87d8-47f4-a6fb-6cea3b58b242\") " pod="openshift-authentication/oauth-openshift-558db77b4-zqbhr" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.104311 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1599f256-87d8-47f4-a6fb-6cea3b58b242-audit-dir\") pod \"oauth-openshift-558db77b4-zqbhr\" (UID: \"1599f256-87d8-47f4-a6fb-6cea3b58b242\") " pod="openshift-authentication/oauth-openshift-558db77b4-zqbhr" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.104330 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdjxh\" (UniqueName: \"kubernetes.io/projected/86100e7c-cbfb-46c4-812d-f1ad2eb51b11-kube-api-access-bdjxh\") pod \"machine-config-operator-74547568cd-lmjlx\" (UID: 
\"86100e7c-cbfb-46c4-812d-f1ad2eb51b11\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lmjlx" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.104353 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b49f037a-e7ec-45ef-846b-79ab549adb90-image-import-ca\") pod \"apiserver-76f77b778f-4855z\" (UID: \"b49f037a-e7ec-45ef-846b-79ab549adb90\") " pod="openshift-apiserver/apiserver-76f77b778f-4855z" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.104369 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e3860251-af35-4f12-81ce-91855c94d8c8-service-ca-bundle\") pod \"router-default-5444994796-dw7dg\" (UID: \"e3860251-af35-4f12-81ce-91855c94d8c8\") " pod="openshift-ingress/router-default-5444994796-dw7dg" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.104391 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b49f037a-e7ec-45ef-846b-79ab549adb90-node-pullsecrets\") pod \"apiserver-76f77b778f-4855z\" (UID: \"b49f037a-e7ec-45ef-846b-79ab549adb90\") " pod="openshift-apiserver/apiserver-76f77b778f-4855z" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.104406 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54882e90-f639-4a95-ae1b-80f6fdbdaf5f-serving-cert\") pod \"apiserver-7bbb656c7d-jbvs8\" (UID: \"54882e90-f639-4a95-ae1b-80f6fdbdaf5f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jbvs8" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.104423 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a931e936-fb41-4b5a-b2dd-506cd7cec66c-webhook-cert\") pod 
\"packageserver-d55dfcdfc-cpgqd\" (UID: \"a931e936-fb41-4b5a-b2dd-506cd7cec66c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cpgqd" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.104444 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b49f037a-e7ec-45ef-846b-79ab549adb90-audit-dir\") pod \"apiserver-76f77b778f-4855z\" (UID: \"b49f037a-e7ec-45ef-846b-79ab549adb90\") " pod="openshift-apiserver/apiserver-76f77b778f-4855z" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.104459 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1599f256-87d8-47f4-a6fb-6cea3b58b242-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-zqbhr\" (UID: \"1599f256-87d8-47f4-a6fb-6cea3b58b242\") " pod="openshift-authentication/oauth-openshift-558db77b4-zqbhr" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.104476 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/208f96f5-b245-4b8d-96d0-5210189d0f13-service-ca-bundle\") pod \"authentication-operator-69f744f599-hjqgw\" (UID: \"208f96f5-b245-4b8d-96d0-5210189d0f13\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hjqgw" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.104491 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/343653aa-5654-4622-aa3e-045685abb471-config\") pod \"service-ca-operator-777779d784-l7mfn\" (UID: \"343653aa-5654-4622-aa3e-045685abb471\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-l7mfn" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.104507 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/36ff6a87-1405-42da-9fd2-5bf32fa6578d-auth-proxy-config\") pod \"machine-approver-56656f9798-p2c9z\" (UID: \"36ff6a87-1405-42da-9fd2-5bf32fa6578d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p2c9z" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.104524 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b49f037a-e7ec-45ef-846b-79ab549adb90-etcd-serving-ca\") pod \"apiserver-76f77b778f-4855z\" (UID: \"b49f037a-e7ec-45ef-846b-79ab549adb90\") " pod="openshift-apiserver/apiserver-76f77b778f-4855z" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.104542 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/36ff6a87-1405-42da-9fd2-5bf32fa6578d-machine-approver-tls\") pod \"machine-approver-56656f9798-p2c9z\" (UID: \"36ff6a87-1405-42da-9fd2-5bf32fa6578d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p2c9z" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.104557 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/208f96f5-b245-4b8d-96d0-5210189d0f13-config\") pod \"authentication-operator-69f744f599-hjqgw\" (UID: \"208f96f5-b245-4b8d-96d0-5210189d0f13\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hjqgw" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.104573 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5w8xp\" (UniqueName: \"kubernetes.io/projected/3f3303e9-97ef-4e9b-9dc0-076066682c43-kube-api-access-5w8xp\") pod \"downloads-7954f5f757-nzpzh\" (UID: \"3f3303e9-97ef-4e9b-9dc0-076066682c43\") " pod="openshift-console/downloads-7954f5f757-nzpzh" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 
20:08:54.104599 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dacc1ea4-7062-46ad-a784-70537e92dc51-trusted-ca-bundle\") pod \"console-f9d7485db-fj2k4\" (UID: \"dacc1ea4-7062-46ad-a784-70537e92dc51\") " pod="openshift-console/console-f9d7485db-fj2k4" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.104621 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/54882e90-f639-4a95-ae1b-80f6fdbdaf5f-etcd-client\") pod \"apiserver-7bbb656c7d-jbvs8\" (UID: \"54882e90-f639-4a95-ae1b-80f6fdbdaf5f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jbvs8" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.104639 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmsg4\" (UniqueName: \"kubernetes.io/projected/a175fd16-a4b3-4df9-88f7-3110c4d6c40f-kube-api-access-zmsg4\") pod \"package-server-manager-789f6589d5-wmvqq\" (UID: \"a175fd16-a4b3-4df9-88f7-3110c4d6c40f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wmvqq" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.104654 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b49f037a-e7ec-45ef-846b-79ab549adb90-encryption-config\") pod \"apiserver-76f77b778f-4855z\" (UID: \"b49f037a-e7ec-45ef-846b-79ab549adb90\") " pod="openshift-apiserver/apiserver-76f77b778f-4855z" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.104669 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/dacc1ea4-7062-46ad-a784-70537e92dc51-console-serving-cert\") pod \"console-f9d7485db-fj2k4\" (UID: \"dacc1ea4-7062-46ad-a784-70537e92dc51\") " pod="openshift-console/console-f9d7485db-fj2k4" Nov 21 20:08:54 
crc kubenswrapper[4727]: I1121 20:08:54.104687 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36ff6a87-1405-42da-9fd2-5bf32fa6578d-config\") pod \"machine-approver-56656f9798-p2c9z\" (UID: \"36ff6a87-1405-42da-9fd2-5bf32fa6578d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p2c9z" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.104706 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/208f96f5-b245-4b8d-96d0-5210189d0f13-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-hjqgw\" (UID: \"208f96f5-b245-4b8d-96d0-5210189d0f13\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hjqgw" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.104721 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/54882e90-f639-4a95-ae1b-80f6fdbdaf5f-encryption-config\") pod \"apiserver-7bbb656c7d-jbvs8\" (UID: \"54882e90-f639-4a95-ae1b-80f6fdbdaf5f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jbvs8" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.104737 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw9gp\" (UniqueName: \"kubernetes.io/projected/343653aa-5654-4622-aa3e-045685abb471-kube-api-access-hw9gp\") pod \"service-ca-operator-777779d784-l7mfn\" (UID: \"343653aa-5654-4622-aa3e-045685abb471\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-l7mfn" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.104753 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26865f5b-6d04-418c-9092-6e1853bc9c88-config\") pod 
\"openshift-controller-manager-operator-756b6f6bc6-l4m75\" (UID: \"26865f5b-6d04-418c-9092-6e1853bc9c88\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l4m75" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.104768 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b49f037a-e7ec-45ef-846b-79ab549adb90-etcd-client\") pod \"apiserver-76f77b778f-4855z\" (UID: \"b49f037a-e7ec-45ef-846b-79ab549adb90\") " pod="openshift-apiserver/apiserver-76f77b778f-4855z" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.104784 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cafc19d0-a511-4cea-bd92-c28a18224e9f-serving-cert\") pod \"openshift-config-operator-7777fb866f-vbzh7\" (UID: \"cafc19d0-a511-4cea-bd92-c28a18224e9f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vbzh7" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.104799 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1599f256-87d8-47f4-a6fb-6cea3b58b242-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-zqbhr\" (UID: \"1599f256-87d8-47f4-a6fb-6cea3b58b242\") " pod="openshift-authentication/oauth-openshift-558db77b4-zqbhr" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.104817 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvmx8\" (UniqueName: \"kubernetes.io/projected/3ed0ff62-2542-410b-ac29-904eb08bef16-kube-api-access-hvmx8\") pod \"control-plane-machine-set-operator-78cbb6b69f-vhwbd\" (UID: \"3ed0ff62-2542-410b-ac29-904eb08bef16\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vhwbd" Nov 21 20:08:54 crc kubenswrapper[4727]: 
I1121 20:08:54.104833 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1599f256-87d8-47f4-a6fb-6cea3b58b242-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-zqbhr\" (UID: \"1599f256-87d8-47f4-a6fb-6cea3b58b242\") " pod="openshift-authentication/oauth-openshift-558db77b4-zqbhr" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.104849 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/54882e90-f639-4a95-ae1b-80f6fdbdaf5f-audit-policies\") pod \"apiserver-7bbb656c7d-jbvs8\" (UID: \"54882e90-f639-4a95-ae1b-80f6fdbdaf5f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jbvs8" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.104863 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/54882e90-f639-4a95-ae1b-80f6fdbdaf5f-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-jbvs8\" (UID: \"54882e90-f639-4a95-ae1b-80f6fdbdaf5f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jbvs8" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.104877 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1599f256-87d8-47f4-a6fb-6cea3b58b242-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-zqbhr\" (UID: \"1599f256-87d8-47f4-a6fb-6cea3b58b242\") " pod="openshift-authentication/oauth-openshift-558db77b4-zqbhr" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.104892 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58a72b98-691c-4da3-a1df-5cdc793b9ff5-config\") pod \"kube-controller-manager-operator-78b949d7b-vtwcr\" (UID: 
\"58a72b98-691c-4da3-a1df-5cdc793b9ff5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vtwcr" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.104907 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/fd5128ab-6daa-4e73-b4ab-7a1ab98962fa-profile-collector-cert\") pod \"olm-operator-6b444d44fb-4tx6m\" (UID: \"fd5128ab-6daa-4e73-b4ab-7a1ab98962fa\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4tx6m" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.104923 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5191a87d-ec94-4c7f-95eb-5898535d524b-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-25c8d\" (UID: \"5191a87d-ec94-4c7f-95eb-5898535d524b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-25c8d" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.104938 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1599f256-87d8-47f4-a6fb-6cea3b58b242-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-zqbhr\" (UID: \"1599f256-87d8-47f4-a6fb-6cea3b58b242\") " pod="openshift-authentication/oauth-openshift-558db77b4-zqbhr" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.104970 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/dacc1ea4-7062-46ad-a784-70537e92dc51-console-config\") pod \"console-f9d7485db-fj2k4\" (UID: \"dacc1ea4-7062-46ad-a784-70537e92dc51\") " pod="openshift-console/console-f9d7485db-fj2k4" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.104986 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9b15f63-53a9-40d3-940a-fe8640ebecab-config\") pod \"kube-apiserver-operator-766d6c64bb-qrsz4\" (UID: \"b9b15f63-53a9-40d3-940a-fe8640ebecab\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qrsz4" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.105001 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1599f256-87d8-47f4-a6fb-6cea3b58b242-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-zqbhr\" (UID: \"1599f256-87d8-47f4-a6fb-6cea3b58b242\") " pod="openshift-authentication/oauth-openshift-558db77b4-zqbhr" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.105014 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dacc1ea4-7062-46ad-a784-70537e92dc51-service-ca\") pod \"console-f9d7485db-fj2k4\" (UID: \"dacc1ea4-7062-46ad-a784-70537e92dc51\") " pod="openshift-console/console-f9d7485db-fj2k4" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.105029 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3ed0ff62-2542-410b-ac29-904eb08bef16-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-vhwbd\" (UID: \"3ed0ff62-2542-410b-ac29-904eb08bef16\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vhwbd" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.105043 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a606ebf1-1f2a-4c31-b2f8-43c13ef31c50-config\") pod \"etcd-operator-b45778765-7hlm6\" (UID: 
\"a606ebf1-1f2a-4c31-b2f8-43c13ef31c50\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7hlm6" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.105059 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/900a1b4b-fc56-4edd-b115-bbd76db83b12-config\") pod \"openshift-apiserver-operator-796bbdcf4f-lqm45\" (UID: \"900a1b4b-fc56-4edd-b115-bbd76db83b12\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lqm45" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.105074 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/385471ac-e2f4-478f-a64e-8cb60d37cdd7-profile-collector-cert\") pod \"catalog-operator-68c6474976-pcvjk\" (UID: \"385471ac-e2f4-478f-a64e-8cb60d37cdd7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pcvjk" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.105380 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58dc7f0b-2626-478d-a541-511adc47db56-config\") pod \"machine-api-operator-5694c8668f-fwqq7\" (UID: \"58dc7f0b-2626-478d-a541-511adc47db56\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fwqq7" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.105406 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b0c5ddc-0891-4a5f-bf5d-6ab1d9dc0fbd-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-98qwb\" (UID: \"0b0c5ddc-0891-4a5f-bf5d-6ab1d9dc0fbd\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-98qwb" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.105430 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/208f96f5-b245-4b8d-96d0-5210189d0f13-serving-cert\") pod \"authentication-operator-69f744f599-hjqgw\" (UID: \"208f96f5-b245-4b8d-96d0-5210189d0f13\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hjqgw" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.105446 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsc2p\" (UniqueName: \"kubernetes.io/projected/26865f5b-6d04-418c-9092-6e1853bc9c88-kube-api-access-tsc2p\") pod \"openshift-controller-manager-operator-756b6f6bc6-l4m75\" (UID: \"26865f5b-6d04-418c-9092-6e1853bc9c88\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l4m75" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.105463 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b49f037a-e7ec-45ef-846b-79ab549adb90-trusted-ca-bundle\") pod \"apiserver-76f77b778f-4855z\" (UID: \"b49f037a-e7ec-45ef-846b-79ab549adb90\") " pod="openshift-apiserver/apiserver-76f77b778f-4855z" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.105479 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smqdc\" (UniqueName: \"kubernetes.io/projected/900a1b4b-fc56-4edd-b115-bbd76db83b12-kube-api-access-smqdc\") pod \"openshift-apiserver-operator-796bbdcf4f-lqm45\" (UID: \"900a1b4b-fc56-4edd-b115-bbd76db83b12\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lqm45" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.105494 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a606ebf1-1f2a-4c31-b2f8-43c13ef31c50-etcd-client\") pod \"etcd-operator-b45778765-7hlm6\" (UID: \"a606ebf1-1f2a-4c31-b2f8-43c13ef31c50\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-7hlm6" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.105512 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9m8ns\" (UniqueName: \"kubernetes.io/projected/a606ebf1-1f2a-4c31-b2f8-43c13ef31c50-kube-api-access-9m8ns\") pod \"etcd-operator-b45778765-7hlm6\" (UID: \"a606ebf1-1f2a-4c31-b2f8-43c13ef31c50\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7hlm6" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.105530 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5nst\" (UniqueName: \"kubernetes.io/projected/200ac8c1-4bf1-4356-8091-9279fc08523f-kube-api-access-g5nst\") pod \"cluster-samples-operator-665b6dd947-pv5wp\" (UID: \"200ac8c1-4bf1-4356-8091-9279fc08523f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pv5wp" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.105547 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7jqr\" (UniqueName: \"kubernetes.io/projected/fd5128ab-6daa-4e73-b4ab-7a1ab98962fa-kube-api-access-l7jqr\") pod \"olm-operator-6b444d44fb-4tx6m\" (UID: \"fd5128ab-6daa-4e73-b4ab-7a1ab98962fa\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4tx6m" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.105567 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c900b720-71c1-41ac-bf6d-f554450c4d44-bound-sa-token\") pod \"ingress-operator-5b745b69d9-ggxjh\" (UID: \"c900b720-71c1-41ac-bf6d-f554450c4d44\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ggxjh" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.105589 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/72783796-a3ab-4a34-9b9e-b4df16dd1cc2-client-ca\") pod \"controller-manager-879f6c89f-hx72f\" (UID: \"72783796-a3ab-4a34-9b9e-b4df16dd1cc2\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hx72f" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.105613 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e951776-970a-49b9-8f34-b2fd129bbc39-serving-cert\") pod \"route-controller-manager-6576b87f9c-qjc79\" (UID: \"4e951776-970a-49b9-8f34-b2fd129bbc39\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qjc79" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.105658 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b49f037a-e7ec-45ef-846b-79ab549adb90-serving-cert\") pod \"apiserver-76f77b778f-4855z\" (UID: \"b49f037a-e7ec-45ef-846b-79ab549adb90\") " pod="openshift-apiserver/apiserver-76f77b778f-4855z" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.105692 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/58dc7f0b-2626-478d-a541-511adc47db56-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-fwqq7\" (UID: \"58dc7f0b-2626-478d-a541-511adc47db56\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fwqq7" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.105716 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/cafc19d0-a511-4cea-bd92-c28a18224e9f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-vbzh7\" (UID: \"cafc19d0-a511-4cea-bd92-c28a18224e9f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vbzh7" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 
20:08:54.105737 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjzjb\" (UniqueName: \"kubernetes.io/projected/c900b720-71c1-41ac-bf6d-f554450c4d44-kube-api-access-tjzjb\") pod \"ingress-operator-5b745b69d9-ggxjh\" (UID: \"c900b720-71c1-41ac-bf6d-f554450c4d44\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ggxjh" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.105758 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs7fm\" (UniqueName: \"kubernetes.io/projected/4e951776-970a-49b9-8f34-b2fd129bbc39-kube-api-access-bs7fm\") pod \"route-controller-manager-6576b87f9c-qjc79\" (UID: \"4e951776-970a-49b9-8f34-b2fd129bbc39\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qjc79" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.105781 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b9b15f63-53a9-40d3-940a-fe8640ebecab-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-qrsz4\" (UID: \"b9b15f63-53a9-40d3-940a-fe8640ebecab\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qrsz4" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.105800 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/54882e90-f639-4a95-ae1b-80f6fdbdaf5f-audit-dir\") pod \"apiserver-7bbb656c7d-jbvs8\" (UID: \"54882e90-f639-4a95-ae1b-80f6fdbdaf5f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jbvs8" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.105821 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72783796-a3ab-4a34-9b9e-b4df16dd1cc2-serving-cert\") pod \"controller-manager-879f6c89f-hx72f\" (UID: 
\"72783796-a3ab-4a34-9b9e-b4df16dd1cc2\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hx72f" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.105847 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvkqk\" (UniqueName: \"kubernetes.io/projected/58dc7f0b-2626-478d-a541-511adc47db56-kube-api-access-zvkqk\") pod \"machine-api-operator-5694c8668f-fwqq7\" (UID: \"58dc7f0b-2626-478d-a541-511adc47db56\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fwqq7" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.105872 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/fd2bb95c-fa68-47b4-bb37-1f1724773a74-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-qwplk\" (UID: \"fd2bb95c-fa68-47b4-bb37-1f1724773a74\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qwplk" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.105894 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/58dc7f0b-2626-478d-a541-511adc47db56-images\") pod \"machine-api-operator-5694c8668f-fwqq7\" (UID: \"58dc7f0b-2626-478d-a541-511adc47db56\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fwqq7" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.105914 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/86100e7c-cbfb-46c4-812d-f1ad2eb51b11-auth-proxy-config\") pod \"machine-config-operator-74547568cd-lmjlx\" (UID: \"86100e7c-cbfb-46c4-812d-f1ad2eb51b11\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lmjlx" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.105934 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/343653aa-5654-4622-aa3e-045685abb471-serving-cert\") pod \"service-ca-operator-777779d784-l7mfn\" (UID: \"343653aa-5654-4622-aa3e-045685abb471\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-l7mfn" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.105977 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/72783796-a3ab-4a34-9b9e-b4df16dd1cc2-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-hx72f\" (UID: \"72783796-a3ab-4a34-9b9e-b4df16dd1cc2\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hx72f" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.106006 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/900a1b4b-fc56-4edd-b115-bbd76db83b12-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-lqm45\" (UID: \"900a1b4b-fc56-4edd-b115-bbd76db83b12\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lqm45" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.106034 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crm4z\" (UniqueName: \"kubernetes.io/projected/fd2bb95c-fa68-47b4-bb37-1f1724773a74-kube-api-access-crm4z\") pod \"cluster-image-registry-operator-dc59b4c8b-qwplk\" (UID: \"fd2bb95c-fa68-47b4-bb37-1f1724773a74\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qwplk" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.106055 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5191a87d-ec94-4c7f-95eb-5898535d524b-proxy-tls\") pod \"machine-config-controller-84d6567774-25c8d\" (UID: 
\"5191a87d-ec94-4c7f-95eb-5898535d524b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-25c8d" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.106112 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/dacc1ea4-7062-46ad-a784-70537e92dc51-console-oauth-config\") pod \"console-f9d7485db-fj2k4\" (UID: \"dacc1ea4-7062-46ad-a784-70537e92dc51\") " pod="openshift-console/console-f9d7485db-fj2k4" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.106140 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1599f256-87d8-47f4-a6fb-6cea3b58b242-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-zqbhr\" (UID: \"1599f256-87d8-47f4-a6fb-6cea3b58b242\") " pod="openshift-authentication/oauth-openshift-558db77b4-zqbhr" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.106160 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a931e936-fb41-4b5a-b2dd-506cd7cec66c-tmpfs\") pod \"packageserver-d55dfcdfc-cpgqd\" (UID: \"a931e936-fb41-4b5a-b2dd-506cd7cec66c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cpgqd" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.106180 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b0c5ddc-0891-4a5f-bf5d-6ab1d9dc0fbd-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-98qwb\" (UID: \"0b0c5ddc-0891-4a5f-bf5d-6ab1d9dc0fbd\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-98qwb" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.106242 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/86100e7c-cbfb-46c4-812d-f1ad2eb51b11-proxy-tls\") pod \"machine-config-operator-74547568cd-lmjlx\" (UID: \"86100e7c-cbfb-46c4-812d-f1ad2eb51b11\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lmjlx" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.106267 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1599f256-87d8-47f4-a6fb-6cea3b58b242-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-zqbhr\" (UID: \"1599f256-87d8-47f4-a6fb-6cea3b58b242\") " pod="openshift-authentication/oauth-openshift-558db77b4-zqbhr" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.106290 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/e3860251-af35-4f12-81ce-91855c94d8c8-default-certificate\") pod \"router-default-5444994796-dw7dg\" (UID: \"e3860251-af35-4f12-81ce-91855c94d8c8\") " pod="openshift-ingress/router-default-5444994796-dw7dg" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.106313 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e3860251-af35-4f12-81ce-91855c94d8c8-metrics-certs\") pod \"router-default-5444994796-dw7dg\" (UID: \"e3860251-af35-4f12-81ce-91855c94d8c8\") " pod="openshift-ingress/router-default-5444994796-dw7dg" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.106334 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a606ebf1-1f2a-4c31-b2f8-43c13ef31c50-etcd-service-ca\") pod \"etcd-operator-b45778765-7hlm6\" (UID: \"a606ebf1-1f2a-4c31-b2f8-43c13ef31c50\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-7hlm6" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.106357 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svdnq\" (UniqueName: \"kubernetes.io/projected/54882e90-f639-4a95-ae1b-80f6fdbdaf5f-kube-api-access-svdnq\") pod \"apiserver-7bbb656c7d-jbvs8\" (UID: \"54882e90-f639-4a95-ae1b-80f6fdbdaf5f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jbvs8" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.106378 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4e951776-970a-49b9-8f34-b2fd129bbc39-client-ca\") pod \"route-controller-manager-6576b87f9c-qjc79\" (UID: \"4e951776-970a-49b9-8f34-b2fd129bbc39\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qjc79" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.106398 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72783796-a3ab-4a34-9b9e-b4df16dd1cc2-config\") pod \"controller-manager-879f6c89f-hx72f\" (UID: \"72783796-a3ab-4a34-9b9e-b4df16dd1cc2\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hx72f" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.106419 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58a72b98-691c-4da3-a1df-5cdc793b9ff5-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-vtwcr\" (UID: \"58a72b98-691c-4da3-a1df-5cdc793b9ff5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vtwcr" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.106442 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/fd2bb95c-fa68-47b4-bb37-1f1724773a74-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-qwplk\" (UID: \"fd2bb95c-fa68-47b4-bb37-1f1724773a74\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qwplk" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.106463 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c900b720-71c1-41ac-bf6d-f554450c4d44-trusted-ca\") pod \"ingress-operator-5b745b69d9-ggxjh\" (UID: \"c900b720-71c1-41ac-bf6d-f554450c4d44\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ggxjh" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.106483 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/e3860251-af35-4f12-81ce-91855c94d8c8-stats-auth\") pod \"router-default-5444994796-dw7dg\" (UID: \"e3860251-af35-4f12-81ce-91855c94d8c8\") " pod="openshift-ingress/router-default-5444994796-dw7dg" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.106504 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54882e90-f639-4a95-ae1b-80f6fdbdaf5f-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-jbvs8\" (UID: \"54882e90-f639-4a95-ae1b-80f6fdbdaf5f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jbvs8" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.106537 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9fkv\" (UniqueName: \"kubernetes.io/projected/dacc1ea4-7062-46ad-a784-70537e92dc51-kube-api-access-r9fkv\") pod \"console-f9d7485db-fj2k4\" (UID: \"dacc1ea4-7062-46ad-a784-70537e92dc51\") " pod="openshift-console/console-f9d7485db-fj2k4" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.106632 4727 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/58a72b98-691c-4da3-a1df-5cdc793b9ff5-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-vtwcr\" (UID: \"58a72b98-691c-4da3-a1df-5cdc793b9ff5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vtwcr" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.106672 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/86100e7c-cbfb-46c4-812d-f1ad2eb51b11-images\") pod \"machine-config-operator-74547568cd-lmjlx\" (UID: \"86100e7c-cbfb-46c4-812d-f1ad2eb51b11\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lmjlx" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.106690 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/385471ac-e2f4-478f-a64e-8cb60d37cdd7-srv-cert\") pod \"catalog-operator-68c6474976-pcvjk\" (UID: \"385471ac-e2f4-478f-a64e-8cb60d37cdd7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pcvjk" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.106705 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/fd5128ab-6daa-4e73-b4ab-7a1ab98962fa-srv-cert\") pod \"olm-operator-6b444d44fb-4tx6m\" (UID: \"fd5128ab-6daa-4e73-b4ab-7a1ab98962fa\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4tx6m" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.106719 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hz24f\" (UniqueName: \"kubernetes.io/projected/72783796-a3ab-4a34-9b9e-b4df16dd1cc2-kube-api-access-hz24f\") pod \"controller-manager-879f6c89f-hx72f\" (UID: \"72783796-a3ab-4a34-9b9e-b4df16dd1cc2\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-hx72f" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.106748 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59rqv\" (UniqueName: \"kubernetes.io/projected/385471ac-e2f4-478f-a64e-8cb60d37cdd7-kube-api-access-59rqv\") pod \"catalog-operator-68c6474976-pcvjk\" (UID: \"385471ac-e2f4-478f-a64e-8cb60d37cdd7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pcvjk" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.106767 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b49f037a-e7ec-45ef-846b-79ab549adb90-audit\") pod \"apiserver-76f77b778f-4855z\" (UID: \"b49f037a-e7ec-45ef-846b-79ab549adb90\") " pod="openshift-apiserver/apiserver-76f77b778f-4855z" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.106783 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c900b720-71c1-41ac-bf6d-f554450c4d44-metrics-tls\") pod \"ingress-operator-5b745b69d9-ggxjh\" (UID: \"c900b720-71c1-41ac-bf6d-f554450c4d44\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ggxjh" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.106797 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/dacc1ea4-7062-46ad-a784-70537e92dc51-oauth-serving-cert\") pod \"console-f9d7485db-fj2k4\" (UID: \"dacc1ea4-7062-46ad-a784-70537e92dc51\") " pod="openshift-console/console-f9d7485db-fj2k4" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.106811 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e951776-970a-49b9-8f34-b2fd129bbc39-config\") pod \"route-controller-manager-6576b87f9c-qjc79\" (UID: 
\"4e951776-970a-49b9-8f34-b2fd129bbc39\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qjc79" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.106869 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a175fd16-a4b3-4df9-88f7-3110c4d6c40f-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-wmvqq\" (UID: \"a175fd16-a4b3-4df9-88f7-3110c4d6c40f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wmvqq" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.106895 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v89fw\" (UniqueName: \"kubernetes.io/projected/36ff6a87-1405-42da-9fd2-5bf32fa6578d-kube-api-access-v89fw\") pod \"machine-approver-56656f9798-p2c9z\" (UID: \"36ff6a87-1405-42da-9fd2-5bf32fa6578d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p2c9z" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.106943 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9b15f63-53a9-40d3-940a-fe8640ebecab-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-qrsz4\" (UID: \"b9b15f63-53a9-40d3-940a-fe8640ebecab\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qrsz4" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.106974 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a931e936-fb41-4b5a-b2dd-506cd7cec66c-apiservice-cert\") pod \"packageserver-d55dfcdfc-cpgqd\" (UID: \"a931e936-fb41-4b5a-b2dd-506cd7cec66c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cpgqd" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.107095 4727 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tblgh\" (UniqueName: \"kubernetes.io/projected/a931e936-fb41-4b5a-b2dd-506cd7cec66c-kube-api-access-tblgh\") pod \"packageserver-d55dfcdfc-cpgqd\" (UID: \"a931e936-fb41-4b5a-b2dd-506cd7cec66c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cpgqd" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.107121 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6tk8\" (UniqueName: \"kubernetes.io/projected/5191a87d-ec94-4c7f-95eb-5898535d524b-kube-api-access-t6tk8\") pod \"machine-config-controller-84d6567774-25c8d\" (UID: \"5191a87d-ec94-4c7f-95eb-5898535d524b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-25c8d" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.107135 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a606ebf1-1f2a-4c31-b2f8-43c13ef31c50-serving-cert\") pod \"etcd-operator-b45778765-7hlm6\" (UID: \"a606ebf1-1f2a-4c31-b2f8-43c13ef31c50\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7hlm6" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.107167 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/200ac8c1-4bf1-4356-8091-9279fc08523f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-pv5wp\" (UID: \"200ac8c1-4bf1-4356-8091-9279fc08523f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pv5wp" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.107198 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkd2q\" (UniqueName: \"kubernetes.io/projected/b49f037a-e7ec-45ef-846b-79ab549adb90-kube-api-access-hkd2q\") pod 
\"apiserver-76f77b778f-4855z\" (UID: \"b49f037a-e7ec-45ef-846b-79ab549adb90\") " pod="openshift-apiserver/apiserver-76f77b778f-4855z" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.107237 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xg5n\" (UniqueName: \"kubernetes.io/projected/208f96f5-b245-4b8d-96d0-5210189d0f13-kube-api-access-7xg5n\") pod \"authentication-operator-69f744f599-hjqgw\" (UID: \"208f96f5-b245-4b8d-96d0-5210189d0f13\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hjqgw" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.107253 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5l724\" (UniqueName: \"kubernetes.io/projected/1599f256-87d8-47f4-a6fb-6cea3b58b242-kube-api-access-5l724\") pod \"oauth-openshift-558db77b4-zqbhr\" (UID: \"1599f256-87d8-47f4-a6fb-6cea3b58b242\") " pod="openshift-authentication/oauth-openshift-558db77b4-zqbhr" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.107277 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fd2bb95c-fa68-47b4-bb37-1f1724773a74-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-qwplk\" (UID: \"fd2bb95c-fa68-47b4-bb37-1f1724773a74\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qwplk" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.107624 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/54882e90-f639-4a95-ae1b-80f6fdbdaf5f-audit-dir\") pod \"apiserver-7bbb656c7d-jbvs8\" (UID: \"54882e90-f639-4a95-ae1b-80f6fdbdaf5f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jbvs8" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.107978 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/36ff6a87-1405-42da-9fd2-5bf32fa6578d-auth-proxy-config\") pod \"machine-approver-56656f9798-p2c9z\" (UID: \"36ff6a87-1405-42da-9fd2-5bf32fa6578d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p2c9z" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.108343 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1599f256-87d8-47f4-a6fb-6cea3b58b242-audit-dir\") pod \"oauth-openshift-558db77b4-zqbhr\" (UID: \"1599f256-87d8-47f4-a6fb-6cea3b58b242\") " pod="openshift-authentication/oauth-openshift-558db77b4-zqbhr" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.108529 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b49f037a-e7ec-45ef-846b-79ab549adb90-config\") pod \"apiserver-76f77b778f-4855z\" (UID: \"b49f037a-e7ec-45ef-846b-79ab549adb90\") " pod="openshift-apiserver/apiserver-76f77b778f-4855z" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.109303 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b49f037a-e7ec-45ef-846b-79ab549adb90-image-import-ca\") pod \"apiserver-76f77b778f-4855z\" (UID: \"b49f037a-e7ec-45ef-846b-79ab549adb90\") " pod="openshift-apiserver/apiserver-76f77b778f-4855z" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.110218 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e3860251-af35-4f12-81ce-91855c94d8c8-service-ca-bundle\") pod \"router-default-5444994796-dw7dg\" (UID: \"e3860251-af35-4f12-81ce-91855c94d8c8\") " pod="openshift-ingress/router-default-5444994796-dw7dg" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.110330 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/b49f037a-e7ec-45ef-846b-79ab549adb90-node-pullsecrets\") pod \"apiserver-76f77b778f-4855z\" (UID: \"b49f037a-e7ec-45ef-846b-79ab549adb90\") " pod="openshift-apiserver/apiserver-76f77b778f-4855z" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.111223 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1599f256-87d8-47f4-a6fb-6cea3b58b242-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-zqbhr\" (UID: \"1599f256-87d8-47f4-a6fb-6cea3b58b242\") " pod="openshift-authentication/oauth-openshift-558db77b4-zqbhr" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.113157 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b49f037a-e7ec-45ef-846b-79ab549adb90-audit-dir\") pod \"apiserver-76f77b778f-4855z\" (UID: \"b49f037a-e7ec-45ef-846b-79ab549adb90\") " pod="openshift-apiserver/apiserver-76f77b778f-4855z" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.113182 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1599f256-87d8-47f4-a6fb-6cea3b58b242-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-zqbhr\" (UID: \"1599f256-87d8-47f4-a6fb-6cea3b58b242\") " pod="openshift-authentication/oauth-openshift-558db77b4-zqbhr" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.113490 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1599f256-87d8-47f4-a6fb-6cea3b58b242-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-zqbhr\" (UID: \"1599f256-87d8-47f4-a6fb-6cea3b58b242\") " pod="openshift-authentication/oauth-openshift-558db77b4-zqbhr" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.113581 4727 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/208f96f5-b245-4b8d-96d0-5210189d0f13-service-ca-bundle\") pod \"authentication-operator-69f744f599-hjqgw\" (UID: \"208f96f5-b245-4b8d-96d0-5210189d0f13\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hjqgw" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.113890 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pcvjk"] Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.114164 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b49f037a-e7ec-45ef-846b-79ab549adb90-etcd-serving-ca\") pod \"apiserver-76f77b778f-4855z\" (UID: \"b49f037a-e7ec-45ef-846b-79ab549adb90\") " pod="openshift-apiserver/apiserver-76f77b778f-4855z" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.114545 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.114774 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b49f037a-e7ec-45ef-846b-79ab549adb90-trusted-ca-bundle\") pod \"apiserver-76f77b778f-4855z\" (UID: \"b49f037a-e7ec-45ef-846b-79ab549adb90\") " pod="openshift-apiserver/apiserver-76f77b778f-4855z" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.115737 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-25c8d"] Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.115801 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26865f5b-6d04-418c-9092-6e1853bc9c88-config\") pod 
\"openshift-controller-manager-operator-756b6f6bc6-l4m75\" (UID: \"26865f5b-6d04-418c-9092-6e1853bc9c88\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l4m75" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.116349 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72783796-a3ab-4a34-9b9e-b4df16dd1cc2-serving-cert\") pod \"controller-manager-879f6c89f-hx72f\" (UID: \"72783796-a3ab-4a34-9b9e-b4df16dd1cc2\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hx72f" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.116552 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/208f96f5-b245-4b8d-96d0-5210189d0f13-serving-cert\") pod \"authentication-operator-69f744f599-hjqgw\" (UID: \"208f96f5-b245-4b8d-96d0-5210189d0f13\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hjqgw" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.116632 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/dacc1ea4-7062-46ad-a784-70537e92dc51-console-config\") pod \"console-f9d7485db-fj2k4\" (UID: \"dacc1ea4-7062-46ad-a784-70537e92dc51\") " pod="openshift-console/console-f9d7485db-fj2k4" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.116836 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9b15f63-53a9-40d3-940a-fe8640ebecab-config\") pod \"kube-apiserver-operator-766d6c64bb-qrsz4\" (UID: \"b9b15f63-53a9-40d3-940a-fe8640ebecab\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qrsz4" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.117058 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/1599f256-87d8-47f4-a6fb-6cea3b58b242-audit-policies\") pod \"oauth-openshift-558db77b4-zqbhr\" (UID: \"1599f256-87d8-47f4-a6fb-6cea3b58b242\") " pod="openshift-authentication/oauth-openshift-558db77b4-zqbhr" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.130115 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/fd2bb95c-fa68-47b4-bb37-1f1724773a74-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-qwplk\" (UID: \"fd2bb95c-fa68-47b4-bb37-1f1724773a74\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qwplk" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.130583 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/208f96f5-b245-4b8d-96d0-5210189d0f13-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-hjqgw\" (UID: \"208f96f5-b245-4b8d-96d0-5210189d0f13\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hjqgw" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.130803 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/900a1b4b-fc56-4edd-b115-bbd76db83b12-config\") pod \"openshift-apiserver-operator-796bbdcf4f-lqm45\" (UID: \"900a1b4b-fc56-4edd-b115-bbd76db83b12\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lqm45" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.131081 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b49f037a-e7ec-45ef-846b-79ab549adb90-encryption-config\") pod \"apiserver-76f77b778f-4855z\" (UID: \"b49f037a-e7ec-45ef-846b-79ab549adb90\") " pod="openshift-apiserver/apiserver-76f77b778f-4855z" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.132460 
4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1599f256-87d8-47f4-a6fb-6cea3b58b242-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-zqbhr\" (UID: \"1599f256-87d8-47f4-a6fb-6cea3b58b242\") " pod="openshift-authentication/oauth-openshift-558db77b4-zqbhr" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.132588 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/54882e90-f639-4a95-ae1b-80f6fdbdaf5f-encryption-config\") pod \"apiserver-7bbb656c7d-jbvs8\" (UID: \"54882e90-f639-4a95-ae1b-80f6fdbdaf5f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jbvs8" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.132616 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1599f256-87d8-47f4-a6fb-6cea3b58b242-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-zqbhr\" (UID: \"1599f256-87d8-47f4-a6fb-6cea3b58b242\") " pod="openshift-authentication/oauth-openshift-558db77b4-zqbhr" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.132791 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1599f256-87d8-47f4-a6fb-6cea3b58b242-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-zqbhr\" (UID: \"1599f256-87d8-47f4-a6fb-6cea3b58b242\") " pod="openshift-authentication/oauth-openshift-558db77b4-zqbhr" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.132984 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dacc1ea4-7062-46ad-a784-70537e92dc51-trusted-ca-bundle\") pod \"console-f9d7485db-fj2k4\" (UID: \"dacc1ea4-7062-46ad-a784-70537e92dc51\") " 
pod="openshift-console/console-f9d7485db-fj2k4" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.133145 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58a72b98-691c-4da3-a1df-5cdc793b9ff5-config\") pod \"kube-controller-manager-operator-78b949d7b-vtwcr\" (UID: \"58a72b98-691c-4da3-a1df-5cdc793b9ff5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vtwcr" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.133279 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/54882e90-f639-4a95-ae1b-80f6fdbdaf5f-audit-policies\") pod \"apiserver-7bbb656c7d-jbvs8\" (UID: \"54882e90-f639-4a95-ae1b-80f6fdbdaf5f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jbvs8" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.133361 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/208f96f5-b245-4b8d-96d0-5210189d0f13-config\") pod \"authentication-operator-69f744f599-hjqgw\" (UID: \"208f96f5-b245-4b8d-96d0-5210189d0f13\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hjqgw" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.133437 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b49f037a-e7ec-45ef-846b-79ab549adb90-etcd-client\") pod \"apiserver-76f77b778f-4855z\" (UID: \"b49f037a-e7ec-45ef-846b-79ab549adb90\") " pod="openshift-apiserver/apiserver-76f77b778f-4855z" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.134057 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/fd5128ab-6daa-4e73-b4ab-7a1ab98962fa-profile-collector-cert\") pod \"olm-operator-6b444d44fb-4tx6m\" (UID: 
\"fd5128ab-6daa-4e73-b4ab-7a1ab98962fa\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4tx6m" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.134704 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54882e90-f639-4a95-ae1b-80f6fdbdaf5f-serving-cert\") pod \"apiserver-7bbb656c7d-jbvs8\" (UID: \"54882e90-f639-4a95-ae1b-80f6fdbdaf5f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jbvs8" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.136182 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26865f5b-6d04-418c-9092-6e1853bc9c88-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-l4m75\" (UID: \"26865f5b-6d04-418c-9092-6e1853bc9c88\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l4m75" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.136229 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/54882e90-f639-4a95-ae1b-80f6fdbdaf5f-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-jbvs8\" (UID: \"54882e90-f639-4a95-ae1b-80f6fdbdaf5f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jbvs8" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.136242 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/72783796-a3ab-4a34-9b9e-b4df16dd1cc2-client-ca\") pod \"controller-manager-879f6c89f-hx72f\" (UID: \"72783796-a3ab-4a34-9b9e-b4df16dd1cc2\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hx72f" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.136554 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36ff6a87-1405-42da-9fd2-5bf32fa6578d-config\") pod 
\"machine-approver-56656f9798-p2c9z\" (UID: \"36ff6a87-1405-42da-9fd2-5bf32fa6578d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p2c9z" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.137331 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/58dc7f0b-2626-478d-a541-511adc47db56-images\") pod \"machine-api-operator-5694c8668f-fwqq7\" (UID: \"58dc7f0b-2626-478d-a541-511adc47db56\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fwqq7" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.137365 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.137774 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1599f256-87d8-47f4-a6fb-6cea3b58b242-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-zqbhr\" (UID: \"1599f256-87d8-47f4-a6fb-6cea3b58b242\") " pod="openshift-authentication/oauth-openshift-558db77b4-zqbhr" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.137944 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58dc7f0b-2626-478d-a541-511adc47db56-config\") pod \"machine-api-operator-5694c8668f-fwqq7\" (UID: \"58dc7f0b-2626-478d-a541-511adc47db56\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fwqq7" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.138009 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54882e90-f639-4a95-ae1b-80f6fdbdaf5f-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-jbvs8\" (UID: \"54882e90-f639-4a95-ae1b-80f6fdbdaf5f\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jbvs8" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.138488 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dacc1ea4-7062-46ad-a784-70537e92dc51-service-ca\") pod \"console-f9d7485db-fj2k4\" (UID: \"dacc1ea4-7062-46ad-a784-70537e92dc51\") " pod="openshift-console/console-f9d7485db-fj2k4" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.139021 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/cafc19d0-a511-4cea-bd92-c28a18224e9f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-vbzh7\" (UID: \"cafc19d0-a511-4cea-bd92-c28a18224e9f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vbzh7" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.139310 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/72783796-a3ab-4a34-9b9e-b4df16dd1cc2-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-hx72f\" (UID: \"72783796-a3ab-4a34-9b9e-b4df16dd1cc2\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hx72f" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.139492 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395920-d94jg"] Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.139530 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e951776-970a-49b9-8f34-b2fd129bbc39-serving-cert\") pod \"route-controller-manager-6576b87f9c-qjc79\" (UID: \"4e951776-970a-49b9-8f34-b2fd129bbc39\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qjc79" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.142372 4727 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b49f037a-e7ec-45ef-846b-79ab549adb90-audit\") pod \"apiserver-76f77b778f-4855z\" (UID: \"b49f037a-e7ec-45ef-846b-79ab549adb90\") " pod="openshift-apiserver/apiserver-76f77b778f-4855z" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.143161 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/dacc1ea4-7062-46ad-a784-70537e92dc51-oauth-serving-cert\") pod \"console-f9d7485db-fj2k4\" (UID: \"dacc1ea4-7062-46ad-a784-70537e92dc51\") " pod="openshift-console/console-f9d7485db-fj2k4" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.144721 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e951776-970a-49b9-8f34-b2fd129bbc39-config\") pod \"route-controller-manager-6576b87f9c-qjc79\" (UID: \"4e951776-970a-49b9-8f34-b2fd129bbc39\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qjc79" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.145494 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/54882e90-f639-4a95-ae1b-80f6fdbdaf5f-etcd-client\") pod \"apiserver-7bbb656c7d-jbvs8\" (UID: \"54882e90-f639-4a95-ae1b-80f6fdbdaf5f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jbvs8" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.146834 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1599f256-87d8-47f4-a6fb-6cea3b58b242-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-zqbhr\" (UID: \"1599f256-87d8-47f4-a6fb-6cea3b58b242\") " pod="openshift-authentication/oauth-openshift-558db77b4-zqbhr" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.147312 4727 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cafc19d0-a511-4cea-bd92-c28a18224e9f-serving-cert\") pod \"openshift-config-operator-7777fb866f-vbzh7\" (UID: \"cafc19d0-a511-4cea-bd92-c28a18224e9f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vbzh7" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.147835 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/385471ac-e2f4-478f-a64e-8cb60d37cdd7-profile-collector-cert\") pod \"catalog-operator-68c6474976-pcvjk\" (UID: \"385471ac-e2f4-478f-a64e-8cb60d37cdd7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pcvjk" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.147992 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4e951776-970a-49b9-8f34-b2fd129bbc39-client-ca\") pod \"route-controller-manager-6576b87f9c-qjc79\" (UID: \"4e951776-970a-49b9-8f34-b2fd129bbc39\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qjc79" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.148010 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/e3860251-af35-4f12-81ce-91855c94d8c8-default-certificate\") pod \"router-default-5444994796-dw7dg\" (UID: \"e3860251-af35-4f12-81ce-91855c94d8c8\") " pod="openshift-ingress/router-default-5444994796-dw7dg" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.148155 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/36ff6a87-1405-42da-9fd2-5bf32fa6578d-machine-approver-tls\") pod \"machine-approver-56656f9798-p2c9z\" (UID: \"36ff6a87-1405-42da-9fd2-5bf32fa6578d\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p2c9z" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.148074 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/dacc1ea4-7062-46ad-a784-70537e92dc51-console-oauth-config\") pod \"console-f9d7485db-fj2k4\" (UID: \"dacc1ea4-7062-46ad-a784-70537e92dc51\") " pod="openshift-console/console-f9d7485db-fj2k4" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.149187 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9b15f63-53a9-40d3-940a-fe8640ebecab-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-qrsz4\" (UID: \"b9b15f63-53a9-40d3-940a-fe8640ebecab\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qrsz4" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.149702 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/900a1b4b-fc56-4edd-b115-bbd76db83b12-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-lqm45\" (UID: \"900a1b4b-fc56-4edd-b115-bbd76db83b12\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lqm45" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.150926 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.151133 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-lvhwh"] Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.151351 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/fd5128ab-6daa-4e73-b4ab-7a1ab98962fa-srv-cert\") pod \"olm-operator-6b444d44fb-4tx6m\" (UID: \"fd5128ab-6daa-4e73-b4ab-7a1ab98962fa\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4tx6m" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.151401 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/dacc1ea4-7062-46ad-a784-70537e92dc51-console-serving-cert\") pod \"console-f9d7485db-fj2k4\" (UID: \"dacc1ea4-7062-46ad-a784-70537e92dc51\") " pod="openshift-console/console-f9d7485db-fj2k4" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.152835 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/385471ac-e2f4-478f-a64e-8cb60d37cdd7-srv-cert\") pod \"catalog-operator-68c6474976-pcvjk\" (UID: \"385471ac-e2f4-478f-a64e-8cb60d37cdd7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pcvjk" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.153087 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/58dc7f0b-2626-478d-a541-511adc47db56-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-fwqq7\" (UID: \"58dc7f0b-2626-478d-a541-511adc47db56\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fwqq7" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.153118 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b49f037a-e7ec-45ef-846b-79ab549adb90-serving-cert\") pod \"apiserver-76f77b778f-4855z\" (UID: \"b49f037a-e7ec-45ef-846b-79ab549adb90\") " pod="openshift-apiserver/apiserver-76f77b778f-4855z" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.153720 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1599f256-87d8-47f4-a6fb-6cea3b58b242-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-zqbhr\" (UID: 
\"1599f256-87d8-47f4-a6fb-6cea3b58b242\") " pod="openshift-authentication/oauth-openshift-558db77b4-zqbhr" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.155289 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-l7mfn"] Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.158579 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72783796-a3ab-4a34-9b9e-b4df16dd1cc2-config\") pod \"controller-manager-879f6c89f-hx72f\" (UID: \"72783796-a3ab-4a34-9b9e-b4df16dd1cc2\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hx72f" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.159580 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-xpjqs"] Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.162006 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vhwbd"] Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.162847 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58a72b98-691c-4da3-a1df-5cdc793b9ff5-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-vtwcr\" (UID: \"58a72b98-691c-4da3-a1df-5cdc793b9ff5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vtwcr" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.163009 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1599f256-87d8-47f4-a6fb-6cea3b58b242-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-zqbhr\" (UID: \"1599f256-87d8-47f4-a6fb-6cea3b58b242\") " pod="openshift-authentication/oauth-openshift-558db77b4-zqbhr" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 
20:08:54.163060 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a175fd16-a4b3-4df9-88f7-3110c4d6c40f-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-wmvqq\" (UID: \"a175fd16-a4b3-4df9-88f7-3110c4d6c40f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wmvqq" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.163546 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/200ac8c1-4bf1-4356-8091-9279fc08523f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-pv5wp\" (UID: \"200ac8c1-4bf1-4356-8091-9279fc08523f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pv5wp" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.166275 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-zkhz9"] Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.168562 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-zkhz9" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.169068 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-6sphz"] Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.171318 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-6sphz" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.171619 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.171783 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-zkhz9"] Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.172512 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e3860251-af35-4f12-81ce-91855c94d8c8-metrics-certs\") pod \"router-default-5444994796-dw7dg\" (UID: \"e3860251-af35-4f12-81ce-91855c94d8c8\") " pod="openshift-ingress/router-default-5444994796-dw7dg" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.173321 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-6sphz"] Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.176489 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1599f256-87d8-47f4-a6fb-6cea3b58b242-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-zqbhr\" (UID: \"1599f256-87d8-47f4-a6fb-6cea3b58b242\") " pod="openshift-authentication/oauth-openshift-558db77b4-zqbhr" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.179842 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/e3860251-af35-4f12-81ce-91855c94d8c8-stats-auth\") pod \"router-default-5444994796-dw7dg\" (UID: \"e3860251-af35-4f12-81ce-91855c94d8c8\") " pod="openshift-ingress/router-default-5444994796-dw7dg" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.191850 4727 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.208395 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/86100e7c-cbfb-46c4-812d-f1ad2eb51b11-images\") pod \"machine-config-operator-74547568cd-lmjlx\" (UID: \"86100e7c-cbfb-46c4-812d-f1ad2eb51b11\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lmjlx" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.208550 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a931e936-fb41-4b5a-b2dd-506cd7cec66c-apiservice-cert\") pod \"packageserver-d55dfcdfc-cpgqd\" (UID: \"a931e936-fb41-4b5a-b2dd-506cd7cec66c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cpgqd" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.208578 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tblgh\" (UniqueName: \"kubernetes.io/projected/a931e936-fb41-4b5a-b2dd-506cd7cec66c-kube-api-access-tblgh\") pod \"packageserver-d55dfcdfc-cpgqd\" (UID: \"a931e936-fb41-4b5a-b2dd-506cd7cec66c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cpgqd" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.208604 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6tk8\" (UniqueName: \"kubernetes.io/projected/5191a87d-ec94-4c7f-95eb-5898535d524b-kube-api-access-t6tk8\") pod \"machine-config-controller-84d6567774-25c8d\" (UID: \"5191a87d-ec94-4c7f-95eb-5898535d524b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-25c8d" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.208627 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/a606ebf1-1f2a-4c31-b2f8-43c13ef31c50-serving-cert\") pod \"etcd-operator-b45778765-7hlm6\" (UID: \"a606ebf1-1f2a-4c31-b2f8-43c13ef31c50\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7hlm6" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.208998 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b0c5ddc-0891-4a5f-bf5d-6ab1d9dc0fbd-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-98qwb\" (UID: \"0b0c5ddc-0891-4a5f-bf5d-6ab1d9dc0fbd\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-98qwb" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.209068 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a606ebf1-1f2a-4c31-b2f8-43c13ef31c50-etcd-ca\") pod \"etcd-operator-b45778765-7hlm6\" (UID: \"a606ebf1-1f2a-4c31-b2f8-43c13ef31c50\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7hlm6" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.209199 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdjxh\" (UniqueName: \"kubernetes.io/projected/86100e7c-cbfb-46c4-812d-f1ad2eb51b11-kube-api-access-bdjxh\") pod \"machine-config-operator-74547568cd-lmjlx\" (UID: \"86100e7c-cbfb-46c4-812d-f1ad2eb51b11\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lmjlx" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.209294 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a931e936-fb41-4b5a-b2dd-506cd7cec66c-webhook-cert\") pod \"packageserver-d55dfcdfc-cpgqd\" (UID: \"a931e936-fb41-4b5a-b2dd-506cd7cec66c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cpgqd" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 
20:08:54.209322 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/343653aa-5654-4622-aa3e-045685abb471-config\") pod \"service-ca-operator-777779d784-l7mfn\" (UID: \"343653aa-5654-4622-aa3e-045685abb471\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-l7mfn" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.209382 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hw9gp\" (UniqueName: \"kubernetes.io/projected/343653aa-5654-4622-aa3e-045685abb471-kube-api-access-hw9gp\") pod \"service-ca-operator-777779d784-l7mfn\" (UID: \"343653aa-5654-4622-aa3e-045685abb471\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-l7mfn" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.209412 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvmx8\" (UniqueName: \"kubernetes.io/projected/3ed0ff62-2542-410b-ac29-904eb08bef16-kube-api-access-hvmx8\") pod \"control-plane-machine-set-operator-78cbb6b69f-vhwbd\" (UID: \"3ed0ff62-2542-410b-ac29-904eb08bef16\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vhwbd" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.209440 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5191a87d-ec94-4c7f-95eb-5898535d524b-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-25c8d\" (UID: \"5191a87d-ec94-4c7f-95eb-5898535d524b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-25c8d" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.209469 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a606ebf1-1f2a-4c31-b2f8-43c13ef31c50-config\") pod \"etcd-operator-b45778765-7hlm6\" 
(UID: \"a606ebf1-1f2a-4c31-b2f8-43c13ef31c50\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7hlm6" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.209497 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3ed0ff62-2542-410b-ac29-904eb08bef16-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-vhwbd\" (UID: \"3ed0ff62-2542-410b-ac29-904eb08bef16\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vhwbd" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.209599 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b0c5ddc-0891-4a5f-bf5d-6ab1d9dc0fbd-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-98qwb\" (UID: \"0b0c5ddc-0891-4a5f-bf5d-6ab1d9dc0fbd\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-98qwb" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.209645 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a606ebf1-1f2a-4c31-b2f8-43c13ef31c50-etcd-client\") pod \"etcd-operator-b45778765-7hlm6\" (UID: \"a606ebf1-1f2a-4c31-b2f8-43c13ef31c50\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7hlm6" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.209671 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9m8ns\" (UniqueName: \"kubernetes.io/projected/a606ebf1-1f2a-4c31-b2f8-43c13ef31c50-kube-api-access-9m8ns\") pod \"etcd-operator-b45778765-7hlm6\" (UID: \"a606ebf1-1f2a-4c31-b2f8-43c13ef31c50\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7hlm6" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.209774 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/86100e7c-cbfb-46c4-812d-f1ad2eb51b11-auth-proxy-config\") pod \"machine-config-operator-74547568cd-lmjlx\" (UID: \"86100e7c-cbfb-46c4-812d-f1ad2eb51b11\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lmjlx" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.209807 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/343653aa-5654-4622-aa3e-045685abb471-serving-cert\") pod \"service-ca-operator-777779d784-l7mfn\" (UID: \"343653aa-5654-4622-aa3e-045685abb471\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-l7mfn" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.209835 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5191a87d-ec94-4c7f-95eb-5898535d524b-proxy-tls\") pod \"machine-config-controller-84d6567774-25c8d\" (UID: \"5191a87d-ec94-4c7f-95eb-5898535d524b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-25c8d" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.209880 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a931e936-fb41-4b5a-b2dd-506cd7cec66c-tmpfs\") pod \"packageserver-d55dfcdfc-cpgqd\" (UID: \"a931e936-fb41-4b5a-b2dd-506cd7cec66c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cpgqd" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.209907 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/86100e7c-cbfb-46c4-812d-f1ad2eb51b11-proxy-tls\") pod \"machine-config-operator-74547568cd-lmjlx\" (UID: \"86100e7c-cbfb-46c4-812d-f1ad2eb51b11\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lmjlx" Nov 21 20:08:54 crc 
kubenswrapper[4727]: I1121 20:08:54.209930 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b0c5ddc-0891-4a5f-bf5d-6ab1d9dc0fbd-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-98qwb\" (UID: \"0b0c5ddc-0891-4a5f-bf5d-6ab1d9dc0fbd\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-98qwb" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.209952 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a606ebf1-1f2a-4c31-b2f8-43c13ef31c50-etcd-service-ca\") pod \"etcd-operator-b45778765-7hlm6\" (UID: \"a606ebf1-1f2a-4c31-b2f8-43c13ef31c50\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7hlm6" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.210661 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a931e936-fb41-4b5a-b2dd-506cd7cec66c-tmpfs\") pod \"packageserver-d55dfcdfc-cpgqd\" (UID: \"a931e936-fb41-4b5a-b2dd-506cd7cec66c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cpgqd" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.211021 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/86100e7c-cbfb-46c4-812d-f1ad2eb51b11-auth-proxy-config\") pod \"machine-config-operator-74547568cd-lmjlx\" (UID: \"86100e7c-cbfb-46c4-812d-f1ad2eb51b11\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lmjlx" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.211145 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5191a87d-ec94-4c7f-95eb-5898535d524b-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-25c8d\" (UID: 
\"5191a87d-ec94-4c7f-95eb-5898535d524b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-25c8d" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.212438 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.231632 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.251947 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.271415 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.279171 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/86100e7c-cbfb-46c4-812d-f1ad2eb51b11-images\") pod \"machine-config-operator-74547568cd-lmjlx\" (UID: \"86100e7c-cbfb-46c4-812d-f1ad2eb51b11\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lmjlx" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.291670 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.311418 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.332203 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.351749 4727 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.371885 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/86100e7c-cbfb-46c4-812d-f1ad2eb51b11-proxy-tls\") pod \"machine-config-operator-74547568cd-lmjlx\" (UID: \"86100e7c-cbfb-46c4-812d-f1ad2eb51b11\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lmjlx" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.372913 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.391555 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.396867 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c900b720-71c1-41ac-bf6d-f554450c4d44-metrics-tls\") pod \"ingress-operator-5b745b69d9-ggxjh\" (UID: \"c900b720-71c1-41ac-bf6d-f554450c4d44\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ggxjh" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.420180 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.428050 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c900b720-71c1-41ac-bf6d-f554450c4d44-trusted-ca\") pod \"ingress-operator-5b745b69d9-ggxjh\" (UID: \"c900b720-71c1-41ac-bf6d-f554450c4d44\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ggxjh" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.431540 4727 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-operator"/"kube-root-ca.crt" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.451315 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.471909 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.523108 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.532438 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.560144 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.569432 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fd2bb95c-fa68-47b4-bb37-1f1724773a74-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-qwplk\" (UID: \"fd2bb95c-fa68-47b4-bb37-1f1724773a74\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qwplk" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.573314 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.584220 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b0c5ddc-0891-4a5f-bf5d-6ab1d9dc0fbd-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-98qwb\" (UID: \"0b0c5ddc-0891-4a5f-bf5d-6ab1d9dc0fbd\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-98qwb" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.592509 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.612059 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.622761 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b0c5ddc-0891-4a5f-bf5d-6ab1d9dc0fbd-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-98qwb\" (UID: \"0b0c5ddc-0891-4a5f-bf5d-6ab1d9dc0fbd\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-98qwb" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.632694 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.651429 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.671898 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.692008 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.713955 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.731447 4727 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"image-registry-tls" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.751458 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.764862 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a931e936-fb41-4b5a-b2dd-506cd7cec66c-apiservice-cert\") pod \"packageserver-d55dfcdfc-cpgqd\" (UID: \"a931e936-fb41-4b5a-b2dd-506cd7cec66c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cpgqd" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.765204 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a931e936-fb41-4b5a-b2dd-506cd7cec66c-webhook-cert\") pod \"packageserver-d55dfcdfc-cpgqd\" (UID: \"a931e936-fb41-4b5a-b2dd-506cd7cec66c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cpgqd" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.772251 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.791108 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.812316 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.831911 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.852363 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 
20:08:54.872084 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.891352 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.911933 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.932153 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.952320 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.966261 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5191a87d-ec94-4c7f-95eb-5898535d524b-proxy-tls\") pod \"machine-config-controller-84d6567774-25c8d\" (UID: \"5191a87d-ec94-4c7f-95eb-5898535d524b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-25c8d" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.972267 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.990188 4727 request.go:700] Waited for 1.01851645s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns-operator/secrets?fieldSelector=metadata.name%3Ddns-operator-dockercfg-9mqw5&limit=500&resourceVersion=0 Nov 21 20:08:54 crc kubenswrapper[4727]: I1121 20:08:54.992438 4727 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Nov 21 20:08:55 crc kubenswrapper[4727]: I1121 20:08:55.013499 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Nov 21 20:08:55 crc kubenswrapper[4727]: I1121 20:08:55.032433 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 21 20:08:55 crc kubenswrapper[4727]: I1121 20:08:55.052175 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Nov 21 20:08:55 crc kubenswrapper[4727]: I1121 20:08:55.072376 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 21 20:08:55 crc kubenswrapper[4727]: I1121 20:08:55.092154 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 21 20:08:55 crc kubenswrapper[4727]: I1121 20:08:55.113129 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 21 20:08:55 crc kubenswrapper[4727]: I1121 20:08:55.132368 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Nov 21 20:08:55 crc kubenswrapper[4727]: I1121 20:08:55.151815 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 21 20:08:55 crc kubenswrapper[4727]: I1121 20:08:55.173169 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 21 20:08:55 crc kubenswrapper[4727]: I1121 20:08:55.191822 4727 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Nov 21 20:08:55 crc kubenswrapper[4727]: E1121 20:08:55.209895 4727 secret.go:188] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Nov 21 20:08:55 crc kubenswrapper[4727]: E1121 20:08:55.209996 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a606ebf1-1f2a-4c31-b2f8-43c13ef31c50-serving-cert podName:a606ebf1-1f2a-4c31-b2f8-43c13ef31c50 nodeName:}" failed. No retries permitted until 2025-11-21 20:08:55.70997781 +0000 UTC m=+140.896162854 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a606ebf1-1f2a-4c31-b2f8-43c13ef31c50-serving-cert") pod "etcd-operator-b45778765-7hlm6" (UID: "a606ebf1-1f2a-4c31-b2f8-43c13ef31c50") : failed to sync secret cache: timed out waiting for the condition Nov 21 20:08:55 crc kubenswrapper[4727]: E1121 20:08:55.210229 4727 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Nov 21 20:08:55 crc kubenswrapper[4727]: E1121 20:08:55.210245 4727 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Nov 21 20:08:55 crc kubenswrapper[4727]: E1121 20:08:55.210285 4727 secret.go:188] Couldn't get secret openshift-service-ca-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Nov 21 20:08:55 crc kubenswrapper[4727]: E1121 20:08:55.210326 4727 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: failed to sync configmap cache: timed out waiting for the condition Nov 21 20:08:55 crc kubenswrapper[4727]: E1121 20:08:55.210266 4727 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/a606ebf1-1f2a-4c31-b2f8-43c13ef31c50-etcd-service-ca podName:a606ebf1-1f2a-4c31-b2f8-43c13ef31c50 nodeName:}" failed. No retries permitted until 2025-11-21 20:08:55.710255769 +0000 UTC m=+140.896440813 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/a606ebf1-1f2a-4c31-b2f8-43c13ef31c50-etcd-service-ca") pod "etcd-operator-b45778765-7hlm6" (UID: "a606ebf1-1f2a-4c31-b2f8-43c13ef31c50") : failed to sync configmap cache: timed out waiting for the condition Nov 21 20:08:55 crc kubenswrapper[4727]: E1121 20:08:55.210346 4727 secret.go:188] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: failed to sync secret cache: timed out waiting for the condition Nov 21 20:08:55 crc kubenswrapper[4727]: E1121 20:08:55.210356 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/343653aa-5654-4622-aa3e-045685abb471-serving-cert podName:343653aa-5654-4622-aa3e-045685abb471 nodeName:}" failed. No retries permitted until 2025-11-21 20:08:55.710340181 +0000 UTC m=+140.896525235 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/343653aa-5654-4622-aa3e-045685abb471-serving-cert") pod "service-ca-operator-777779d784-l7mfn" (UID: "343653aa-5654-4622-aa3e-045685abb471") : failed to sync secret cache: timed out waiting for the condition Nov 21 20:08:55 crc kubenswrapper[4727]: E1121 20:08:55.210368 4727 secret.go:188] Couldn't get secret openshift-etcd-operator/etcd-client: failed to sync secret cache: timed out waiting for the condition Nov 21 20:08:55 crc kubenswrapper[4727]: E1121 20:08:55.210371 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a606ebf1-1f2a-4c31-b2f8-43c13ef31c50-etcd-ca podName:a606ebf1-1f2a-4c31-b2f8-43c13ef31c50 nodeName:}" failed. 
No retries permitted until 2025-11-21 20:08:55.710363902 +0000 UTC m=+140.896548966 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/a606ebf1-1f2a-4c31-b2f8-43c13ef31c50-etcd-ca") pod "etcd-operator-b45778765-7hlm6" (UID: "a606ebf1-1f2a-4c31-b2f8-43c13ef31c50") : failed to sync configmap cache: timed out waiting for the condition Nov 21 20:08:55 crc kubenswrapper[4727]: E1121 20:08:55.210418 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a606ebf1-1f2a-4c31-b2f8-43c13ef31c50-config podName:a606ebf1-1f2a-4c31-b2f8-43c13ef31c50 nodeName:}" failed. No retries permitted until 2025-11-21 20:08:55.710409753 +0000 UTC m=+140.896594817 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/a606ebf1-1f2a-4c31-b2f8-43c13ef31c50-config") pod "etcd-operator-b45778765-7hlm6" (UID: "a606ebf1-1f2a-4c31-b2f8-43c13ef31c50") : failed to sync configmap cache: timed out waiting for the condition Nov 21 20:08:55 crc kubenswrapper[4727]: E1121 20:08:55.210432 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3ed0ff62-2542-410b-ac29-904eb08bef16-control-plane-machine-set-operator-tls podName:3ed0ff62-2542-410b-ac29-904eb08bef16 nodeName:}" failed. No retries permitted until 2025-11-21 20:08:55.710425583 +0000 UTC m=+140.896610637 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/3ed0ff62-2542-410b-ac29-904eb08bef16-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-78cbb6b69f-vhwbd" (UID: "3ed0ff62-2542-410b-ac29-904eb08bef16") : failed to sync secret cache: timed out waiting for the condition Nov 21 20:08:55 crc kubenswrapper[4727]: E1121 20:08:55.210485 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a606ebf1-1f2a-4c31-b2f8-43c13ef31c50-etcd-client podName:a606ebf1-1f2a-4c31-b2f8-43c13ef31c50 nodeName:}" failed. No retries permitted until 2025-11-21 20:08:55.710452484 +0000 UTC m=+140.896637538 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/a606ebf1-1f2a-4c31-b2f8-43c13ef31c50-etcd-client") pod "etcd-operator-b45778765-7hlm6" (UID: "a606ebf1-1f2a-4c31-b2f8-43c13ef31c50") : failed to sync secret cache: timed out waiting for the condition Nov 21 20:08:55 crc kubenswrapper[4727]: E1121 20:08:55.210488 4727 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: failed to sync configmap cache: timed out waiting for the condition Nov 21 20:08:55 crc kubenswrapper[4727]: E1121 20:08:55.212111 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/343653aa-5654-4622-aa3e-045685abb471-config podName:343653aa-5654-4622-aa3e-045685abb471 nodeName:}" failed. No retries permitted until 2025-11-21 20:08:55.710566827 +0000 UTC m=+140.896751872 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/343653aa-5654-4622-aa3e-045685abb471-config") pod "service-ca-operator-777779d784-l7mfn" (UID: "343653aa-5654-4622-aa3e-045685abb471") : failed to sync configmap cache: timed out waiting for the condition Nov 21 20:08:55 crc kubenswrapper[4727]: I1121 20:08:55.212261 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Nov 21 20:08:55 crc kubenswrapper[4727]: I1121 20:08:55.231622 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Nov 21 20:08:55 crc kubenswrapper[4727]: I1121 20:08:55.251911 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Nov 21 20:08:55 crc kubenswrapper[4727]: I1121 20:08:55.272522 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Nov 21 20:08:55 crc kubenswrapper[4727]: I1121 20:08:55.291983 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Nov 21 20:08:55 crc kubenswrapper[4727]: I1121 20:08:55.313400 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 21 20:08:55 crc kubenswrapper[4727]: I1121 20:08:55.331786 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Nov 21 20:08:55 crc kubenswrapper[4727]: I1121 20:08:55.351606 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 21 20:08:55 crc kubenswrapper[4727]: I1121 20:08:55.373358 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Nov 21 20:08:55 crc 
kubenswrapper[4727]: I1121 20:08:55.394691 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 21 20:08:55 crc kubenswrapper[4727]: I1121 20:08:55.412787 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Nov 21 20:08:55 crc kubenswrapper[4727]: I1121 20:08:55.432298 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Nov 21 20:08:55 crc kubenswrapper[4727]: I1121 20:08:55.451805 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Nov 21 20:08:55 crc kubenswrapper[4727]: I1121 20:08:55.472482 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Nov 21 20:08:55 crc kubenswrapper[4727]: I1121 20:08:55.493333 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Nov 21 20:08:55 crc kubenswrapper[4727]: I1121 20:08:55.512472 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Nov 21 20:08:55 crc kubenswrapper[4727]: I1121 20:08:55.532059 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 21 20:08:55 crc kubenswrapper[4727]: I1121 20:08:55.552730 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Nov 21 20:08:55 crc kubenswrapper[4727]: I1121 20:08:55.584793 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Nov 21 20:08:55 crc kubenswrapper[4727]: I1121 20:08:55.593523 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 
21 20:08:55 crc kubenswrapper[4727]: I1121 20:08:55.632746 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Nov 21 20:08:55 crc kubenswrapper[4727]: I1121 20:08:55.653198 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Nov 21 20:08:55 crc kubenswrapper[4727]: I1121 20:08:55.672369 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Nov 21 20:08:55 crc kubenswrapper[4727]: I1121 20:08:55.692542 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Nov 21 20:08:55 crc kubenswrapper[4727]: I1121 20:08:55.711940 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Nov 21 20:08:55 crc kubenswrapper[4727]: I1121 20:08:55.732010 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Nov 21 20:08:55 crc kubenswrapper[4727]: I1121 20:08:55.737193 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/343653aa-5654-4622-aa3e-045685abb471-serving-cert\") pod \"service-ca-operator-777779d784-l7mfn\" (UID: \"343653aa-5654-4622-aa3e-045685abb471\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-l7mfn" Nov 21 20:08:55 crc kubenswrapper[4727]: I1121 20:08:55.737299 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a606ebf1-1f2a-4c31-b2f8-43c13ef31c50-etcd-service-ca\") pod \"etcd-operator-b45778765-7hlm6\" (UID: \"a606ebf1-1f2a-4c31-b2f8-43c13ef31c50\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7hlm6" Nov 21 20:08:55 crc kubenswrapper[4727]: I1121 20:08:55.737536 4727 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a606ebf1-1f2a-4c31-b2f8-43c13ef31c50-serving-cert\") pod \"etcd-operator-b45778765-7hlm6\" (UID: \"a606ebf1-1f2a-4c31-b2f8-43c13ef31c50\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7hlm6" Nov 21 20:08:55 crc kubenswrapper[4727]: I1121 20:08:55.737668 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a606ebf1-1f2a-4c31-b2f8-43c13ef31c50-etcd-ca\") pod \"etcd-operator-b45778765-7hlm6\" (UID: \"a606ebf1-1f2a-4c31-b2f8-43c13ef31c50\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7hlm6" Nov 21 20:08:55 crc kubenswrapper[4727]: I1121 20:08:55.737765 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/343653aa-5654-4622-aa3e-045685abb471-config\") pod \"service-ca-operator-777779d784-l7mfn\" (UID: \"343653aa-5654-4622-aa3e-045685abb471\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-l7mfn" Nov 21 20:08:55 crc kubenswrapper[4727]: I1121 20:08:55.738027 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3ed0ff62-2542-410b-ac29-904eb08bef16-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-vhwbd\" (UID: \"3ed0ff62-2542-410b-ac29-904eb08bef16\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vhwbd" Nov 21 20:08:55 crc kubenswrapper[4727]: I1121 20:08:55.738073 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a606ebf1-1f2a-4c31-b2f8-43c13ef31c50-config\") pod \"etcd-operator-b45778765-7hlm6\" (UID: \"a606ebf1-1f2a-4c31-b2f8-43c13ef31c50\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-7hlm6" Nov 21 20:08:55 crc kubenswrapper[4727]: I1121 20:08:55.738119 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a606ebf1-1f2a-4c31-b2f8-43c13ef31c50-etcd-client\") pod \"etcd-operator-b45778765-7hlm6\" (UID: \"a606ebf1-1f2a-4c31-b2f8-43c13ef31c50\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7hlm6" Nov 21 20:08:55 crc kubenswrapper[4727]: I1121 20:08:55.738670 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a606ebf1-1f2a-4c31-b2f8-43c13ef31c50-etcd-service-ca\") pod \"etcd-operator-b45778765-7hlm6\" (UID: \"a606ebf1-1f2a-4c31-b2f8-43c13ef31c50\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7hlm6" Nov 21 20:08:55 crc kubenswrapper[4727]: I1121 20:08:55.738816 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a606ebf1-1f2a-4c31-b2f8-43c13ef31c50-etcd-ca\") pod \"etcd-operator-b45778765-7hlm6\" (UID: \"a606ebf1-1f2a-4c31-b2f8-43c13ef31c50\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7hlm6" Nov 21 20:08:55 crc kubenswrapper[4727]: I1121 20:08:55.739291 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a606ebf1-1f2a-4c31-b2f8-43c13ef31c50-config\") pod \"etcd-operator-b45778765-7hlm6\" (UID: \"a606ebf1-1f2a-4c31-b2f8-43c13ef31c50\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7hlm6" Nov 21 20:08:55 crc kubenswrapper[4727]: I1121 20:08:55.739479 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/343653aa-5654-4622-aa3e-045685abb471-config\") pod \"service-ca-operator-777779d784-l7mfn\" (UID: \"343653aa-5654-4622-aa3e-045685abb471\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-l7mfn" Nov 21 20:08:55 crc kubenswrapper[4727]: I1121 20:08:55.742585 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/343653aa-5654-4622-aa3e-045685abb471-serving-cert\") pod \"service-ca-operator-777779d784-l7mfn\" (UID: \"343653aa-5654-4622-aa3e-045685abb471\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-l7mfn" Nov 21 20:08:55 crc kubenswrapper[4727]: I1121 20:08:55.743127 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a606ebf1-1f2a-4c31-b2f8-43c13ef31c50-serving-cert\") pod \"etcd-operator-b45778765-7hlm6\" (UID: \"a606ebf1-1f2a-4c31-b2f8-43c13ef31c50\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7hlm6" Nov 21 20:08:55 crc kubenswrapper[4727]: I1121 20:08:55.743359 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a606ebf1-1f2a-4c31-b2f8-43c13ef31c50-etcd-client\") pod \"etcd-operator-b45778765-7hlm6\" (UID: \"a606ebf1-1f2a-4c31-b2f8-43c13ef31c50\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7hlm6" Nov 21 20:08:55 crc kubenswrapper[4727]: I1121 20:08:55.743434 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3ed0ff62-2542-410b-ac29-904eb08bef16-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-vhwbd\" (UID: \"3ed0ff62-2542-410b-ac29-904eb08bef16\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vhwbd" Nov 21 20:08:55 crc kubenswrapper[4727]: I1121 20:08:55.770471 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjzjb\" (UniqueName: 
\"kubernetes.io/projected/c900b720-71c1-41ac-bf6d-f554450c4d44-kube-api-access-tjzjb\") pod \"ingress-operator-5b745b69d9-ggxjh\" (UID: \"c900b720-71c1-41ac-bf6d-f554450c4d44\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ggxjh" Nov 21 20:08:55 crc kubenswrapper[4727]: I1121 20:08:55.787915 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs7fm\" (UniqueName: \"kubernetes.io/projected/4e951776-970a-49b9-8f34-b2fd129bbc39-kube-api-access-bs7fm\") pod \"route-controller-manager-6576b87f9c-qjc79\" (UID: \"4e951776-970a-49b9-8f34-b2fd129bbc39\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qjc79" Nov 21 20:08:55 crc kubenswrapper[4727]: I1121 20:08:55.813127 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b9b15f63-53a9-40d3-940a-fe8640ebecab-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-qrsz4\" (UID: \"b9b15f63-53a9-40d3-940a-fe8640ebecab\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qrsz4" Nov 21 20:08:55 crc kubenswrapper[4727]: I1121 20:08:55.817791 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qjc79" Nov 21 20:08:55 crc kubenswrapper[4727]: I1121 20:08:55.832694 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4ndk\" (UniqueName: \"kubernetes.io/projected/e3860251-af35-4f12-81ce-91855c94d8c8-kube-api-access-q4ndk\") pod \"router-default-5444994796-dw7dg\" (UID: \"e3860251-af35-4f12-81ce-91855c94d8c8\") " pod="openshift-ingress/router-default-5444994796-dw7dg" Nov 21 20:08:55 crc kubenswrapper[4727]: I1121 20:08:55.846005 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xktc\" (UniqueName: \"kubernetes.io/projected/cafc19d0-a511-4cea-bd92-c28a18224e9f-kube-api-access-9xktc\") pod \"openshift-config-operator-7777fb866f-vbzh7\" (UID: \"cafc19d0-a511-4cea-bd92-c28a18224e9f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vbzh7" Nov 21 20:08:55 crc kubenswrapper[4727]: I1121 20:08:55.880186 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsc2p\" (UniqueName: \"kubernetes.io/projected/26865f5b-6d04-418c-9092-6e1853bc9c88-kube-api-access-tsc2p\") pod \"openshift-controller-manager-operator-756b6f6bc6-l4m75\" (UID: \"26865f5b-6d04-418c-9092-6e1853bc9c88\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l4m75" Nov 21 20:08:55 crc kubenswrapper[4727]: I1121 20:08:55.901193 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l4m75" Nov 21 20:08:55 crc kubenswrapper[4727]: I1121 20:08:55.903576 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvkqk\" (UniqueName: \"kubernetes.io/projected/58dc7f0b-2626-478d-a541-511adc47db56-kube-api-access-zvkqk\") pod \"machine-api-operator-5694c8668f-fwqq7\" (UID: \"58dc7f0b-2626-478d-a541-511adc47db56\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fwqq7" Nov 21 20:08:55 crc kubenswrapper[4727]: I1121 20:08:55.911994 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmsg4\" (UniqueName: \"kubernetes.io/projected/a175fd16-a4b3-4df9-88f7-3110c4d6c40f-kube-api-access-zmsg4\") pod \"package-server-manager-789f6589d5-wmvqq\" (UID: \"a175fd16-a4b3-4df9-88f7-3110c4d6c40f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wmvqq" Nov 21 20:08:55 crc kubenswrapper[4727]: I1121 20:08:55.939092 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c900b720-71c1-41ac-bf6d-f554450c4d44-bound-sa-token\") pod \"ingress-operator-5b745b69d9-ggxjh\" (UID: \"c900b720-71c1-41ac-bf6d-f554450c4d44\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ggxjh" Nov 21 20:08:55 crc kubenswrapper[4727]: I1121 20:08:55.960374 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smqdc\" (UniqueName: \"kubernetes.io/projected/900a1b4b-fc56-4edd-b115-bbd76db83b12-kube-api-access-smqdc\") pod \"openshift-apiserver-operator-796bbdcf4f-lqm45\" (UID: \"900a1b4b-fc56-4edd-b115-bbd76db83b12\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lqm45" Nov 21 20:08:55 crc kubenswrapper[4727]: I1121 20:08:55.971685 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-g5nst\" (UniqueName: \"kubernetes.io/projected/200ac8c1-4bf1-4356-8091-9279fc08523f-kube-api-access-g5nst\") pod \"cluster-samples-operator-665b6dd947-pv5wp\" (UID: \"200ac8c1-4bf1-4356-8091-9279fc08523f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pv5wp" Nov 21 20:08:55 crc kubenswrapper[4727]: I1121 20:08:55.992137 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qrsz4" Nov 21 20:08:55 crc kubenswrapper[4727]: I1121 20:08:55.994591 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-fwqq7" Nov 21 20:08:55 crc kubenswrapper[4727]: I1121 20:08:55.995674 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7jqr\" (UniqueName: \"kubernetes.io/projected/fd5128ab-6daa-4e73-b4ab-7a1ab98962fa-kube-api-access-l7jqr\") pod \"olm-operator-6b444d44fb-4tx6m\" (UID: \"fd5128ab-6daa-4e73-b4ab-7a1ab98962fa\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4tx6m" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.001439 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-dw7dg" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.010390 4727 request.go:700] Waited for 1.872851997s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/serviceaccounts/console/token Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.011298 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5w8xp\" (UniqueName: \"kubernetes.io/projected/3f3303e9-97ef-4e9b-9dc0-076066682c43-kube-api-access-5w8xp\") pod \"downloads-7954f5f757-nzpzh\" (UID: \"3f3303e9-97ef-4e9b-9dc0-076066682c43\") " pod="openshift-console/downloads-7954f5f757-nzpzh" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.020653 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4tx6m" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.034338 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wmvqq" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.039153 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9fkv\" (UniqueName: \"kubernetes.io/projected/dacc1ea4-7062-46ad-a784-70537e92dc51-kube-api-access-r9fkv\") pod \"console-f9d7485db-fj2k4\" (UID: \"dacc1ea4-7062-46ad-a784-70537e92dc51\") " pod="openshift-console/console-f9d7485db-fj2k4" Nov 21 20:08:56 crc kubenswrapper[4727]: W1121 20:08:56.043320 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3860251_af35_4f12_81ce_91855c94d8c8.slice/crio-82270ff54fc0b05831d5343e54083a7b2467191e599cfd39f8255cff30768f05 WatchSource:0}: Error finding container 82270ff54fc0b05831d5343e54083a7b2467191e599cfd39f8255cff30768f05: Status 404 returned error can't find the container with id 82270ff54fc0b05831d5343e54083a7b2467191e599cfd39f8255cff30768f05 Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.050844 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/58a72b98-691c-4da3-a1df-5cdc793b9ff5-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-vtwcr\" (UID: \"58a72b98-691c-4da3-a1df-5cdc793b9ff5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vtwcr" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.055090 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ggxjh" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.056978 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-fj2k4" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.061572 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qjc79"] Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.070366 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v89fw\" (UniqueName: \"kubernetes.io/projected/36ff6a87-1405-42da-9fd2-5bf32fa6578d-kube-api-access-v89fw\") pod \"machine-approver-56656f9798-p2c9z\" (UID: \"36ff6a87-1405-42da-9fd2-5bf32fa6578d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p2c9z" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.070945 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lqm45" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.094012 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crm4z\" (UniqueName: \"kubernetes.io/projected/fd2bb95c-fa68-47b4-bb37-1f1724773a74-kube-api-access-crm4z\") pod \"cluster-image-registry-operator-dc59b4c8b-qwplk\" (UID: \"fd2bb95c-fa68-47b4-bb37-1f1724773a74\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qwplk" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.113250 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hz24f\" (UniqueName: \"kubernetes.io/projected/72783796-a3ab-4a34-9b9e-b4df16dd1cc2-kube-api-access-hz24f\") pod \"controller-manager-879f6c89f-hx72f\" (UID: \"72783796-a3ab-4a34-9b9e-b4df16dd1cc2\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hx72f" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.125680 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vbzh7" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.129925 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59rqv\" (UniqueName: \"kubernetes.io/projected/385471ac-e2f4-478f-a64e-8cb60d37cdd7-kube-api-access-59rqv\") pod \"catalog-operator-68c6474976-pcvjk\" (UID: \"385471ac-e2f4-478f-a64e-8cb60d37cdd7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pcvjk" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.136258 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l4m75"] Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.147170 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkd2q\" (UniqueName: \"kubernetes.io/projected/b49f037a-e7ec-45ef-846b-79ab549adb90-kube-api-access-hkd2q\") pod \"apiserver-76f77b778f-4855z\" (UID: \"b49f037a-e7ec-45ef-846b-79ab549adb90\") " pod="openshift-apiserver/apiserver-76f77b778f-4855z" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.161127 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-hx72f" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.161238 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pv5wp" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.173036 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-dw7dg" event={"ID":"e3860251-af35-4f12-81ce-91855c94d8c8","Type":"ContainerStarted","Data":"82270ff54fc0b05831d5343e54083a7b2467191e599cfd39f8255cff30768f05"} Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.177179 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-nzpzh" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.178048 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xg5n\" (UniqueName: \"kubernetes.io/projected/208f96f5-b245-4b8d-96d0-5210189d0f13-kube-api-access-7xg5n\") pod \"authentication-operator-69f744f599-hjqgw\" (UID: \"208f96f5-b245-4b8d-96d0-5210189d0f13\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hjqgw" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.182662 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qjc79" event={"ID":"4e951776-970a-49b9-8f34-b2fd129bbc39","Type":"ContainerStarted","Data":"6c9deace7801855e85fe8430bec9029584a354fa9598cedc5f4ca91e4102cc1e"} Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.206940 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5l724\" (UniqueName: \"kubernetes.io/projected/1599f256-87d8-47f4-a6fb-6cea3b58b242-kube-api-access-5l724\") pod \"oauth-openshift-558db77b4-zqbhr\" (UID: \"1599f256-87d8-47f4-a6fb-6cea3b58b242\") " pod="openshift-authentication/oauth-openshift-558db77b4-zqbhr" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.216808 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" 
(UniqueName: \"kubernetes.io/projected/fd2bb95c-fa68-47b4-bb37-1f1724773a74-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-qwplk\" (UID: \"fd2bb95c-fa68-47b4-bb37-1f1724773a74\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qwplk" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.232128 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.234085 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svdnq\" (UniqueName: \"kubernetes.io/projected/54882e90-f639-4a95-ae1b-80f6fdbdaf5f-kube-api-access-svdnq\") pod \"apiserver-7bbb656c7d-jbvs8\" (UID: \"54882e90-f639-4a95-ae1b-80f6fdbdaf5f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jbvs8" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.239268 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-fwqq7"] Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.254208 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.268179 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-4855z" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.272422 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.291803 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.309126 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qwplk" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.313501 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.316155 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vtwcr" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.320928 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p2c9z" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.325818 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-hjqgw" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.326491 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pcvjk" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.333100 4727 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.354835 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.391523 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tblgh\" (UniqueName: \"kubernetes.io/projected/a931e936-fb41-4b5a-b2dd-506cd7cec66c-kube-api-access-tblgh\") pod \"packageserver-d55dfcdfc-cpgqd\" (UID: \"a931e936-fb41-4b5a-b2dd-506cd7cec66c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cpgqd" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.411467 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6tk8\" (UniqueName: \"kubernetes.io/projected/5191a87d-ec94-4c7f-95eb-5898535d524b-kube-api-access-t6tk8\") pod \"machine-config-controller-84d6567774-25c8d\" (UID: \"5191a87d-ec94-4c7f-95eb-5898535d524b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-25c8d" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.431089 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdjxh\" (UniqueName: \"kubernetes.io/projected/86100e7c-cbfb-46c4-812d-f1ad2eb51b11-kube-api-access-bdjxh\") pod \"machine-config-operator-74547568cd-lmjlx\" (UID: \"86100e7c-cbfb-46c4-812d-f1ad2eb51b11\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lmjlx" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.438412 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-zqbhr" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.456806 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hw9gp\" (UniqueName: \"kubernetes.io/projected/343653aa-5654-4622-aa3e-045685abb471-kube-api-access-hw9gp\") pod \"service-ca-operator-777779d784-l7mfn\" (UID: \"343653aa-5654-4622-aa3e-045685abb471\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-l7mfn" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.465306 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvmx8\" (UniqueName: \"kubernetes.io/projected/3ed0ff62-2542-410b-ac29-904eb08bef16-kube-api-access-hvmx8\") pod \"control-plane-machine-set-operator-78cbb6b69f-vhwbd\" (UID: \"3ed0ff62-2542-410b-ac29-904eb08bef16\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vhwbd" Nov 21 20:08:56 crc kubenswrapper[4727]: W1121 20:08:56.466861 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36ff6a87_1405_42da_9fd2_5bf32fa6578d.slice/crio-24231dca933b5b3a001e8e3895e40d11d751530d3a7efa899221a9d128c8f6a2 WatchSource:0}: Error finding container 24231dca933b5b3a001e8e3895e40d11d751530d3a7efa899221a9d128c8f6a2: Status 404 returned error can't find the container with id 24231dca933b5b3a001e8e3895e40d11d751530d3a7efa899221a9d128c8f6a2 Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.476397 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-l7mfn" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.488654 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9m8ns\" (UniqueName: \"kubernetes.io/projected/a606ebf1-1f2a-4c31-b2f8-43c13ef31c50-kube-api-access-9m8ns\") pod \"etcd-operator-b45778765-7hlm6\" (UID: \"a606ebf1-1f2a-4c31-b2f8-43c13ef31c50\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7hlm6" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.491816 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jbvs8" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.512824 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b0c5ddc-0891-4a5f-bf5d-6ab1d9dc0fbd-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-98qwb\" (UID: \"0b0c5ddc-0891-4a5f-bf5d-6ab1d9dc0fbd\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-98qwb" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.552530 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv9lx\" (UniqueName: \"kubernetes.io/projected/82591a0f-a7cf-4745-9b64-637625f63662-kube-api-access-cv9lx\") pod \"migrator-59844c95c7-8vpg5\" (UID: \"82591a0f-a7cf-4745-9b64-637625f63662\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8vpg5" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.552570 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql77s\" (UniqueName: \"kubernetes.io/projected/36c9326c-5a9b-4e19-a0a7-047289e45c01-kube-api-access-ql77s\") pod \"image-registry-697d97f7c8-tlqhq\" (UID: \"36c9326c-5a9b-4e19-a0a7-047289e45c01\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-tlqhq" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.552589 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pjhd\" (UniqueName: \"kubernetes.io/projected/67e5b1f6-a350-46f7-8e5d-5e618ea5a2d7-kube-api-access-8pjhd\") pod \"service-ca-9c57cc56f-42c64\" (UID: \"67e5b1f6-a350-46f7-8e5d-5e618ea5a2d7\") " pod="openshift-service-ca/service-ca-9c57cc56f-42c64" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.552618 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/36c9326c-5a9b-4e19-a0a7-047289e45c01-registry-certificates\") pod \"image-registry-697d97f7c8-tlqhq\" (UID: \"36c9326c-5a9b-4e19-a0a7-047289e45c01\") " pod="openshift-image-registry/image-registry-697d97f7c8-tlqhq" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.552661 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57ncs\" (UniqueName: \"kubernetes.io/projected/e759f1df-d65f-4297-b178-697608983c6f-kube-api-access-57ncs\") pod \"dns-operator-744455d44c-97ctf\" (UID: \"e759f1df-d65f-4297-b178-697608983c6f\") " pod="openshift-dns-operator/dns-operator-744455d44c-97ctf" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.552691 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/67e5b1f6-a350-46f7-8e5d-5e618ea5a2d7-signing-key\") pod \"service-ca-9c57cc56f-42c64\" (UID: \"67e5b1f6-a350-46f7-8e5d-5e618ea5a2d7\") " pod="openshift-service-ca/service-ca-9c57cc56f-42c64" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.552708 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/36c9326c-5a9b-4e19-a0a7-047289e45c01-registry-tls\") pod \"image-registry-697d97f7c8-tlqhq\" (UID: \"36c9326c-5a9b-4e19-a0a7-047289e45c01\") " pod="openshift-image-registry/image-registry-697d97f7c8-tlqhq" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.552732 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/42408e00-5bcd-4405-82fe-851c9b62b149-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fn2j6\" (UID: \"42408e00-5bcd-4405-82fe-851c9b62b149\") " pod="openshift-marketplace/marketplace-operator-79b997595-fn2j6" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.552751 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4fffc06f-c43a-467f-9159-efdae68bddfb-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-jq87g\" (UID: \"4fffc06f-c43a-467f-9159-efdae68bddfb\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-jq87g" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.552793 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4j4c4\" (UniqueName: \"kubernetes.io/projected/4fffc06f-c43a-467f-9159-efdae68bddfb-kube-api-access-4j4c4\") pod \"multus-admission-controller-857f4d67dd-jq87g\" (UID: \"4fffc06f-c43a-467f-9159-efdae68bddfb\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-jq87g" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.552813 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9f7a69e-e7d1-4048-8263-52cfefbc90d5-config-volume\") pod \"collect-profiles-29395920-d94jg\" (UID: \"d9f7a69e-e7d1-4048-8263-52cfefbc90d5\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29395920-d94jg" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.552834 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a12de68-3f7e-4465-8325-db0a17c35cf0-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-qms5m\" (UID: \"7a12de68-3f7e-4465-8325-db0a17c35cf0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qms5m" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.552852 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a65a43f-bd75-428b-8709-73364367d020-serving-cert\") pod \"console-operator-58897d9998-xpjqs\" (UID: \"8a65a43f-bd75-428b-8709-73364367d020\") " pod="openshift-console-operator/console-operator-58897d9998-xpjqs" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.552882 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/36c9326c-5a9b-4e19-a0a7-047289e45c01-installation-pull-secrets\") pod \"image-registry-697d97f7c8-tlqhq\" (UID: \"36c9326c-5a9b-4e19-a0a7-047289e45c01\") " pod="openshift-image-registry/image-registry-697d97f7c8-tlqhq" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.552896 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a12de68-3f7e-4465-8325-db0a17c35cf0-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-qms5m\" (UID: \"7a12de68-3f7e-4465-8325-db0a17c35cf0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qms5m" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.552911 4727 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlw7g\" (UniqueName: \"kubernetes.io/projected/d9f7a69e-e7d1-4048-8263-52cfefbc90d5-kube-api-access-jlw7g\") pod \"collect-profiles-29395920-d94jg\" (UID: \"d9f7a69e-e7d1-4048-8263-52cfefbc90d5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395920-d94jg" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.552935 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d9f7a69e-e7d1-4048-8263-52cfefbc90d5-secret-volume\") pod \"collect-profiles-29395920-d94jg\" (UID: \"d9f7a69e-e7d1-4048-8263-52cfefbc90d5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395920-d94jg" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.552967 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/67e5b1f6-a350-46f7-8e5d-5e618ea5a2d7-signing-cabundle\") pod \"service-ca-9c57cc56f-42c64\" (UID: \"67e5b1f6-a350-46f7-8e5d-5e618ea5a2d7\") " pod="openshift-service-ca/service-ca-9c57cc56f-42c64" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.552999 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8a65a43f-bd75-428b-8709-73364367d020-trusted-ca\") pod \"console-operator-58897d9998-xpjqs\" (UID: \"8a65a43f-bd75-428b-8709-73364367d020\") " pod="openshift-console-operator/console-operator-58897d9998-xpjqs" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.553035 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/36c9326c-5a9b-4e19-a0a7-047289e45c01-trusted-ca\") pod \"image-registry-697d97f7c8-tlqhq\" (UID: 
\"36c9326c-5a9b-4e19-a0a7-047289e45c01\") " pod="openshift-image-registry/image-registry-697d97f7c8-tlqhq" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.553065 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tlqhq\" (UID: \"36c9326c-5a9b-4e19-a0a7-047289e45c01\") " pod="openshift-image-registry/image-registry-697d97f7c8-tlqhq" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.553082 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/36c9326c-5a9b-4e19-a0a7-047289e45c01-ca-trust-extracted\") pod \"image-registry-697d97f7c8-tlqhq\" (UID: \"36c9326c-5a9b-4e19-a0a7-047289e45c01\") " pod="openshift-image-registry/image-registry-697d97f7c8-tlqhq" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.553107 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/36c9326c-5a9b-4e19-a0a7-047289e45c01-bound-sa-token\") pod \"image-registry-697d97f7c8-tlqhq\" (UID: \"36c9326c-5a9b-4e19-a0a7-047289e45c01\") " pod="openshift-image-registry/image-registry-697d97f7c8-tlqhq" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.553125 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/42408e00-5bcd-4405-82fe-851c9b62b149-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fn2j6\" (UID: \"42408e00-5bcd-4405-82fe-851c9b62b149\") " pod="openshift-marketplace/marketplace-operator-79b997595-fn2j6" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.553141 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8d8h\" (UniqueName: \"kubernetes.io/projected/8a65a43f-bd75-428b-8709-73364367d020-kube-api-access-h8d8h\") pod \"console-operator-58897d9998-xpjqs\" (UID: \"8a65a43f-bd75-428b-8709-73364367d020\") " pod="openshift-console-operator/console-operator-58897d9998-xpjqs" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.553168 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vx2fd\" (UniqueName: \"kubernetes.io/projected/42408e00-5bcd-4405-82fe-851c9b62b149-kube-api-access-vx2fd\") pod \"marketplace-operator-79b997595-fn2j6\" (UID: \"42408e00-5bcd-4405-82fe-851c9b62b149\") " pod="openshift-marketplace/marketplace-operator-79b997595-fn2j6" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.553198 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxt4b\" (UniqueName: \"kubernetes.io/projected/7a12de68-3f7e-4465-8325-db0a17c35cf0-kube-api-access-bxt4b\") pod \"kube-storage-version-migrator-operator-b67b599dd-qms5m\" (UID: \"7a12de68-3f7e-4465-8325-db0a17c35cf0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qms5m" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.553222 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a65a43f-bd75-428b-8709-73364367d020-config\") pod \"console-operator-58897d9998-xpjqs\" (UID: \"8a65a43f-bd75-428b-8709-73364367d020\") " pod="openshift-console-operator/console-operator-58897d9998-xpjqs" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.553260 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/e759f1df-d65f-4297-b178-697608983c6f-metrics-tls\") pod \"dns-operator-744455d44c-97ctf\" (UID: \"e759f1df-d65f-4297-b178-697608983c6f\") " pod="openshift-dns-operator/dns-operator-744455d44c-97ctf" Nov 21 20:08:56 crc kubenswrapper[4727]: E1121 20:08:56.556459 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 20:08:57.056431056 +0000 UTC m=+142.242616100 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tlqhq" (UID: "36c9326c-5a9b-4e19-a0a7-047289e45c01") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.573052 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wmvqq"] Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.573125 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qrsz4"] Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.582515 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4tx6m"] Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.639364 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lmjlx" Nov 21 20:08:56 crc kubenswrapper[4727]: W1121 20:08:56.639718 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda175fd16_a4b3_4df9_88f7_3110c4d6c40f.slice/crio-18928eed7ef7dc5c2d57ba645776c2afc8624019aa8e4f7875faff4d8b54ce03 WatchSource:0}: Error finding container 18928eed7ef7dc5c2d57ba645776c2afc8624019aa8e4f7875faff4d8b54ce03: Status 404 returned error can't find the container with id 18928eed7ef7dc5c2d57ba645776c2afc8624019aa8e4f7875faff4d8b54ce03 Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.654389 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.654577 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57ncs\" (UniqueName: \"kubernetes.io/projected/e759f1df-d65f-4297-b178-697608983c6f-kube-api-access-57ncs\") pod \"dns-operator-744455d44c-97ctf\" (UID: \"e759f1df-d65f-4297-b178-697608983c6f\") " pod="openshift-dns-operator/dns-operator-744455d44c-97ctf" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.654726 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/67e5b1f6-a350-46f7-8e5d-5e618ea5a2d7-signing-key\") pod \"service-ca-9c57cc56f-42c64\" (UID: \"67e5b1f6-a350-46f7-8e5d-5e618ea5a2d7\") " pod="openshift-service-ca/service-ca-9c57cc56f-42c64" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.654754 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"certs\" (UniqueName: \"kubernetes.io/secret/80d0d1a3-fc38-48aa-9adf-2f4a71e70c91-certs\") pod \"machine-config-server-g4rwm\" (UID: \"80d0d1a3-fc38-48aa-9adf-2f4a71e70c91\") " pod="openshift-machine-config-operator/machine-config-server-g4rwm" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.654805 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/36c9326c-5a9b-4e19-a0a7-047289e45c01-registry-tls\") pod \"image-registry-697d97f7c8-tlqhq\" (UID: \"36c9326c-5a9b-4e19-a0a7-047289e45c01\") " pod="openshift-image-registry/image-registry-697d97f7c8-tlqhq" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.654841 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/19075424-77c6-47ff-93fa-d54902e69f7c-mountpoint-dir\") pod \"csi-hostpathplugin-6sphz\" (UID: \"19075424-77c6-47ff-93fa-d54902e69f7c\") " pod="hostpath-provisioner/csi-hostpathplugin-6sphz" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.654872 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/42408e00-5bcd-4405-82fe-851c9b62b149-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fn2j6\" (UID: \"42408e00-5bcd-4405-82fe-851c9b62b149\") " pod="openshift-marketplace/marketplace-operator-79b997595-fn2j6" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.654888 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4fffc06f-c43a-467f-9159-efdae68bddfb-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-jq87g\" (UID: \"4fffc06f-c43a-467f-9159-efdae68bddfb\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-jq87g" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.654904 4727 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac14f084-6765-4f2c-badc-b913eb4187b5-config-volume\") pod \"dns-default-lvhwh\" (UID: \"ac14f084-6765-4f2c-badc-b913eb4187b5\") " pod="openshift-dns/dns-default-lvhwh" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.655070 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4j4c4\" (UniqueName: \"kubernetes.io/projected/4fffc06f-c43a-467f-9159-efdae68bddfb-kube-api-access-4j4c4\") pod \"multus-admission-controller-857f4d67dd-jq87g\" (UID: \"4fffc06f-c43a-467f-9159-efdae68bddfb\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-jq87g" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.655095 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9f7a69e-e7d1-4048-8263-52cfefbc90d5-config-volume\") pod \"collect-profiles-29395920-d94jg\" (UID: \"d9f7a69e-e7d1-4048-8263-52cfefbc90d5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395920-d94jg" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.655113 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/80d0d1a3-fc38-48aa-9adf-2f4a71e70c91-node-bootstrap-token\") pod \"machine-config-server-g4rwm\" (UID: \"80d0d1a3-fc38-48aa-9adf-2f4a71e70c91\") " pod="openshift-machine-config-operator/machine-config-server-g4rwm" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.655141 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a12de68-3f7e-4465-8325-db0a17c35cf0-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-qms5m\" (UID: \"7a12de68-3f7e-4465-8325-db0a17c35cf0\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qms5m" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.655157 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/19075424-77c6-47ff-93fa-d54902e69f7c-registration-dir\") pod \"csi-hostpathplugin-6sphz\" (UID: \"19075424-77c6-47ff-93fa-d54902e69f7c\") " pod="hostpath-provisioner/csi-hostpathplugin-6sphz" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.655210 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a65a43f-bd75-428b-8709-73364367d020-serving-cert\") pod \"console-operator-58897d9998-xpjqs\" (UID: \"8a65a43f-bd75-428b-8709-73364367d020\") " pod="openshift-console-operator/console-operator-58897d9998-xpjqs" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.655248 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/36c9326c-5a9b-4e19-a0a7-047289e45c01-installation-pull-secrets\") pod \"image-registry-697d97f7c8-tlqhq\" (UID: \"36c9326c-5a9b-4e19-a0a7-047289e45c01\") " pod="openshift-image-registry/image-registry-697d97f7c8-tlqhq" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.655265 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a12de68-3f7e-4465-8325-db0a17c35cf0-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-qms5m\" (UID: \"7a12de68-3f7e-4465-8325-db0a17c35cf0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qms5m" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.655283 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlw7g\" 
(UniqueName: \"kubernetes.io/projected/d9f7a69e-e7d1-4048-8263-52cfefbc90d5-kube-api-access-jlw7g\") pod \"collect-profiles-29395920-d94jg\" (UID: \"d9f7a69e-e7d1-4048-8263-52cfefbc90d5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395920-d94jg" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.655318 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d9f7a69e-e7d1-4048-8263-52cfefbc90d5-secret-volume\") pod \"collect-profiles-29395920-d94jg\" (UID: \"d9f7a69e-e7d1-4048-8263-52cfefbc90d5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395920-d94jg" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.675084 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-98qwb" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.675413 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a12de68-3f7e-4465-8325-db0a17c35cf0-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-qms5m\" (UID: \"7a12de68-3f7e-4465-8325-db0a17c35cf0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qms5m" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.675766 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/42408e00-5bcd-4405-82fe-851c9b62b149-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fn2j6\" (UID: \"42408e00-5bcd-4405-82fe-851c9b62b149\") " pod="openshift-marketplace/marketplace-operator-79b997595-fn2j6" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.676036 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/d9f7a69e-e7d1-4048-8263-52cfefbc90d5-secret-volume\") pod \"collect-profiles-29395920-d94jg\" (UID: \"d9f7a69e-e7d1-4048-8263-52cfefbc90d5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395920-d94jg" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.676322 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4fffc06f-c43a-467f-9159-efdae68bddfb-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-jq87g\" (UID: \"4fffc06f-c43a-467f-9159-efdae68bddfb\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-jq87g" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.676499 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/67e5b1f6-a350-46f7-8e5d-5e618ea5a2d7-signing-key\") pod \"service-ca-9c57cc56f-42c64\" (UID: \"67e5b1f6-a350-46f7-8e5d-5e618ea5a2d7\") " pod="openshift-service-ca/service-ca-9c57cc56f-42c64" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.681671 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/36c9326c-5a9b-4e19-a0a7-047289e45c01-installation-pull-secrets\") pod \"image-registry-697d97f7c8-tlqhq\" (UID: \"36c9326c-5a9b-4e19-a0a7-047289e45c01\") " pod="openshift-image-registry/image-registry-697d97f7c8-tlqhq" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.689890 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-ggxjh"] Nov 21 20:08:56 crc kubenswrapper[4727]: E1121 20:08:56.693190 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-21 20:08:57.19316012 +0000 UTC m=+142.379345164 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.695714 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/67e5b1f6-a350-46f7-8e5d-5e618ea5a2d7-signing-cabundle\") pod \"service-ca-9c57cc56f-42c64\" (UID: \"67e5b1f6-a350-46f7-8e5d-5e618ea5a2d7\") " pod="openshift-service-ca/service-ca-9c57cc56f-42c64" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.697116 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlw7g\" (UniqueName: \"kubernetes.io/projected/d9f7a69e-e7d1-4048-8263-52cfefbc90d5-kube-api-access-jlw7g\") pod \"collect-profiles-29395920-d94jg\" (UID: \"d9f7a69e-e7d1-4048-8263-52cfefbc90d5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395920-d94jg" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.699885 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cpgqd" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.700469 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/67e5b1f6-a350-46f7-8e5d-5e618ea5a2d7-signing-cabundle\") pod \"service-ca-9c57cc56f-42c64\" (UID: \"67e5b1f6-a350-46f7-8e5d-5e618ea5a2d7\") " pod="openshift-service-ca/service-ca-9c57cc56f-42c64" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.702444 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9f7a69e-e7d1-4048-8263-52cfefbc90d5-config-volume\") pod \"collect-profiles-29395920-d94jg\" (UID: \"d9f7a69e-e7d1-4048-8263-52cfefbc90d5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395920-d94jg" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.702864 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8a65a43f-bd75-428b-8709-73364367d020-trusted-ca\") pod \"console-operator-58897d9998-xpjqs\" (UID: \"8a65a43f-bd75-428b-8709-73364367d020\") " pod="openshift-console-operator/console-operator-58897d9998-xpjqs" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.709695 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbkdh\" (UniqueName: \"kubernetes.io/projected/ac14f084-6765-4f2c-badc-b913eb4187b5-kube-api-access-zbkdh\") pod \"dns-default-lvhwh\" (UID: \"ac14f084-6765-4f2c-badc-b913eb4187b5\") " pod="openshift-dns/dns-default-lvhwh" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.711064 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-25c8d" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.711646 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8a65a43f-bd75-428b-8709-73364367d020-trusted-ca\") pod \"console-operator-58897d9998-xpjqs\" (UID: \"8a65a43f-bd75-428b-8709-73364367d020\") " pod="openshift-console-operator/console-operator-58897d9998-xpjqs" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.711794 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a12de68-3f7e-4465-8325-db0a17c35cf0-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-qms5m\" (UID: \"7a12de68-3f7e-4465-8325-db0a17c35cf0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qms5m" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.712161 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/36c9326c-5a9b-4e19-a0a7-047289e45c01-trusted-ca\") pod \"image-registry-697d97f7c8-tlqhq\" (UID: \"36c9326c-5a9b-4e19-a0a7-047289e45c01\") " pod="openshift-image-registry/image-registry-697d97f7c8-tlqhq" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.712315 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tlqhq\" (UID: \"36c9326c-5a9b-4e19-a0a7-047289e45c01\") " pod="openshift-image-registry/image-registry-697d97f7c8-tlqhq" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.712358 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" 
(UniqueName: \"kubernetes.io/empty-dir/36c9326c-5a9b-4e19-a0a7-047289e45c01-ca-trust-extracted\") pod \"image-registry-697d97f7c8-tlqhq\" (UID: \"36c9326c-5a9b-4e19-a0a7-047289e45c01\") " pod="openshift-image-registry/image-registry-697d97f7c8-tlqhq" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.712734 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/36c9326c-5a9b-4e19-a0a7-047289e45c01-ca-trust-extracted\") pod \"image-registry-697d97f7c8-tlqhq\" (UID: \"36c9326c-5a9b-4e19-a0a7-047289e45c01\") " pod="openshift-image-registry/image-registry-697d97f7c8-tlqhq" Nov 21 20:08:56 crc kubenswrapper[4727]: E1121 20:08:56.712830 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 20:08:57.21281062 +0000 UTC m=+142.398995664 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tlqhq" (UID: "36c9326c-5a9b-4e19-a0a7-047289e45c01") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.718793 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/36c9326c-5a9b-4e19-a0a7-047289e45c01-trusted-ca\") pod \"image-registry-697d97f7c8-tlqhq\" (UID: \"36c9326c-5a9b-4e19-a0a7-047289e45c01\") " pod="openshift-image-registry/image-registry-697d97f7c8-tlqhq" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.719419 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/36c9326c-5a9b-4e19-a0a7-047289e45c01-bound-sa-token\") pod \"image-registry-697d97f7c8-tlqhq\" (UID: \"36c9326c-5a9b-4e19-a0a7-047289e45c01\") " pod="openshift-image-registry/image-registry-697d97f7c8-tlqhq" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.720621 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/42408e00-5bcd-4405-82fe-851c9b62b149-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fn2j6\" (UID: \"42408e00-5bcd-4405-82fe-851c9b62b149\") " pod="openshift-marketplace/marketplace-operator-79b997595-fn2j6" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.721034 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8d8h\" (UniqueName: \"kubernetes.io/projected/8a65a43f-bd75-428b-8709-73364367d020-kube-api-access-h8d8h\") pod 
\"console-operator-58897d9998-xpjqs\" (UID: \"8a65a43f-bd75-428b-8709-73364367d020\") " pod="openshift-console-operator/console-operator-58897d9998-xpjqs" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.721316 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vx2fd\" (UniqueName: \"kubernetes.io/projected/42408e00-5bcd-4405-82fe-851c9b62b149-kube-api-access-vx2fd\") pod \"marketplace-operator-79b997595-fn2j6\" (UID: \"42408e00-5bcd-4405-82fe-851c9b62b149\") " pod="openshift-marketplace/marketplace-operator-79b997595-fn2j6" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.723634 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxt4b\" (UniqueName: \"kubernetes.io/projected/7a12de68-3f7e-4465-8325-db0a17c35cf0-kube-api-access-bxt4b\") pod \"kube-storage-version-migrator-operator-b67b599dd-qms5m\" (UID: \"7a12de68-3f7e-4465-8325-db0a17c35cf0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qms5m" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.727027 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a65a43f-bd75-428b-8709-73364367d020-config\") pod \"console-operator-58897d9998-xpjqs\" (UID: \"8a65a43f-bd75-428b-8709-73364367d020\") " pod="openshift-console-operator/console-operator-58897d9998-xpjqs" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.727074 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ac14f084-6765-4f2c-badc-b913eb4187b5-metrics-tls\") pod \"dns-default-lvhwh\" (UID: \"ac14f084-6765-4f2c-badc-b913eb4187b5\") " pod="openshift-dns/dns-default-lvhwh" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.727131 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p6zv\" (UniqueName: \"kubernetes.io/projected/80d0d1a3-fc38-48aa-9adf-2f4a71e70c91-kube-api-access-8p6zv\") pod \"machine-config-server-g4rwm\" (UID: \"80d0d1a3-fc38-48aa-9adf-2f4a71e70c91\") " pod="openshift-machine-config-operator/machine-config-server-g4rwm" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.727214 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gk9nc\" (UniqueName: \"kubernetes.io/projected/19075424-77c6-47ff-93fa-d54902e69f7c-kube-api-access-gk9nc\") pod \"csi-hostpathplugin-6sphz\" (UID: \"19075424-77c6-47ff-93fa-d54902e69f7c\") " pod="hostpath-provisioner/csi-hostpathplugin-6sphz" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.727254 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/19075424-77c6-47ff-93fa-d54902e69f7c-csi-data-dir\") pod \"csi-hostpathplugin-6sphz\" (UID: \"19075424-77c6-47ff-93fa-d54902e69f7c\") " pod="hostpath-provisioner/csi-hostpathplugin-6sphz" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.727306 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/19075424-77c6-47ff-93fa-d54902e69f7c-plugins-dir\") pod \"csi-hostpathplugin-6sphz\" (UID: \"19075424-77c6-47ff-93fa-d54902e69f7c\") " pod="hostpath-provisioner/csi-hostpathplugin-6sphz" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.727332 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w4p5\" (UniqueName: \"kubernetes.io/projected/57aff7f6-8b72-4491-b670-57c1c48e93e0-kube-api-access-9w4p5\") pod \"ingress-canary-zkhz9\" (UID: \"57aff7f6-8b72-4491-b670-57c1c48e93e0\") " 
pod="openshift-ingress-canary/ingress-canary-zkhz9" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.727352 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/57aff7f6-8b72-4491-b670-57c1c48e93e0-cert\") pod \"ingress-canary-zkhz9\" (UID: \"57aff7f6-8b72-4491-b670-57c1c48e93e0\") " pod="openshift-ingress-canary/ingress-canary-zkhz9" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.732838 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/42408e00-5bcd-4405-82fe-851c9b62b149-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fn2j6\" (UID: \"42408e00-5bcd-4405-82fe-851c9b62b149\") " pod="openshift-marketplace/marketplace-operator-79b997595-fn2j6" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.733627 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a65a43f-bd75-428b-8709-73364367d020-config\") pod \"console-operator-58897d9998-xpjqs\" (UID: \"8a65a43f-bd75-428b-8709-73364367d020\") " pod="openshift-console-operator/console-operator-58897d9998-xpjqs" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.737638 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vhwbd" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.745035 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-4855z"] Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.745650 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/19075424-77c6-47ff-93fa-d54902e69f7c-socket-dir\") pod \"csi-hostpathplugin-6sphz\" (UID: \"19075424-77c6-47ff-93fa-d54902e69f7c\") " pod="hostpath-provisioner/csi-hostpathplugin-6sphz" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.745719 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e759f1df-d65f-4297-b178-697608983c6f-metrics-tls\") pod \"dns-operator-744455d44c-97ctf\" (UID: \"e759f1df-d65f-4297-b178-697608983c6f\") " pod="openshift-dns-operator/dns-operator-744455d44c-97ctf" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.745758 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cv9lx\" (UniqueName: \"kubernetes.io/projected/82591a0f-a7cf-4745-9b64-637625f63662-kube-api-access-cv9lx\") pod \"migrator-59844c95c7-8vpg5\" (UID: \"82591a0f-a7cf-4745-9b64-637625f63662\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8vpg5" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.745817 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ql77s\" (UniqueName: \"kubernetes.io/projected/36c9326c-5a9b-4e19-a0a7-047289e45c01-kube-api-access-ql77s\") pod \"image-registry-697d97f7c8-tlqhq\" (UID: \"36c9326c-5a9b-4e19-a0a7-047289e45c01\") " pod="openshift-image-registry/image-registry-697d97f7c8-tlqhq" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.745852 4727 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pjhd\" (UniqueName: \"kubernetes.io/projected/67e5b1f6-a350-46f7-8e5d-5e618ea5a2d7-kube-api-access-8pjhd\") pod \"service-ca-9c57cc56f-42c64\" (UID: \"67e5b1f6-a350-46f7-8e5d-5e618ea5a2d7\") " pod="openshift-service-ca/service-ca-9c57cc56f-42c64" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.745938 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/36c9326c-5a9b-4e19-a0a7-047289e45c01-registry-certificates\") pod \"image-registry-697d97f7c8-tlqhq\" (UID: \"36c9326c-5a9b-4e19-a0a7-047289e45c01\") " pod="openshift-image-registry/image-registry-697d97f7c8-tlqhq" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.749460 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lqm45"] Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.750886 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395920-d94jg" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.755153 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e759f1df-d65f-4297-b178-697608983c6f-metrics-tls\") pod \"dns-operator-744455d44c-97ctf\" (UID: \"e759f1df-d65f-4297-b178-697608983c6f\") " pod="openshift-dns-operator/dns-operator-744455d44c-97ctf" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.761414 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-7hlm6" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.762577 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57ncs\" (UniqueName: \"kubernetes.io/projected/e759f1df-d65f-4297-b178-697608983c6f-kube-api-access-57ncs\") pod \"dns-operator-744455d44c-97ctf\" (UID: \"e759f1df-d65f-4297-b178-697608983c6f\") " pod="openshift-dns-operator/dns-operator-744455d44c-97ctf" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.763195 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/36c9326c-5a9b-4e19-a0a7-047289e45c01-registry-certificates\") pod \"image-registry-697d97f7c8-tlqhq\" (UID: \"36c9326c-5a9b-4e19-a0a7-047289e45c01\") " pod="openshift-image-registry/image-registry-697d97f7c8-tlqhq" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.769593 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/36c9326c-5a9b-4e19-a0a7-047289e45c01-registry-tls\") pod \"image-registry-697d97f7c8-tlqhq\" (UID: \"36c9326c-5a9b-4e19-a0a7-047289e45c01\") " pod="openshift-image-registry/image-registry-697d97f7c8-tlqhq" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.784930 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qwplk"] Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.785921 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a65a43f-bd75-428b-8709-73364367d020-serving-cert\") pod \"console-operator-58897d9998-xpjqs\" (UID: \"8a65a43f-bd75-428b-8709-73364367d020\") " pod="openshift-console-operator/console-operator-58897d9998-xpjqs" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.804217 4727 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4j4c4\" (UniqueName: \"kubernetes.io/projected/4fffc06f-c43a-467f-9159-efdae68bddfb-kube-api-access-4j4c4\") pod \"multus-admission-controller-857f4d67dd-jq87g\" (UID: \"4fffc06f-c43a-467f-9159-efdae68bddfb\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-jq87g" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.808038 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/36c9326c-5a9b-4e19-a0a7-047289e45c01-bound-sa-token\") pod \"image-registry-697d97f7c8-tlqhq\" (UID: \"36c9326c-5a9b-4e19-a0a7-047289e45c01\") " pod="openshift-image-registry/image-registry-697d97f7c8-tlqhq" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.815379 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8d8h\" (UniqueName: \"kubernetes.io/projected/8a65a43f-bd75-428b-8709-73364367d020-kube-api-access-h8d8h\") pod \"console-operator-58897d9998-xpjqs\" (UID: \"8a65a43f-bd75-428b-8709-73364367d020\") " pod="openshift-console-operator/console-operator-58897d9998-xpjqs" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.815699 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vx2fd\" (UniqueName: \"kubernetes.io/projected/42408e00-5bcd-4405-82fe-851c9b62b149-kube-api-access-vx2fd\") pod \"marketplace-operator-79b997595-fn2j6\" (UID: \"42408e00-5bcd-4405-82fe-851c9b62b149\") " pod="openshift-marketplace/marketplace-operator-79b997595-fn2j6" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.823266 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-nzpzh"] Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.824322 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-fj2k4"] Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 
20:08:56.830341 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hx72f"] Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.833905 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-vbzh7"] Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.842302 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pv5wp"] Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.846837 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.847137 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ac14f084-6765-4f2c-badc-b913eb4187b5-metrics-tls\") pod \"dns-default-lvhwh\" (UID: \"ac14f084-6765-4f2c-badc-b913eb4187b5\") " pod="openshift-dns/dns-default-lvhwh" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.847178 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p6zv\" (UniqueName: \"kubernetes.io/projected/80d0d1a3-fc38-48aa-9adf-2f4a71e70c91-kube-api-access-8p6zv\") pod \"machine-config-server-g4rwm\" (UID: \"80d0d1a3-fc38-48aa-9adf-2f4a71e70c91\") " pod="openshift-machine-config-operator/machine-config-server-g4rwm" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.847210 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gk9nc\" (UniqueName: \"kubernetes.io/projected/19075424-77c6-47ff-93fa-d54902e69f7c-kube-api-access-gk9nc\") pod 
\"csi-hostpathplugin-6sphz\" (UID: \"19075424-77c6-47ff-93fa-d54902e69f7c\") " pod="hostpath-provisioner/csi-hostpathplugin-6sphz" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.847235 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/19075424-77c6-47ff-93fa-d54902e69f7c-csi-data-dir\") pod \"csi-hostpathplugin-6sphz\" (UID: \"19075424-77c6-47ff-93fa-d54902e69f7c\") " pod="hostpath-provisioner/csi-hostpathplugin-6sphz" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.847256 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/19075424-77c6-47ff-93fa-d54902e69f7c-plugins-dir\") pod \"csi-hostpathplugin-6sphz\" (UID: \"19075424-77c6-47ff-93fa-d54902e69f7c\") " pod="hostpath-provisioner/csi-hostpathplugin-6sphz" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.847285 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9w4p5\" (UniqueName: \"kubernetes.io/projected/57aff7f6-8b72-4491-b670-57c1c48e93e0-kube-api-access-9w4p5\") pod \"ingress-canary-zkhz9\" (UID: \"57aff7f6-8b72-4491-b670-57c1c48e93e0\") " pod="openshift-ingress-canary/ingress-canary-zkhz9" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.847309 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/19075424-77c6-47ff-93fa-d54902e69f7c-socket-dir\") pod \"csi-hostpathplugin-6sphz\" (UID: \"19075424-77c6-47ff-93fa-d54902e69f7c\") " pod="hostpath-provisioner/csi-hostpathplugin-6sphz" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.847330 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/57aff7f6-8b72-4491-b670-57c1c48e93e0-cert\") pod \"ingress-canary-zkhz9\" (UID: 
\"57aff7f6-8b72-4491-b670-57c1c48e93e0\") " pod="openshift-ingress-canary/ingress-canary-zkhz9" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.847555 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/19075424-77c6-47ff-93fa-d54902e69f7c-csi-data-dir\") pod \"csi-hostpathplugin-6sphz\" (UID: \"19075424-77c6-47ff-93fa-d54902e69f7c\") " pod="hostpath-provisioner/csi-hostpathplugin-6sphz" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.847594 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/80d0d1a3-fc38-48aa-9adf-2f4a71e70c91-certs\") pod \"machine-config-server-g4rwm\" (UID: \"80d0d1a3-fc38-48aa-9adf-2f4a71e70c91\") " pod="openshift-machine-config-operator/machine-config-server-g4rwm" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.847636 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/19075424-77c6-47ff-93fa-d54902e69f7c-mountpoint-dir\") pod \"csi-hostpathplugin-6sphz\" (UID: \"19075424-77c6-47ff-93fa-d54902e69f7c\") " pod="hostpath-provisioner/csi-hostpathplugin-6sphz" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.847669 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac14f084-6765-4f2c-badc-b913eb4187b5-config-volume\") pod \"dns-default-lvhwh\" (UID: \"ac14f084-6765-4f2c-badc-b913eb4187b5\") " pod="openshift-dns/dns-default-lvhwh" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.847701 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/19075424-77c6-47ff-93fa-d54902e69f7c-registration-dir\") pod \"csi-hostpathplugin-6sphz\" (UID: \"19075424-77c6-47ff-93fa-d54902e69f7c\") " 
pod="hostpath-provisioner/csi-hostpathplugin-6sphz" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.847730 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/80d0d1a3-fc38-48aa-9adf-2f4a71e70c91-node-bootstrap-token\") pod \"machine-config-server-g4rwm\" (UID: \"80d0d1a3-fc38-48aa-9adf-2f4a71e70c91\") " pod="openshift-machine-config-operator/machine-config-server-g4rwm" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.847779 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbkdh\" (UniqueName: \"kubernetes.io/projected/ac14f084-6765-4f2c-badc-b913eb4187b5-kube-api-access-zbkdh\") pod \"dns-default-lvhwh\" (UID: \"ac14f084-6765-4f2c-badc-b913eb4187b5\") " pod="openshift-dns/dns-default-lvhwh" Nov 21 20:08:56 crc kubenswrapper[4727]: E1121 20:08:56.848151 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 20:08:57.348125572 +0000 UTC m=+142.534310616 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.848325 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/19075424-77c6-47ff-93fa-d54902e69f7c-mountpoint-dir\") pod \"csi-hostpathplugin-6sphz\" (UID: \"19075424-77c6-47ff-93fa-d54902e69f7c\") " pod="hostpath-provisioner/csi-hostpathplugin-6sphz" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.848323 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/19075424-77c6-47ff-93fa-d54902e69f7c-plugins-dir\") pod \"csi-hostpathplugin-6sphz\" (UID: \"19075424-77c6-47ff-93fa-d54902e69f7c\") " pod="hostpath-provisioner/csi-hostpathplugin-6sphz" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.848380 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/19075424-77c6-47ff-93fa-d54902e69f7c-socket-dir\") pod \"csi-hostpathplugin-6sphz\" (UID: \"19075424-77c6-47ff-93fa-d54902e69f7c\") " pod="hostpath-provisioner/csi-hostpathplugin-6sphz" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.848924 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/19075424-77c6-47ff-93fa-d54902e69f7c-registration-dir\") pod \"csi-hostpathplugin-6sphz\" (UID: \"19075424-77c6-47ff-93fa-d54902e69f7c\") " pod="hostpath-provisioner/csi-hostpathplugin-6sphz" Nov 21 20:08:56 crc 
kubenswrapper[4727]: I1121 20:08:56.849126 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac14f084-6765-4f2c-badc-b913eb4187b5-config-volume\") pod \"dns-default-lvhwh\" (UID: \"ac14f084-6765-4f2c-badc-b913eb4187b5\") " pod="openshift-dns/dns-default-lvhwh" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.851405 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ac14f084-6765-4f2c-badc-b913eb4187b5-metrics-tls\") pod \"dns-default-lvhwh\" (UID: \"ac14f084-6765-4f2c-badc-b913eb4187b5\") " pod="openshift-dns/dns-default-lvhwh" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.853091 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/80d0d1a3-fc38-48aa-9adf-2f4a71e70c91-certs\") pod \"machine-config-server-g4rwm\" (UID: \"80d0d1a3-fc38-48aa-9adf-2f4a71e70c91\") " pod="openshift-machine-config-operator/machine-config-server-g4rwm" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.853309 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/57aff7f6-8b72-4491-b670-57c1c48e93e0-cert\") pod \"ingress-canary-zkhz9\" (UID: \"57aff7f6-8b72-4491-b670-57c1c48e93e0\") " pod="openshift-ingress-canary/ingress-canary-zkhz9" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.854789 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/80d0d1a3-fc38-48aa-9adf-2f4a71e70c91-node-bootstrap-token\") pod \"machine-config-server-g4rwm\" (UID: \"80d0d1a3-fc38-48aa-9adf-2f4a71e70c91\") " pod="openshift-machine-config-operator/machine-config-server-g4rwm" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.860218 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-bxt4b\" (UniqueName: \"kubernetes.io/projected/7a12de68-3f7e-4465-8325-db0a17c35cf0-kube-api-access-bxt4b\") pod \"kube-storage-version-migrator-operator-b67b599dd-qms5m\" (UID: \"7a12de68-3f7e-4465-8325-db0a17c35cf0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qms5m" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.867491 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pjhd\" (UniqueName: \"kubernetes.io/projected/67e5b1f6-a350-46f7-8e5d-5e618ea5a2d7-kube-api-access-8pjhd\") pod \"service-ca-9c57cc56f-42c64\" (UID: \"67e5b1f6-a350-46f7-8e5d-5e618ea5a2d7\") " pod="openshift-service-ca/service-ca-9c57cc56f-42c64" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.888774 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cv9lx\" (UniqueName: \"kubernetes.io/projected/82591a0f-a7cf-4745-9b64-637625f63662-kube-api-access-cv9lx\") pod \"migrator-59844c95c7-8vpg5\" (UID: \"82591a0f-a7cf-4745-9b64-637625f63662\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8vpg5" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.919327 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ql77s\" (UniqueName: \"kubernetes.io/projected/36c9326c-5a9b-4e19-a0a7-047289e45c01-kube-api-access-ql77s\") pod \"image-registry-697d97f7c8-tlqhq\" (UID: \"36c9326c-5a9b-4e19-a0a7-047289e45c01\") " pod="openshift-image-registry/image-registry-697d97f7c8-tlqhq" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.946553 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fn2j6" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.949031 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tlqhq\" (UID: \"36c9326c-5a9b-4e19-a0a7-047289e45c01\") " pod="openshift-image-registry/image-registry-697d97f7c8-tlqhq" Nov 21 20:08:56 crc kubenswrapper[4727]: E1121 20:08:56.949465 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 20:08:57.449450052 +0000 UTC m=+142.635635096 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tlqhq" (UID: "36c9326c-5a9b-4e19-a0a7-047289e45c01") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.952310 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbkdh\" (UniqueName: \"kubernetes.io/projected/ac14f084-6765-4f2c-badc-b913eb4187b5-kube-api-access-zbkdh\") pod \"dns-default-lvhwh\" (UID: \"ac14f084-6765-4f2c-badc-b913eb4187b5\") " pod="openshift-dns/dns-default-lvhwh" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.972359 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-jq87g" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.989394 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8p6zv\" (UniqueName: \"kubernetes.io/projected/80d0d1a3-fc38-48aa-9adf-2f4a71e70c91-kube-api-access-8p6zv\") pod \"machine-config-server-g4rwm\" (UID: \"80d0d1a3-fc38-48aa-9adf-2f4a71e70c91\") " pod="openshift-machine-config-operator/machine-config-server-g4rwm" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.994281 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-42c64" Nov 21 20:08:56 crc kubenswrapper[4727]: I1121 20:08:56.998114 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9w4p5\" (UniqueName: \"kubernetes.io/projected/57aff7f6-8b72-4491-b670-57c1c48e93e0-kube-api-access-9w4p5\") pod \"ingress-canary-zkhz9\" (UID: \"57aff7f6-8b72-4491-b670-57c1c48e93e0\") " pod="openshift-ingress-canary/ingress-canary-zkhz9" Nov 21 20:08:57 crc kubenswrapper[4727]: I1121 20:08:57.013458 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8vpg5" Nov 21 20:08:57 crc kubenswrapper[4727]: I1121 20:08:57.016289 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gk9nc\" (UniqueName: \"kubernetes.io/projected/19075424-77c6-47ff-93fa-d54902e69f7c-kube-api-access-gk9nc\") pod \"csi-hostpathplugin-6sphz\" (UID: \"19075424-77c6-47ff-93fa-d54902e69f7c\") " pod="hostpath-provisioner/csi-hostpathplugin-6sphz" Nov 21 20:08:57 crc kubenswrapper[4727]: I1121 20:08:57.018628 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-97ctf" Nov 21 20:08:57 crc kubenswrapper[4727]: I1121 20:08:57.031389 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qms5m" Nov 21 20:08:57 crc kubenswrapper[4727]: I1121 20:08:57.035166 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-hjqgw"] Nov 21 20:08:57 crc kubenswrapper[4727]: I1121 20:08:57.040121 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vtwcr"] Nov 21 20:08:57 crc kubenswrapper[4727]: I1121 20:08:57.050625 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 20:08:57 crc kubenswrapper[4727]: E1121 20:08:57.051238 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 20:08:57.551221034 +0000 UTC m=+142.737406078 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:08:57 crc kubenswrapper[4727]: I1121 20:08:57.064997 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-zqbhr"] Nov 21 20:08:57 crc kubenswrapper[4727]: I1121 20:08:57.085579 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-xpjqs" Nov 21 20:08:57 crc kubenswrapper[4727]: I1121 20:08:57.096366 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-lvhwh" Nov 21 20:08:57 crc kubenswrapper[4727]: I1121 20:08:57.105512 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-g4rwm" Nov 21 20:08:57 crc kubenswrapper[4727]: I1121 20:08:57.116876 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-zkhz9" Nov 21 20:08:57 crc kubenswrapper[4727]: I1121 20:08:57.140504 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-6sphz" Nov 21 20:08:57 crc kubenswrapper[4727]: I1121 20:08:57.152730 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tlqhq\" (UID: \"36c9326c-5a9b-4e19-a0a7-047289e45c01\") " pod="openshift-image-registry/image-registry-697d97f7c8-tlqhq" Nov 21 20:08:57 crc kubenswrapper[4727]: E1121 20:08:57.153101 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 20:08:57.65309003 +0000 UTC m=+142.839275064 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tlqhq" (UID: "36c9326c-5a9b-4e19-a0a7-047289e45c01") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:08:57 crc kubenswrapper[4727]: W1121 20:08:57.172876 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod208f96f5_b245_4b8d_96d0_5210189d0f13.slice/crio-f744921390b4c39cd991da13a9c114d1f11e6db6c02ae2e3ed5ff82845f8a950 WatchSource:0}: Error finding container f744921390b4c39cd991da13a9c114d1f11e6db6c02ae2e3ed5ff82845f8a950: Status 404 returned error can't find the container with id f744921390b4c39cd991da13a9c114d1f11e6db6c02ae2e3ed5ff82845f8a950 Nov 21 20:08:57 crc kubenswrapper[4727]: I1121 20:08:57.194916 4727 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vtwcr" event={"ID":"58a72b98-691c-4da3-a1df-5cdc793b9ff5","Type":"ContainerStarted","Data":"88626e2d75db752d581cec7fa4221df0260a387194c9a3e0deca886a78d83715"} Nov 21 20:08:57 crc kubenswrapper[4727]: I1121 20:08:57.197680 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-zqbhr" event={"ID":"1599f256-87d8-47f4-a6fb-6cea3b58b242","Type":"ContainerStarted","Data":"2cbde6e30d5ee8a1bbb0d07695b4e4389d6726fecd9f40b63aa1209742ded9e9"} Nov 21 20:08:57 crc kubenswrapper[4727]: I1121 20:08:57.207539 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qwplk" event={"ID":"fd2bb95c-fa68-47b4-bb37-1f1724773a74","Type":"ContainerStarted","Data":"b6776b73e92bd4dabe8a698644e266e34538103865a315f7df061899f6f97785"} Nov 21 20:08:57 crc kubenswrapper[4727]: I1121 20:08:57.208613 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lqm45" event={"ID":"900a1b4b-fc56-4edd-b115-bbd76db83b12","Type":"ContainerStarted","Data":"43682ba624e21a686714b278fbf8498716ccaf161402524b9e111f9cb3da2f81"} Nov 21 20:08:57 crc kubenswrapper[4727]: I1121 20:08:57.213138 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-4855z" event={"ID":"b49f037a-e7ec-45ef-846b-79ab549adb90","Type":"ContainerStarted","Data":"5a8d87a3117668eead6b482a8c25434343afe8ace70692e6a29d5f185d2e73fe"} Nov 21 20:08:57 crc kubenswrapper[4727]: I1121 20:08:57.222615 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-fj2k4" event={"ID":"dacc1ea4-7062-46ad-a784-70537e92dc51","Type":"ContainerStarted","Data":"233e0e1e4bfa2dcebc533459f09fea730eb2a4ae2ad3ffab0239a3d785ad7f9e"} Nov 21 20:08:57 crc kubenswrapper[4727]: I1121 20:08:57.224828 4727 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-hx72f" event={"ID":"72783796-a3ab-4a34-9b9e-b4df16dd1cc2","Type":"ContainerStarted","Data":"1f2862ce91ec1d546f928bd33317761615fbde4838712663b60a8eaaaf76db42"} Nov 21 20:08:57 crc kubenswrapper[4727]: I1121 20:08:57.226118 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-fwqq7" event={"ID":"58dc7f0b-2626-478d-a541-511adc47db56","Type":"ContainerStarted","Data":"ab9fff5cd702b9800a6f38167c5fe387e4f85395ca7141ed8840f21c9b5f4db0"} Nov 21 20:08:57 crc kubenswrapper[4727]: I1121 20:08:57.226142 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-fwqq7" event={"ID":"58dc7f0b-2626-478d-a541-511adc47db56","Type":"ContainerStarted","Data":"ba2c22555e0ca2873adc4fa6370efaa87fa6ececdb82f9463a61fa6562592954"} Nov 21 20:08:57 crc kubenswrapper[4727]: I1121 20:08:57.226869 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-hjqgw" event={"ID":"208f96f5-b245-4b8d-96d0-5210189d0f13","Type":"ContainerStarted","Data":"f744921390b4c39cd991da13a9c114d1f11e6db6c02ae2e3ed5ff82845f8a950"} Nov 21 20:08:57 crc kubenswrapper[4727]: I1121 20:08:57.227637 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ggxjh" event={"ID":"c900b720-71c1-41ac-bf6d-f554450c4d44","Type":"ContainerStarted","Data":"e1c83bfb364aaa7baa8cb2cd9edb594786258a3129c27fa91df64ee5525a1ddd"} Nov 21 20:08:57 crc kubenswrapper[4727]: I1121 20:08:57.229624 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l4m75" 
event={"ID":"26865f5b-6d04-418c-9092-6e1853bc9c88","Type":"ContainerStarted","Data":"718512f4a7a0768e1c3dd3436d77c3b0b3f1f65211d4565115012236fe255f7d"} Nov 21 20:08:57 crc kubenswrapper[4727]: I1121 20:08:57.229650 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l4m75" event={"ID":"26865f5b-6d04-418c-9092-6e1853bc9c88","Type":"ContainerStarted","Data":"892f7c271d7c6260c3c319ea2140c1585e45669b8ff80fa47ac8ca21e809fad6"} Nov 21 20:08:57 crc kubenswrapper[4727]: I1121 20:08:57.245144 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-dw7dg" event={"ID":"e3860251-af35-4f12-81ce-91855c94d8c8","Type":"ContainerStarted","Data":"3afcc40701e45ac7757cb0ceec39564a5c4cbc0e5623ec6b6e7d291167ab35e1"} Nov 21 20:08:57 crc kubenswrapper[4727]: I1121 20:08:57.255373 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 20:08:57 crc kubenswrapper[4727]: E1121 20:08:57.255820 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 20:08:57.7558043 +0000 UTC m=+142.941989344 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:08:57 crc kubenswrapper[4727]: I1121 20:08:57.270622 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-nzpzh" event={"ID":"3f3303e9-97ef-4e9b-9dc0-076066682c43","Type":"ContainerStarted","Data":"7a9be005db1f22ac6571e25ce67f7723c8a6260b5f31039a0a51a8325a8cdfed"} Nov 21 20:08:57 crc kubenswrapper[4727]: I1121 20:08:57.270986 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pcvjk"] Nov 21 20:08:57 crc kubenswrapper[4727]: I1121 20:08:57.287255 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-l7mfn"] Nov 21 20:08:57 crc kubenswrapper[4727]: I1121 20:08:57.289460 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p2c9z" event={"ID":"36ff6a87-1405-42da-9fd2-5bf32fa6578d","Type":"ContainerStarted","Data":"5bca35661a37ddbaf6442426c464da9b7a5c63c7efe0eba157851dbd7cbeb5a7"} Nov 21 20:08:57 crc kubenswrapper[4727]: I1121 20:08:57.289499 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p2c9z" event={"ID":"36ff6a87-1405-42da-9fd2-5bf32fa6578d","Type":"ContainerStarted","Data":"24231dca933b5b3a001e8e3895e40d11d751530d3a7efa899221a9d128c8f6a2"} Nov 21 20:08:57 crc kubenswrapper[4727]: I1121 20:08:57.300125 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-jbvs8"] Nov 21 20:08:57 crc kubenswrapper[4727]: I1121 20:08:57.313483 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wmvqq" event={"ID":"a175fd16-a4b3-4df9-88f7-3110c4d6c40f","Type":"ContainerStarted","Data":"18928eed7ef7dc5c2d57ba645776c2afc8624019aa8e4f7875faff4d8b54ce03"} Nov 21 20:08:57 crc kubenswrapper[4727]: I1121 20:08:57.321561 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qjc79" event={"ID":"4e951776-970a-49b9-8f34-b2fd129bbc39","Type":"ContainerStarted","Data":"dcdd8c7ee08676ed20b3b1982a1d642f3cb6c3ac5ef039957ee3934bf53b58b2"} Nov 21 20:08:57 crc kubenswrapper[4727]: I1121 20:08:57.323075 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qjc79" Nov 21 20:08:57 crc kubenswrapper[4727]: I1121 20:08:57.324657 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4tx6m" event={"ID":"fd5128ab-6daa-4e73-b4ab-7a1ab98962fa","Type":"ContainerStarted","Data":"ad0095942adf987e8a24248e8c55fad3378fb4e4a618c656c2358a668d46463f"} Nov 21 20:08:57 crc kubenswrapper[4727]: I1121 20:08:57.324691 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4tx6m" event={"ID":"fd5128ab-6daa-4e73-b4ab-7a1ab98962fa","Type":"ContainerStarted","Data":"f371ac32f92f8d6208dec2a1bcdc7b6e34002d88ddf4673369b8e2d1cbba3967"} Nov 21 20:08:57 crc kubenswrapper[4727]: I1121 20:08:57.325267 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4tx6m" Nov 21 20:08:57 crc kubenswrapper[4727]: I1121 20:08:57.329441 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-vbzh7" event={"ID":"cafc19d0-a511-4cea-bd92-c28a18224e9f","Type":"ContainerStarted","Data":"1ad59078d5d94fee7e8229eec102fa01610f25358f015f20af7a976ddf6b4b4e"} Nov 21 20:08:57 crc kubenswrapper[4727]: I1121 20:08:57.332925 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qrsz4" event={"ID":"b9b15f63-53a9-40d3-940a-fe8640ebecab","Type":"ContainerStarted","Data":"9ff21643a1852581a61fc59e1035751e3af5a6244bee3eaade528d2ac9f8796c"} Nov 21 20:08:57 crc kubenswrapper[4727]: I1121 20:08:57.356785 4727 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-4tx6m container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Nov 21 20:08:57 crc kubenswrapper[4727]: I1121 20:08:57.356855 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4tx6m" podUID="fd5128ab-6daa-4e73-b4ab-7a1ab98962fa" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" Nov 21 20:08:57 crc kubenswrapper[4727]: I1121 20:08:57.356912 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tlqhq\" (UID: \"36c9326c-5a9b-4e19-a0a7-047289e45c01\") " pod="openshift-image-registry/image-registry-697d97f7c8-tlqhq" Nov 21 20:08:57 crc kubenswrapper[4727]: E1121 20:08:57.358293 4727 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 20:08:57.858266384 +0000 UTC m=+143.044451428 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tlqhq" (UID: "36c9326c-5a9b-4e19-a0a7-047289e45c01") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:08:57 crc kubenswrapper[4727]: I1121 20:08:57.384839 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-lmjlx"] Nov 21 20:08:57 crc kubenswrapper[4727]: I1121 20:08:57.443588 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qjc79" Nov 21 20:08:57 crc kubenswrapper[4727]: I1121 20:08:57.446503 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cpgqd"] Nov 21 20:08:57 crc kubenswrapper[4727]: I1121 20:08:57.458243 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 20:08:57 crc kubenswrapper[4727]: E1121 20:08:57.459899 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-21 20:08:57.959872471 +0000 UTC m=+143.146057515 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:08:57 crc kubenswrapper[4727]: I1121 20:08:57.488363 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-98qwb"] Nov 21 20:08:57 crc kubenswrapper[4727]: I1121 20:08:57.542253 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-25c8d"] Nov 21 20:08:57 crc kubenswrapper[4727]: I1121 20:08:57.563777 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tlqhq\" (UID: \"36c9326c-5a9b-4e19-a0a7-047289e45c01\") " pod="openshift-image-registry/image-registry-697d97f7c8-tlqhq" Nov 21 20:08:57 crc kubenswrapper[4727]: E1121 20:08:57.567049 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 20:08:58.067016003 +0000 UTC m=+143.253201047 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tlqhq" (UID: "36c9326c-5a9b-4e19-a0a7-047289e45c01") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:08:57 crc kubenswrapper[4727]: I1121 20:08:57.599162 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395920-d94jg"] Nov 21 20:08:57 crc kubenswrapper[4727]: I1121 20:08:57.667400 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 20:08:57 crc kubenswrapper[4727]: E1121 20:08:57.668260 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 20:08:58.168239779 +0000 UTC m=+143.354424813 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:08:57 crc kubenswrapper[4727]: I1121 20:08:57.771399 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tlqhq\" (UID: \"36c9326c-5a9b-4e19-a0a7-047289e45c01\") " pod="openshift-image-registry/image-registry-697d97f7c8-tlqhq" Nov 21 20:08:57 crc kubenswrapper[4727]: E1121 20:08:57.772086 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 20:08:58.271950239 +0000 UTC m=+143.458135283 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tlqhq" (UID: "36c9326c-5a9b-4e19-a0a7-047289e45c01") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:08:57 crc kubenswrapper[4727]: I1121 20:08:57.799425 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qjc79" podStartSLOduration=122.799403509 podStartE2EDuration="2m2.799403509s" podCreationTimestamp="2025-11-21 20:06:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:08:57.745356644 +0000 UTC m=+142.931541698" watchObservedRunningTime="2025-11-21 20:08:57.799403509 +0000 UTC m=+142.985588543" Nov 21 20:08:57 crc kubenswrapper[4727]: I1121 20:08:57.857602 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-7hlm6"] Nov 21 20:08:57 crc kubenswrapper[4727]: I1121 20:08:57.877208 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 20:08:57 crc kubenswrapper[4727]: E1121 20:08:57.877535 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-21 20:08:58.377519473 +0000 UTC m=+143.563704517 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:08:57 crc kubenswrapper[4727]: I1121 20:08:57.922865 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vhwbd"] Nov 21 20:08:57 crc kubenswrapper[4727]: I1121 20:08:57.980018 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tlqhq\" (UID: \"36c9326c-5a9b-4e19-a0a7-047289e45c01\") " pod="openshift-image-registry/image-registry-697d97f7c8-tlqhq" Nov 21 20:08:57 crc kubenswrapper[4727]: E1121 20:08:57.980366 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 20:08:58.480354168 +0000 UTC m=+143.666539212 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tlqhq" (UID: "36c9326c-5a9b-4e19-a0a7-047289e45c01") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:08:57 crc kubenswrapper[4727]: I1121 20:08:57.994680 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qms5m"] Nov 21 20:08:58 crc kubenswrapper[4727]: I1121 20:08:58.002221 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-dw7dg" Nov 21 20:08:58 crc kubenswrapper[4727]: I1121 20:08:58.022305 4727 patch_prober.go:28] interesting pod/router-default-5444994796-dw7dg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 21 20:08:58 crc kubenswrapper[4727]: [-]has-synced failed: reason withheld Nov 21 20:08:58 crc kubenswrapper[4727]: [+]process-running ok Nov 21 20:08:58 crc kubenswrapper[4727]: healthz check failed Nov 21 20:08:58 crc kubenswrapper[4727]: I1121 20:08:58.022812 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dw7dg" podUID="e3860251-af35-4f12-81ce-91855c94d8c8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 21 20:08:58 crc kubenswrapper[4727]: I1121 20:08:58.023768 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-jq87g"] Nov 21 20:08:58 crc kubenswrapper[4727]: W1121 20:08:58.075535 4727 manager.go:1169] Failed to process 
watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3ed0ff62_2542_410b_ac29_904eb08bef16.slice/crio-b98a862aebfaba1620d80e02dc5d358893f9a92f8b675decbc00a0dc2df5aa7a WatchSource:0}: Error finding container b98a862aebfaba1620d80e02dc5d358893f9a92f8b675decbc00a0dc2df5aa7a: Status 404 returned error can't find the container with id b98a862aebfaba1620d80e02dc5d358893f9a92f8b675decbc00a0dc2df5aa7a Nov 21 20:08:58 crc kubenswrapper[4727]: I1121 20:08:58.081432 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 20:08:58 crc kubenswrapper[4727]: E1121 20:08:58.082058 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 20:08:58.582023508 +0000 UTC m=+143.768208552 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:08:58 crc kubenswrapper[4727]: I1121 20:08:58.131098 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-42c64"] Nov 21 20:08:58 crc kubenswrapper[4727]: I1121 20:08:58.137201 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fn2j6"] Nov 21 20:08:58 crc kubenswrapper[4727]: I1121 20:08:58.183897 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tlqhq\" (UID: \"36c9326c-5a9b-4e19-a0a7-047289e45c01\") " pod="openshift-image-registry/image-registry-697d97f7c8-tlqhq" Nov 21 20:08:58 crc kubenswrapper[4727]: E1121 20:08:58.184205 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 20:08:58.684195422 +0000 UTC m=+143.870380466 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tlqhq" (UID: "36c9326c-5a9b-4e19-a0a7-047289e45c01") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:08:58 crc kubenswrapper[4727]: I1121 20:08:58.288467 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 20:08:58 crc kubenswrapper[4727]: E1121 20:08:58.291141 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 20:08:58.791123547 +0000 UTC m=+143.977308591 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:08:58 crc kubenswrapper[4727]: I1121 20:08:58.304049 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-lvhwh"] Nov 21 20:08:58 crc kubenswrapper[4727]: I1121 20:08:58.305545 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-8vpg5"] Nov 21 20:08:58 crc kubenswrapper[4727]: I1121 20:08:58.317036 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4tx6m" podStartSLOduration=123.31700448 podStartE2EDuration="2m3.31700448s" podCreationTimestamp="2025-11-21 20:06:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:08:58.301736999 +0000 UTC m=+143.487922043" watchObservedRunningTime="2025-11-21 20:08:58.31700448 +0000 UTC m=+143.503189524" Nov 21 20:08:58 crc kubenswrapper[4727]: I1121 20:08:58.391850 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tlqhq\" (UID: \"36c9326c-5a9b-4e19-a0a7-047289e45c01\") " pod="openshift-image-registry/image-registry-697d97f7c8-tlqhq" Nov 21 20:08:58 crc kubenswrapper[4727]: E1121 20:08:58.393356 4727 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 20:08:58.893338963 +0000 UTC m=+144.079524007 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tlqhq" (UID: "36c9326c-5a9b-4e19-a0a7-047289e45c01") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:08:58 crc kubenswrapper[4727]: I1121 20:08:58.406600 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vhwbd" event={"ID":"3ed0ff62-2542-410b-ac29-904eb08bef16","Type":"ContainerStarted","Data":"b98a862aebfaba1620d80e02dc5d358893f9a92f8b675decbc00a0dc2df5aa7a"} Nov 21 20:08:58 crc kubenswrapper[4727]: I1121 20:08:58.427228 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-97ctf"] Nov 21 20:08:58 crc kubenswrapper[4727]: I1121 20:08:58.428151 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jbvs8" event={"ID":"54882e90-f639-4a95-ae1b-80f6fdbdaf5f","Type":"ContainerStarted","Data":"ec0ffabe1411aa3bdbe5e004eb4b327409733968d7f13e2f743e00068235e6ea"} Nov 21 20:08:58 crc kubenswrapper[4727]: I1121 20:08:58.429365 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-xpjqs"] Nov 21 20:08:58 crc kubenswrapper[4727]: I1121 20:08:58.459496 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-zkhz9"] Nov 21 20:08:58 crc kubenswrapper[4727]: I1121 20:08:58.464420 4727 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qrsz4" event={"ID":"b9b15f63-53a9-40d3-940a-fe8640ebecab","Type":"ContainerStarted","Data":"df4e18ac848dc60f4689fad5f7263d461106f2fb78c8fe527b67ced4275b58da"} Nov 21 20:08:58 crc kubenswrapper[4727]: I1121 20:08:58.478975 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l4m75" podStartSLOduration=124.478934998 podStartE2EDuration="2m4.478934998s" podCreationTimestamp="2025-11-21 20:06:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:08:58.47693894 +0000 UTC m=+143.663123984" watchObservedRunningTime="2025-11-21 20:08:58.478934998 +0000 UTC m=+143.665120042" Nov 21 20:08:58 crc kubenswrapper[4727]: I1121 20:08:58.482059 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-6sphz"] Nov 21 20:08:58 crc kubenswrapper[4727]: I1121 20:08:58.484237 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-jq87g" event={"ID":"4fffc06f-c43a-467f-9159-efdae68bddfb","Type":"ContainerStarted","Data":"28448d8ec66aed229c97023a92dfdd14868821e63599ea382ea2908e1df56962"} Nov 21 20:08:58 crc kubenswrapper[4727]: I1121 20:08:58.497186 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 20:08:58 crc kubenswrapper[4727]: E1121 20:08:58.498949 4727 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 20:08:58.998919007 +0000 UTC m=+144.185104051 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:08:58 crc kubenswrapper[4727]: I1121 20:08:58.518189 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cpgqd" event={"ID":"a931e936-fb41-4b5a-b2dd-506cd7cec66c","Type":"ContainerStarted","Data":"20d228f33a4287649f4482fd5b7dcbaf01d799cc36b844228f6961e24f80fad4"} Nov 21 20:08:58 crc kubenswrapper[4727]: I1121 20:08:58.535259 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395920-d94jg" event={"ID":"d9f7a69e-e7d1-4048-8263-52cfefbc90d5","Type":"ContainerStarted","Data":"1767de4addd8420b70220e22b8b10ecb452a152499316c1d305c99a5ef9098a6"} Nov 21 20:08:58 crc kubenswrapper[4727]: I1121 20:08:58.547664 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-dw7dg" podStartSLOduration=124.547645935 podStartE2EDuration="2m4.547645935s" podCreationTimestamp="2025-11-21 20:06:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:08:58.51866127 +0000 UTC m=+143.704846314" watchObservedRunningTime="2025-11-21 20:08:58.547645935 +0000 UTC 
m=+143.733830979" Nov 21 20:08:58 crc kubenswrapper[4727]: I1121 20:08:58.581407 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-fwqq7" event={"ID":"58dc7f0b-2626-478d-a541-511adc47db56","Type":"ContainerStarted","Data":"734fcf9dfca0069ff926f76ceeab521e0c654cc9b4d7297341ed6f1a3ce7c584"} Nov 21 20:08:58 crc kubenswrapper[4727]: I1121 20:08:58.584866 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-hx72f" event={"ID":"72783796-a3ab-4a34-9b9e-b4df16dd1cc2","Type":"ContainerStarted","Data":"87b961106a660bd98ed69515eafa0c93052fb25f946f28079974ecbedcef2436"} Nov 21 20:08:58 crc kubenswrapper[4727]: I1121 20:08:58.586412 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-hx72f" Nov 21 20:08:58 crc kubenswrapper[4727]: I1121 20:08:58.587281 4727 generic.go:334] "Generic (PLEG): container finished" podID="cafc19d0-a511-4cea-bd92-c28a18224e9f" containerID="0e18687980423000f5dbba21b06d5e4de5a1b8074b8b226ad9b5412411f8bd2b" exitCode=0 Nov 21 20:08:58 crc kubenswrapper[4727]: I1121 20:08:58.587447 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vbzh7" event={"ID":"cafc19d0-a511-4cea-bd92-c28a18224e9f","Type":"ContainerDied","Data":"0e18687980423000f5dbba21b06d5e4de5a1b8074b8b226ad9b5412411f8bd2b"} Nov 21 20:08:58 crc kubenswrapper[4727]: I1121 20:08:58.588438 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-98qwb" event={"ID":"0b0c5ddc-0891-4a5f-bf5d-6ab1d9dc0fbd","Type":"ContainerStarted","Data":"9325fb8eeefafd65703ab1058dabb6d73fe42b8503000065b3383f176dbc4a73"} Nov 21 20:08:58 crc kubenswrapper[4727]: I1121 20:08:58.598888 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wmvqq" event={"ID":"a175fd16-a4b3-4df9-88f7-3110c4d6c40f","Type":"ContainerStarted","Data":"8c7ca709326654fa480e92beedb17cef44d3b12b3c8b511620255e0592378886"} Nov 21 20:08:58 crc kubenswrapper[4727]: I1121 20:08:58.599221 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wmvqq" Nov 21 20:08:58 crc kubenswrapper[4727]: I1121 20:08:58.600049 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tlqhq\" (UID: \"36c9326c-5a9b-4e19-a0a7-047289e45c01\") " pod="openshift-image-registry/image-registry-697d97f7c8-tlqhq" Nov 21 20:08:58 crc kubenswrapper[4727]: E1121 20:08:58.601218 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 20:08:59.101204965 +0000 UTC m=+144.287390009 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tlqhq" (UID: "36c9326c-5a9b-4e19-a0a7-047289e45c01") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:08:58 crc kubenswrapper[4727]: I1121 20:08:58.608921 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qwplk" event={"ID":"fd2bb95c-fa68-47b4-bb37-1f1724773a74","Type":"ContainerStarted","Data":"1feffe3de25ef5f4681e81508246f0c31a12720ec6b64c325dd9d75d84883296"} Nov 21 20:08:58 crc kubenswrapper[4727]: I1121 20:08:58.612026 4727 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-hx72f container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Nov 21 20:08:58 crc kubenswrapper[4727]: I1121 20:08:58.612080 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-hx72f" podUID="72783796-a3ab-4a34-9b9e-b4df16dd1cc2" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" Nov 21 20:08:58 crc kubenswrapper[4727]: I1121 20:08:58.703643 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 20:08:58 crc 
kubenswrapper[4727]: E1121 20:08:58.705089 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 20:08:59.20506888 +0000 UTC m=+144.391253924 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:08:58 crc kubenswrapper[4727]: I1121 20:08:58.712026 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pcvjk" event={"ID":"385471ac-e2f4-478f-a64e-8cb60d37cdd7","Type":"ContainerStarted","Data":"b133ead442489c58165e8bb9218715a5bb846078a91b90addb7d147e103e27ea"} Nov 21 20:08:58 crc kubenswrapper[4727]: I1121 20:08:58.727844 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-fwqq7" podStartSLOduration=124.727825521 podStartE2EDuration="2m4.727825521s" podCreationTimestamp="2025-11-21 20:06:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:08:58.682719181 +0000 UTC m=+143.868904215" watchObservedRunningTime="2025-11-21 20:08:58.727825521 +0000 UTC m=+143.914010555" Nov 21 20:08:58 crc kubenswrapper[4727]: I1121 20:08:58.735647 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p2c9z" 
event={"ID":"36ff6a87-1405-42da-9fd2-5bf32fa6578d","Type":"ContainerStarted","Data":"1a68bab7c1c87abde285d983bc43443c0c5155cfe23d4669349ae73667602ab0"} Nov 21 20:08:58 crc kubenswrapper[4727]: I1121 20:08:58.746085 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ggxjh" event={"ID":"c900b720-71c1-41ac-bf6d-f554450c4d44","Type":"ContainerStarted","Data":"c5f7c5c5d2de757c3c27402355b1bb1e0b8a34983d3dd9ee628a49d74c302ba4"} Nov 21 20:08:58 crc kubenswrapper[4727]: I1121 20:08:58.753240 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-42c64" event={"ID":"67e5b1f6-a350-46f7-8e5d-5e618ea5a2d7","Type":"ContainerStarted","Data":"172e170eb7ab7eacdeab7488f75c8c234519dff956be4b561243bd7e283aa147"} Nov 21 20:08:58 crc kubenswrapper[4727]: I1121 20:08:58.757486 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-nzpzh" event={"ID":"3f3303e9-97ef-4e9b-9dc0-076066682c43","Type":"ContainerStarted","Data":"7e6813a13e04d598b8b57c044c9dc774392f07b85ea53a4122b007733ab93f73"} Nov 21 20:08:58 crc kubenswrapper[4727]: I1121 20:08:58.758171 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-nzpzh" Nov 21 20:08:58 crc kubenswrapper[4727]: I1121 20:08:58.759362 4727 patch_prober.go:28] interesting pod/downloads-7954f5f757-nzpzh container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Nov 21 20:08:58 crc kubenswrapper[4727]: I1121 20:08:58.759437 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-nzpzh" podUID="3f3303e9-97ef-4e9b-9dc0-076066682c43" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: 
connection refused" Nov 21 20:08:58 crc kubenswrapper[4727]: I1121 20:08:58.761165 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fn2j6" event={"ID":"42408e00-5bcd-4405-82fe-851c9b62b149","Type":"ContainerStarted","Data":"3560d57f28c940828a166713d6e04c7c0947819018eb906e1436740e91e41684"} Nov 21 20:08:58 crc kubenswrapper[4727]: I1121 20:08:58.763419 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pv5wp" event={"ID":"200ac8c1-4bf1-4356-8091-9279fc08523f","Type":"ContainerStarted","Data":"46a0c0d950ff0e72f1af198e359285dfd004527053925aa0ac25f614cf1c5713"} Nov 21 20:08:58 crc kubenswrapper[4727]: I1121 20:08:58.764692 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-25c8d" event={"ID":"5191a87d-ec94-4c7f-95eb-5898535d524b","Type":"ContainerStarted","Data":"c48cb49be7046772fedd2a89e59616e0e0f5cba0a7889ad700ac84f5b4e76593"} Nov 21 20:08:58 crc kubenswrapper[4727]: I1121 20:08:58.780303 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lqm45" event={"ID":"900a1b4b-fc56-4edd-b115-bbd76db83b12","Type":"ContainerStarted","Data":"ec16e2faee8a6ec8663fee05705d8c3a2c3ee33e0603b2816f1b4333c895580d"} Nov 21 20:08:58 crc kubenswrapper[4727]: I1121 20:08:58.782244 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-hx72f" podStartSLOduration=124.782220257 podStartE2EDuration="2m4.782220257s" podCreationTimestamp="2025-11-21 20:06:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:08:58.779772034 +0000 UTC m=+143.965957088" watchObservedRunningTime="2025-11-21 20:08:58.782220257 +0000 UTC 
m=+143.968405301" Nov 21 20:08:58 crc kubenswrapper[4727]: I1121 20:08:58.796752 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-l7mfn" event={"ID":"343653aa-5654-4622-aa3e-045685abb471","Type":"ContainerStarted","Data":"823f635dbbbefa275927694ee990009a012dac7d6d2e85cb083d204cd380c9e5"} Nov 21 20:08:58 crc kubenswrapper[4727]: I1121 20:08:58.807452 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tlqhq\" (UID: \"36c9326c-5a9b-4e19-a0a7-047289e45c01\") " pod="openshift-image-registry/image-registry-697d97f7c8-tlqhq" Nov 21 20:08:58 crc kubenswrapper[4727]: E1121 20:08:58.808889 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 20:08:59.308877943 +0000 UTC m=+144.495062987 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tlqhq" (UID: "36c9326c-5a9b-4e19-a0a7-047289e45c01") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:08:58 crc kubenswrapper[4727]: I1121 20:08:58.809375 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-g4rwm" event={"ID":"80d0d1a3-fc38-48aa-9adf-2f4a71e70c91","Type":"ContainerStarted","Data":"a2a2abc8c5f7fdb6d0fe9691e8297894a30cae763899c1e5b8edefe4df5cb9dc"} Nov 21 20:08:58 crc kubenswrapper[4727]: I1121 20:08:58.817507 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qms5m" event={"ID":"7a12de68-3f7e-4465-8325-db0a17c35cf0","Type":"ContainerStarted","Data":"ab582c397ce957832102ea7ba8bb426c947302dc7274afd551905e71851c1431"} Nov 21 20:08:58 crc kubenswrapper[4727]: I1121 20:08:58.831743 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qrsz4" podStartSLOduration=124.831726577 podStartE2EDuration="2m4.831726577s" podCreationTimestamp="2025-11-21 20:06:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:08:58.802633198 +0000 UTC m=+143.988818242" watchObservedRunningTime="2025-11-21 20:08:58.831726577 +0000 UTC m=+144.017911621" Nov 21 20:08:58 crc kubenswrapper[4727]: I1121 20:08:58.856781 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-fj2k4" 
event={"ID":"dacc1ea4-7062-46ad-a784-70537e92dc51","Type":"ContainerStarted","Data":"590f893083f244aa1c0154143ec681283d6b0dbcdd1008ae35254125f20e9794"} Nov 21 20:08:58 crc kubenswrapper[4727]: I1121 20:08:58.891755 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lmjlx" event={"ID":"86100e7c-cbfb-46c4-812d-f1ad2eb51b11","Type":"ContainerStarted","Data":"d33f7c730935ce7dc8e5225279c827b43cecf559a31a44108100cd6f64e7bed7"} Nov 21 20:08:58 crc kubenswrapper[4727]: I1121 20:08:58.891803 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lmjlx" event={"ID":"86100e7c-cbfb-46c4-812d-f1ad2eb51b11","Type":"ContainerStarted","Data":"e383891371f8fabeec19c900a1e882a58b33eb2bc5d1b1b30165fd11940a84dc"} Nov 21 20:08:58 crc kubenswrapper[4727]: I1121 20:08:58.907494 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-7hlm6" event={"ID":"a606ebf1-1f2a-4c31-b2f8-43c13ef31c50","Type":"ContainerStarted","Data":"bca27ea44e662d0623fbd7e9e51d1b9d9f8fde4edcc90a8c1eb3875c6a229ad5"} Nov 21 20:08:58 crc kubenswrapper[4727]: I1121 20:08:58.908342 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 20:08:58 crc kubenswrapper[4727]: E1121 20:08:58.908686 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 20:08:59.408652997 +0000 UTC m=+144.594838041 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:08:58 crc kubenswrapper[4727]: I1121 20:08:58.908973 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tlqhq\" (UID: \"36c9326c-5a9b-4e19-a0a7-047289e45c01\") " pod="openshift-image-registry/image-registry-697d97f7c8-tlqhq" Nov 21 20:08:58 crc kubenswrapper[4727]: E1121 20:08:58.914980 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 20:08:59.414935001 +0000 UTC m=+144.601120215 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tlqhq" (UID: "36c9326c-5a9b-4e19-a0a7-047289e45c01") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:08:58 crc kubenswrapper[4727]: I1121 20:08:58.936915 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4tx6m" Nov 21 20:08:59 crc kubenswrapper[4727]: I1121 20:08:59.018862 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 20:08:59 crc kubenswrapper[4727]: E1121 20:08:59.021929 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 20:08:59.521907158 +0000 UTC m=+144.708092192 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:08:59 crc kubenswrapper[4727]: I1121 20:08:59.025793 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tlqhq\" (UID: \"36c9326c-5a9b-4e19-a0a7-047289e45c01\") " pod="openshift-image-registry/image-registry-697d97f7c8-tlqhq" Nov 21 20:08:59 crc kubenswrapper[4727]: I1121 20:08:59.032646 4727 patch_prober.go:28] interesting pod/router-default-5444994796-dw7dg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 21 20:08:59 crc kubenswrapper[4727]: [-]has-synced failed: reason withheld Nov 21 20:08:59 crc kubenswrapper[4727]: [+]process-running ok Nov 21 20:08:59 crc kubenswrapper[4727]: healthz check failed Nov 21 20:08:59 crc kubenswrapper[4727]: I1121 20:08:59.032727 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dw7dg" podUID="e3860251-af35-4f12-81ce-91855c94d8c8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 21 20:08:59 crc kubenswrapper[4727]: E1121 20:08:59.035112 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-21 20:08:59.535094537 +0000 UTC m=+144.721279581 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tlqhq" (UID: "36c9326c-5a9b-4e19-a0a7-047289e45c01") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:08:59 crc kubenswrapper[4727]: I1121 20:08:59.070623 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wmvqq" podStartSLOduration=124.070598125 podStartE2EDuration="2m4.070598125s" podCreationTimestamp="2025-11-21 20:06:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:08:59.049517023 +0000 UTC m=+144.235702077" watchObservedRunningTime="2025-11-21 20:08:59.070598125 +0000 UTC m=+144.256783169" Nov 21 20:08:59 crc kubenswrapper[4727]: I1121 20:08:59.126866 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 20:08:59 crc kubenswrapper[4727]: E1121 20:08:59.127704 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 20:08:59.627682718 +0000 UTC m=+144.813867772 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:08:59 crc kubenswrapper[4727]: I1121 20:08:59.230250 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tlqhq\" (UID: \"36c9326c-5a9b-4e19-a0a7-047289e45c01\") " pod="openshift-image-registry/image-registry-697d97f7c8-tlqhq" Nov 21 20:08:59 crc kubenswrapper[4727]: E1121 20:08:59.230714 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 20:08:59.730697348 +0000 UTC m=+144.916882392 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tlqhq" (UID: "36c9326c-5a9b-4e19-a0a7-047289e45c01") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:08:59 crc kubenswrapper[4727]: I1121 20:08:59.272747 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qwplk" podStartSLOduration=125.272720238 podStartE2EDuration="2m5.272720238s" podCreationTimestamp="2025-11-21 20:06:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:08:59.268604946 +0000 UTC m=+144.454789990" watchObservedRunningTime="2025-11-21 20:08:59.272720238 +0000 UTC m=+144.458905282" Nov 21 20:08:59 crc kubenswrapper[4727]: I1121 20:08:59.319820 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p2c9z" podStartSLOduration=125.319792077 podStartE2EDuration="2m5.319792077s" podCreationTimestamp="2025-11-21 20:06:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:08:59.31176449 +0000 UTC m=+144.497949534" watchObservedRunningTime="2025-11-21 20:08:59.319792077 +0000 UTC m=+144.505977121" Nov 21 20:08:59 crc kubenswrapper[4727]: I1121 20:08:59.331329 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 20:08:59 crc kubenswrapper[4727]: E1121 20:08:59.332199 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 20:08:59.832165181 +0000 UTC m=+145.018350225 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:08:59 crc kubenswrapper[4727]: I1121 20:08:59.332370 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tlqhq\" (UID: \"36c9326c-5a9b-4e19-a0a7-047289e45c01\") " pod="openshift-image-registry/image-registry-697d97f7c8-tlqhq" Nov 21 20:08:59 crc kubenswrapper[4727]: E1121 20:08:59.332741 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 20:08:59.832725218 +0000 UTC m=+145.018910262 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tlqhq" (UID: "36c9326c-5a9b-4e19-a0a7-047289e45c01") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:08:59 crc kubenswrapper[4727]: I1121 20:08:59.403662 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lqm45" podStartSLOduration=125.40364064 podStartE2EDuration="2m5.40364064s" podCreationTimestamp="2025-11-21 20:06:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:08:59.36226214 +0000 UTC m=+144.548447184" watchObservedRunningTime="2025-11-21 20:08:59.40364064 +0000 UTC m=+144.589825684" Nov 21 20:08:59 crc kubenswrapper[4727]: I1121 20:08:59.435322 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 20:08:59 crc kubenswrapper[4727]: E1121 20:08:59.435981 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 20:08:59.935930204 +0000 UTC m=+145.122115248 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:08:59 crc kubenswrapper[4727]: I1121 20:08:59.440508 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-nzpzh" podStartSLOduration=125.440484488 podStartE2EDuration="2m5.440484488s" podCreationTimestamp="2025-11-21 20:06:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:08:59.437593313 +0000 UTC m=+144.623778357" watchObservedRunningTime="2025-11-21 20:08:59.440484488 +0000 UTC m=+144.626669532" Nov 21 20:08:59 crc kubenswrapper[4727]: I1121 20:08:59.531655 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-l7mfn" podStartSLOduration=124.531632847 podStartE2EDuration="2m4.531632847s" podCreationTimestamp="2025-11-21 20:06:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:08:59.528615308 +0000 UTC m=+144.714800342" watchObservedRunningTime="2025-11-21 20:08:59.531632847 +0000 UTC m=+144.717817881" Nov 21 20:08:59 crc kubenswrapper[4727]: I1121 20:08:59.547806 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tlqhq\" (UID: 
\"36c9326c-5a9b-4e19-a0a7-047289e45c01\") " pod="openshift-image-registry/image-registry-697d97f7c8-tlqhq" Nov 21 20:08:59 crc kubenswrapper[4727]: E1121 20:08:59.548176 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 20:09:00.048163945 +0000 UTC m=+145.234348989 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tlqhq" (UID: "36c9326c-5a9b-4e19-a0a7-047289e45c01") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:08:59 crc kubenswrapper[4727]: I1121 20:08:59.648937 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 20:08:59 crc kubenswrapper[4727]: E1121 20:08:59.649853 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 20:09:00.149834944 +0000 UTC m=+145.336019988 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:08:59 crc kubenswrapper[4727]: I1121 20:08:59.753974 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tlqhq\" (UID: \"36c9326c-5a9b-4e19-a0a7-047289e45c01\") " pod="openshift-image-registry/image-registry-697d97f7c8-tlqhq" Nov 21 20:08:59 crc kubenswrapper[4727]: E1121 20:08:59.754405 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 20:09:00.254390599 +0000 UTC m=+145.440575643 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tlqhq" (UID: "36c9326c-5a9b-4e19-a0a7-047289e45c01") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:08:59 crc kubenswrapper[4727]: I1121 20:08:59.855683 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 20:08:59 crc kubenswrapper[4727]: E1121 20:08:59.856067 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 20:09:00.356051569 +0000 UTC m=+145.542236613 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:08:59 crc kubenswrapper[4727]: I1121 20:08:59.957977 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tlqhq\" (UID: \"36c9326c-5a9b-4e19-a0a7-047289e45c01\") " pod="openshift-image-registry/image-registry-697d97f7c8-tlqhq" Nov 21 20:08:59 crc kubenswrapper[4727]: E1121 20:08:59.958799 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 20:09:00.45878519 +0000 UTC m=+145.644970234 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tlqhq" (UID: "36c9326c-5a9b-4e19-a0a7-047289e45c01") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:08:59 crc kubenswrapper[4727]: I1121 20:08:59.980360 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-hjqgw" event={"ID":"208f96f5-b245-4b8d-96d0-5210189d0f13","Type":"ContainerStarted","Data":"b9abff5e9ff3316d3c228a90867bb837f8515e1715696a1ffea72f5168dc45c3"} Nov 21 20:08:59 crc kubenswrapper[4727]: I1121 20:08:59.990348 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vtwcr" event={"ID":"58a72b98-691c-4da3-a1df-5cdc793b9ff5","Type":"ContainerStarted","Data":"1e606506e8e0f338372ee49094fc36ee611cb7f3314d4c77058da0370bc669e2"} Nov 21 20:09:00 crc kubenswrapper[4727]: I1121 20:09:00.012026 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-97ctf" event={"ID":"e759f1df-d65f-4297-b178-697608983c6f","Type":"ContainerStarted","Data":"2b5372479b4c665b3bc85704e4f45e8c4f08ea96afb091c8d86e613b9a219de6"} Nov 21 20:09:00 crc kubenswrapper[4727]: I1121 20:09:00.018083 4727 patch_prober.go:28] interesting pod/router-default-5444994796-dw7dg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 21 20:09:00 crc kubenswrapper[4727]: [-]has-synced failed: reason withheld Nov 21 20:09:00 crc kubenswrapper[4727]: [+]process-running ok Nov 21 20:09:00 crc kubenswrapper[4727]: 
healthz check failed Nov 21 20:09:00 crc kubenswrapper[4727]: I1121 20:09:00.018129 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dw7dg" podUID="e3860251-af35-4f12-81ce-91855c94d8c8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 21 20:09:00 crc kubenswrapper[4727]: I1121 20:09:00.024259 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-g4rwm" event={"ID":"80d0d1a3-fc38-48aa-9adf-2f4a71e70c91","Type":"ContainerStarted","Data":"2c267f676893f26a8d3cb313c73bda88564550216536618bc9690780cffd89d1"} Nov 21 20:09:00 crc kubenswrapper[4727]: I1121 20:09:00.035570 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-fj2k4" podStartSLOduration=126.035547515 podStartE2EDuration="2m6.035547515s" podCreationTimestamp="2025-11-21 20:06:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:08:59.581836248 +0000 UTC m=+144.768021292" watchObservedRunningTime="2025-11-21 20:09:00.035547515 +0000 UTC m=+145.221732559" Nov 21 20:09:00 crc kubenswrapper[4727]: I1121 20:09:00.061009 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 20:09:00 crc kubenswrapper[4727]: E1121 20:09:00.062723 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-21 20:09:00.562699135 +0000 UTC m=+145.748884179 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:09:00 crc kubenswrapper[4727]: I1121 20:09:00.075007 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-42c64" event={"ID":"67e5b1f6-a350-46f7-8e5d-5e618ea5a2d7","Type":"ContainerStarted","Data":"12a767de6a4d902d5f430e66b3f8ae89f705292d6174a5bfe15267497e43d34c"} Nov 21 20:09:00 crc kubenswrapper[4727]: I1121 20:09:00.086225 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-l7mfn" event={"ID":"343653aa-5654-4622-aa3e-045685abb471","Type":"ContainerStarted","Data":"62d0320f0bae86eb17d194b8c92c7ed602c87e069d940c81a82c0bc11ea606b4"} Nov 21 20:09:00 crc kubenswrapper[4727]: I1121 20:09:00.116191 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-g4rwm" podStartSLOduration=7.116171553 podStartE2EDuration="7.116171553s" podCreationTimestamp="2025-11-21 20:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:09:00.108170317 +0000 UTC m=+145.294355361" watchObservedRunningTime="2025-11-21 20:09:00.116171553 +0000 UTC m=+145.302356597" Nov 21 20:09:00 crc kubenswrapper[4727]: I1121 20:09:00.116926 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-authentication-operator/authentication-operator-69f744f599-hjqgw" podStartSLOduration=126.116919975 podStartE2EDuration="2m6.116919975s" podCreationTimestamp="2025-11-21 20:06:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:09:00.060639865 +0000 UTC m=+145.246824929" watchObservedRunningTime="2025-11-21 20:09:00.116919975 +0000 UTC m=+145.303105019" Nov 21 20:09:00 crc kubenswrapper[4727]: I1121 20:09:00.142929 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ggxjh" event={"ID":"c900b720-71c1-41ac-bf6d-f554450c4d44","Type":"ContainerStarted","Data":"669ff9d0f6d6ac1ccd8c5886e123e1d6ffc8bdda5692735ab4a3c7e45fcb1892"} Nov 21 20:09:00 crc kubenswrapper[4727]: I1121 20:09:00.159222 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vtwcr" podStartSLOduration=126.159189522 podStartE2EDuration="2m6.159189522s" podCreationTimestamp="2025-11-21 20:06:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:09:00.142829219 +0000 UTC m=+145.329014263" watchObservedRunningTime="2025-11-21 20:09:00.159189522 +0000 UTC m=+145.345374566" Nov 21 20:09:00 crc kubenswrapper[4727]: I1121 20:09:00.163666 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-zqbhr" event={"ID":"1599f256-87d8-47f4-a6fb-6cea3b58b242","Type":"ContainerStarted","Data":"f7e1c2070c3fd44b9343c9313a06cef810fd449aab70f09882b5057b0a3f9255"} Nov 21 20:09:00 crc kubenswrapper[4727]: I1121 20:09:00.165256 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tlqhq\" (UID: \"36c9326c-5a9b-4e19-a0a7-047289e45c01\") " pod="openshift-image-registry/image-registry-697d97f7c8-tlqhq" Nov 21 20:09:00 crc kubenswrapper[4727]: E1121 20:09:00.168038 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 20:09:00.668019772 +0000 UTC m=+145.854204816 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tlqhq" (UID: "36c9326c-5a9b-4e19-a0a7-047289e45c01") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:09:00 crc kubenswrapper[4727]: I1121 20:09:00.173074 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-zqbhr" Nov 21 20:09:00 crc kubenswrapper[4727]: I1121 20:09:00.177876 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-42c64" podStartSLOduration=125.177846662 podStartE2EDuration="2m5.177846662s" podCreationTimestamp="2025-11-21 20:06:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:09:00.174107313 +0000 UTC m=+145.360292357" watchObservedRunningTime="2025-11-21 20:09:00.177846662 +0000 UTC m=+145.364031706" Nov 21 20:09:00 crc kubenswrapper[4727]: I1121 20:09:00.201698 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-console-operator/console-operator-58897d9998-xpjqs" event={"ID":"8a65a43f-bd75-428b-8709-73364367d020","Type":"ContainerStarted","Data":"e2d68cf7c5739ff80c3a304d73b486863c264e996a6e01b2e89f8c0a73f82390"} Nov 21 20:09:00 crc kubenswrapper[4727]: I1121 20:09:00.265202 4727 generic.go:334] "Generic (PLEG): container finished" podID="54882e90-f639-4a95-ae1b-80f6fdbdaf5f" containerID="f9edc777c5320e2a4071351f8498c968d6e1de3c76c38e91157bd0e461fccf23" exitCode=0 Nov 21 20:09:00 crc kubenswrapper[4727]: I1121 20:09:00.265273 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jbvs8" event={"ID":"54882e90-f639-4a95-ae1b-80f6fdbdaf5f","Type":"ContainerDied","Data":"f9edc777c5320e2a4071351f8498c968d6e1de3c76c38e91157bd0e461fccf23"} Nov 21 20:09:00 crc kubenswrapper[4727]: I1121 20:09:00.266636 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 20:09:00 crc kubenswrapper[4727]: E1121 20:09:00.267904 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 20:09:00.7678901 +0000 UTC m=+145.954075144 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:09:00 crc kubenswrapper[4727]: I1121 20:09:00.283642 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-zqbhr" podStartSLOduration=126.283618694 podStartE2EDuration="2m6.283618694s" podCreationTimestamp="2025-11-21 20:06:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:09:00.241403378 +0000 UTC m=+145.427588422" watchObservedRunningTime="2025-11-21 20:09:00.283618694 +0000 UTC m=+145.469803738" Nov 21 20:09:00 crc kubenswrapper[4727]: I1121 20:09:00.318178 4727 generic.go:334] "Generic (PLEG): container finished" podID="b49f037a-e7ec-45ef-846b-79ab549adb90" containerID="bf13d7f60bbea5eba3f66c883bdaf018d3691710123b677233fb60babc94f161" exitCode=0 Nov 21 20:09:00 crc kubenswrapper[4727]: I1121 20:09:00.318285 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-4855z" event={"ID":"b49f037a-e7ec-45ef-846b-79ab549adb90","Type":"ContainerDied","Data":"bf13d7f60bbea5eba3f66c883bdaf018d3691710123b677233fb60babc94f161"} Nov 21 20:09:00 crc kubenswrapper[4727]: I1121 20:09:00.328842 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ggxjh" podStartSLOduration=126.328826917 podStartE2EDuration="2m6.328826917s" podCreationTimestamp="2025-11-21 20:06:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:09:00.284628784 +0000 UTC m=+145.470813828" watchObservedRunningTime="2025-11-21 20:09:00.328826917 +0000 UTC m=+145.515011961" Nov 21 20:09:00 crc kubenswrapper[4727]: I1121 20:09:00.355250 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8vpg5" event={"ID":"82591a0f-a7cf-4745-9b64-637625f63662","Type":"ContainerStarted","Data":"7fddabeacd28a14ef3db865ff628fdf6d3fdb45612275052c5910f302b8ea4f1"} Nov 21 20:09:00 crc kubenswrapper[4727]: I1121 20:09:00.370097 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tlqhq\" (UID: \"36c9326c-5a9b-4e19-a0a7-047289e45c01\") " pod="openshift-image-registry/image-registry-697d97f7c8-tlqhq" Nov 21 20:09:00 crc kubenswrapper[4727]: E1121 20:09:00.371517 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 20:09:00.871484326 +0000 UTC m=+146.057669370 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tlqhq" (UID: "36c9326c-5a9b-4e19-a0a7-047289e45c01") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:09:00 crc kubenswrapper[4727]: I1121 20:09:00.373387 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pcvjk" event={"ID":"385471ac-e2f4-478f-a64e-8cb60d37cdd7","Type":"ContainerStarted","Data":"282bc81f9b05e90bf267251179932984ff775b089c9051c901fd447686e73533"} Nov 21 20:09:00 crc kubenswrapper[4727]: I1121 20:09:00.374563 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pcvjk" Nov 21 20:09:00 crc kubenswrapper[4727]: I1121 20:09:00.386042 4727 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-pcvjk container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Nov 21 20:09:00 crc kubenswrapper[4727]: I1121 20:09:00.386109 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pcvjk" podUID="385471ac-e2f4-478f-a64e-8cb60d37cdd7" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Nov 21 20:09:00 crc kubenswrapper[4727]: I1121 20:09:00.434143 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-lvhwh" 
event={"ID":"ac14f084-6765-4f2c-badc-b913eb4187b5","Type":"ContainerStarted","Data":"34d8af4b49e799159c0421a8856dd7fd267a5a68d69cd9ae8d7491a1d6b3d87d"} Nov 21 20:09:00 crc kubenswrapper[4727]: I1121 20:09:00.447300 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vhwbd" event={"ID":"3ed0ff62-2542-410b-ac29-904eb08bef16","Type":"ContainerStarted","Data":"5062a3d94732e05ee388d2c3e9129c72ed8ea9c32538b7820296a31f660c16eb"} Nov 21 20:09:00 crc kubenswrapper[4727]: I1121 20:09:00.462265 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-25c8d" event={"ID":"5191a87d-ec94-4c7f-95eb-5898535d524b","Type":"ContainerStarted","Data":"9a791e22351b251a694a19eb5f5b59d98b606eaf8c43320264355f569360a8eb"} Nov 21 20:09:00 crc kubenswrapper[4727]: I1121 20:09:00.472251 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 20:09:00 crc kubenswrapper[4727]: E1121 20:09:00.473846 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 20:09:00.973827256 +0000 UTC m=+146.160012300 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:09:00 crc kubenswrapper[4727]: I1121 20:09:00.478556 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vhwbd" podStartSLOduration=126.478531534 podStartE2EDuration="2m6.478531534s" podCreationTimestamp="2025-11-21 20:06:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:09:00.475274898 +0000 UTC m=+145.661459942" watchObservedRunningTime="2025-11-21 20:09:00.478531534 +0000 UTC m=+145.664716578" Nov 21 20:09:00 crc kubenswrapper[4727]: I1121 20:09:00.488204 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pcvjk" podStartSLOduration=125.488160418 podStartE2EDuration="2m5.488160418s" podCreationTimestamp="2025-11-21 20:06:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:09:00.400250334 +0000 UTC m=+145.586435368" watchObservedRunningTime="2025-11-21 20:09:00.488160418 +0000 UTC m=+145.674345492" Nov 21 20:09:00 crc kubenswrapper[4727]: I1121 20:09:00.514641 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-25c8d" podStartSLOduration=126.514613719 podStartE2EDuration="2m6.514613719s" podCreationTimestamp="2025-11-21 20:06:54 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:09:00.509871869 +0000 UTC m=+145.696056913" watchObservedRunningTime="2025-11-21 20:09:00.514613719 +0000 UTC m=+145.700798763" Nov 21 20:09:00 crc kubenswrapper[4727]: I1121 20:09:00.524037 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cpgqd" event={"ID":"a931e936-fb41-4b5a-b2dd-506cd7cec66c","Type":"ContainerStarted","Data":"907fcd1a465be74b49436b3a2e102e5b348d589b7b35c1b000b681009215434a"} Nov 21 20:09:00 crc kubenswrapper[4727]: I1121 20:09:00.524562 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cpgqd" Nov 21 20:09:00 crc kubenswrapper[4727]: I1121 20:09:00.536698 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395920-d94jg" event={"ID":"d9f7a69e-e7d1-4048-8263-52cfefbc90d5","Type":"ContainerStarted","Data":"5a3e523b6cd9a2cb7bc0497a09157dcb8ebd0e88e784e1b16c41e9322d2ce3af"} Nov 21 20:09:00 crc kubenswrapper[4727]: I1121 20:09:00.566061 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cpgqd" podStartSLOduration=125.566041367 podStartE2EDuration="2m5.566041367s" podCreationTimestamp="2025-11-21 20:06:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:09:00.562575704 +0000 UTC m=+145.748760748" watchObservedRunningTime="2025-11-21 20:09:00.566041367 +0000 UTC m=+145.752226411" Nov 21 20:09:00 crc kubenswrapper[4727]: I1121 20:09:00.578141 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tlqhq\" (UID: \"36c9326c-5a9b-4e19-a0a7-047289e45c01\") " pod="openshift-image-registry/image-registry-697d97f7c8-tlqhq" Nov 21 20:09:00 crc kubenswrapper[4727]: E1121 20:09:00.581556 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 20:09:01.081536054 +0000 UTC m=+146.267721318 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tlqhq" (UID: "36c9326c-5a9b-4e19-a0a7-047289e45c01") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:09:00 crc kubenswrapper[4727]: I1121 20:09:00.582349 4727 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-cpgqd container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:5443/healthz\": dial tcp 10.217.0.25:5443: connect: connection refused" start-of-body= Nov 21 20:09:00 crc kubenswrapper[4727]: I1121 20:09:00.582435 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cpgqd" podUID="a931e936-fb41-4b5a-b2dd-506cd7cec66c" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.25:5443/healthz\": dial tcp 10.217.0.25:5443: connect: connection refused" Nov 21 20:09:00 crc kubenswrapper[4727]: I1121 20:09:00.597395 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-98qwb" event={"ID":"0b0c5ddc-0891-4a5f-bf5d-6ab1d9dc0fbd","Type":"ContainerStarted","Data":"59ffee276c62b3e5cdf14ea2ad634fb5cf0019d07d29f8721d08f8d1d51db876"} Nov 21 20:09:00 crc kubenswrapper[4727]: I1121 20:09:00.616092 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wmvqq" event={"ID":"a175fd16-a4b3-4df9-88f7-3110c4d6c40f","Type":"ContainerStarted","Data":"0c96a0fa66f838e7df92f5fbd2248dbda22143be136345c69fb766be7c066869"} Nov 21 20:09:00 crc kubenswrapper[4727]: I1121 20:09:00.626626 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29395920-d94jg" podStartSLOduration=126.626615793 podStartE2EDuration="2m6.626615793s" podCreationTimestamp="2025-11-21 20:06:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:09:00.623687097 +0000 UTC m=+145.809872131" watchObservedRunningTime="2025-11-21 20:09:00.626615793 +0000 UTC m=+145.812800837" Nov 21 20:09:00 crc kubenswrapper[4727]: I1121 20:09:00.651394 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-98qwb" podStartSLOduration=126.651372864 podStartE2EDuration="2m6.651372864s" podCreationTimestamp="2025-11-21 20:06:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:09:00.645294714 +0000 UTC m=+145.831479758" watchObservedRunningTime="2025-11-21 20:09:00.651372864 +0000 UTC m=+145.837557898" Nov 21 20:09:00 crc kubenswrapper[4727]: I1121 20:09:00.663690 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-zkhz9" 
event={"ID":"57aff7f6-8b72-4491-b670-57c1c48e93e0","Type":"ContainerStarted","Data":"59208fbbb9c11da3f2ec92951d09b9c0002fcb313b7cb85dfbd49a1348a2e911"} Nov 21 20:09:00 crc kubenswrapper[4727]: I1121 20:09:00.679996 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 20:09:00 crc kubenswrapper[4727]: E1121 20:09:00.680635 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 20:09:01.180621727 +0000 UTC m=+146.366806771 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:09:00 crc kubenswrapper[4727]: I1121 20:09:00.693623 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6sphz" event={"ID":"19075424-77c6-47ff-93fa-d54902e69f7c","Type":"ContainerStarted","Data":"98f74677d770776a27368e90ae955a0b129e1b4266b502b2178473f5f3c0b481"} Nov 21 20:09:00 crc kubenswrapper[4727]: I1121 20:09:00.704047 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-zkhz9" podStartSLOduration=6.704032647 podStartE2EDuration="6.704032647s" 
podCreationTimestamp="2025-11-21 20:08:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:09:00.702468161 +0000 UTC m=+145.888653205" watchObservedRunningTime="2025-11-21 20:09:00.704032647 +0000 UTC m=+145.890217691" Nov 21 20:09:00 crc kubenswrapper[4727]: I1121 20:09:00.771048 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pv5wp" event={"ID":"200ac8c1-4bf1-4356-8091-9279fc08523f","Type":"ContainerStarted","Data":"33e690bc8fa5200bd4358fa8bb537295e60e2025b8baee0c8aface1563d719ff"} Nov 21 20:09:00 crc kubenswrapper[4727]: I1121 20:09:00.775086 4727 patch_prober.go:28] interesting pod/downloads-7954f5f757-nzpzh container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Nov 21 20:09:00 crc kubenswrapper[4727]: I1121 20:09:00.775125 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-nzpzh" podUID="3f3303e9-97ef-4e9b-9dc0-076066682c43" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Nov 21 20:09:00 crc kubenswrapper[4727]: I1121 20:09:00.784860 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tlqhq\" (UID: \"36c9326c-5a9b-4e19-a0a7-047289e45c01\") " pod="openshift-image-registry/image-registry-697d97f7c8-tlqhq" Nov 21 20:09:00 crc kubenswrapper[4727]: E1121 20:09:00.798948 4727 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 20:09:01.298917037 +0000 UTC m=+146.485102081 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tlqhq" (UID: "36c9326c-5a9b-4e19-a0a7-047289e45c01") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:09:00 crc kubenswrapper[4727]: I1121 20:09:00.832435 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pv5wp" podStartSLOduration=126.832405635 podStartE2EDuration="2m6.832405635s" podCreationTimestamp="2025-11-21 20:06:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:09:00.820592806 +0000 UTC m=+146.006777850" watchObservedRunningTime="2025-11-21 20:09:00.832405635 +0000 UTC m=+146.018590679" Nov 21 20:09:00 crc kubenswrapper[4727]: I1121 20:09:00.835064 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-hx72f" Nov 21 20:09:00 crc kubenswrapper[4727]: I1121 20:09:00.887356 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 20:09:00 crc kubenswrapper[4727]: E1121 20:09:00.889196 4727 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 20:09:01.389169329 +0000 UTC m=+146.575354453 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:09:01 crc kubenswrapper[4727]: I1121 20:09:00.992671 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tlqhq\" (UID: \"36c9326c-5a9b-4e19-a0a7-047289e45c01\") " pod="openshift-image-registry/image-registry-697d97f7c8-tlqhq" Nov 21 20:09:01 crc kubenswrapper[4727]: E1121 20:09:00.993301 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 20:09:01.493289282 +0000 UTC m=+146.679474326 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tlqhq" (UID: "36c9326c-5a9b-4e19-a0a7-047289e45c01") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:09:01 crc kubenswrapper[4727]: I1121 20:09:01.016269 4727 patch_prober.go:28] interesting pod/router-default-5444994796-dw7dg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 21 20:09:01 crc kubenswrapper[4727]: [-]has-synced failed: reason withheld Nov 21 20:09:01 crc kubenswrapper[4727]: [+]process-running ok Nov 21 20:09:01 crc kubenswrapper[4727]: healthz check failed Nov 21 20:09:01 crc kubenswrapper[4727]: I1121 20:09:01.016320 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dw7dg" podUID="e3860251-af35-4f12-81ce-91855c94d8c8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 21 20:09:01 crc kubenswrapper[4727]: I1121 20:09:01.098544 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 20:09:01 crc kubenswrapper[4727]: E1121 20:09:01.098975 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-21 20:09:01.598937639 +0000 UTC m=+146.785122683 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:09:01 crc kubenswrapper[4727]: I1121 20:09:01.170478 4727 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-zqbhr container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.9:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 21 20:09:01 crc kubenswrapper[4727]: I1121 20:09:01.170544 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-zqbhr" podUID="1599f256-87d8-47f4-a6fb-6cea3b58b242" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.9:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 21 20:09:01 crc kubenswrapper[4727]: I1121 20:09:01.200371 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tlqhq\" (UID: \"36c9326c-5a9b-4e19-a0a7-047289e45c01\") " pod="openshift-image-registry/image-registry-697d97f7c8-tlqhq" Nov 21 20:09:01 crc kubenswrapper[4727]: E1121 20:09:01.200770 4727 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 20:09:01.700759133 +0000 UTC m=+146.886944177 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tlqhq" (UID: "36c9326c-5a9b-4e19-a0a7-047289e45c01") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:09:01 crc kubenswrapper[4727]: I1121 20:09:01.302494 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 20:09:01 crc kubenswrapper[4727]: E1121 20:09:01.302854 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 20:09:01.802837355 +0000 UTC m=+146.989022399 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:09:01 crc kubenswrapper[4727]: I1121 20:09:01.407487 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tlqhq\" (UID: \"36c9326c-5a9b-4e19-a0a7-047289e45c01\") " pod="openshift-image-registry/image-registry-697d97f7c8-tlqhq" Nov 21 20:09:01 crc kubenswrapper[4727]: E1121 20:09:01.408095 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 20:09:01.90808445 +0000 UTC m=+147.094269494 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tlqhq" (UID: "36c9326c-5a9b-4e19-a0a7-047289e45c01") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:09:01 crc kubenswrapper[4727]: I1121 20:09:01.455631 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" Nov 21 20:09:01 crc kubenswrapper[4727]: I1121 20:09:01.509624 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 20:09:01 crc kubenswrapper[4727]: E1121 20:09:01.510405 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 20:09:02.010390418 +0000 UTC m=+147.196575462 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:09:01 crc kubenswrapper[4727]: I1121 20:09:01.613169 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tlqhq\" (UID: \"36c9326c-5a9b-4e19-a0a7-047289e45c01\") " pod="openshift-image-registry/image-registry-697d97f7c8-tlqhq" Nov 21 20:09:01 crc kubenswrapper[4727]: E1121 20:09:01.613580 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 20:09:02.113565403 +0000 UTC m=+147.299750447 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tlqhq" (UID: "36c9326c-5a9b-4e19-a0a7-047289e45c01") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:09:01 crc kubenswrapper[4727]: I1121 20:09:01.720744 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 20:09:01 crc kubenswrapper[4727]: E1121 20:09:01.724110 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 20:09:02.224061203 +0000 UTC m=+147.410246247 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:09:01 crc kubenswrapper[4727]: I1121 20:09:01.785037 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8vpg5" event={"ID":"82591a0f-a7cf-4745-9b64-637625f63662","Type":"ContainerStarted","Data":"95202e911d68cb2ec6ed64023970bdff2b7e3f2f32158ee34c35cacaa2a0158b"} Nov 21 20:09:01 crc kubenswrapper[4727]: I1121 20:09:01.785568 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8vpg5" event={"ID":"82591a0f-a7cf-4745-9b64-637625f63662","Type":"ContainerStarted","Data":"eb452d4f6f34367f9a01f7ee7c1bc6f600fe78ffff1d44329fc43cdd7e904eb8"} Nov 21 20:09:01 crc kubenswrapper[4727]: I1121 20:09:01.788301 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vbzh7" event={"ID":"cafc19d0-a511-4cea-bd92-c28a18224e9f","Type":"ContainerStarted","Data":"687924aa7f106f77df43d9b6f543bfaecb1f309119ed11a915874d5ace926c86"} Nov 21 20:09:01 crc kubenswrapper[4727]: I1121 20:09:01.788441 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vbzh7" Nov 21 20:09:01 crc kubenswrapper[4727]: I1121 20:09:01.795399 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-7hlm6" 
event={"ID":"a606ebf1-1f2a-4c31-b2f8-43c13ef31c50","Type":"ContainerStarted","Data":"c57d370eecd788c2ab4c12a2ccd6674030233aab7286dd12da6f64b30ec711e5"} Nov 21 20:09:01 crc kubenswrapper[4727]: I1121 20:09:01.802218 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-25c8d" event={"ID":"5191a87d-ec94-4c7f-95eb-5898535d524b","Type":"ContainerStarted","Data":"43eac65b20288878ca53d263a40a9d79bc356a6aa07619ce1c34e1081149bc33"} Nov 21 20:09:01 crc kubenswrapper[4727]: I1121 20:09:01.812514 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-4855z" event={"ID":"b49f037a-e7ec-45ef-846b-79ab549adb90","Type":"ContainerStarted","Data":"e5aa355641ecee45ca99dcdf1df04d506d1c94a596acdc79b4664eac3d342324"} Nov 21 20:09:01 crc kubenswrapper[4727]: I1121 20:09:01.817839 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-jq87g" event={"ID":"4fffc06f-c43a-467f-9159-efdae68bddfb","Type":"ContainerStarted","Data":"6241502d42a20fab86390df8662c60d422e660074c734df56178c2203d92f908"} Nov 21 20:09:01 crc kubenswrapper[4727]: I1121 20:09:01.817875 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-jq87g" event={"ID":"4fffc06f-c43a-467f-9159-efdae68bddfb","Type":"ContainerStarted","Data":"35469efcf89b1380ebf40e5caa4f0f94edcce805006df0c30fe3a96dc155b0aa"} Nov 21 20:09:01 crc kubenswrapper[4727]: I1121 20:09:01.822206 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tlqhq\" (UID: \"36c9326c-5a9b-4e19-a0a7-047289e45c01\") " pod="openshift-image-registry/image-registry-697d97f7c8-tlqhq" Nov 21 20:09:01 crc kubenswrapper[4727]: 
E1121 20:09:01.823533 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 20:09:02.323513447 +0000 UTC m=+147.509698481 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tlqhq" (UID: "36c9326c-5a9b-4e19-a0a7-047289e45c01") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:09:01 crc kubenswrapper[4727]: I1121 20:09:01.825095 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6sphz" event={"ID":"19075424-77c6-47ff-93fa-d54902e69f7c","Type":"ContainerStarted","Data":"6de20c0a2b4a1b81763c6314fff94e27fbbd793a268afd20d14d6436b9772e93"} Nov 21 20:09:01 crc kubenswrapper[4727]: I1121 20:09:01.829924 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lmjlx" event={"ID":"86100e7c-cbfb-46c4-812d-f1ad2eb51b11","Type":"ContainerStarted","Data":"afac58fe7b648fb91ca940f691c7b041d8db86f2d613324e99a01d6c6bebb618"} Nov 21 20:09:01 crc kubenswrapper[4727]: I1121 20:09:01.835132 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-xpjqs" event={"ID":"8a65a43f-bd75-428b-8709-73364367d020","Type":"ContainerStarted","Data":"ceb5a22182e557ac3341f8fa70a4d4d02397d7ed3db2f5f9f80e302c7c06d96f"} Nov 21 20:09:01 crc kubenswrapper[4727]: I1121 20:09:01.835727 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-xpjqs" Nov 21 
20:09:01 crc kubenswrapper[4727]: I1121 20:09:01.837435 4727 patch_prober.go:28] interesting pod/console-operator-58897d9998-xpjqs container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.41:8443/readyz\": dial tcp 10.217.0.41:8443: connect: connection refused" start-of-body= Nov 21 20:09:01 crc kubenswrapper[4727]: I1121 20:09:01.837501 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-xpjqs" podUID="8a65a43f-bd75-428b-8709-73364367d020" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/readyz\": dial tcp 10.217.0.41:8443: connect: connection refused" Nov 21 20:09:01 crc kubenswrapper[4727]: I1121 20:09:01.839684 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-97ctf" event={"ID":"e759f1df-d65f-4297-b178-697608983c6f","Type":"ContainerStarted","Data":"a61878b95bddf0330abd8b5b485290ed3d2f1814ccb0e787bb6ce3c9014e094f"} Nov 21 20:09:01 crc kubenswrapper[4727]: I1121 20:09:01.839716 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-97ctf" event={"ID":"e759f1df-d65f-4297-b178-697608983c6f","Type":"ContainerStarted","Data":"91bac84a7fbfefb638e552bc4807b5f9c994a86b7f67f68047eb63e2adc14e83"} Nov 21 20:09:01 crc kubenswrapper[4727]: I1121 20:09:01.844690 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fn2j6" event={"ID":"42408e00-5bcd-4405-82fe-851c9b62b149","Type":"ContainerStarted","Data":"de8be5db244198a1c714093616a8010bf778e13e40231fc9b8b2ae0fc1541d58"} Nov 21 20:09:01 crc kubenswrapper[4727]: I1121 20:09:01.844840 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8vpg5" podStartSLOduration=127.844828976 
podStartE2EDuration="2m7.844828976s" podCreationTimestamp="2025-11-21 20:06:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:09:01.841795456 +0000 UTC m=+147.027980500" watchObservedRunningTime="2025-11-21 20:09:01.844828976 +0000 UTC m=+147.031014020" Nov 21 20:09:01 crc kubenswrapper[4727]: I1121 20:09:01.845331 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-fn2j6" Nov 21 20:09:01 crc kubenswrapper[4727]: I1121 20:09:01.850696 4727 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-fn2j6 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Nov 21 20:09:01 crc kubenswrapper[4727]: I1121 20:09:01.850750 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-fn2j6" podUID="42408e00-5bcd-4405-82fe-851c9b62b149" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused" Nov 21 20:09:01 crc kubenswrapper[4727]: I1121 20:09:01.855054 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pv5wp" event={"ID":"200ac8c1-4bf1-4356-8091-9279fc08523f","Type":"ContainerStarted","Data":"c8d2c84ff7480b6b23867243e73a862f3dbe84d8b3d7a00e4aea8736cb3cf99f"} Nov 21 20:09:01 crc kubenswrapper[4727]: I1121 20:09:01.866312 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-lvhwh" event={"ID":"ac14f084-6765-4f2c-badc-b913eb4187b5","Type":"ContainerStarted","Data":"f58b2d11dc31d195fbd916999f5584d212cbc8b33a3ee9a5bfadcf4d534d49b5"} Nov 21 20:09:01 crc 
kubenswrapper[4727]: I1121 20:09:01.866353 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-lvhwh" event={"ID":"ac14f084-6765-4f2c-badc-b913eb4187b5","Type":"ContainerStarted","Data":"652e4c00741172296e48489773f71bab91c335e1ed6a1aab11ecab7ff6ce4d87"} Nov 21 20:09:01 crc kubenswrapper[4727]: I1121 20:09:01.867138 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-lvhwh" Nov 21 20:09:01 crc kubenswrapper[4727]: I1121 20:09:01.873290 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jbvs8" event={"ID":"54882e90-f639-4a95-ae1b-80f6fdbdaf5f","Type":"ContainerStarted","Data":"2377b2ccbbecbbf6b207ad6bff6f959f9a2dd58f52fcf8c6b43c74cec82f9559"} Nov 21 20:09:01 crc kubenswrapper[4727]: I1121 20:09:01.877148 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qms5m" event={"ID":"7a12de68-3f7e-4465-8325-db0a17c35cf0","Type":"ContainerStarted","Data":"0a250834f435da69f654e57fd3a834e0591e3c420a749078948be26a4f06a244"} Nov 21 20:09:01 crc kubenswrapper[4727]: I1121 20:09:01.878242 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vbzh7" podStartSLOduration=127.878225911 podStartE2EDuration="2m7.878225911s" podCreationTimestamp="2025-11-21 20:06:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:09:01.877765368 +0000 UTC m=+147.063950412" watchObservedRunningTime="2025-11-21 20:09:01.878225911 +0000 UTC m=+147.064410955" Nov 21 20:09:01 crc kubenswrapper[4727]: I1121 20:09:01.882385 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-zkhz9" 
event={"ID":"57aff7f6-8b72-4491-b670-57c1c48e93e0","Type":"ContainerStarted","Data":"00f9fbfe3ce7233495d24a66a48968d7dae5cc727f696fcb096c56b882c39c20"} Nov 21 20:09:01 crc kubenswrapper[4727]: I1121 20:09:01.885279 4727 patch_prober.go:28] interesting pod/downloads-7954f5f757-nzpzh container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Nov 21 20:09:01 crc kubenswrapper[4727]: I1121 20:09:01.885344 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-nzpzh" podUID="3f3303e9-97ef-4e9b-9dc0-076066682c43" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Nov 21 20:09:01 crc kubenswrapper[4727]: I1121 20:09:01.890649 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pcvjk" Nov 21 20:09:01 crc kubenswrapper[4727]: I1121 20:09:01.893668 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-zqbhr" Nov 21 20:09:01 crc kubenswrapper[4727]: I1121 20:09:01.911361 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-xpjqs" podStartSLOduration=127.911337498 podStartE2EDuration="2m7.911337498s" podCreationTimestamp="2025-11-21 20:06:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:09:01.909440332 +0000 UTC m=+147.095625376" watchObservedRunningTime="2025-11-21 20:09:01.911337498 +0000 UTC m=+147.097522542" Nov 21 20:09:01 crc kubenswrapper[4727]: I1121 20:09:01.924673 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 20:09:01 crc kubenswrapper[4727]: E1121 20:09:01.944096 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 20:09:02.444055784 +0000 UTC m=+147.630240828 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:09:02 crc kubenswrapper[4727]: I1121 20:09:02.020152 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-jq87g" podStartSLOduration=128.020128488 podStartE2EDuration="2m8.020128488s" podCreationTimestamp="2025-11-21 20:06:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:09:01.9656312 +0000 UTC m=+147.151816244" watchObservedRunningTime="2025-11-21 20:09:02.020128488 +0000 UTC m=+147.206313532" Nov 21 20:09:02 crc kubenswrapper[4727]: I1121 20:09:02.027429 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-tlqhq\" (UID: \"36c9326c-5a9b-4e19-a0a7-047289e45c01\") " pod="openshift-image-registry/image-registry-697d97f7c8-tlqhq" Nov 21 20:09:02 crc kubenswrapper[4727]: E1121 20:09:02.027999 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 20:09:02.52798266 +0000 UTC m=+147.714167704 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tlqhq" (UID: "36c9326c-5a9b-4e19-a0a7-047289e45c01") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:09:02 crc kubenswrapper[4727]: I1121 20:09:02.032257 4727 patch_prober.go:28] interesting pod/router-default-5444994796-dw7dg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 21 20:09:02 crc kubenswrapper[4727]: [-]has-synced failed: reason withheld Nov 21 20:09:02 crc kubenswrapper[4727]: [+]process-running ok Nov 21 20:09:02 crc kubenswrapper[4727]: healthz check failed Nov 21 20:09:02 crc kubenswrapper[4727]: I1121 20:09:02.032318 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dw7dg" podUID="e3860251-af35-4f12-81ce-91855c94d8c8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 21 20:09:02 crc kubenswrapper[4727]: I1121 20:09:02.099739 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-97ctf" 
podStartSLOduration=128.099710876 podStartE2EDuration="2m8.099710876s" podCreationTimestamp="2025-11-21 20:06:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:09:02.05918447 +0000 UTC m=+147.245369514" watchObservedRunningTime="2025-11-21 20:09:02.099710876 +0000 UTC m=+147.285895920" Nov 21 20:09:02 crc kubenswrapper[4727]: I1121 20:09:02.102845 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-7hlm6" podStartSLOduration=128.102838868 podStartE2EDuration="2m8.102838868s" podCreationTimestamp="2025-11-21 20:06:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:09:02.102313003 +0000 UTC m=+147.288498037" watchObservedRunningTime="2025-11-21 20:09:02.102838868 +0000 UTC m=+147.289023912" Nov 21 20:09:02 crc kubenswrapper[4727]: I1121 20:09:02.128897 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 20:09:02 crc kubenswrapper[4727]: E1121 20:09:02.129560 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 20:09:02.629545996 +0000 UTC m=+147.815731040 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:09:02 crc kubenswrapper[4727]: I1121 20:09:02.131707 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lmjlx" podStartSLOduration=128.13169301 podStartE2EDuration="2m8.13169301s" podCreationTimestamp="2025-11-21 20:06:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:09:02.129927128 +0000 UTC m=+147.316112172" watchObservedRunningTime="2025-11-21 20:09:02.13169301 +0000 UTC m=+147.317878054" Nov 21 20:09:02 crc kubenswrapper[4727]: I1121 20:09:02.185651 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-lvhwh" podStartSLOduration=9.185633371 podStartE2EDuration="9.185633371s" podCreationTimestamp="2025-11-21 20:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:09:02.183977442 +0000 UTC m=+147.370162486" watchObservedRunningTime="2025-11-21 20:09:02.185633371 +0000 UTC m=+147.371818415" Nov 21 20:09:02 crc kubenswrapper[4727]: I1121 20:09:02.230540 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tlqhq\" (UID: 
\"36c9326c-5a9b-4e19-a0a7-047289e45c01\") " pod="openshift-image-registry/image-registry-697d97f7c8-tlqhq" Nov 21 20:09:02 crc kubenswrapper[4727]: E1121 20:09:02.230858 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 20:09:02.730845365 +0000 UTC m=+147.917030409 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tlqhq" (UID: "36c9326c-5a9b-4e19-a0a7-047289e45c01") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:09:02 crc kubenswrapper[4727]: I1121 20:09:02.256854 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-cpgqd" Nov 21 20:09:02 crc kubenswrapper[4727]: I1121 20:09:02.288878 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jbvs8" podStartSLOduration=127.288861877 podStartE2EDuration="2m7.288861877s" podCreationTimestamp="2025-11-21 20:06:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:09:02.287071334 +0000 UTC m=+147.473256378" watchObservedRunningTime="2025-11-21 20:09:02.288861877 +0000 UTC m=+147.475046921" Nov 21 20:09:02 crc kubenswrapper[4727]: I1121 20:09:02.313346 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qms5m" podStartSLOduration=128.313317559 
podStartE2EDuration="2m8.313317559s" podCreationTimestamp="2025-11-21 20:06:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:09:02.310370291 +0000 UTC m=+147.496555335" watchObservedRunningTime="2025-11-21 20:09:02.313317559 +0000 UTC m=+147.499502603" Nov 21 20:09:02 crc kubenswrapper[4727]: I1121 20:09:02.331206 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 20:09:02 crc kubenswrapper[4727]: E1121 20:09:02.331356 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 20:09:02.831328989 +0000 UTC m=+148.017514033 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:09:02 crc kubenswrapper[4727]: I1121 20:09:02.331575 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tlqhq\" (UID: \"36c9326c-5a9b-4e19-a0a7-047289e45c01\") " pod="openshift-image-registry/image-registry-697d97f7c8-tlqhq" Nov 21 20:09:02 crc kubenswrapper[4727]: E1121 20:09:02.331894 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 20:09:02.831886706 +0000 UTC m=+148.018071750 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tlqhq" (UID: "36c9326c-5a9b-4e19-a0a7-047289e45c01") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:09:02 crc kubenswrapper[4727]: I1121 20:09:02.432424 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 20:09:02 crc kubenswrapper[4727]: E1121 20:09:02.432608 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 20:09:02.932580277 +0000 UTC m=+148.118765321 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:09:02 crc kubenswrapper[4727]: I1121 20:09:02.432670 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 20:09:02 crc kubenswrapper[4727]: I1121 20:09:02.432718 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 20:09:02 crc kubenswrapper[4727]: I1121 20:09:02.432747 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 20:09:02 crc kubenswrapper[4727]: I1121 20:09:02.432770 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 20:09:02 crc kubenswrapper[4727]: I1121 20:09:02.432795 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tlqhq\" (UID: \"36c9326c-5a9b-4e19-a0a7-047289e45c01\") " pod="openshift-image-registry/image-registry-697d97f7c8-tlqhq" Nov 21 20:09:02 crc kubenswrapper[4727]: E1121 20:09:02.433278 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 20:09:02.933262327 +0000 UTC m=+148.119447371 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tlqhq" (UID: "36c9326c-5a9b-4e19-a0a7-047289e45c01") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:09:02 crc kubenswrapper[4727]: I1121 20:09:02.433898 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 20:09:02 crc kubenswrapper[4727]: I1121 20:09:02.450631 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 20:09:02 crc kubenswrapper[4727]: I1121 20:09:02.451104 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 20:09:02 crc kubenswrapper[4727]: I1121 20:09:02.451677 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-fn2j6" podStartSLOduration=127.451656739 
podStartE2EDuration="2m7.451656739s" podCreationTimestamp="2025-11-21 20:06:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:09:02.389325131 +0000 UTC m=+147.575510175" watchObservedRunningTime="2025-11-21 20:09:02.451656739 +0000 UTC m=+147.637841783" Nov 21 20:09:02 crc kubenswrapper[4727]: I1121 20:09:02.453119 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 20:09:02 crc kubenswrapper[4727]: I1121 20:09:02.521383 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 20:09:02 crc kubenswrapper[4727]: I1121 20:09:02.533971 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 20:09:02 crc kubenswrapper[4727]: E1121 20:09:02.534153 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 20:09:03.034122593 +0000 UTC m=+148.220307637 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:09:02 crc kubenswrapper[4727]: I1121 20:09:02.534192 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tlqhq\" (UID: \"36c9326c-5a9b-4e19-a0a7-047289e45c01\") " pod="openshift-image-registry/image-registry-697d97f7c8-tlqhq" Nov 21 20:09:02 crc kubenswrapper[4727]: E1121 20:09:02.534536 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 20:09:03.034524945 +0000 UTC m=+148.220709979 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tlqhq" (UID: "36c9326c-5a9b-4e19-a0a7-047289e45c01") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:09:02 crc kubenswrapper[4727]: I1121 20:09:02.635086 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 20:09:02 crc kubenswrapper[4727]: E1121 20:09:02.635247 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 20:09:03.135222215 +0000 UTC m=+148.321407259 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:09:02 crc kubenswrapper[4727]: I1121 20:09:02.635312 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tlqhq\" (UID: \"36c9326c-5a9b-4e19-a0a7-047289e45c01\") " pod="openshift-image-registry/image-registry-697d97f7c8-tlqhq" Nov 21 20:09:02 crc kubenswrapper[4727]: E1121 20:09:02.635689 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 20:09:03.135673899 +0000 UTC m=+148.321859033 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tlqhq" (UID: "36c9326c-5a9b-4e19-a0a7-047289e45c01") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:09:02 crc kubenswrapper[4727]: I1121 20:09:02.718150 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 20:09:02 crc kubenswrapper[4727]: I1121 20:09:02.736768 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 20:09:02 crc kubenswrapper[4727]: I1121 20:09:02.737073 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 20:09:02 crc kubenswrapper[4727]: E1121 20:09:02.737251 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 20:09:03.237225055 +0000 UTC m=+148.423410089 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:09:02 crc kubenswrapper[4727]: I1121 20:09:02.737378 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tlqhq\" (UID: \"36c9326c-5a9b-4e19-a0a7-047289e45c01\") " pod="openshift-image-registry/image-registry-697d97f7c8-tlqhq" Nov 21 20:09:02 crc kubenswrapper[4727]: E1121 20:09:02.737696 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 20:09:03.237688989 +0000 UTC m=+148.423874033 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tlqhq" (UID: "36c9326c-5a9b-4e19-a0a7-047289e45c01") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:09:02 crc kubenswrapper[4727]: I1121 20:09:02.840486 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 20:09:02 crc kubenswrapper[4727]: E1121 20:09:02.841060 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 20:09:03.341046168 +0000 UTC m=+148.527231212 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:09:02 crc kubenswrapper[4727]: I1121 20:09:02.915304 4727 generic.go:334] "Generic (PLEG): container finished" podID="d9f7a69e-e7d1-4048-8263-52cfefbc90d5" containerID="5a3e523b6cd9a2cb7bc0497a09157dcb8ebd0e88e784e1b16c41e9322d2ce3af" exitCode=0 Nov 21 20:09:02 crc kubenswrapper[4727]: I1121 20:09:02.915371 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395920-d94jg" event={"ID":"d9f7a69e-e7d1-4048-8263-52cfefbc90d5","Type":"ContainerDied","Data":"5a3e523b6cd9a2cb7bc0497a09157dcb8ebd0e88e784e1b16c41e9322d2ce3af"} Nov 21 20:09:02 crc kubenswrapper[4727]: I1121 20:09:02.929269 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-4855z" event={"ID":"b49f037a-e7ec-45ef-846b-79ab549adb90","Type":"ContainerStarted","Data":"4e73a76a7427d712b41b4ad9088c6ecc0b1977350f303a93d47cc508aa04c91b"} Nov 21 20:09:02 crc kubenswrapper[4727]: I1121 20:09:02.934022 4727 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-fn2j6 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Nov 21 20:09:02 crc kubenswrapper[4727]: I1121 20:09:02.934075 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-fn2j6" podUID="42408e00-5bcd-4405-82fe-851c9b62b149" containerName="marketplace-operator" 
probeResult="failure" output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused" Nov 21 20:09:02 crc kubenswrapper[4727]: I1121 20:09:02.942670 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tlqhq\" (UID: \"36c9326c-5a9b-4e19-a0a7-047289e45c01\") " pod="openshift-image-registry/image-registry-697d97f7c8-tlqhq" Nov 21 20:09:02 crc kubenswrapper[4727]: E1121 20:09:02.945827 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 20:09:03.445814539 +0000 UTC m=+148.631999573 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tlqhq" (UID: "36c9326c-5a9b-4e19-a0a7-047289e45c01") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.010036 4727 patch_prober.go:28] interesting pod/router-default-5444994796-dw7dg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 21 20:09:03 crc kubenswrapper[4727]: [-]has-synced failed: reason withheld Nov 21 20:09:03 crc kubenswrapper[4727]: [+]process-running ok Nov 21 20:09:03 crc kubenswrapper[4727]: healthz check failed Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.010100 4727 prober.go:107] 
"Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dw7dg" podUID="e3860251-af35-4f12-81ce-91855c94d8c8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.043588 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 20:09:03 crc kubenswrapper[4727]: E1121 20:09:03.043865 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 20:09:03.543850132 +0000 UTC m=+148.730035176 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.083152 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-4855z" podStartSLOduration=129.083134761 podStartE2EDuration="2m9.083134761s" podCreationTimestamp="2025-11-21 20:06:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:09:03.080482963 +0000 UTC m=+148.266668007" watchObservedRunningTime="2025-11-21 20:09:03.083134761 +0000 UTC m=+148.269319805" Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.127771 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ddqc7"] Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.147509 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ddqc7" Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.164203 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbcqk\" (UniqueName: \"kubernetes.io/projected/b7b729cf-df56-4bfd-ad9a-8beb0e56ad6d-kube-api-access-tbcqk\") pod \"community-operators-ddqc7\" (UID: \"b7b729cf-df56-4bfd-ad9a-8beb0e56ad6d\") " pod="openshift-marketplace/community-operators-ddqc7" Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.164371 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tlqhq\" (UID: \"36c9326c-5a9b-4e19-a0a7-047289e45c01\") " pod="openshift-image-registry/image-registry-697d97f7c8-tlqhq" Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.164443 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7b729cf-df56-4bfd-ad9a-8beb0e56ad6d-catalog-content\") pod \"community-operators-ddqc7\" (UID: \"b7b729cf-df56-4bfd-ad9a-8beb0e56ad6d\") " pod="openshift-marketplace/community-operators-ddqc7" Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.164477 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7b729cf-df56-4bfd-ad9a-8beb0e56ad6d-utilities\") pod \"community-operators-ddqc7\" (UID: \"b7b729cf-df56-4bfd-ad9a-8beb0e56ad6d\") " pod="openshift-marketplace/community-operators-ddqc7" Nov 21 20:09:03 crc kubenswrapper[4727]: E1121 20:09:03.165095 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2025-11-21 20:09:03.665076859 +0000 UTC m=+148.851261903 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tlqhq" (UID: "36c9326c-5a9b-4e19-a0a7-047289e45c01") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.174450 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.207036 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ddqc7"] Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.264109 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6kxgt"] Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.265214 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6kxgt" Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.266032 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.266341 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7b729cf-df56-4bfd-ad9a-8beb0e56ad6d-utilities\") pod \"community-operators-ddqc7\" (UID: \"b7b729cf-df56-4bfd-ad9a-8beb0e56ad6d\") " pod="openshift-marketplace/community-operators-ddqc7" Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.266400 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbcqk\" (UniqueName: \"kubernetes.io/projected/b7b729cf-df56-4bfd-ad9a-8beb0e56ad6d-kube-api-access-tbcqk\") pod \"community-operators-ddqc7\" (UID: \"b7b729cf-df56-4bfd-ad9a-8beb0e56ad6d\") " pod="openshift-marketplace/community-operators-ddqc7" Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.266480 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7b729cf-df56-4bfd-ad9a-8beb0e56ad6d-catalog-content\") pod \"community-operators-ddqc7\" (UID: \"b7b729cf-df56-4bfd-ad9a-8beb0e56ad6d\") " pod="openshift-marketplace/community-operators-ddqc7" Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.267063 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7b729cf-df56-4bfd-ad9a-8beb0e56ad6d-catalog-content\") pod \"community-operators-ddqc7\" (UID: \"b7b729cf-df56-4bfd-ad9a-8beb0e56ad6d\") 
" pod="openshift-marketplace/community-operators-ddqc7" Nov 21 20:09:03 crc kubenswrapper[4727]: E1121 20:09:03.267158 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 20:09:03.76713442 +0000 UTC m=+148.953319464 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.271055 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7b729cf-df56-4bfd-ad9a-8beb0e56ad6d-utilities\") pod \"community-operators-ddqc7\" (UID: \"b7b729cf-df56-4bfd-ad9a-8beb0e56ad6d\") " pod="openshift-marketplace/community-operators-ddqc7" Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.287401 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.287687 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6kxgt"] Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.315884 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbcqk\" (UniqueName: \"kubernetes.io/projected/b7b729cf-df56-4bfd-ad9a-8beb0e56ad6d-kube-api-access-tbcqk\") pod \"community-operators-ddqc7\" (UID: \"b7b729cf-df56-4bfd-ad9a-8beb0e56ad6d\") " 
pod="openshift-marketplace/community-operators-ddqc7" Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.367840 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bcc1896-81fc-4fce-bd16-aa2bc5350617-utilities\") pod \"certified-operators-6kxgt\" (UID: \"8bcc1896-81fc-4fce-bd16-aa2bc5350617\") " pod="openshift-marketplace/certified-operators-6kxgt" Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.368402 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92xx2\" (UniqueName: \"kubernetes.io/projected/8bcc1896-81fc-4fce-bd16-aa2bc5350617-kube-api-access-92xx2\") pod \"certified-operators-6kxgt\" (UID: \"8bcc1896-81fc-4fce-bd16-aa2bc5350617\") " pod="openshift-marketplace/certified-operators-6kxgt" Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.368473 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bcc1896-81fc-4fce-bd16-aa2bc5350617-catalog-content\") pod \"certified-operators-6kxgt\" (UID: \"8bcc1896-81fc-4fce-bd16-aa2bc5350617\") " pod="openshift-marketplace/certified-operators-6kxgt" Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.368515 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tlqhq\" (UID: \"36c9326c-5a9b-4e19-a0a7-047289e45c01\") " pod="openshift-image-registry/image-registry-697d97f7c8-tlqhq" Nov 21 20:09:03 crc kubenswrapper[4727]: E1121 20:09:03.368975 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" 
failed. No retries permitted until 2025-11-21 20:09:03.868940434 +0000 UTC m=+149.055125478 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tlqhq" (UID: "36c9326c-5a9b-4e19-a0a7-047289e45c01") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.448936 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hdpbg"] Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.451246 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hdpbg" Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.470038 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.470380 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bcc1896-81fc-4fce-bd16-aa2bc5350617-catalog-content\") pod \"certified-operators-6kxgt\" (UID: \"8bcc1896-81fc-4fce-bd16-aa2bc5350617\") " pod="openshift-marketplace/certified-operators-6kxgt" Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.470471 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f364beec-bdfd-41b4-9562-26e8a0a06e27-utilities\") 
pod \"community-operators-hdpbg\" (UID: \"f364beec-bdfd-41b4-9562-26e8a0a06e27\") " pod="openshift-marketplace/community-operators-hdpbg" Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.470510 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bcc1896-81fc-4fce-bd16-aa2bc5350617-utilities\") pod \"certified-operators-6kxgt\" (UID: \"8bcc1896-81fc-4fce-bd16-aa2bc5350617\") " pod="openshift-marketplace/certified-operators-6kxgt" Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.470541 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f364beec-bdfd-41b4-9562-26e8a0a06e27-catalog-content\") pod \"community-operators-hdpbg\" (UID: \"f364beec-bdfd-41b4-9562-26e8a0a06e27\") " pod="openshift-marketplace/community-operators-hdpbg" Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.470576 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdw9p\" (UniqueName: \"kubernetes.io/projected/f364beec-bdfd-41b4-9562-26e8a0a06e27-kube-api-access-tdw9p\") pod \"community-operators-hdpbg\" (UID: \"f364beec-bdfd-41b4-9562-26e8a0a06e27\") " pod="openshift-marketplace/community-operators-hdpbg" Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.470607 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92xx2\" (UniqueName: \"kubernetes.io/projected/8bcc1896-81fc-4fce-bd16-aa2bc5350617-kube-api-access-92xx2\") pod \"certified-operators-6kxgt\" (UID: \"8bcc1896-81fc-4fce-bd16-aa2bc5350617\") " pod="openshift-marketplace/certified-operators-6kxgt" Nov 21 20:09:03 crc kubenswrapper[4727]: E1121 20:09:03.471105 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 20:09:03.971086537 +0000 UTC m=+149.157271581 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.471520 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bcc1896-81fc-4fce-bd16-aa2bc5350617-catalog-content\") pod \"certified-operators-6kxgt\" (UID: \"8bcc1896-81fc-4fce-bd16-aa2bc5350617\") " pod="openshift-marketplace/certified-operators-6kxgt" Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.471798 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bcc1896-81fc-4fce-bd16-aa2bc5350617-utilities\") pod \"certified-operators-6kxgt\" (UID: \"8bcc1896-81fc-4fce-bd16-aa2bc5350617\") " pod="openshift-marketplace/certified-operators-6kxgt" Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.513930 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92xx2\" (UniqueName: \"kubernetes.io/projected/8bcc1896-81fc-4fce-bd16-aa2bc5350617-kube-api-access-92xx2\") pod \"certified-operators-6kxgt\" (UID: \"8bcc1896-81fc-4fce-bd16-aa2bc5350617\") " pod="openshift-marketplace/certified-operators-6kxgt" Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.556843 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ddqc7" Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.566138 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hdpbg"] Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.572108 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f364beec-bdfd-41b4-9562-26e8a0a06e27-catalog-content\") pod \"community-operators-hdpbg\" (UID: \"f364beec-bdfd-41b4-9562-26e8a0a06e27\") " pod="openshift-marketplace/community-operators-hdpbg" Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.572162 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdw9p\" (UniqueName: \"kubernetes.io/projected/f364beec-bdfd-41b4-9562-26e8a0a06e27-kube-api-access-tdw9p\") pod \"community-operators-hdpbg\" (UID: \"f364beec-bdfd-41b4-9562-26e8a0a06e27\") " pod="openshift-marketplace/community-operators-hdpbg" Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.572229 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tlqhq\" (UID: \"36c9326c-5a9b-4e19-a0a7-047289e45c01\") " pod="openshift-image-registry/image-registry-697d97f7c8-tlqhq" Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.572287 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f364beec-bdfd-41b4-9562-26e8a0a06e27-utilities\") pod \"community-operators-hdpbg\" (UID: \"f364beec-bdfd-41b4-9562-26e8a0a06e27\") " pod="openshift-marketplace/community-operators-hdpbg" Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.572771 4727 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f364beec-bdfd-41b4-9562-26e8a0a06e27-utilities\") pod \"community-operators-hdpbg\" (UID: \"f364beec-bdfd-41b4-9562-26e8a0a06e27\") " pod="openshift-marketplace/community-operators-hdpbg" Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.573057 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f364beec-bdfd-41b4-9562-26e8a0a06e27-catalog-content\") pod \"community-operators-hdpbg\" (UID: \"f364beec-bdfd-41b4-9562-26e8a0a06e27\") " pod="openshift-marketplace/community-operators-hdpbg" Nov 21 20:09:03 crc kubenswrapper[4727]: E1121 20:09:03.594894 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 20:09:04.09486802 +0000 UTC m=+149.281053064 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tlqhq" (UID: "36c9326c-5a9b-4e19-a0a7-047289e45c01") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.615500 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6kxgt" Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.645516 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8c9mf"] Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.646598 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8c9mf" Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.654007 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdw9p\" (UniqueName: \"kubernetes.io/projected/f364beec-bdfd-41b4-9562-26e8a0a06e27-kube-api-access-tdw9p\") pod \"community-operators-hdpbg\" (UID: \"f364beec-bdfd-41b4-9562-26e8a0a06e27\") " pod="openshift-marketplace/community-operators-hdpbg" Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.654014 4727 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.654128 4727 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-11-21T20:09:03.654089336Z","Handler":null,"Name":""} Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.673071 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.673774 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/489bab79-05a2-4b26-afe2-6fcc468e4e97-catalog-content\") pod \"certified-operators-8c9mf\" (UID: \"489bab79-05a2-4b26-afe2-6fcc468e4e97\") " pod="openshift-marketplace/certified-operators-8c9mf" Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.673809 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-pl8r6\" (UniqueName: \"kubernetes.io/projected/489bab79-05a2-4b26-afe2-6fcc468e4e97-kube-api-access-pl8r6\") pod \"certified-operators-8c9mf\" (UID: \"489bab79-05a2-4b26-afe2-6fcc468e4e97\") " pod="openshift-marketplace/certified-operators-8c9mf" Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.673829 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/489bab79-05a2-4b26-afe2-6fcc468e4e97-utilities\") pod \"certified-operators-8c9mf\" (UID: \"489bab79-05a2-4b26-afe2-6fcc468e4e97\") " pod="openshift-marketplace/certified-operators-8c9mf" Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.673904 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8c9mf"] Nov 21 20:09:03 crc kubenswrapper[4727]: E1121 20:09:03.677098 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 20:09:04.174136759 +0000 UTC m=+149.360321803 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.708368 4727 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.708415 4727 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.776584 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tlqhq\" (UID: \"36c9326c-5a9b-4e19-a0a7-047289e45c01\") " pod="openshift-image-registry/image-registry-697d97f7c8-tlqhq" Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.776857 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/489bab79-05a2-4b26-afe2-6fcc468e4e97-catalog-content\") pod \"certified-operators-8c9mf\" (UID: \"489bab79-05a2-4b26-afe2-6fcc468e4e97\") " pod="openshift-marketplace/certified-operators-8c9mf" Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.776948 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pl8r6\" (UniqueName: 
\"kubernetes.io/projected/489bab79-05a2-4b26-afe2-6fcc468e4e97-kube-api-access-pl8r6\") pod \"certified-operators-8c9mf\" (UID: \"489bab79-05a2-4b26-afe2-6fcc468e4e97\") " pod="openshift-marketplace/certified-operators-8c9mf" Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.777036 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/489bab79-05a2-4b26-afe2-6fcc468e4e97-utilities\") pod \"certified-operators-8c9mf\" (UID: \"489bab79-05a2-4b26-afe2-6fcc468e4e97\") " pod="openshift-marketplace/certified-operators-8c9mf" Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.777939 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/489bab79-05a2-4b26-afe2-6fcc468e4e97-utilities\") pod \"certified-operators-8c9mf\" (UID: \"489bab79-05a2-4b26-afe2-6fcc468e4e97\") " pod="openshift-marketplace/certified-operators-8c9mf" Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.778553 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/489bab79-05a2-4b26-afe2-6fcc468e4e97-catalog-content\") pod \"certified-operators-8c9mf\" (UID: \"489bab79-05a2-4b26-afe2-6fcc468e4e97\") " pod="openshift-marketplace/certified-operators-8c9mf" Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.813179 4727 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.813228 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tlqhq\" (UID: \"36c9326c-5a9b-4e19-a0a7-047289e45c01\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-tlqhq" Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.825932 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pl8r6\" (UniqueName: \"kubernetes.io/projected/489bab79-05a2-4b26-afe2-6fcc468e4e97-kube-api-access-pl8r6\") pod \"certified-operators-8c9mf\" (UID: \"489bab79-05a2-4b26-afe2-6fcc468e4e97\") " pod="openshift-marketplace/certified-operators-8c9mf" Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.845183 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hdpbg" Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.920384 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tlqhq\" (UID: \"36c9326c-5a9b-4e19-a0a7-047289e45c01\") " pod="openshift-image-registry/image-registry-697d97f7c8-tlqhq" Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.932429 4727 patch_prober.go:28] interesting pod/console-operator-58897d9998-xpjqs container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.41:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.932481 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-xpjqs" podUID="8a65a43f-bd75-428b-8709-73364367d020" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.976192 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8c9mf" Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.980188 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"e7408be542960d8031b21e535085f0406df9f52defb3844401854e788456d3bc"} Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.980227 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"5a1b51ddf99674ceee26438c9208724fca5697e11b45a7b218e4a84cb983e2e7"} Nov 21 20:09:03 crc kubenswrapper[4727]: I1121 20:09:03.983469 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 20:09:04 crc kubenswrapper[4727]: I1121 20:09:04.014310 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6sphz" event={"ID":"19075424-77c6-47ff-93fa-d54902e69f7c","Type":"ContainerStarted","Data":"7cfea0f093200497b293e167423170b5ce5e1b41f4e9ee2bf505d489a37cdbf5"} Nov 21 20:09:04 crc kubenswrapper[4727]: I1121 20:09:04.014389 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6sphz" event={"ID":"19075424-77c6-47ff-93fa-d54902e69f7c","Type":"ContainerStarted","Data":"72a17cdbf2a00d991cfb3be071700131ec545005eeef2d6f6eae608bdea743d1"} Nov 21 20:09:04 crc kubenswrapper[4727]: I1121 20:09:04.032321 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"e206db76a79bb2ce001e322257f0f783125d3067aeb5048e175e96e06c1a9f5c"} Nov 21 20:09:04 crc kubenswrapper[4727]: I1121 20:09:04.032736 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 20:09:04 crc kubenswrapper[4727]: I1121 20:09:04.032833 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 21 20:09:04 crc kubenswrapper[4727]: I1121 20:09:04.035058 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 21 20:09:04 crc kubenswrapper[4727]: I1121 20:09:04.038372 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Nov 21 20:09:04 crc kubenswrapper[4727]: I1121 20:09:04.038718 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Nov 21 20:09:04 crc kubenswrapper[4727]: I1121 20:09:04.041431 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 21 20:09:04 crc kubenswrapper[4727]: I1121 20:09:04.055623 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 21 20:09:04 crc kubenswrapper[4727]: I1121 20:09:04.061410 4727 patch_prober.go:28] interesting pod/router-default-5444994796-dw7dg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 21 20:09:04 crc kubenswrapper[4727]: [-]has-synced failed: reason withheld Nov 21 20:09:04 crc kubenswrapper[4727]: [+]process-running ok Nov 21 20:09:04 crc kubenswrapper[4727]: healthz check failed Nov 21 20:09:04 crc kubenswrapper[4727]: I1121 20:09:04.061461 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dw7dg" podUID="e3860251-af35-4f12-81ce-91855c94d8c8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 21 20:09:04 crc kubenswrapper[4727]: I1121 20:09:04.064344 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"a1f6e332ff4817a8ac173f338b8f76d74eacebaa6724c85856f1f0b7b18b08e3"} Nov 21 20:09:04 crc kubenswrapper[4727]: I1121 20:09:04.066732 4727 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-fn2j6 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Nov 21 20:09:04 crc kubenswrapper[4727]: I1121 20:09:04.066764 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-fn2j6" podUID="42408e00-5bcd-4405-82fe-851c9b62b149" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused" Nov 21 20:09:04 crc 
kubenswrapper[4727]: I1121 20:09:04.085339 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/473b8084-b901-4f0b-9aef-74e59805f083-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"473b8084-b901-4f0b-9aef-74e59805f083\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 21 20:09:04 crc kubenswrapper[4727]: I1121 20:09:04.085575 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/473b8084-b901-4f0b-9aef-74e59805f083-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"473b8084-b901-4f0b-9aef-74e59805f083\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 21 20:09:04 crc kubenswrapper[4727]: I1121 20:09:04.178979 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-tlqhq" Nov 21 20:09:04 crc kubenswrapper[4727]: I1121 20:09:04.186759 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/473b8084-b901-4f0b-9aef-74e59805f083-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"473b8084-b901-4f0b-9aef-74e59805f083\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 21 20:09:04 crc kubenswrapper[4727]: I1121 20:09:04.186831 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/473b8084-b901-4f0b-9aef-74e59805f083-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"473b8084-b901-4f0b-9aef-74e59805f083\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 21 20:09:04 crc kubenswrapper[4727]: I1121 20:09:04.186970 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/473b8084-b901-4f0b-9aef-74e59805f083-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"473b8084-b901-4f0b-9aef-74e59805f083\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 21 20:09:04 crc kubenswrapper[4727]: I1121 20:09:04.203457 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ddqc7"] Nov 21 20:09:04 crc kubenswrapper[4727]: I1121 20:09:04.224547 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/473b8084-b901-4f0b-9aef-74e59805f083-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"473b8084-b901-4f0b-9aef-74e59805f083\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 21 20:09:04 crc kubenswrapper[4727]: I1121 20:09:04.323998 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8c9mf"] Nov 21 20:09:04 crc kubenswrapper[4727]: I1121 20:09:04.410690 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 21 20:09:04 crc kubenswrapper[4727]: I1121 20:09:04.454340 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6kxgt"] Nov 21 20:09:04 crc kubenswrapper[4727]: I1121 20:09:04.574267 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-tlqhq"] Nov 21 20:09:04 crc kubenswrapper[4727]: W1121 20:09:04.614830 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36c9326c_5a9b_4e19_a0a7_047289e45c01.slice/crio-4707057b6c765dabfd24dc2bcab297ebb1ce74edb5c77511c78d3c1f09b0b8da WatchSource:0}: Error finding container 4707057b6c765dabfd24dc2bcab297ebb1ce74edb5c77511c78d3c1f09b0b8da: Status 404 returned error can't find the container with id 4707057b6c765dabfd24dc2bcab297ebb1ce74edb5c77511c78d3c1f09b0b8da Nov 21 20:09:04 crc kubenswrapper[4727]: I1121 20:09:04.653876 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hdpbg"] Nov 21 20:09:04 crc kubenswrapper[4727]: I1121 20:09:04.744504 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 21 20:09:04 crc kubenswrapper[4727]: W1121 20:09:04.828586 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod473b8084_b901_4f0b_9aef_74e59805f083.slice/crio-1038fccea5445d3b95a1c6fd155226908ed2eec7f4a2f1a6e4605fcf802debab WatchSource:0}: Error finding container 1038fccea5445d3b95a1c6fd155226908ed2eec7f4a2f1a6e4605fcf802debab: Status 404 returned error can't find the container with id 1038fccea5445d3b95a1c6fd155226908ed2eec7f4a2f1a6e4605fcf802debab Nov 21 20:09:04 crc kubenswrapper[4727]: I1121 20:09:04.840376 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395920-d94jg" Nov 21 20:09:05 crc kubenswrapper[4727]: I1121 20:09:04.998207 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9f7a69e-e7d1-4048-8263-52cfefbc90d5-config-volume\") pod \"d9f7a69e-e7d1-4048-8263-52cfefbc90d5\" (UID: \"d9f7a69e-e7d1-4048-8263-52cfefbc90d5\") " Nov 21 20:09:05 crc kubenswrapper[4727]: I1121 20:09:04.998301 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlw7g\" (UniqueName: \"kubernetes.io/projected/d9f7a69e-e7d1-4048-8263-52cfefbc90d5-kube-api-access-jlw7g\") pod \"d9f7a69e-e7d1-4048-8263-52cfefbc90d5\" (UID: \"d9f7a69e-e7d1-4048-8263-52cfefbc90d5\") " Nov 21 20:09:05 crc kubenswrapper[4727]: I1121 20:09:04.998334 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d9f7a69e-e7d1-4048-8263-52cfefbc90d5-secret-volume\") pod \"d9f7a69e-e7d1-4048-8263-52cfefbc90d5\" (UID: \"d9f7a69e-e7d1-4048-8263-52cfefbc90d5\") " Nov 21 20:09:05 crc kubenswrapper[4727]: I1121 20:09:05.000123 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9f7a69e-e7d1-4048-8263-52cfefbc90d5-config-volume" (OuterVolumeSpecName: "config-volume") pod "d9f7a69e-e7d1-4048-8263-52cfefbc90d5" (UID: "d9f7a69e-e7d1-4048-8263-52cfefbc90d5"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:09:05 crc kubenswrapper[4727]: I1121 20:09:05.005666 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9f7a69e-e7d1-4048-8263-52cfefbc90d5-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d9f7a69e-e7d1-4048-8263-52cfefbc90d5" (UID: "d9f7a69e-e7d1-4048-8263-52cfefbc90d5"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:09:05 crc kubenswrapper[4727]: I1121 20:09:05.005716 4727 patch_prober.go:28] interesting pod/router-default-5444994796-dw7dg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 21 20:09:05 crc kubenswrapper[4727]: [-]has-synced failed: reason withheld Nov 21 20:09:05 crc kubenswrapper[4727]: [+]process-running ok Nov 21 20:09:05 crc kubenswrapper[4727]: healthz check failed Nov 21 20:09:05 crc kubenswrapper[4727]: I1121 20:09:05.005770 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dw7dg" podUID="e3860251-af35-4f12-81ce-91855c94d8c8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 21 20:09:05 crc kubenswrapper[4727]: I1121 20:09:05.010297 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9f7a69e-e7d1-4048-8263-52cfefbc90d5-kube-api-access-jlw7g" (OuterVolumeSpecName: "kube-api-access-jlw7g") pod "d9f7a69e-e7d1-4048-8263-52cfefbc90d5" (UID: "d9f7a69e-e7d1-4048-8263-52cfefbc90d5"). InnerVolumeSpecName "kube-api-access-jlw7g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:09:05 crc kubenswrapper[4727]: I1121 20:09:05.082280 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6sphz" event={"ID":"19075424-77c6-47ff-93fa-d54902e69f7c","Type":"ContainerStarted","Data":"24625216d040c5399f8e4f357acda20a5e99adddd41657666c6b994540fe2969"} Nov 21 20:09:05 crc kubenswrapper[4727]: I1121 20:09:05.084299 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"473b8084-b901-4f0b-9aef-74e59805f083","Type":"ContainerStarted","Data":"1038fccea5445d3b95a1c6fd155226908ed2eec7f4a2f1a6e4605fcf802debab"} Nov 21 20:09:05 crc kubenswrapper[4727]: I1121 20:09:05.100729 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jlw7g\" (UniqueName: \"kubernetes.io/projected/d9f7a69e-e7d1-4048-8263-52cfefbc90d5-kube-api-access-jlw7g\") on node \"crc\" DevicePath \"\"" Nov 21 20:09:05 crc kubenswrapper[4727]: I1121 20:09:05.100762 4727 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d9f7a69e-e7d1-4048-8263-52cfefbc90d5-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 21 20:09:05 crc kubenswrapper[4727]: I1121 20:09:05.100772 4727 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9f7a69e-e7d1-4048-8263-52cfefbc90d5-config-volume\") on node \"crc\" DevicePath \"\"" Nov 21 20:09:05 crc kubenswrapper[4727]: I1121 20:09:05.101541 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hdpbg" event={"ID":"f364beec-bdfd-41b4-9562-26e8a0a06e27","Type":"ContainerStarted","Data":"137b6e6548ccd659dc1c63db8bd8d924e1ab3aed5137b09ae5c9a225a63c70a8"} Nov 21 20:09:05 crc kubenswrapper[4727]: I1121 20:09:05.101599 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-hdpbg" event={"ID":"f364beec-bdfd-41b4-9562-26e8a0a06e27","Type":"ContainerStarted","Data":"dab348cfa1e3dd645e07e8fe28174e356919f7dbedde4d45f3ab2d17934cc7b3"} Nov 21 20:09:05 crc kubenswrapper[4727]: I1121 20:09:05.106392 4727 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 21 20:09:05 crc kubenswrapper[4727]: I1121 20:09:05.114428 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"f0fe79460198fee10fcf801e62f7ed9e9bb1e38715e440d883c69d566fd39020"} Nov 21 20:09:05 crc kubenswrapper[4727]: I1121 20:09:05.122896 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-6sphz" podStartSLOduration=12.122879802 podStartE2EDuration="12.122879802s" podCreationTimestamp="2025-11-21 20:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:09:05.121314816 +0000 UTC m=+150.307499860" watchObservedRunningTime="2025-11-21 20:09:05.122879802 +0000 UTC m=+150.309064846" Nov 21 20:09:05 crc kubenswrapper[4727]: I1121 20:09:05.130089 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"ed151411183a979d0e0f855dce50e7170e9e5f77ca98486d8e2de2595cfbea31"} Nov 21 20:09:05 crc kubenswrapper[4727]: I1121 20:09:05.143783 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-tlqhq" event={"ID":"36c9326c-5a9b-4e19-a0a7-047289e45c01","Type":"ContainerStarted","Data":"0aa40116b6d28e2aa93bd5a4026e59a68a715fb45ef82a9a3705d6776243593e"} Nov 21 20:09:05 crc kubenswrapper[4727]: 
I1121 20:09:05.144136 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-tlqhq" event={"ID":"36c9326c-5a9b-4e19-a0a7-047289e45c01","Type":"ContainerStarted","Data":"4707057b6c765dabfd24dc2bcab297ebb1ce74edb5c77511c78d3c1f09b0b8da"} Nov 21 20:09:05 crc kubenswrapper[4727]: I1121 20:09:05.145137 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-tlqhq" Nov 21 20:09:05 crc kubenswrapper[4727]: I1121 20:09:05.151306 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vbzh7" Nov 21 20:09:05 crc kubenswrapper[4727]: I1121 20:09:05.173693 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395920-d94jg" event={"ID":"d9f7a69e-e7d1-4048-8263-52cfefbc90d5","Type":"ContainerDied","Data":"1767de4addd8420b70220e22b8b10ecb452a152499316c1d305c99a5ef9098a6"} Nov 21 20:09:05 crc kubenswrapper[4727]: I1121 20:09:05.174142 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1767de4addd8420b70220e22b8b10ecb452a152499316c1d305c99a5ef9098a6" Nov 21 20:09:05 crc kubenswrapper[4727]: I1121 20:09:05.174228 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395920-d94jg" Nov 21 20:09:05 crc kubenswrapper[4727]: I1121 20:09:05.196232 4727 generic.go:334] "Generic (PLEG): container finished" podID="b7b729cf-df56-4bfd-ad9a-8beb0e56ad6d" containerID="10d28d6fb66add7253d46853051cbdcbd00d647bee7099cc3d348d1e337cec09" exitCode=0 Nov 21 20:09:05 crc kubenswrapper[4727]: I1121 20:09:05.196625 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ddqc7" event={"ID":"b7b729cf-df56-4bfd-ad9a-8beb0e56ad6d","Type":"ContainerDied","Data":"10d28d6fb66add7253d46853051cbdcbd00d647bee7099cc3d348d1e337cec09"} Nov 21 20:09:05 crc kubenswrapper[4727]: I1121 20:09:05.196734 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ddqc7" event={"ID":"b7b729cf-df56-4bfd-ad9a-8beb0e56ad6d","Type":"ContainerStarted","Data":"a452681e54fd32f0a2c011391e53da981feff77fc7aae36175d1095f3b7220eb"} Nov 21 20:09:05 crc kubenswrapper[4727]: I1121 20:09:05.218314 4727 generic.go:334] "Generic (PLEG): container finished" podID="489bab79-05a2-4b26-afe2-6fcc468e4e97" containerID="d2e56e0d4a3b70d4adf78ba2c3016dd9061b7c2c75907b18295aec269743e3a4" exitCode=0 Nov 21 20:09:05 crc kubenswrapper[4727]: I1121 20:09:05.218658 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8c9mf" event={"ID":"489bab79-05a2-4b26-afe2-6fcc468e4e97","Type":"ContainerDied","Data":"d2e56e0d4a3b70d4adf78ba2c3016dd9061b7c2c75907b18295aec269743e3a4"} Nov 21 20:09:05 crc kubenswrapper[4727]: I1121 20:09:05.218797 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8c9mf" event={"ID":"489bab79-05a2-4b26-afe2-6fcc468e4e97","Type":"ContainerStarted","Data":"b670a6aa1c8eba06142bc924062e9f234bc7b625a10facf3698d17984ac76936"} Nov 21 20:09:05 crc kubenswrapper[4727]: I1121 20:09:05.230569 4727 generic.go:334] "Generic 
(PLEG): container finished" podID="8bcc1896-81fc-4fce-bd16-aa2bc5350617" containerID="6db4863caad19f090589f6239baa69b5ae46c4949f5e63de2437efa1b4b73183" exitCode=0 Nov 21 20:09:05 crc kubenswrapper[4727]: I1121 20:09:05.231520 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jhstm"] Nov 21 20:09:05 crc kubenswrapper[4727]: E1121 20:09:05.231880 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9f7a69e-e7d1-4048-8263-52cfefbc90d5" containerName="collect-profiles" Nov 21 20:09:05 crc kubenswrapper[4727]: I1121 20:09:05.231899 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9f7a69e-e7d1-4048-8263-52cfefbc90d5" containerName="collect-profiles" Nov 21 20:09:05 crc kubenswrapper[4727]: I1121 20:09:05.232088 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9f7a69e-e7d1-4048-8263-52cfefbc90d5" containerName="collect-profiles" Nov 21 20:09:05 crc kubenswrapper[4727]: I1121 20:09:05.233118 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6kxgt" event={"ID":"8bcc1896-81fc-4fce-bd16-aa2bc5350617","Type":"ContainerDied","Data":"6db4863caad19f090589f6239baa69b5ae46c4949f5e63de2437efa1b4b73183"} Nov 21 20:09:05 crc kubenswrapper[4727]: I1121 20:09:05.233155 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6kxgt" event={"ID":"8bcc1896-81fc-4fce-bd16-aa2bc5350617","Type":"ContainerStarted","Data":"a273d4aa1fb34135743936893565df0a66b20427813c89645935ae4ce06e5ad3"} Nov 21 20:09:05 crc kubenswrapper[4727]: I1121 20:09:05.233264 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jhstm" Nov 21 20:09:05 crc kubenswrapper[4727]: I1121 20:09:05.235934 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 21 20:09:05 crc kubenswrapper[4727]: I1121 20:09:05.246025 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-tlqhq" podStartSLOduration=131.245997475 podStartE2EDuration="2m11.245997475s" podCreationTimestamp="2025-11-21 20:06:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:09:05.222632775 +0000 UTC m=+150.408817819" watchObservedRunningTime="2025-11-21 20:09:05.245997475 +0000 UTC m=+150.432182529" Nov 21 20:09:05 crc kubenswrapper[4727]: I1121 20:09:05.266997 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jhstm"] Nov 21 20:09:05 crc kubenswrapper[4727]: I1121 20:09:05.404528 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0da39011-6adf-4e8c-81f1-7074d0a7e97b-catalog-content\") pod \"redhat-marketplace-jhstm\" (UID: \"0da39011-6adf-4e8c-81f1-7074d0a7e97b\") " pod="openshift-marketplace/redhat-marketplace-jhstm" Nov 21 20:09:05 crc kubenswrapper[4727]: I1121 20:09:05.404635 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0da39011-6adf-4e8c-81f1-7074d0a7e97b-utilities\") pod \"redhat-marketplace-jhstm\" (UID: \"0da39011-6adf-4e8c-81f1-7074d0a7e97b\") " pod="openshift-marketplace/redhat-marketplace-jhstm" Nov 21 20:09:05 crc kubenswrapper[4727]: I1121 20:09:05.404675 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-g46js\" (UniqueName: \"kubernetes.io/projected/0da39011-6adf-4e8c-81f1-7074d0a7e97b-kube-api-access-g46js\") pod \"redhat-marketplace-jhstm\" (UID: \"0da39011-6adf-4e8c-81f1-7074d0a7e97b\") " pod="openshift-marketplace/redhat-marketplace-jhstm" Nov 21 20:09:05 crc kubenswrapper[4727]: I1121 20:09:05.506405 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g46js\" (UniqueName: \"kubernetes.io/projected/0da39011-6adf-4e8c-81f1-7074d0a7e97b-kube-api-access-g46js\") pod \"redhat-marketplace-jhstm\" (UID: \"0da39011-6adf-4e8c-81f1-7074d0a7e97b\") " pod="openshift-marketplace/redhat-marketplace-jhstm" Nov 21 20:09:05 crc kubenswrapper[4727]: I1121 20:09:05.506500 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0da39011-6adf-4e8c-81f1-7074d0a7e97b-catalog-content\") pod \"redhat-marketplace-jhstm\" (UID: \"0da39011-6adf-4e8c-81f1-7074d0a7e97b\") " pod="openshift-marketplace/redhat-marketplace-jhstm" Nov 21 20:09:05 crc kubenswrapper[4727]: I1121 20:09:05.506624 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0da39011-6adf-4e8c-81f1-7074d0a7e97b-utilities\") pod \"redhat-marketplace-jhstm\" (UID: \"0da39011-6adf-4e8c-81f1-7074d0a7e97b\") " pod="openshift-marketplace/redhat-marketplace-jhstm" Nov 21 20:09:05 crc kubenswrapper[4727]: I1121 20:09:05.507224 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0da39011-6adf-4e8c-81f1-7074d0a7e97b-catalog-content\") pod \"redhat-marketplace-jhstm\" (UID: \"0da39011-6adf-4e8c-81f1-7074d0a7e97b\") " pod="openshift-marketplace/redhat-marketplace-jhstm" Nov 21 20:09:05 crc kubenswrapper[4727]: I1121 20:09:05.507380 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/0da39011-6adf-4e8c-81f1-7074d0a7e97b-utilities\") pod \"redhat-marketplace-jhstm\" (UID: \"0da39011-6adf-4e8c-81f1-7074d0a7e97b\") " pod="openshift-marketplace/redhat-marketplace-jhstm" Nov 21 20:09:05 crc kubenswrapper[4727]: I1121 20:09:05.507555 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Nov 21 20:09:05 crc kubenswrapper[4727]: I1121 20:09:05.537582 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g46js\" (UniqueName: \"kubernetes.io/projected/0da39011-6adf-4e8c-81f1-7074d0a7e97b-kube-api-access-g46js\") pod \"redhat-marketplace-jhstm\" (UID: \"0da39011-6adf-4e8c-81f1-7074d0a7e97b\") " pod="openshift-marketplace/redhat-marketplace-jhstm" Nov 21 20:09:05 crc kubenswrapper[4727]: I1121 20:09:05.549779 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jhstm" Nov 21 20:09:05 crc kubenswrapper[4727]: I1121 20:09:05.616424 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sqllm"] Nov 21 20:09:05 crc kubenswrapper[4727]: I1121 20:09:05.617622 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sqllm" Nov 21 20:09:05 crc kubenswrapper[4727]: I1121 20:09:05.644745 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sqllm"] Nov 21 20:09:05 crc kubenswrapper[4727]: I1121 20:09:05.709063 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1dab950-5149-4290-adfe-1924a0d5f745-utilities\") pod \"redhat-marketplace-sqllm\" (UID: \"b1dab950-5149-4290-adfe-1924a0d5f745\") " pod="openshift-marketplace/redhat-marketplace-sqllm" Nov 21 20:09:05 crc kubenswrapper[4727]: I1121 20:09:05.709144 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twn2m\" (UniqueName: \"kubernetes.io/projected/b1dab950-5149-4290-adfe-1924a0d5f745-kube-api-access-twn2m\") pod \"redhat-marketplace-sqllm\" (UID: \"b1dab950-5149-4290-adfe-1924a0d5f745\") " pod="openshift-marketplace/redhat-marketplace-sqllm" Nov 21 20:09:05 crc kubenswrapper[4727]: I1121 20:09:05.709184 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1dab950-5149-4290-adfe-1924a0d5f745-catalog-content\") pod \"redhat-marketplace-sqllm\" (UID: \"b1dab950-5149-4290-adfe-1924a0d5f745\") " pod="openshift-marketplace/redhat-marketplace-sqllm" Nov 21 20:09:05 crc kubenswrapper[4727]: I1121 20:09:05.811559 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1dab950-5149-4290-adfe-1924a0d5f745-catalog-content\") pod \"redhat-marketplace-sqllm\" (UID: \"b1dab950-5149-4290-adfe-1924a0d5f745\") " pod="openshift-marketplace/redhat-marketplace-sqllm" Nov 21 20:09:05 crc kubenswrapper[4727]: I1121 20:09:05.811655 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1dab950-5149-4290-adfe-1924a0d5f745-utilities\") pod \"redhat-marketplace-sqllm\" (UID: \"b1dab950-5149-4290-adfe-1924a0d5f745\") " pod="openshift-marketplace/redhat-marketplace-sqllm" Nov 21 20:09:05 crc kubenswrapper[4727]: I1121 20:09:05.811704 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twn2m\" (UniqueName: \"kubernetes.io/projected/b1dab950-5149-4290-adfe-1924a0d5f745-kube-api-access-twn2m\") pod \"redhat-marketplace-sqllm\" (UID: \"b1dab950-5149-4290-adfe-1924a0d5f745\") " pod="openshift-marketplace/redhat-marketplace-sqllm" Nov 21 20:09:05 crc kubenswrapper[4727]: I1121 20:09:05.815079 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1dab950-5149-4290-adfe-1924a0d5f745-catalog-content\") pod \"redhat-marketplace-sqllm\" (UID: \"b1dab950-5149-4290-adfe-1924a0d5f745\") " pod="openshift-marketplace/redhat-marketplace-sqllm" Nov 21 20:09:05 crc kubenswrapper[4727]: I1121 20:09:05.815502 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1dab950-5149-4290-adfe-1924a0d5f745-utilities\") pod \"redhat-marketplace-sqllm\" (UID: \"b1dab950-5149-4290-adfe-1924a0d5f745\") " pod="openshift-marketplace/redhat-marketplace-sqllm" Nov 21 20:09:05 crc kubenswrapper[4727]: I1121 20:09:05.818601 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jhstm"] Nov 21 20:09:05 crc kubenswrapper[4727]: I1121 20:09:05.833822 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twn2m\" (UniqueName: \"kubernetes.io/projected/b1dab950-5149-4290-adfe-1924a0d5f745-kube-api-access-twn2m\") pod \"redhat-marketplace-sqllm\" (UID: \"b1dab950-5149-4290-adfe-1924a0d5f745\") " 
pod="openshift-marketplace/redhat-marketplace-sqllm" Nov 21 20:09:05 crc kubenswrapper[4727]: I1121 20:09:05.945234 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sqllm" Nov 21 20:09:06 crc kubenswrapper[4727]: I1121 20:09:06.002413 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-dw7dg" Nov 21 20:09:06 crc kubenswrapper[4727]: I1121 20:09:06.014173 4727 patch_prober.go:28] interesting pod/router-default-5444994796-dw7dg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 21 20:09:06 crc kubenswrapper[4727]: [-]has-synced failed: reason withheld Nov 21 20:09:06 crc kubenswrapper[4727]: [+]process-running ok Nov 21 20:09:06 crc kubenswrapper[4727]: healthz check failed Nov 21 20:09:06 crc kubenswrapper[4727]: I1121 20:09:06.014341 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dw7dg" podUID="e3860251-af35-4f12-81ce-91855c94d8c8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 21 20:09:06 crc kubenswrapper[4727]: I1121 20:09:06.057103 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-fj2k4" Nov 21 20:09:06 crc kubenswrapper[4727]: I1121 20:09:06.057608 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-fj2k4" Nov 21 20:09:06 crc kubenswrapper[4727]: I1121 20:09:06.066537 4727 patch_prober.go:28] interesting pod/console-f9d7485db-fj2k4 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.7:8443/health\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Nov 21 20:09:06 crc kubenswrapper[4727]: I1121 
20:09:06.066588 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-fj2k4" podUID="dacc1ea4-7062-46ad-a784-70537e92dc51" containerName="console" probeResult="failure" output="Get \"https://10.217.0.7:8443/health\": dial tcp 10.217.0.7:8443: connect: connection refused" Nov 21 20:09:06 crc kubenswrapper[4727]: I1121 20:09:06.181629 4727 patch_prober.go:28] interesting pod/downloads-7954f5f757-nzpzh container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Nov 21 20:09:06 crc kubenswrapper[4727]: I1121 20:09:06.181678 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-nzpzh" podUID="3f3303e9-97ef-4e9b-9dc0-076066682c43" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Nov 21 20:09:06 crc kubenswrapper[4727]: I1121 20:09:06.182035 4727 patch_prober.go:28] interesting pod/downloads-7954f5f757-nzpzh container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Nov 21 20:09:06 crc kubenswrapper[4727]: I1121 20:09:06.182057 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-nzpzh" podUID="3f3303e9-97ef-4e9b-9dc0-076066682c43" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Nov 21 20:09:06 crc kubenswrapper[4727]: I1121 20:09:06.249046 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nz2zh"] Nov 21 20:09:06 crc kubenswrapper[4727]: I1121 20:09:06.250080 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nz2zh" Nov 21 20:09:06 crc kubenswrapper[4727]: I1121 20:09:06.261729 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 21 20:09:06 crc kubenswrapper[4727]: I1121 20:09:06.262900 4727 generic.go:334] "Generic (PLEG): container finished" podID="f364beec-bdfd-41b4-9562-26e8a0a06e27" containerID="137b6e6548ccd659dc1c63db8bd8d924e1ab3aed5137b09ae5c9a225a63c70a8" exitCode=0 Nov 21 20:09:06 crc kubenswrapper[4727]: I1121 20:09:06.262995 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hdpbg" event={"ID":"f364beec-bdfd-41b4-9562-26e8a0a06e27","Type":"ContainerDied","Data":"137b6e6548ccd659dc1c63db8bd8d924e1ab3aed5137b09ae5c9a225a63c70a8"} Nov 21 20:09:06 crc kubenswrapper[4727]: I1121 20:09:06.272500 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-4855z" Nov 21 20:09:06 crc kubenswrapper[4727]: I1121 20:09:06.272548 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-4855z" Nov 21 20:09:06 crc kubenswrapper[4727]: I1121 20:09:06.283992 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nz2zh"] Nov 21 20:09:06 crc kubenswrapper[4727]: I1121 20:09:06.294460 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jhstm" event={"ID":"0da39011-6adf-4e8c-81f1-7074d0a7e97b","Type":"ContainerDied","Data":"e7f27c3a6f029ba03160cef9f9f22c46194db68b89ede3a33e4f9619b9c5e965"} Nov 21 20:09:06 crc kubenswrapper[4727]: I1121 20:09:06.292708 4727 generic.go:334] "Generic (PLEG): container finished" podID="0da39011-6adf-4e8c-81f1-7074d0a7e97b" containerID="e7f27c3a6f029ba03160cef9f9f22c46194db68b89ede3a33e4f9619b9c5e965" exitCode=0 Nov 21 20:09:06 crc 
kubenswrapper[4727]: I1121 20:09:06.295967 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jhstm" event={"ID":"0da39011-6adf-4e8c-81f1-7074d0a7e97b","Type":"ContainerStarted","Data":"2c4fc03b9ce499dd11f021d13632f117918af37eefe03e1ece820772c43adc75"} Nov 21 20:09:06 crc kubenswrapper[4727]: I1121 20:09:06.296638 4727 patch_prober.go:28] interesting pod/apiserver-76f77b778f-4855z container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Nov 21 20:09:06 crc kubenswrapper[4727]: [+]log ok Nov 21 20:09:06 crc kubenswrapper[4727]: [+]etcd ok Nov 21 20:09:06 crc kubenswrapper[4727]: [+]poststarthook/start-apiserver-admission-initializer ok Nov 21 20:09:06 crc kubenswrapper[4727]: [+]poststarthook/generic-apiserver-start-informers ok Nov 21 20:09:06 crc kubenswrapper[4727]: [+]poststarthook/max-in-flight-filter ok Nov 21 20:09:06 crc kubenswrapper[4727]: [+]poststarthook/storage-object-count-tracker-hook ok Nov 21 20:09:06 crc kubenswrapper[4727]: [+]poststarthook/image.openshift.io-apiserver-caches ok Nov 21 20:09:06 crc kubenswrapper[4727]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Nov 21 20:09:06 crc kubenswrapper[4727]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Nov 21 20:09:06 crc kubenswrapper[4727]: [+]poststarthook/project.openshift.io-projectcache ok Nov 21 20:09:06 crc kubenswrapper[4727]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Nov 21 20:09:06 crc kubenswrapper[4727]: [+]poststarthook/openshift.io-startinformers ok Nov 21 20:09:06 crc kubenswrapper[4727]: [+]poststarthook/openshift.io-restmapperupdater ok Nov 21 20:09:06 crc kubenswrapper[4727]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Nov 21 20:09:06 crc kubenswrapper[4727]: livez check failed Nov 21 20:09:06 crc kubenswrapper[4727]: 
I1121 20:09:06.297168 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-4855z" podUID="b49f037a-e7ec-45ef-846b-79ab549adb90" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 21 20:09:06 crc kubenswrapper[4727]: I1121 20:09:06.302750 4727 generic.go:334] "Generic (PLEG): container finished" podID="473b8084-b901-4f0b-9aef-74e59805f083" containerID="35e827498c1f251cdb0c6d7e040f46e3e549f0da266bdbb409134b892c472129" exitCode=0
Nov 21 20:09:06 crc kubenswrapper[4727]: I1121 20:09:06.302945 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"473b8084-b901-4f0b-9aef-74e59805f083","Type":"ContainerDied","Data":"35e827498c1f251cdb0c6d7e040f46e3e549f0da266bdbb409134b892c472129"}
Nov 21 20:09:06 crc kubenswrapper[4727]: I1121 20:09:06.334758 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sqllm"]
Nov 21 20:09:06 crc kubenswrapper[4727]: I1121 20:09:06.427069 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55fe4653-9eee-4a78-8d87-f368aca698b6-catalog-content\") pod \"redhat-operators-nz2zh\" (UID: \"55fe4653-9eee-4a78-8d87-f368aca698b6\") " pod="openshift-marketplace/redhat-operators-nz2zh"
Nov 21 20:09:06 crc kubenswrapper[4727]: I1121 20:09:06.427303 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8b5b\" (UniqueName: \"kubernetes.io/projected/55fe4653-9eee-4a78-8d87-f368aca698b6-kube-api-access-x8b5b\") pod \"redhat-operators-nz2zh\" (UID: \"55fe4653-9eee-4a78-8d87-f368aca698b6\") " pod="openshift-marketplace/redhat-operators-nz2zh"
Nov 21 20:09:06 crc kubenswrapper[4727]: I1121 20:09:06.427349 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55fe4653-9eee-4a78-8d87-f368aca698b6-utilities\") pod \"redhat-operators-nz2zh\" (UID: \"55fe4653-9eee-4a78-8d87-f368aca698b6\") " pod="openshift-marketplace/redhat-operators-nz2zh"
Nov 21 20:09:06 crc kubenswrapper[4727]: I1121 20:09:06.493185 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jbvs8"
Nov 21 20:09:06 crc kubenswrapper[4727]: I1121 20:09:06.493269 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jbvs8"
Nov 21 20:09:06 crc kubenswrapper[4727]: I1121 20:09:06.513239 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jbvs8"
Nov 21 20:09:06 crc kubenswrapper[4727]: I1121 20:09:06.531680 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8b5b\" (UniqueName: \"kubernetes.io/projected/55fe4653-9eee-4a78-8d87-f368aca698b6-kube-api-access-x8b5b\") pod \"redhat-operators-nz2zh\" (UID: \"55fe4653-9eee-4a78-8d87-f368aca698b6\") " pod="openshift-marketplace/redhat-operators-nz2zh"
Nov 21 20:09:06 crc kubenswrapper[4727]: I1121 20:09:06.531798 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55fe4653-9eee-4a78-8d87-f368aca698b6-utilities\") pod \"redhat-operators-nz2zh\" (UID: \"55fe4653-9eee-4a78-8d87-f368aca698b6\") " pod="openshift-marketplace/redhat-operators-nz2zh"
Nov 21 20:09:06 crc kubenswrapper[4727]: I1121 20:09:06.532000 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55fe4653-9eee-4a78-8d87-f368aca698b6-catalog-content\") pod \"redhat-operators-nz2zh\" (UID: \"55fe4653-9eee-4a78-8d87-f368aca698b6\") " pod="openshift-marketplace/redhat-operators-nz2zh"
Nov 21 20:09:06 crc kubenswrapper[4727]: I1121 20:09:06.532526 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55fe4653-9eee-4a78-8d87-f368aca698b6-utilities\") pod \"redhat-operators-nz2zh\" (UID: \"55fe4653-9eee-4a78-8d87-f368aca698b6\") " pod="openshift-marketplace/redhat-operators-nz2zh"
Nov 21 20:09:06 crc kubenswrapper[4727]: I1121 20:09:06.532602 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55fe4653-9eee-4a78-8d87-f368aca698b6-catalog-content\") pod \"redhat-operators-nz2zh\" (UID: \"55fe4653-9eee-4a78-8d87-f368aca698b6\") " pod="openshift-marketplace/redhat-operators-nz2zh"
Nov 21 20:09:06 crc kubenswrapper[4727]: I1121 20:09:06.558844 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Nov 21 20:09:06 crc kubenswrapper[4727]: I1121 20:09:06.560068 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 21 20:09:06 crc kubenswrapper[4727]: I1121 20:09:06.564623 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Nov 21 20:09:06 crc kubenswrapper[4727]: I1121 20:09:06.566147 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Nov 21 20:09:06 crc kubenswrapper[4727]: I1121 20:09:06.580777 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Nov 21 20:09:06 crc kubenswrapper[4727]: I1121 20:09:06.590857 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8b5b\" (UniqueName: \"kubernetes.io/projected/55fe4653-9eee-4a78-8d87-f368aca698b6-kube-api-access-x8b5b\") pod \"redhat-operators-nz2zh\" (UID: \"55fe4653-9eee-4a78-8d87-f368aca698b6\") " pod="openshift-marketplace/redhat-operators-nz2zh"
Nov 21 20:09:06 crc kubenswrapper[4727]: I1121 20:09:06.610588 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nz2zh"
Nov 21 20:09:06 crc kubenswrapper[4727]: I1121 20:09:06.621672 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-k74vs"]
Nov 21 20:09:06 crc kubenswrapper[4727]: I1121 20:09:06.623002 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k74vs"
Nov 21 20:09:06 crc kubenswrapper[4727]: I1121 20:09:06.635117 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3e21f0ea-ebbf-4575-88cb-cd19fccb688f-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"3e21f0ea-ebbf-4575-88cb-cd19fccb688f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 21 20:09:06 crc kubenswrapper[4727]: I1121 20:09:06.635519 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3e21f0ea-ebbf-4575-88cb-cd19fccb688f-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"3e21f0ea-ebbf-4575-88cb-cd19fccb688f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 21 20:09:06 crc kubenswrapper[4727]: I1121 20:09:06.637380 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k74vs"]
Nov 21 20:09:06 crc kubenswrapper[4727]: I1121 20:09:06.737198 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc237982-4faf-431c-8d35-3cc0801bfef7-utilities\") pod \"redhat-operators-k74vs\" (UID: \"bc237982-4faf-431c-8d35-3cc0801bfef7\") " pod="openshift-marketplace/redhat-operators-k74vs"
Nov 21 20:09:06 crc kubenswrapper[4727]: I1121 20:09:06.737715 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3e21f0ea-ebbf-4575-88cb-cd19fccb688f-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"3e21f0ea-ebbf-4575-88cb-cd19fccb688f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 21 20:09:06 crc kubenswrapper[4727]: I1121 20:09:06.737767 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtcmz\" (UniqueName: \"kubernetes.io/projected/bc237982-4faf-431c-8d35-3cc0801bfef7-kube-api-access-gtcmz\") pod \"redhat-operators-k74vs\" (UID: \"bc237982-4faf-431c-8d35-3cc0801bfef7\") " pod="openshift-marketplace/redhat-operators-k74vs"
Nov 21 20:09:06 crc kubenswrapper[4727]: I1121 20:09:06.737796 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3e21f0ea-ebbf-4575-88cb-cd19fccb688f-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"3e21f0ea-ebbf-4575-88cb-cd19fccb688f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 21 20:09:06 crc kubenswrapper[4727]: I1121 20:09:06.737826 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc237982-4faf-431c-8d35-3cc0801bfef7-catalog-content\") pod \"redhat-operators-k74vs\" (UID: \"bc237982-4faf-431c-8d35-3cc0801bfef7\") " pod="openshift-marketplace/redhat-operators-k74vs"
Nov 21 20:09:06 crc kubenswrapper[4727]: I1121 20:09:06.737939 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3e21f0ea-ebbf-4575-88cb-cd19fccb688f-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"3e21f0ea-ebbf-4575-88cb-cd19fccb688f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 21 20:09:06 crc kubenswrapper[4727]: I1121 20:09:06.765090 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3e21f0ea-ebbf-4575-88cb-cd19fccb688f-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"3e21f0ea-ebbf-4575-88cb-cd19fccb688f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 21 20:09:06 crc kubenswrapper[4727]: I1121 20:09:06.838866 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc237982-4faf-431c-8d35-3cc0801bfef7-utilities\") pod \"redhat-operators-k74vs\" (UID: \"bc237982-4faf-431c-8d35-3cc0801bfef7\") " pod="openshift-marketplace/redhat-operators-k74vs"
Nov 21 20:09:06 crc kubenswrapper[4727]: I1121 20:09:06.838939 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtcmz\" (UniqueName: \"kubernetes.io/projected/bc237982-4faf-431c-8d35-3cc0801bfef7-kube-api-access-gtcmz\") pod \"redhat-operators-k74vs\" (UID: \"bc237982-4faf-431c-8d35-3cc0801bfef7\") " pod="openshift-marketplace/redhat-operators-k74vs"
Nov 21 20:09:06 crc kubenswrapper[4727]: I1121 20:09:06.838976 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc237982-4faf-431c-8d35-3cc0801bfef7-catalog-content\") pod \"redhat-operators-k74vs\" (UID: \"bc237982-4faf-431c-8d35-3cc0801bfef7\") " pod="openshift-marketplace/redhat-operators-k74vs"
Nov 21 20:09:06 crc kubenswrapper[4727]: I1121 20:09:06.839640 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc237982-4faf-431c-8d35-3cc0801bfef7-utilities\") pod \"redhat-operators-k74vs\" (UID: \"bc237982-4faf-431c-8d35-3cc0801bfef7\") " pod="openshift-marketplace/redhat-operators-k74vs"
Nov 21 20:09:06 crc kubenswrapper[4727]: I1121 20:09:06.841222 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc237982-4faf-431c-8d35-3cc0801bfef7-catalog-content\") pod \"redhat-operators-k74vs\" (UID: \"bc237982-4faf-431c-8d35-3cc0801bfef7\") " pod="openshift-marketplace/redhat-operators-k74vs"
Nov 21 20:09:06 crc kubenswrapper[4727]: I1121 20:09:06.858616 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtcmz\" (UniqueName: \"kubernetes.io/projected/bc237982-4faf-431c-8d35-3cc0801bfef7-kube-api-access-gtcmz\") pod \"redhat-operators-k74vs\" (UID: \"bc237982-4faf-431c-8d35-3cc0801bfef7\") " pod="openshift-marketplace/redhat-operators-k74vs"
Nov 21 20:09:06 crc kubenswrapper[4727]: I1121 20:09:06.915867 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nz2zh"]
Nov 21 20:09:06 crc kubenswrapper[4727]: I1121 20:09:06.924634 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 21 20:09:06 crc kubenswrapper[4727]: W1121 20:09:06.929082 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55fe4653_9eee_4a78_8d87_f368aca698b6.slice/crio-c475a96626aaba17153d94cfca8b2242a30a418431dea9122e64cbd2bb962094 WatchSource:0}: Error finding container c475a96626aaba17153d94cfca8b2242a30a418431dea9122e64cbd2bb962094: Status 404 returned error can't find the container with id c475a96626aaba17153d94cfca8b2242a30a418431dea9122e64cbd2bb962094
Nov 21 20:09:06 crc kubenswrapper[4727]: I1121 20:09:06.952708 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-fn2j6"
Nov 21 20:09:07 crc kubenswrapper[4727]: I1121 20:09:07.013160 4727 patch_prober.go:28] interesting pod/router-default-5444994796-dw7dg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 21 20:09:07 crc kubenswrapper[4727]: [-]has-synced failed: reason withheld
Nov 21 20:09:07 crc kubenswrapper[4727]: [+]process-running ok
Nov 21 20:09:07 crc kubenswrapper[4727]: healthz check failed
Nov 21 20:09:07 crc kubenswrapper[4727]: I1121 20:09:07.013245 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dw7dg" podUID="e3860251-af35-4f12-81ce-91855c94d8c8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 21 20:09:07 crc kubenswrapper[4727]: I1121 20:09:07.031711 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k74vs"
Nov 21 20:09:07 crc kubenswrapper[4727]: I1121 20:09:07.092358 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-xpjqs"
Nov 21 20:09:07 crc kubenswrapper[4727]: I1121 20:09:07.340439 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Nov 21 20:09:07 crc kubenswrapper[4727]: I1121 20:09:07.343578 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nz2zh" event={"ID":"55fe4653-9eee-4a78-8d87-f368aca698b6","Type":"ContainerStarted","Data":"6d5a383b906f9d6ea7f5a049215a770cf21c083a7ffd9adf0b40a65aa7b2d8a4"}
Nov 21 20:09:07 crc kubenswrapper[4727]: I1121 20:09:07.343665 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nz2zh" event={"ID":"55fe4653-9eee-4a78-8d87-f368aca698b6","Type":"ContainerStarted","Data":"c475a96626aaba17153d94cfca8b2242a30a418431dea9122e64cbd2bb962094"}
Nov 21 20:09:07 crc kubenswrapper[4727]: I1121 20:09:07.353158 4727 generic.go:334] "Generic (PLEG): container finished" podID="b1dab950-5149-4290-adfe-1924a0d5f745" containerID="efa67c1f81d33c8a059286d06fc7365035e0206cddb8672db278ba45b8be4e02" exitCode=0
Nov 21 20:09:07 crc kubenswrapper[4727]: I1121 20:09:07.353297 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sqllm" event={"ID":"b1dab950-5149-4290-adfe-1924a0d5f745","Type":"ContainerDied","Data":"efa67c1f81d33c8a059286d06fc7365035e0206cddb8672db278ba45b8be4e02"}
Nov 21 20:09:07 crc kubenswrapper[4727]: I1121 20:09:07.353347 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sqllm" event={"ID":"b1dab950-5149-4290-adfe-1924a0d5f745","Type":"ContainerStarted","Data":"1ab0b04b612400678dd37669ff5fbb86e3bba7482ae2927fcf036f56aee51318"}
Nov 21 20:09:07 crc kubenswrapper[4727]: I1121 20:09:07.359531 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jbvs8"
Nov 21 20:09:07 crc kubenswrapper[4727]: I1121 20:09:07.423174 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k74vs"]
Nov 21 20:09:07 crc kubenswrapper[4727]: I1121 20:09:07.776645 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 21 20:09:07 crc kubenswrapper[4727]: I1121 20:09:07.855688 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/473b8084-b901-4f0b-9aef-74e59805f083-kubelet-dir\") pod \"473b8084-b901-4f0b-9aef-74e59805f083\" (UID: \"473b8084-b901-4f0b-9aef-74e59805f083\") "
Nov 21 20:09:07 crc kubenswrapper[4727]: I1121 20:09:07.855769 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/473b8084-b901-4f0b-9aef-74e59805f083-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "473b8084-b901-4f0b-9aef-74e59805f083" (UID: "473b8084-b901-4f0b-9aef-74e59805f083"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 21 20:09:07 crc kubenswrapper[4727]: I1121 20:09:07.855789 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/473b8084-b901-4f0b-9aef-74e59805f083-kube-api-access\") pod \"473b8084-b901-4f0b-9aef-74e59805f083\" (UID: \"473b8084-b901-4f0b-9aef-74e59805f083\") "
Nov 21 20:09:07 crc kubenswrapper[4727]: I1121 20:09:07.856099 4727 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/473b8084-b901-4f0b-9aef-74e59805f083-kubelet-dir\") on node \"crc\" DevicePath \"\""
Nov 21 20:09:07 crc kubenswrapper[4727]: I1121 20:09:07.861165 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/473b8084-b901-4f0b-9aef-74e59805f083-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "473b8084-b901-4f0b-9aef-74e59805f083" (UID: "473b8084-b901-4f0b-9aef-74e59805f083"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 20:09:07 crc kubenswrapper[4727]: I1121 20:09:07.957837 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/473b8084-b901-4f0b-9aef-74e59805f083-kube-api-access\") on node \"crc\" DevicePath \"\""
Nov 21 20:09:08 crc kubenswrapper[4727]: I1121 20:09:08.006586 4727 patch_prober.go:28] interesting pod/router-default-5444994796-dw7dg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 21 20:09:08 crc kubenswrapper[4727]: [-]has-synced failed: reason withheld
Nov 21 20:09:08 crc kubenswrapper[4727]: [+]process-running ok
Nov 21 20:09:08 crc kubenswrapper[4727]: healthz check failed
Nov 21 20:09:08 crc kubenswrapper[4727]: I1121 20:09:08.006727 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dw7dg" podUID="e3860251-af35-4f12-81ce-91855c94d8c8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 21 20:09:08 crc kubenswrapper[4727]: I1121 20:09:08.407384 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"3e21f0ea-ebbf-4575-88cb-cd19fccb688f","Type":"ContainerStarted","Data":"01e674e90ffcb8233a29e7d312357fba7928bce268f3c0ff082c4f8cbdf6b634"}
Nov 21 20:09:08 crc kubenswrapper[4727]: I1121 20:09:08.407435 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"3e21f0ea-ebbf-4575-88cb-cd19fccb688f","Type":"ContainerStarted","Data":"fac87aca1bfb9bdfa36a4807a598f36942f56e746b3c243d72203b5cf9ce3ee6"}
Nov 21 20:09:08 crc kubenswrapper[4727]: I1121 20:09:08.413546 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"473b8084-b901-4f0b-9aef-74e59805f083","Type":"ContainerDied","Data":"1038fccea5445d3b95a1c6fd155226908ed2eec7f4a2f1a6e4605fcf802debab"}
Nov 21 20:09:08 crc kubenswrapper[4727]: I1121 20:09:08.413592 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1038fccea5445d3b95a1c6fd155226908ed2eec7f4a2f1a6e4605fcf802debab"
Nov 21 20:09:08 crc kubenswrapper[4727]: I1121 20:09:08.413676 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 21 20:09:08 crc kubenswrapper[4727]: I1121 20:09:08.420524 4727 generic.go:334] "Generic (PLEG): container finished" podID="55fe4653-9eee-4a78-8d87-f368aca698b6" containerID="6d5a383b906f9d6ea7f5a049215a770cf21c083a7ffd9adf0b40a65aa7b2d8a4" exitCode=0
Nov 21 20:09:08 crc kubenswrapper[4727]: I1121 20:09:08.420631 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nz2zh" event={"ID":"55fe4653-9eee-4a78-8d87-f368aca698b6","Type":"ContainerDied","Data":"6d5a383b906f9d6ea7f5a049215a770cf21c083a7ffd9adf0b40a65aa7b2d8a4"}
Nov 21 20:09:08 crc kubenswrapper[4727]: I1121 20:09:08.425168 4727 generic.go:334] "Generic (PLEG): container finished" podID="bc237982-4faf-431c-8d35-3cc0801bfef7" containerID="588e206ecd13d2ce0946e7df669373103e27e7c4a375e24320db4f04c9605e33" exitCode=0
Nov 21 20:09:08 crc kubenswrapper[4727]: I1121 20:09:08.425307 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k74vs" event={"ID":"bc237982-4faf-431c-8d35-3cc0801bfef7","Type":"ContainerDied","Data":"588e206ecd13d2ce0946e7df669373103e27e7c4a375e24320db4f04c9605e33"}
Nov 21 20:09:08 crc kubenswrapper[4727]: I1121 20:09:08.425377 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k74vs" event={"ID":"bc237982-4faf-431c-8d35-3cc0801bfef7","Type":"ContainerStarted","Data":"ceff7a3acec56fd1145c9ba3ea0ea9a4540695de1199166554450e7622a4cac6"}
Nov 21 20:09:08 crc kubenswrapper[4727]: I1121 20:09:08.437834 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.437813897 podStartE2EDuration="2.437813897s" podCreationTimestamp="2025-11-21 20:09:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:09:08.436145698 +0000 UTC m=+153.622330742" watchObservedRunningTime="2025-11-21 20:09:08.437813897 +0000 UTC m=+153.623998941"
Nov 21 20:09:09 crc kubenswrapper[4727]: I1121 20:09:09.006537 4727 patch_prober.go:28] interesting pod/router-default-5444994796-dw7dg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 21 20:09:09 crc kubenswrapper[4727]: [-]has-synced failed: reason withheld
Nov 21 20:09:09 crc kubenswrapper[4727]: [+]process-running ok
Nov 21 20:09:09 crc kubenswrapper[4727]: healthz check failed
Nov 21 20:09:09 crc kubenswrapper[4727]: I1121 20:09:09.006611 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dw7dg" podUID="e3860251-af35-4f12-81ce-91855c94d8c8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 21 20:09:09 crc kubenswrapper[4727]: I1121 20:09:09.435457 4727 generic.go:334] "Generic (PLEG): container finished" podID="3e21f0ea-ebbf-4575-88cb-cd19fccb688f" containerID="01e674e90ffcb8233a29e7d312357fba7928bce268f3c0ff082c4f8cbdf6b634" exitCode=0
Nov 21 20:09:09 crc kubenswrapper[4727]: I1121 20:09:09.435555 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"3e21f0ea-ebbf-4575-88cb-cd19fccb688f","Type":"ContainerDied","Data":"01e674e90ffcb8233a29e7d312357fba7928bce268f3c0ff082c4f8cbdf6b634"}
Nov 21 20:09:10 crc kubenswrapper[4727]: I1121 20:09:10.004944 4727 patch_prober.go:28] interesting pod/router-default-5444994796-dw7dg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 21 20:09:10 crc kubenswrapper[4727]: [-]has-synced failed: reason withheld
Nov 21 20:09:10 crc kubenswrapper[4727]: [+]process-running ok
Nov 21 20:09:10 crc kubenswrapper[4727]: healthz check failed
Nov 21 20:09:10 crc kubenswrapper[4727]: I1121 20:09:10.005016 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dw7dg" podUID="e3860251-af35-4f12-81ce-91855c94d8c8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 21 20:09:11 crc kubenswrapper[4727]: I1121 20:09:11.008654 4727 patch_prober.go:28] interesting pod/router-default-5444994796-dw7dg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 21 20:09:11 crc kubenswrapper[4727]: [-]has-synced failed: reason withheld
Nov 21 20:09:11 crc kubenswrapper[4727]: [+]process-running ok
Nov 21 20:09:11 crc kubenswrapper[4727]: healthz check failed
Nov 21 20:09:11 crc kubenswrapper[4727]: I1121 20:09:11.008778 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dw7dg" podUID="e3860251-af35-4f12-81ce-91855c94d8c8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 21 20:09:11 crc kubenswrapper[4727]: I1121 20:09:11.277816 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-4855z"
Nov 21 20:09:11 crc kubenswrapper[4727]: I1121 20:09:11.282674 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-4855z"
Nov 21 20:09:12 crc kubenswrapper[4727]: I1121 20:09:12.005611 4727 patch_prober.go:28] interesting pod/router-default-5444994796-dw7dg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 21 20:09:12 crc kubenswrapper[4727]: [-]has-synced failed: reason withheld
Nov 21 20:09:12 crc kubenswrapper[4727]: [+]process-running ok
Nov 21 20:09:12 crc kubenswrapper[4727]: healthz check failed
Nov 21 20:09:12 crc kubenswrapper[4727]: I1121 20:09:12.006192 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dw7dg" podUID="e3860251-af35-4f12-81ce-91855c94d8c8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 21 20:09:12 crc kubenswrapper[4727]: I1121 20:09:12.123459 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-lvhwh"
Nov 21 20:09:13 crc kubenswrapper[4727]: I1121 20:09:13.005631 4727 patch_prober.go:28] interesting pod/router-default-5444994796-dw7dg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 21 20:09:13 crc kubenswrapper[4727]: [-]has-synced failed: reason withheld
Nov 21 20:09:13 crc kubenswrapper[4727]: [+]process-running ok
Nov 21 20:09:13 crc kubenswrapper[4727]: healthz check failed
Nov 21 20:09:13 crc kubenswrapper[4727]: I1121 20:09:13.005704 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dw7dg" podUID="e3860251-af35-4f12-81ce-91855c94d8c8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 21 20:09:13 crc kubenswrapper[4727]: I1121 20:09:13.335906 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 21 20:09:13 crc kubenswrapper[4727]: I1121 20:09:13.336057 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 21 20:09:14 crc kubenswrapper[4727]: I1121 20:09:14.006831 4727 patch_prober.go:28] interesting pod/router-default-5444994796-dw7dg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 21 20:09:14 crc kubenswrapper[4727]: [-]has-synced failed: reason withheld
Nov 21 20:09:14 crc kubenswrapper[4727]: [+]process-running ok
Nov 21 20:09:14 crc kubenswrapper[4727]: healthz check failed
Nov 21 20:09:14 crc kubenswrapper[4727]: I1121 20:09:14.006896 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dw7dg" podUID="e3860251-af35-4f12-81ce-91855c94d8c8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 21 20:09:15 crc kubenswrapper[4727]: I1121 20:09:15.005489 4727 patch_prober.go:28] interesting pod/router-default-5444994796-dw7dg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 21 20:09:15 crc kubenswrapper[4727]: [-]has-synced failed: reason withheld
Nov 21 20:09:15 crc kubenswrapper[4727]: [+]process-running ok
Nov 21 20:09:15 crc kubenswrapper[4727]: healthz check failed
Nov 21 20:09:15 crc kubenswrapper[4727]: I1121 20:09:15.005863 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dw7dg" podUID="e3860251-af35-4f12-81ce-91855c94d8c8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 21 20:09:16 crc kubenswrapper[4727]: I1121 20:09:16.005987 4727 patch_prober.go:28] interesting pod/router-default-5444994796-dw7dg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 21 20:09:16 crc kubenswrapper[4727]: [-]has-synced failed: reason withheld
Nov 21 20:09:16 crc kubenswrapper[4727]: [+]process-running ok
Nov 21 20:09:16 crc kubenswrapper[4727]: healthz check failed
Nov 21 20:09:16 crc kubenswrapper[4727]: I1121 20:09:16.006051 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dw7dg" podUID="e3860251-af35-4f12-81ce-91855c94d8c8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 21 20:09:16 crc kubenswrapper[4727]: I1121 20:09:16.058525 4727 patch_prober.go:28] interesting pod/console-f9d7485db-fj2k4 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.7:8443/health\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body=
Nov 21 20:09:16 crc kubenswrapper[4727]: I1121 20:09:16.058596 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-fj2k4" podUID="dacc1ea4-7062-46ad-a784-70537e92dc51" containerName="console" probeResult="failure" output="Get \"https://10.217.0.7:8443/health\": dial tcp 10.217.0.7:8443: connect: connection refused"
Nov 21 20:09:16 crc kubenswrapper[4727]: I1121 20:09:16.186510 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-nzpzh"
Nov 21 20:09:16 crc kubenswrapper[4727]: I1121 20:09:16.615890 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f8318f96-4402-4567-a432-6cf3897e218d-metrics-certs\") pod \"network-metrics-daemon-rs9rv\" (UID: \"f8318f96-4402-4567-a432-6cf3897e218d\") " pod="openshift-multus/network-metrics-daemon-rs9rv"
Nov 21 20:09:16 crc kubenswrapper[4727]: I1121 20:09:16.637017 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f8318f96-4402-4567-a432-6cf3897e218d-metrics-certs\") pod \"network-metrics-daemon-rs9rv\" (UID: \"f8318f96-4402-4567-a432-6cf3897e218d\") " pod="openshift-multus/network-metrics-daemon-rs9rv"
Nov 21 20:09:16 crc kubenswrapper[4727]: I1121 20:09:16.828236 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rs9rv"
Nov 21 20:09:17 crc kubenswrapper[4727]: I1121 20:09:17.005839 4727 patch_prober.go:28] interesting pod/router-default-5444994796-dw7dg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 21 20:09:17 crc kubenswrapper[4727]: [-]has-synced failed: reason withheld
Nov 21 20:09:17 crc kubenswrapper[4727]: [+]process-running ok
Nov 21 20:09:17 crc kubenswrapper[4727]: healthz check failed
Nov 21 20:09:17 crc kubenswrapper[4727]: I1121 20:09:17.005922 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dw7dg" podUID="e3860251-af35-4f12-81ce-91855c94d8c8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 21 20:09:18 crc kubenswrapper[4727]: I1121 20:09:18.007526 4727 patch_prober.go:28] interesting pod/router-default-5444994796-dw7dg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 21 20:09:18 crc kubenswrapper[4727]: [+]has-synced ok
Nov 21 20:09:18 crc kubenswrapper[4727]: [+]process-running ok
Nov 21 20:09:18 crc kubenswrapper[4727]: healthz check failed
Nov 21 20:09:18 crc kubenswrapper[4727]: I1121 20:09:18.007593 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dw7dg" podUID="e3860251-af35-4f12-81ce-91855c94d8c8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 21 20:09:19 crc kubenswrapper[4727]: I1121 20:09:19.005553 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-dw7dg"
Nov 21 20:09:19 crc kubenswrapper[4727]: I1121 20:09:19.008625 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-dw7dg"
Nov 21 20:09:19 crc kubenswrapper[4727]: I1121 20:09:19.329984 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 21 20:09:19 crc kubenswrapper[4727]: I1121 20:09:19.459149 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3e21f0ea-ebbf-4575-88cb-cd19fccb688f-kubelet-dir\") pod \"3e21f0ea-ebbf-4575-88cb-cd19fccb688f\" (UID: \"3e21f0ea-ebbf-4575-88cb-cd19fccb688f\") "
Nov 21 20:09:19 crc kubenswrapper[4727]: I1121 20:09:19.459260 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3e21f0ea-ebbf-4575-88cb-cd19fccb688f-kube-api-access\") pod \"3e21f0ea-ebbf-4575-88cb-cd19fccb688f\" (UID: \"3e21f0ea-ebbf-4575-88cb-cd19fccb688f\") "
Nov 21 20:09:19 crc kubenswrapper[4727]: I1121 20:09:19.459284 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e21f0ea-ebbf-4575-88cb-cd19fccb688f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3e21f0ea-ebbf-4575-88cb-cd19fccb688f" (UID: "3e21f0ea-ebbf-4575-88cb-cd19fccb688f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 21 20:09:19 crc kubenswrapper[4727]: I1121 20:09:19.459657 4727 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3e21f0ea-ebbf-4575-88cb-cd19fccb688f-kubelet-dir\") on node \"crc\" DevicePath \"\""
Nov 21 20:09:19 crc kubenswrapper[4727]: I1121 20:09:19.468689 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e21f0ea-ebbf-4575-88cb-cd19fccb688f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3e21f0ea-ebbf-4575-88cb-cd19fccb688f" (UID: "3e21f0ea-ebbf-4575-88cb-cd19fccb688f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 20:09:19 crc kubenswrapper[4727]: I1121 20:09:19.526303 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"3e21f0ea-ebbf-4575-88cb-cd19fccb688f","Type":"ContainerDied","Data":"fac87aca1bfb9bdfa36a4807a598f36942f56e746b3c243d72203b5cf9ce3ee6"}
Nov 21 20:09:19 crc kubenswrapper[4727]: I1121 20:09:19.526368 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fac87aca1bfb9bdfa36a4807a598f36942f56e746b3c243d72203b5cf9ce3ee6"
Nov 21 20:09:19 crc kubenswrapper[4727]: I1121 20:09:19.526467 4727 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 21 20:09:19 crc kubenswrapper[4727]: I1121 20:09:19.560749 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3e21f0ea-ebbf-4575-88cb-cd19fccb688f-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 21 20:09:22 crc kubenswrapper[4727]: I1121 20:09:22.548813 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-pv5wp_200ac8c1-4bf1-4356-8091-9279fc08523f/cluster-samples-operator/0.log" Nov 21 20:09:22 crc kubenswrapper[4727]: I1121 20:09:22.549330 4727 generic.go:334] "Generic (PLEG): container finished" podID="200ac8c1-4bf1-4356-8091-9279fc08523f" containerID="33e690bc8fa5200bd4358fa8bb537295e60e2025b8baee0c8aface1563d719ff" exitCode=2 Nov 21 20:09:22 crc kubenswrapper[4727]: I1121 20:09:22.549389 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pv5wp" event={"ID":"200ac8c1-4bf1-4356-8091-9279fc08523f","Type":"ContainerDied","Data":"33e690bc8fa5200bd4358fa8bb537295e60e2025b8baee0c8aface1563d719ff"} Nov 21 20:09:22 crc kubenswrapper[4727]: I1121 20:09:22.550339 4727 scope.go:117] "RemoveContainer" containerID="33e690bc8fa5200bd4358fa8bb537295e60e2025b8baee0c8aface1563d719ff" Nov 21 20:09:23 crc kubenswrapper[4727]: I1121 20:09:23.438143 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-rs9rv"] Nov 21 20:09:24 crc kubenswrapper[4727]: I1121 20:09:24.187085 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-tlqhq" Nov 21 20:09:26 crc kubenswrapper[4727]: I1121 20:09:26.063273 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-fj2k4" Nov 21 20:09:26 crc 
kubenswrapper[4727]: I1121 20:09:26.071782 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-fj2k4" Nov 21 20:09:36 crc kubenswrapper[4727]: I1121 20:09:36.040826 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wmvqq" Nov 21 20:09:39 crc kubenswrapper[4727]: E1121 20:09:39.569412 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Nov 21 20:09:39 crc kubenswrapper[4727]: E1121 20:09:39.569866 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x8b5b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-nz2zh_openshift-marketplace(55fe4653-9eee-4a78-8d87-f368aca698b6): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 21 20:09:39 crc kubenswrapper[4727]: E1121 20:09:39.574223 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-nz2zh" podUID="55fe4653-9eee-4a78-8d87-f368aca698b6" Nov 21 20:09:42 crc 
kubenswrapper[4727]: I1121 20:09:42.897355 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 20:09:42 crc kubenswrapper[4727]: E1121 20:09:42.921826 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-nz2zh" podUID="55fe4653-9eee-4a78-8d87-f368aca698b6" Nov 21 20:09:42 crc kubenswrapper[4727]: W1121 20:09:42.933449 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf8318f96_4402_4567_a432_6cf3897e218d.slice/crio-808113a1a9524cfbd1b08eb401eb4a651bff1208f7dbfe0e12acd2805c7c631e WatchSource:0}: Error finding container 808113a1a9524cfbd1b08eb401eb4a651bff1208f7dbfe0e12acd2805c7c631e: Status 404 returned error can't find the container with id 808113a1a9524cfbd1b08eb401eb4a651bff1208f7dbfe0e12acd2805c7c631e Nov 21 20:09:42 crc kubenswrapper[4727]: E1121 20:09:42.991932 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 21 20:09:42 crc kubenswrapper[4727]: E1121 20:09:42.992131 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tdw9p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-hdpbg_openshift-marketplace(f364beec-bdfd-41b4-9562-26e8a0a06e27): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 21 20:09:42 crc kubenswrapper[4727]: E1121 20:09:42.993367 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-hdpbg" podUID="f364beec-bdfd-41b4-9562-26e8a0a06e27" Nov 21 20:09:43 crc 
kubenswrapper[4727]: E1121 20:09:43.015115 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 21 20:09:43 crc kubenswrapper[4727]: E1121 20:09:43.015340 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tbcqk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
community-operators-ddqc7_openshift-marketplace(b7b729cf-df56-4bfd-ad9a-8beb0e56ad6d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 21 20:09:43 crc kubenswrapper[4727]: E1121 20:09:43.016771 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-ddqc7" podUID="b7b729cf-df56-4bfd-ad9a-8beb0e56ad6d" Nov 21 20:09:43 crc kubenswrapper[4727]: E1121 20:09:43.179351 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 21 20:09:43 crc kubenswrapper[4727]: E1121 20:09:43.179520 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g46js,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-jhstm_openshift-marketplace(0da39011-6adf-4e8c-81f1-7074d0a7e97b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 21 20:09:43 crc kubenswrapper[4727]: E1121 20:09:43.180709 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-jhstm" podUID="0da39011-6adf-4e8c-81f1-7074d0a7e97b" Nov 21 20:09:43 crc 
kubenswrapper[4727]: I1121 20:09:43.335187 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 20:09:43 crc kubenswrapper[4727]: I1121 20:09:43.335265 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 20:09:43 crc kubenswrapper[4727]: I1121 20:09:43.682994 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-rs9rv" event={"ID":"f8318f96-4402-4567-a432-6cf3897e218d","Type":"ContainerStarted","Data":"808113a1a9524cfbd1b08eb401eb4a651bff1208f7dbfe0e12acd2805c7c631e"} Nov 21 20:09:44 crc kubenswrapper[4727]: E1121 20:09:44.614575 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ddqc7" podUID="b7b729cf-df56-4bfd-ad9a-8beb0e56ad6d" Nov 21 20:09:44 crc kubenswrapper[4727]: E1121 20:09:44.614726 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-jhstm" podUID="0da39011-6adf-4e8c-81f1-7074d0a7e97b" Nov 21 20:09:44 crc kubenswrapper[4727]: E1121 20:09:44.615770 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-hdpbg" podUID="f364beec-bdfd-41b4-9562-26e8a0a06e27" Nov 21 20:09:45 crc kubenswrapper[4727]: I1121 20:09:45.703750 4727 generic.go:334] "Generic (PLEG): container finished" podID="8bcc1896-81fc-4fce-bd16-aa2bc5350617" containerID="a878fb77411b2b97cf3168b3b947615aa9dcc32f74bc0668b733d050bd351a4d" exitCode=0 Nov 21 20:09:45 crc kubenswrapper[4727]: I1121 20:09:45.703835 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6kxgt" event={"ID":"8bcc1896-81fc-4fce-bd16-aa2bc5350617","Type":"ContainerDied","Data":"a878fb77411b2b97cf3168b3b947615aa9dcc32f74bc0668b733d050bd351a4d"} Nov 21 20:09:45 crc kubenswrapper[4727]: I1121 20:09:45.706789 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-rs9rv" event={"ID":"f8318f96-4402-4567-a432-6cf3897e218d","Type":"ContainerStarted","Data":"b042b7b673689db7053b2421c6298f5fc05f12d85aedce8e18ca339a3798cff5"} Nov 21 20:09:45 crc kubenswrapper[4727]: I1121 20:09:45.707174 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-rs9rv" event={"ID":"f8318f96-4402-4567-a432-6cf3897e218d","Type":"ContainerStarted","Data":"37bba0558ff58290bd0c57abee72cb09d109f1c1da09dd1829889c0191a85cc5"} Nov 21 20:09:45 crc kubenswrapper[4727]: I1121 20:09:45.709710 4727 generic.go:334] "Generic (PLEG): container finished" podID="bc237982-4faf-431c-8d35-3cc0801bfef7" containerID="d714fb66d4890e10e52eee8e5f704faf423038cf6b78da19c1447ca803aab966" exitCode=0 Nov 21 20:09:45 crc kubenswrapper[4727]: I1121 20:09:45.709799 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k74vs" 
event={"ID":"bc237982-4faf-431c-8d35-3cc0801bfef7","Type":"ContainerDied","Data":"d714fb66d4890e10e52eee8e5f704faf423038cf6b78da19c1447ca803aab966"} Nov 21 20:09:45 crc kubenswrapper[4727]: I1121 20:09:45.712195 4727 generic.go:334] "Generic (PLEG): container finished" podID="489bab79-05a2-4b26-afe2-6fcc468e4e97" containerID="6af23bb30c79d7f1bdfd634aa5a4357a209e09b125ea28409b9a808f9271f87c" exitCode=0 Nov 21 20:09:45 crc kubenswrapper[4727]: I1121 20:09:45.712249 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8c9mf" event={"ID":"489bab79-05a2-4b26-afe2-6fcc468e4e97","Type":"ContainerDied","Data":"6af23bb30c79d7f1bdfd634aa5a4357a209e09b125ea28409b9a808f9271f87c"} Nov 21 20:09:45 crc kubenswrapper[4727]: I1121 20:09:45.716370 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-pv5wp_200ac8c1-4bf1-4356-8091-9279fc08523f/cluster-samples-operator/0.log" Nov 21 20:09:45 crc kubenswrapper[4727]: I1121 20:09:45.716508 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pv5wp" event={"ID":"200ac8c1-4bf1-4356-8091-9279fc08523f","Type":"ContainerStarted","Data":"feddc3a08dadd2b328fda1f2e5b88cadb113f500db14c7d8c98e243876122f93"} Nov 21 20:09:45 crc kubenswrapper[4727]: I1121 20:09:45.720662 4727 generic.go:334] "Generic (PLEG): container finished" podID="b1dab950-5149-4290-adfe-1924a0d5f745" containerID="e40de0053ee79e00fe3eb7a607b71998df3bcdca7d6367968080e6a652737285" exitCode=0 Nov 21 20:09:45 crc kubenswrapper[4727]: I1121 20:09:45.720833 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sqllm" event={"ID":"b1dab950-5149-4290-adfe-1924a0d5f745","Type":"ContainerDied","Data":"e40de0053ee79e00fe3eb7a607b71998df3bcdca7d6367968080e6a652737285"} Nov 21 20:09:45 crc kubenswrapper[4727]: I1121 20:09:45.772342 4727 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-rs9rv" podStartSLOduration=171.772316169 podStartE2EDuration="2m51.772316169s" podCreationTimestamp="2025-11-21 20:06:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:09:45.766802588 +0000 UTC m=+190.952987652" watchObservedRunningTime="2025-11-21 20:09:45.772316169 +0000 UTC m=+190.958501223" Nov 21 20:09:46 crc kubenswrapper[4727]: I1121 20:09:46.744561 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8c9mf" event={"ID":"489bab79-05a2-4b26-afe2-6fcc468e4e97","Type":"ContainerStarted","Data":"569513e50153584cb50ff16948fe5ee626b37c1c4dd6943284345cd9813f9a37"} Nov 21 20:09:46 crc kubenswrapper[4727]: I1121 20:09:46.751221 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sqllm" event={"ID":"b1dab950-5149-4290-adfe-1924a0d5f745","Type":"ContainerStarted","Data":"ab0bd690b5456b39c641fc1714f57dfc4fc2ccc187aef05a1ef63a68ec9fa06e"} Nov 21 20:09:46 crc kubenswrapper[4727]: I1121 20:09:46.753932 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6kxgt" event={"ID":"8bcc1896-81fc-4fce-bd16-aa2bc5350617","Type":"ContainerStarted","Data":"f64ac89c79a98a60dabd276a475a82308befa98d62ee24100515baf63df7afed"} Nov 21 20:09:46 crc kubenswrapper[4727]: I1121 20:09:46.755726 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k74vs" event={"ID":"bc237982-4faf-431c-8d35-3cc0801bfef7","Type":"ContainerStarted","Data":"121d8e0d914d51a9f81aba672092f8205aa42a62d584ea7d510c43c7328d6be8"} Nov 21 20:09:46 crc kubenswrapper[4727]: I1121 20:09:46.768882 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8c9mf" 
podStartSLOduration=2.546720077 podStartE2EDuration="43.768855742s" podCreationTimestamp="2025-11-21 20:09:03 +0000 UTC" firstStartedPulling="2025-11-21 20:09:05.222233143 +0000 UTC m=+150.408418187" lastFinishedPulling="2025-11-21 20:09:46.444368808 +0000 UTC m=+191.630553852" observedRunningTime="2025-11-21 20:09:46.765047429 +0000 UTC m=+191.951232483" watchObservedRunningTime="2025-11-21 20:09:46.768855742 +0000 UTC m=+191.955040786" Nov 21 20:09:46 crc kubenswrapper[4727]: I1121 20:09:46.809872 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-k74vs" podStartSLOduration=3.095044717 podStartE2EDuration="40.809847951s" podCreationTimestamp="2025-11-21 20:09:06 +0000 UTC" firstStartedPulling="2025-11-21 20:09:08.434497539 +0000 UTC m=+153.620682593" lastFinishedPulling="2025-11-21 20:09:46.149300773 +0000 UTC m=+191.335485827" observedRunningTime="2025-11-21 20:09:46.805807343 +0000 UTC m=+191.991992397" watchObservedRunningTime="2025-11-21 20:09:46.809847951 +0000 UTC m=+191.996032995" Nov 21 20:09:46 crc kubenswrapper[4727]: I1121 20:09:46.810498 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sqllm" podStartSLOduration=2.957352052 podStartE2EDuration="41.810491471s" podCreationTimestamp="2025-11-21 20:09:05 +0000 UTC" firstStartedPulling="2025-11-21 20:09:07.357137882 +0000 UTC m=+152.543322926" lastFinishedPulling="2025-11-21 20:09:46.210277291 +0000 UTC m=+191.396462345" observedRunningTime="2025-11-21 20:09:46.787705499 +0000 UTC m=+191.973890543" watchObservedRunningTime="2025-11-21 20:09:46.810491471 +0000 UTC m=+191.996676525" Nov 21 20:09:46 crc kubenswrapper[4727]: I1121 20:09:46.828879 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6kxgt" podStartSLOduration=2.880234917 podStartE2EDuration="43.828857942s" podCreationTimestamp="2025-11-21 
20:09:03 +0000 UTC" firstStartedPulling="2025-11-21 20:09:05.232023482 +0000 UTC m=+150.418208516" lastFinishedPulling="2025-11-21 20:09:46.180646507 +0000 UTC m=+191.366831541" observedRunningTime="2025-11-21 20:09:46.82673002 +0000 UTC m=+192.012915064" watchObservedRunningTime="2025-11-21 20:09:46.828857942 +0000 UTC m=+192.015042986" Nov 21 20:09:47 crc kubenswrapper[4727]: I1121 20:09:47.032635 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-k74vs" Nov 21 20:09:47 crc kubenswrapper[4727]: I1121 20:09:47.032905 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k74vs" Nov 21 20:09:48 crc kubenswrapper[4727]: I1121 20:09:48.177903 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k74vs" podUID="bc237982-4faf-431c-8d35-3cc0801bfef7" containerName="registry-server" probeResult="failure" output=< Nov 21 20:09:48 crc kubenswrapper[4727]: timeout: failed to connect service ":50051" within 1s Nov 21 20:09:48 crc kubenswrapper[4727]: > Nov 21 20:09:53 crc kubenswrapper[4727]: I1121 20:09:53.616386 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6kxgt" Nov 21 20:09:53 crc kubenswrapper[4727]: I1121 20:09:53.617042 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6kxgt" Nov 21 20:09:53 crc kubenswrapper[4727]: I1121 20:09:53.663634 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6kxgt" Nov 21 20:09:53 crc kubenswrapper[4727]: I1121 20:09:53.856310 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6kxgt" Nov 21 20:09:53 crc kubenswrapper[4727]: I1121 20:09:53.976762 4727 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/certified-operators-8c9mf" Nov 21 20:09:53 crc kubenswrapper[4727]: I1121 20:09:53.976812 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8c9mf" Nov 21 20:09:54 crc kubenswrapper[4727]: I1121 20:09:54.011697 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8c9mf" Nov 21 20:09:54 crc kubenswrapper[4727]: I1121 20:09:54.883081 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8c9mf" Nov 21 20:09:55 crc kubenswrapper[4727]: I1121 20:09:55.163235 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-zqbhr"] Nov 21 20:09:55 crc kubenswrapper[4727]: I1121 20:09:55.946135 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sqllm" Nov 21 20:09:55 crc kubenswrapper[4727]: I1121 20:09:55.946773 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sqllm" Nov 21 20:09:55 crc kubenswrapper[4727]: I1121 20:09:55.990084 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sqllm" Nov 21 20:09:56 crc kubenswrapper[4727]: I1121 20:09:56.830678 4727 generic.go:334] "Generic (PLEG): container finished" podID="55fe4653-9eee-4a78-8d87-f368aca698b6" containerID="c3ae56203c760b59ccb7a3aaffc351c6f8017d89aa7371e8480a058b9e28259f" exitCode=0 Nov 21 20:09:56 crc kubenswrapper[4727]: I1121 20:09:56.830725 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nz2zh" event={"ID":"55fe4653-9eee-4a78-8d87-f368aca698b6","Type":"ContainerDied","Data":"c3ae56203c760b59ccb7a3aaffc351c6f8017d89aa7371e8480a058b9e28259f"} Nov 21 20:09:56 crc 
kubenswrapper[4727]: I1121 20:09:56.868663 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sqllm" Nov 21 20:09:57 crc kubenswrapper[4727]: I1121 20:09:57.085550 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-k74vs" Nov 21 20:09:57 crc kubenswrapper[4727]: I1121 20:09:57.138536 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-k74vs" Nov 21 20:09:57 crc kubenswrapper[4727]: I1121 20:09:57.292290 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8c9mf"] Nov 21 20:09:57 crc kubenswrapper[4727]: I1121 20:09:57.292596 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8c9mf" podUID="489bab79-05a2-4b26-afe2-6fcc468e4e97" containerName="registry-server" containerID="cri-o://569513e50153584cb50ff16948fe5ee626b37c1c4dd6943284345cd9813f9a37" gracePeriod=2 Nov 21 20:09:58 crc kubenswrapper[4727]: I1121 20:09:58.691144 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sqllm"] Nov 21 20:09:58 crc kubenswrapper[4727]: I1121 20:09:58.847546 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-sqllm" podUID="b1dab950-5149-4290-adfe-1924a0d5f745" containerName="registry-server" containerID="cri-o://ab0bd690b5456b39c641fc1714f57dfc4fc2ccc187aef05a1ef63a68ec9fa06e" gracePeriod=2 Nov 21 20:09:59 crc kubenswrapper[4727]: I1121 20:09:59.690583 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k74vs"] Nov 21 20:09:59 crc kubenswrapper[4727]: I1121 20:09:59.691341 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-k74vs" 
podUID="bc237982-4faf-431c-8d35-3cc0801bfef7" containerName="registry-server" containerID="cri-o://121d8e0d914d51a9f81aba672092f8205aa42a62d584ea7d510c43c7328d6be8" gracePeriod=2 Nov 21 20:09:59 crc kubenswrapper[4727]: I1121 20:09:59.855644 4727 generic.go:334] "Generic (PLEG): container finished" podID="b1dab950-5149-4290-adfe-1924a0d5f745" containerID="ab0bd690b5456b39c641fc1714f57dfc4fc2ccc187aef05a1ef63a68ec9fa06e" exitCode=0 Nov 21 20:09:59 crc kubenswrapper[4727]: I1121 20:09:59.855744 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sqllm" event={"ID":"b1dab950-5149-4290-adfe-1924a0d5f745","Type":"ContainerDied","Data":"ab0bd690b5456b39c641fc1714f57dfc4fc2ccc187aef05a1ef63a68ec9fa06e"} Nov 21 20:09:59 crc kubenswrapper[4727]: I1121 20:09:59.860346 4727 generic.go:334] "Generic (PLEG): container finished" podID="489bab79-05a2-4b26-afe2-6fcc468e4e97" containerID="569513e50153584cb50ff16948fe5ee626b37c1c4dd6943284345cd9813f9a37" exitCode=0 Nov 21 20:09:59 crc kubenswrapper[4727]: I1121 20:09:59.860406 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8c9mf" event={"ID":"489bab79-05a2-4b26-afe2-6fcc468e4e97","Type":"ContainerDied","Data":"569513e50153584cb50ff16948fe5ee626b37c1c4dd6943284345cd9813f9a37"} Nov 21 20:09:59 crc kubenswrapper[4727]: I1121 20:09:59.950172 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8c9mf" Nov 21 20:10:00 crc kubenswrapper[4727]: I1121 20:10:00.106330 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/489bab79-05a2-4b26-afe2-6fcc468e4e97-utilities\") pod \"489bab79-05a2-4b26-afe2-6fcc468e4e97\" (UID: \"489bab79-05a2-4b26-afe2-6fcc468e4e97\") " Nov 21 20:10:00 crc kubenswrapper[4727]: I1121 20:10:00.107203 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pl8r6\" (UniqueName: \"kubernetes.io/projected/489bab79-05a2-4b26-afe2-6fcc468e4e97-kube-api-access-pl8r6\") pod \"489bab79-05a2-4b26-afe2-6fcc468e4e97\" (UID: \"489bab79-05a2-4b26-afe2-6fcc468e4e97\") " Nov 21 20:10:00 crc kubenswrapper[4727]: I1121 20:10:00.107400 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/489bab79-05a2-4b26-afe2-6fcc468e4e97-catalog-content\") pod \"489bab79-05a2-4b26-afe2-6fcc468e4e97\" (UID: \"489bab79-05a2-4b26-afe2-6fcc468e4e97\") " Nov 21 20:10:00 crc kubenswrapper[4727]: I1121 20:10:00.108593 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/489bab79-05a2-4b26-afe2-6fcc468e4e97-utilities" (OuterVolumeSpecName: "utilities") pod "489bab79-05a2-4b26-afe2-6fcc468e4e97" (UID: "489bab79-05a2-4b26-afe2-6fcc468e4e97"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:10:00 crc kubenswrapper[4727]: I1121 20:10:00.113557 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/489bab79-05a2-4b26-afe2-6fcc468e4e97-kube-api-access-pl8r6" (OuterVolumeSpecName: "kube-api-access-pl8r6") pod "489bab79-05a2-4b26-afe2-6fcc468e4e97" (UID: "489bab79-05a2-4b26-afe2-6fcc468e4e97"). InnerVolumeSpecName "kube-api-access-pl8r6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:10:00 crc kubenswrapper[4727]: I1121 20:10:00.163095 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/489bab79-05a2-4b26-afe2-6fcc468e4e97-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "489bab79-05a2-4b26-afe2-6fcc468e4e97" (UID: "489bab79-05a2-4b26-afe2-6fcc468e4e97"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:10:00 crc kubenswrapper[4727]: I1121 20:10:00.209300 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/489bab79-05a2-4b26-afe2-6fcc468e4e97-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 20:10:00 crc kubenswrapper[4727]: I1121 20:10:00.209612 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/489bab79-05a2-4b26-afe2-6fcc468e4e97-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 20:10:00 crc kubenswrapper[4727]: I1121 20:10:00.209736 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pl8r6\" (UniqueName: \"kubernetes.io/projected/489bab79-05a2-4b26-afe2-6fcc468e4e97-kube-api-access-pl8r6\") on node \"crc\" DevicePath \"\"" Nov 21 20:10:00 crc kubenswrapper[4727]: I1121 20:10:00.325084 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sqllm" Nov 21 20:10:00 crc kubenswrapper[4727]: I1121 20:10:00.514099 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twn2m\" (UniqueName: \"kubernetes.io/projected/b1dab950-5149-4290-adfe-1924a0d5f745-kube-api-access-twn2m\") pod \"b1dab950-5149-4290-adfe-1924a0d5f745\" (UID: \"b1dab950-5149-4290-adfe-1924a0d5f745\") " Nov 21 20:10:00 crc kubenswrapper[4727]: I1121 20:10:00.514183 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1dab950-5149-4290-adfe-1924a0d5f745-utilities\") pod \"b1dab950-5149-4290-adfe-1924a0d5f745\" (UID: \"b1dab950-5149-4290-adfe-1924a0d5f745\") " Nov 21 20:10:00 crc kubenswrapper[4727]: I1121 20:10:00.514272 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1dab950-5149-4290-adfe-1924a0d5f745-catalog-content\") pod \"b1dab950-5149-4290-adfe-1924a0d5f745\" (UID: \"b1dab950-5149-4290-adfe-1924a0d5f745\") " Nov 21 20:10:00 crc kubenswrapper[4727]: I1121 20:10:00.515843 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1dab950-5149-4290-adfe-1924a0d5f745-utilities" (OuterVolumeSpecName: "utilities") pod "b1dab950-5149-4290-adfe-1924a0d5f745" (UID: "b1dab950-5149-4290-adfe-1924a0d5f745"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:10:00 crc kubenswrapper[4727]: I1121 20:10:00.520696 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1dab950-5149-4290-adfe-1924a0d5f745-kube-api-access-twn2m" (OuterVolumeSpecName: "kube-api-access-twn2m") pod "b1dab950-5149-4290-adfe-1924a0d5f745" (UID: "b1dab950-5149-4290-adfe-1924a0d5f745"). InnerVolumeSpecName "kube-api-access-twn2m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:10:00 crc kubenswrapper[4727]: I1121 20:10:00.538936 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1dab950-5149-4290-adfe-1924a0d5f745-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b1dab950-5149-4290-adfe-1924a0d5f745" (UID: "b1dab950-5149-4290-adfe-1924a0d5f745"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:10:00 crc kubenswrapper[4727]: I1121 20:10:00.616224 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1dab950-5149-4290-adfe-1924a0d5f745-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 20:10:00 crc kubenswrapper[4727]: I1121 20:10:00.616737 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1dab950-5149-4290-adfe-1924a0d5f745-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 20:10:00 crc kubenswrapper[4727]: I1121 20:10:00.616764 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-twn2m\" (UniqueName: \"kubernetes.io/projected/b1dab950-5149-4290-adfe-1924a0d5f745-kube-api-access-twn2m\") on node \"crc\" DevicePath \"\"" Nov 21 20:10:00 crc kubenswrapper[4727]: I1121 20:10:00.869719 4727 generic.go:334] "Generic (PLEG): container finished" podID="bc237982-4faf-431c-8d35-3cc0801bfef7" containerID="121d8e0d914d51a9f81aba672092f8205aa42a62d584ea7d510c43c7328d6be8" exitCode=0 Nov 21 20:10:00 crc kubenswrapper[4727]: I1121 20:10:00.869804 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k74vs" event={"ID":"bc237982-4faf-431c-8d35-3cc0801bfef7","Type":"ContainerDied","Data":"121d8e0d914d51a9f81aba672092f8205aa42a62d584ea7d510c43c7328d6be8"} Nov 21 20:10:00 crc kubenswrapper[4727]: I1121 20:10:00.875308 4727 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/certified-operators-8c9mf" event={"ID":"489bab79-05a2-4b26-afe2-6fcc468e4e97","Type":"ContainerDied","Data":"b670a6aa1c8eba06142bc924062e9f234bc7b625a10facf3698d17984ac76936"} Nov 21 20:10:00 crc kubenswrapper[4727]: I1121 20:10:00.875352 4727 scope.go:117] "RemoveContainer" containerID="569513e50153584cb50ff16948fe5ee626b37c1c4dd6943284345cd9813f9a37" Nov 21 20:10:00 crc kubenswrapper[4727]: I1121 20:10:00.876104 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8c9mf" Nov 21 20:10:00 crc kubenswrapper[4727]: I1121 20:10:00.879254 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sqllm" event={"ID":"b1dab950-5149-4290-adfe-1924a0d5f745","Type":"ContainerDied","Data":"1ab0b04b612400678dd37669ff5fbb86e3bba7482ae2927fcf036f56aee51318"} Nov 21 20:10:00 crc kubenswrapper[4727]: I1121 20:10:00.879512 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sqllm" Nov 21 20:10:00 crc kubenswrapper[4727]: I1121 20:10:00.946115 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sqllm"] Nov 21 20:10:00 crc kubenswrapper[4727]: I1121 20:10:00.958066 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sqllm"] Nov 21 20:10:00 crc kubenswrapper[4727]: I1121 20:10:00.964135 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8c9mf"] Nov 21 20:10:00 crc kubenswrapper[4727]: I1121 20:10:00.966469 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8c9mf"] Nov 21 20:10:01 crc kubenswrapper[4727]: I1121 20:10:01.024251 4727 scope.go:117] "RemoveContainer" containerID="6af23bb30c79d7f1bdfd634aa5a4357a209e09b125ea28409b9a808f9271f87c" Nov 21 20:10:01 crc kubenswrapper[4727]: I1121 20:10:01.512067 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="489bab79-05a2-4b26-afe2-6fcc468e4e97" path="/var/lib/kubelet/pods/489bab79-05a2-4b26-afe2-6fcc468e4e97/volumes" Nov 21 20:10:01 crc kubenswrapper[4727]: I1121 20:10:01.513321 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1dab950-5149-4290-adfe-1924a0d5f745" path="/var/lib/kubelet/pods/b1dab950-5149-4290-adfe-1924a0d5f745/volumes" Nov 21 20:10:01 crc kubenswrapper[4727]: I1121 20:10:01.554025 4727 scope.go:117] "RemoveContainer" containerID="d2e56e0d4a3b70d4adf78ba2c3016dd9061b7c2c75907b18295aec269743e3a4" Nov 21 20:10:01 crc kubenswrapper[4727]: I1121 20:10:01.620800 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k74vs" Nov 21 20:10:01 crc kubenswrapper[4727]: I1121 20:10:01.734700 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc237982-4faf-431c-8d35-3cc0801bfef7-catalog-content\") pod \"bc237982-4faf-431c-8d35-3cc0801bfef7\" (UID: \"bc237982-4faf-431c-8d35-3cc0801bfef7\") " Nov 21 20:10:01 crc kubenswrapper[4727]: I1121 20:10:01.734838 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtcmz\" (UniqueName: \"kubernetes.io/projected/bc237982-4faf-431c-8d35-3cc0801bfef7-kube-api-access-gtcmz\") pod \"bc237982-4faf-431c-8d35-3cc0801bfef7\" (UID: \"bc237982-4faf-431c-8d35-3cc0801bfef7\") " Nov 21 20:10:01 crc kubenswrapper[4727]: I1121 20:10:01.734942 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc237982-4faf-431c-8d35-3cc0801bfef7-utilities\") pod \"bc237982-4faf-431c-8d35-3cc0801bfef7\" (UID: \"bc237982-4faf-431c-8d35-3cc0801bfef7\") " Nov 21 20:10:01 crc kubenswrapper[4727]: I1121 20:10:01.735915 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc237982-4faf-431c-8d35-3cc0801bfef7-utilities" (OuterVolumeSpecName: "utilities") pod "bc237982-4faf-431c-8d35-3cc0801bfef7" (UID: "bc237982-4faf-431c-8d35-3cc0801bfef7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:10:01 crc kubenswrapper[4727]: I1121 20:10:01.750781 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc237982-4faf-431c-8d35-3cc0801bfef7-kube-api-access-gtcmz" (OuterVolumeSpecName: "kube-api-access-gtcmz") pod "bc237982-4faf-431c-8d35-3cc0801bfef7" (UID: "bc237982-4faf-431c-8d35-3cc0801bfef7"). InnerVolumeSpecName "kube-api-access-gtcmz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:10:01 crc kubenswrapper[4727]: I1121 20:10:01.835276 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc237982-4faf-431c-8d35-3cc0801bfef7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bc237982-4faf-431c-8d35-3cc0801bfef7" (UID: "bc237982-4faf-431c-8d35-3cc0801bfef7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:10:01 crc kubenswrapper[4727]: I1121 20:10:01.836635 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtcmz\" (UniqueName: \"kubernetes.io/projected/bc237982-4faf-431c-8d35-3cc0801bfef7-kube-api-access-gtcmz\") on node \"crc\" DevicePath \"\"" Nov 21 20:10:01 crc kubenswrapper[4727]: I1121 20:10:01.836670 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc237982-4faf-431c-8d35-3cc0801bfef7-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 20:10:01 crc kubenswrapper[4727]: I1121 20:10:01.836687 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc237982-4faf-431c-8d35-3cc0801bfef7-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 20:10:01 crc kubenswrapper[4727]: I1121 20:10:01.891669 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k74vs" event={"ID":"bc237982-4faf-431c-8d35-3cc0801bfef7","Type":"ContainerDied","Data":"ceff7a3acec56fd1145c9ba3ea0ea9a4540695de1199166554450e7622a4cac6"} Nov 21 20:10:01 crc kubenswrapper[4727]: I1121 20:10:01.891769 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k74vs" Nov 21 20:10:01 crc kubenswrapper[4727]: I1121 20:10:01.922631 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k74vs"] Nov 21 20:10:01 crc kubenswrapper[4727]: I1121 20:10:01.924912 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-k74vs"] Nov 21 20:10:02 crc kubenswrapper[4727]: I1121 20:10:02.707354 4727 scope.go:117] "RemoveContainer" containerID="ab0bd690b5456b39c641fc1714f57dfc4fc2ccc187aef05a1ef63a68ec9fa06e" Nov 21 20:10:02 crc kubenswrapper[4727]: I1121 20:10:02.725218 4727 scope.go:117] "RemoveContainer" containerID="e40de0053ee79e00fe3eb7a607b71998df3bcdca7d6367968080e6a652737285" Nov 21 20:10:02 crc kubenswrapper[4727]: I1121 20:10:02.795410 4727 scope.go:117] "RemoveContainer" containerID="efa67c1f81d33c8a059286d06fc7365035e0206cddb8672db278ba45b8be4e02" Nov 21 20:10:02 crc kubenswrapper[4727]: I1121 20:10:02.820700 4727 scope.go:117] "RemoveContainer" containerID="121d8e0d914d51a9f81aba672092f8205aa42a62d584ea7d510c43c7328d6be8" Nov 21 20:10:02 crc kubenswrapper[4727]: I1121 20:10:02.843902 4727 scope.go:117] "RemoveContainer" containerID="d714fb66d4890e10e52eee8e5f704faf423038cf6b78da19c1447ca803aab966" Nov 21 20:10:02 crc kubenswrapper[4727]: I1121 20:10:02.864779 4727 scope.go:117] "RemoveContainer" containerID="588e206ecd13d2ce0946e7df669373103e27e7c4a375e24320db4f04c9605e33" Nov 21 20:10:03 crc kubenswrapper[4727]: I1121 20:10:03.505710 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc237982-4faf-431c-8d35-3cc0801bfef7" path="/var/lib/kubelet/pods/bc237982-4faf-431c-8d35-3cc0801bfef7/volumes" Nov 21 20:10:03 crc kubenswrapper[4727]: I1121 20:10:03.911461 4727 generic.go:334] "Generic (PLEG): container finished" podID="0da39011-6adf-4e8c-81f1-7074d0a7e97b" containerID="218c9a24c65f014746dcb4cc70e33d6afac8711789741fe2d5b2429c701d924e" exitCode=0 Nov 
21 20:10:03 crc kubenswrapper[4727]: I1121 20:10:03.911535 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jhstm" event={"ID":"0da39011-6adf-4e8c-81f1-7074d0a7e97b","Type":"ContainerDied","Data":"218c9a24c65f014746dcb4cc70e33d6afac8711789741fe2d5b2429c701d924e"} Nov 21 20:10:03 crc kubenswrapper[4727]: I1121 20:10:03.922885 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nz2zh" event={"ID":"55fe4653-9eee-4a78-8d87-f368aca698b6","Type":"ContainerStarted","Data":"fc54c76e98267fe962caf07649fef9adb1d4530f13a03c8b1bf02ac23e0a3a23"} Nov 21 20:10:03 crc kubenswrapper[4727]: I1121 20:10:03.927312 4727 generic.go:334] "Generic (PLEG): container finished" podID="f364beec-bdfd-41b4-9562-26e8a0a06e27" containerID="2d410da2968a5cdd0996be4c1bedd3a6c4cf25073f4324cd9a1993d4c73f71ed" exitCode=0 Nov 21 20:10:03 crc kubenswrapper[4727]: I1121 20:10:03.927362 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hdpbg" event={"ID":"f364beec-bdfd-41b4-9562-26e8a0a06e27","Type":"ContainerDied","Data":"2d410da2968a5cdd0996be4c1bedd3a6c4cf25073f4324cd9a1993d4c73f71ed"} Nov 21 20:10:03 crc kubenswrapper[4727]: I1121 20:10:03.932724 4727 generic.go:334] "Generic (PLEG): container finished" podID="b7b729cf-df56-4bfd-ad9a-8beb0e56ad6d" containerID="9a340769ddefdc3e725c6a3ab9e2469721a2784ec27fb820a0dd59a1a0a63443" exitCode=0 Nov 21 20:10:03 crc kubenswrapper[4727]: I1121 20:10:03.932786 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ddqc7" event={"ID":"b7b729cf-df56-4bfd-ad9a-8beb0e56ad6d","Type":"ContainerDied","Data":"9a340769ddefdc3e725c6a3ab9e2469721a2784ec27fb820a0dd59a1a0a63443"} Nov 21 20:10:03 crc kubenswrapper[4727]: I1121 20:10:03.991075 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nz2zh" 
podStartSLOduration=2.640615635 podStartE2EDuration="57.991058164s" podCreationTimestamp="2025-11-21 20:09:06 +0000 UTC" firstStartedPulling="2025-11-21 20:09:07.357127642 +0000 UTC m=+152.543312686" lastFinishedPulling="2025-11-21 20:10:02.707570181 +0000 UTC m=+207.893755215" observedRunningTime="2025-11-21 20:10:03.987614617 +0000 UTC m=+209.173799661" watchObservedRunningTime="2025-11-21 20:10:03.991058164 +0000 UTC m=+209.177243198" Nov 21 20:10:04 crc kubenswrapper[4727]: I1121 20:10:04.942273 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ddqc7" event={"ID":"b7b729cf-df56-4bfd-ad9a-8beb0e56ad6d","Type":"ContainerStarted","Data":"40ceb9592a8a63f071b6efcf9b3a270296c482fda8bc51157dc05bae07a926d8"} Nov 21 20:10:04 crc kubenswrapper[4727]: I1121 20:10:04.945815 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jhstm" event={"ID":"0da39011-6adf-4e8c-81f1-7074d0a7e97b","Type":"ContainerStarted","Data":"28cb14c3bd2426a003ae85862cdd8c05e693bdd1d5b29f18ec03bb342a7939ee"} Nov 21 20:10:04 crc kubenswrapper[4727]: I1121 20:10:04.948258 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hdpbg" event={"ID":"f364beec-bdfd-41b4-9562-26e8a0a06e27","Type":"ContainerStarted","Data":"99c949f2a24cb89b28f5f1e8b4e043e75f04ee259a99c77ec4ddfb151ece1778"} Nov 21 20:10:04 crc kubenswrapper[4727]: I1121 20:10:04.959533 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ddqc7" podStartSLOduration=2.762459893 podStartE2EDuration="1m1.959509935s" podCreationTimestamp="2025-11-21 20:09:03 +0000 UTC" firstStartedPulling="2025-11-21 20:09:05.199490303 +0000 UTC m=+150.385675347" lastFinishedPulling="2025-11-21 20:10:04.396540345 +0000 UTC m=+209.582725389" observedRunningTime="2025-11-21 20:10:04.95804262 +0000 UTC m=+210.144227664" watchObservedRunningTime="2025-11-21 
20:10:04.959509935 +0000 UTC m=+210.145694979" Nov 21 20:10:04 crc kubenswrapper[4727]: I1121 20:10:04.977793 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jhstm" podStartSLOduration=1.9141912890000001 podStartE2EDuration="59.977772265s" podCreationTimestamp="2025-11-21 20:09:05 +0000 UTC" firstStartedPulling="2025-11-21 20:09:06.300286391 +0000 UTC m=+151.486471435" lastFinishedPulling="2025-11-21 20:10:04.363867367 +0000 UTC m=+209.550052411" observedRunningTime="2025-11-21 20:10:04.976280578 +0000 UTC m=+210.162465622" watchObservedRunningTime="2025-11-21 20:10:04.977772265 +0000 UTC m=+210.163957319" Nov 21 20:10:05 crc kubenswrapper[4727]: I1121 20:10:05.550240 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jhstm" Nov 21 20:10:05 crc kubenswrapper[4727]: I1121 20:10:05.550594 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jhstm" Nov 21 20:10:06 crc kubenswrapper[4727]: I1121 20:10:06.583664 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-jhstm" podUID="0da39011-6adf-4e8c-81f1-7074d0a7e97b" containerName="registry-server" probeResult="failure" output=< Nov 21 20:10:06 crc kubenswrapper[4727]: timeout: failed to connect service ":50051" within 1s Nov 21 20:10:06 crc kubenswrapper[4727]: > Nov 21 20:10:06 crc kubenswrapper[4727]: I1121 20:10:06.611202 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nz2zh" Nov 21 20:10:06 crc kubenswrapper[4727]: I1121 20:10:06.611322 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nz2zh" Nov 21 20:10:07 crc kubenswrapper[4727]: I1121 20:10:07.646373 4727 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-operators-nz2zh" podUID="55fe4653-9eee-4a78-8d87-f368aca698b6" containerName="registry-server" probeResult="failure" output=< Nov 21 20:10:07 crc kubenswrapper[4727]: timeout: failed to connect service ":50051" within 1s Nov 21 20:10:07 crc kubenswrapper[4727]: > Nov 21 20:10:13 crc kubenswrapper[4727]: I1121 20:10:13.335334 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 20:10:13 crc kubenswrapper[4727]: I1121 20:10:13.335745 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 20:10:13 crc kubenswrapper[4727]: I1121 20:10:13.335817 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" Nov 21 20:10:13 crc kubenswrapper[4727]: I1121 20:10:13.336781 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"443c4ad2ecea2e9d360b9300cb0c384fb5afc27e6ac44394f985adbeab354a9c"} pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 20:10:13 crc kubenswrapper[4727]: I1121 20:10:13.336950 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" 
containerID="cri-o://443c4ad2ecea2e9d360b9300cb0c384fb5afc27e6ac44394f985adbeab354a9c" gracePeriod=600 Nov 21 20:10:13 crc kubenswrapper[4727]: I1121 20:10:13.557851 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ddqc7" Nov 21 20:10:13 crc kubenswrapper[4727]: I1121 20:10:13.557901 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ddqc7" Nov 21 20:10:13 crc kubenswrapper[4727]: I1121 20:10:13.598562 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ddqc7" Nov 21 20:10:13 crc kubenswrapper[4727]: I1121 20:10:13.619158 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hdpbg" podStartSLOduration=11.422571732 podStartE2EDuration="1m10.619136064s" podCreationTimestamp="2025-11-21 20:09:03 +0000 UTC" firstStartedPulling="2025-11-21 20:09:05.106163319 +0000 UTC m=+150.292348353" lastFinishedPulling="2025-11-21 20:10:04.302727651 +0000 UTC m=+209.488912685" observedRunningTime="2025-11-21 20:10:04.998162701 +0000 UTC m=+210.184347735" watchObservedRunningTime="2025-11-21 20:10:13.619136064 +0000 UTC m=+218.805321118" Nov 21 20:10:13 crc kubenswrapper[4727]: I1121 20:10:13.847038 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hdpbg" Nov 21 20:10:13 crc kubenswrapper[4727]: I1121 20:10:13.847099 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hdpbg" Nov 21 20:10:13 crc kubenswrapper[4727]: I1121 20:10:13.889477 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hdpbg" Nov 21 20:10:14 crc kubenswrapper[4727]: I1121 20:10:14.000612 4727 generic.go:334] "Generic (PLEG): container finished" 
podID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerID="443c4ad2ecea2e9d360b9300cb0c384fb5afc27e6ac44394f985adbeab354a9c" exitCode=0 Nov 21 20:10:14 crc kubenswrapper[4727]: I1121 20:10:14.000820 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" event={"ID":"b58aef8f-f223-47d8-a2e6-4a80aeeeec42","Type":"ContainerDied","Data":"443c4ad2ecea2e9d360b9300cb0c384fb5afc27e6ac44394f985adbeab354a9c"} Nov 21 20:10:14 crc kubenswrapper[4727]: I1121 20:10:14.048701 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hdpbg" Nov 21 20:10:14 crc kubenswrapper[4727]: I1121 20:10:14.052255 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ddqc7" Nov 21 20:10:14 crc kubenswrapper[4727]: I1121 20:10:14.861629 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hdpbg"] Nov 21 20:10:15 crc kubenswrapper[4727]: I1121 20:10:15.008125 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" event={"ID":"b58aef8f-f223-47d8-a2e6-4a80aeeeec42","Type":"ContainerStarted","Data":"7b64e1aad32276f0362b5ee96e2453c18f2a0e0c6f7ada508ab81eab2ebb4231"} Nov 21 20:10:15 crc kubenswrapper[4727]: I1121 20:10:15.605124 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jhstm" Nov 21 20:10:15 crc kubenswrapper[4727]: I1121 20:10:15.657483 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jhstm" Nov 21 20:10:16 crc kubenswrapper[4727]: I1121 20:10:16.014140 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hdpbg" podUID="f364beec-bdfd-41b4-9562-26e8a0a06e27" 
containerName="registry-server" containerID="cri-o://99c949f2a24cb89b28f5f1e8b4e043e75f04ee259a99c77ec4ddfb151ece1778" gracePeriod=2 Nov 21 20:10:16 crc kubenswrapper[4727]: I1121 20:10:16.649353 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nz2zh" Nov 21 20:10:16 crc kubenswrapper[4727]: I1121 20:10:16.699284 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nz2zh" Nov 21 20:10:16 crc kubenswrapper[4727]: I1121 20:10:16.867344 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hdpbg" Nov 21 20:10:17 crc kubenswrapper[4727]: I1121 20:10:17.020469 4727 generic.go:334] "Generic (PLEG): container finished" podID="f364beec-bdfd-41b4-9562-26e8a0a06e27" containerID="99c949f2a24cb89b28f5f1e8b4e043e75f04ee259a99c77ec4ddfb151ece1778" exitCode=0 Nov 21 20:10:17 crc kubenswrapper[4727]: I1121 20:10:17.020553 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hdpbg" Nov 21 20:10:17 crc kubenswrapper[4727]: I1121 20:10:17.020617 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hdpbg" event={"ID":"f364beec-bdfd-41b4-9562-26e8a0a06e27","Type":"ContainerDied","Data":"99c949f2a24cb89b28f5f1e8b4e043e75f04ee259a99c77ec4ddfb151ece1778"} Nov 21 20:10:17 crc kubenswrapper[4727]: I1121 20:10:17.020755 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hdpbg" event={"ID":"f364beec-bdfd-41b4-9562-26e8a0a06e27","Type":"ContainerDied","Data":"dab348cfa1e3dd645e07e8fe28174e356919f7dbedde4d45f3ab2d17934cc7b3"} Nov 21 20:10:17 crc kubenswrapper[4727]: I1121 20:10:17.020783 4727 scope.go:117] "RemoveContainer" containerID="99c949f2a24cb89b28f5f1e8b4e043e75f04ee259a99c77ec4ddfb151ece1778" Nov 21 20:10:17 crc kubenswrapper[4727]: I1121 20:10:17.031265 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdw9p\" (UniqueName: \"kubernetes.io/projected/f364beec-bdfd-41b4-9562-26e8a0a06e27-kube-api-access-tdw9p\") pod \"f364beec-bdfd-41b4-9562-26e8a0a06e27\" (UID: \"f364beec-bdfd-41b4-9562-26e8a0a06e27\") " Nov 21 20:10:17 crc kubenswrapper[4727]: I1121 20:10:17.031431 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f364beec-bdfd-41b4-9562-26e8a0a06e27-catalog-content\") pod \"f364beec-bdfd-41b4-9562-26e8a0a06e27\" (UID: \"f364beec-bdfd-41b4-9562-26e8a0a06e27\") " Nov 21 20:10:17 crc kubenswrapper[4727]: I1121 20:10:17.031458 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f364beec-bdfd-41b4-9562-26e8a0a06e27-utilities\") pod \"f364beec-bdfd-41b4-9562-26e8a0a06e27\" (UID: \"f364beec-bdfd-41b4-9562-26e8a0a06e27\") " Nov 21 20:10:17 crc 
kubenswrapper[4727]: I1121 20:10:17.032418 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f364beec-bdfd-41b4-9562-26e8a0a06e27-utilities" (OuterVolumeSpecName: "utilities") pod "f364beec-bdfd-41b4-9562-26e8a0a06e27" (UID: "f364beec-bdfd-41b4-9562-26e8a0a06e27"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:10:17 crc kubenswrapper[4727]: I1121 20:10:17.041125 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f364beec-bdfd-41b4-9562-26e8a0a06e27-kube-api-access-tdw9p" (OuterVolumeSpecName: "kube-api-access-tdw9p") pod "f364beec-bdfd-41b4-9562-26e8a0a06e27" (UID: "f364beec-bdfd-41b4-9562-26e8a0a06e27"). InnerVolumeSpecName "kube-api-access-tdw9p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:10:17 crc kubenswrapper[4727]: I1121 20:10:17.052313 4727 scope.go:117] "RemoveContainer" containerID="2d410da2968a5cdd0996be4c1bedd3a6c4cf25073f4324cd9a1993d4c73f71ed" Nov 21 20:10:17 crc kubenswrapper[4727]: I1121 20:10:17.068884 4727 scope.go:117] "RemoveContainer" containerID="137b6e6548ccd659dc1c63db8bd8d924e1ab3aed5137b09ae5c9a225a63c70a8" Nov 21 20:10:17 crc kubenswrapper[4727]: I1121 20:10:17.085990 4727 scope.go:117] "RemoveContainer" containerID="99c949f2a24cb89b28f5f1e8b4e043e75f04ee259a99c77ec4ddfb151ece1778" Nov 21 20:10:17 crc kubenswrapper[4727]: E1121 20:10:17.086665 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99c949f2a24cb89b28f5f1e8b4e043e75f04ee259a99c77ec4ddfb151ece1778\": container with ID starting with 99c949f2a24cb89b28f5f1e8b4e043e75f04ee259a99c77ec4ddfb151ece1778 not found: ID does not exist" containerID="99c949f2a24cb89b28f5f1e8b4e043e75f04ee259a99c77ec4ddfb151ece1778" Nov 21 20:10:17 crc kubenswrapper[4727]: I1121 20:10:17.086714 4727 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"99c949f2a24cb89b28f5f1e8b4e043e75f04ee259a99c77ec4ddfb151ece1778"} err="failed to get container status \"99c949f2a24cb89b28f5f1e8b4e043e75f04ee259a99c77ec4ddfb151ece1778\": rpc error: code = NotFound desc = could not find container \"99c949f2a24cb89b28f5f1e8b4e043e75f04ee259a99c77ec4ddfb151ece1778\": container with ID starting with 99c949f2a24cb89b28f5f1e8b4e043e75f04ee259a99c77ec4ddfb151ece1778 not found: ID does not exist" Nov 21 20:10:17 crc kubenswrapper[4727]: I1121 20:10:17.086744 4727 scope.go:117] "RemoveContainer" containerID="2d410da2968a5cdd0996be4c1bedd3a6c4cf25073f4324cd9a1993d4c73f71ed" Nov 21 20:10:17 crc kubenswrapper[4727]: E1121 20:10:17.087075 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d410da2968a5cdd0996be4c1bedd3a6c4cf25073f4324cd9a1993d4c73f71ed\": container with ID starting with 2d410da2968a5cdd0996be4c1bedd3a6c4cf25073f4324cd9a1993d4c73f71ed not found: ID does not exist" containerID="2d410da2968a5cdd0996be4c1bedd3a6c4cf25073f4324cd9a1993d4c73f71ed" Nov 21 20:10:17 crc kubenswrapper[4727]: I1121 20:10:17.087111 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d410da2968a5cdd0996be4c1bedd3a6c4cf25073f4324cd9a1993d4c73f71ed"} err="failed to get container status \"2d410da2968a5cdd0996be4c1bedd3a6c4cf25073f4324cd9a1993d4c73f71ed\": rpc error: code = NotFound desc = could not find container \"2d410da2968a5cdd0996be4c1bedd3a6c4cf25073f4324cd9a1993d4c73f71ed\": container with ID starting with 2d410da2968a5cdd0996be4c1bedd3a6c4cf25073f4324cd9a1993d4c73f71ed not found: ID does not exist" Nov 21 20:10:17 crc kubenswrapper[4727]: I1121 20:10:17.087142 4727 scope.go:117] "RemoveContainer" containerID="137b6e6548ccd659dc1c63db8bd8d924e1ab3aed5137b09ae5c9a225a63c70a8" Nov 21 20:10:17 crc kubenswrapper[4727]: E1121 20:10:17.087388 4727 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"137b6e6548ccd659dc1c63db8bd8d924e1ab3aed5137b09ae5c9a225a63c70a8\": container with ID starting with 137b6e6548ccd659dc1c63db8bd8d924e1ab3aed5137b09ae5c9a225a63c70a8 not found: ID does not exist" containerID="137b6e6548ccd659dc1c63db8bd8d924e1ab3aed5137b09ae5c9a225a63c70a8" Nov 21 20:10:17 crc kubenswrapper[4727]: I1121 20:10:17.087605 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"137b6e6548ccd659dc1c63db8bd8d924e1ab3aed5137b09ae5c9a225a63c70a8"} err="failed to get container status \"137b6e6548ccd659dc1c63db8bd8d924e1ab3aed5137b09ae5c9a225a63c70a8\": rpc error: code = NotFound desc = could not find container \"137b6e6548ccd659dc1c63db8bd8d924e1ab3aed5137b09ae5c9a225a63c70a8\": container with ID starting with 137b6e6548ccd659dc1c63db8bd8d924e1ab3aed5137b09ae5c9a225a63c70a8 not found: ID does not exist" Nov 21 20:10:17 crc kubenswrapper[4727]: I1121 20:10:17.102157 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f364beec-bdfd-41b4-9562-26e8a0a06e27-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f364beec-bdfd-41b4-9562-26e8a0a06e27" (UID: "f364beec-bdfd-41b4-9562-26e8a0a06e27"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:10:17 crc kubenswrapper[4727]: I1121 20:10:17.133902 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f364beec-bdfd-41b4-9562-26e8a0a06e27-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 20:10:17 crc kubenswrapper[4727]: I1121 20:10:17.133979 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f364beec-bdfd-41b4-9562-26e8a0a06e27-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 20:10:17 crc kubenswrapper[4727]: I1121 20:10:17.133994 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdw9p\" (UniqueName: \"kubernetes.io/projected/f364beec-bdfd-41b4-9562-26e8a0a06e27-kube-api-access-tdw9p\") on node \"crc\" DevicePath \"\"" Nov 21 20:10:17 crc kubenswrapper[4727]: I1121 20:10:17.347879 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hdpbg"] Nov 21 20:10:17 crc kubenswrapper[4727]: I1121 20:10:17.358131 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hdpbg"] Nov 21 20:10:17 crc kubenswrapper[4727]: E1121 20:10:17.421151 4727 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf364beec_bdfd_41b4_9562_26e8a0a06e27.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf364beec_bdfd_41b4_9562_26e8a0a06e27.slice/crio-dab348cfa1e3dd645e07e8fe28174e356919f7dbedde4d45f3ab2d17934cc7b3\": RecentStats: unable to find data in memory cache]" Nov 21 20:10:17 crc kubenswrapper[4727]: I1121 20:10:17.506561 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f364beec-bdfd-41b4-9562-26e8a0a06e27" 
path="/var/lib/kubelet/pods/f364beec-bdfd-41b4-9562-26e8a0a06e27/volumes" Nov 21 20:10:20 crc kubenswrapper[4727]: I1121 20:10:20.200170 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-zqbhr" podUID="1599f256-87d8-47f4-a6fb-6cea3b58b242" containerName="oauth-openshift" containerID="cri-o://f7e1c2070c3fd44b9343c9313a06cef810fd449aab70f09882b5057b0a3f9255" gracePeriod=15 Nov 21 20:10:20 crc kubenswrapper[4727]: I1121 20:10:20.612302 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-zqbhr" Nov 21 20:10:20 crc kubenswrapper[4727]: I1121 20:10:20.681158 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1599f256-87d8-47f4-a6fb-6cea3b58b242-v4-0-config-user-template-error\") pod \"1599f256-87d8-47f4-a6fb-6cea3b58b242\" (UID: \"1599f256-87d8-47f4-a6fb-6cea3b58b242\") " Nov 21 20:10:20 crc kubenswrapper[4727]: I1121 20:10:20.681223 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1599f256-87d8-47f4-a6fb-6cea3b58b242-v4-0-config-system-router-certs\") pod \"1599f256-87d8-47f4-a6fb-6cea3b58b242\" (UID: \"1599f256-87d8-47f4-a6fb-6cea3b58b242\") " Nov 21 20:10:20 crc kubenswrapper[4727]: I1121 20:10:20.681254 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1599f256-87d8-47f4-a6fb-6cea3b58b242-v4-0-config-system-serving-cert\") pod \"1599f256-87d8-47f4-a6fb-6cea3b58b242\" (UID: \"1599f256-87d8-47f4-a6fb-6cea3b58b242\") " Nov 21 20:10:20 crc kubenswrapper[4727]: I1121 20:10:20.681297 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/1599f256-87d8-47f4-a6fb-6cea3b58b242-audit-policies\") pod \"1599f256-87d8-47f4-a6fb-6cea3b58b242\" (UID: \"1599f256-87d8-47f4-a6fb-6cea3b58b242\") " Nov 21 20:10:20 crc kubenswrapper[4727]: I1121 20:10:20.681330 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1599f256-87d8-47f4-a6fb-6cea3b58b242-v4-0-config-user-idp-0-file-data\") pod \"1599f256-87d8-47f4-a6fb-6cea3b58b242\" (UID: \"1599f256-87d8-47f4-a6fb-6cea3b58b242\") " Nov 21 20:10:20 crc kubenswrapper[4727]: I1121 20:10:20.681364 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1599f256-87d8-47f4-a6fb-6cea3b58b242-v4-0-config-system-ocp-branding-template\") pod \"1599f256-87d8-47f4-a6fb-6cea3b58b242\" (UID: \"1599f256-87d8-47f4-a6fb-6cea3b58b242\") " Nov 21 20:10:20 crc kubenswrapper[4727]: I1121 20:10:20.681395 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5l724\" (UniqueName: \"kubernetes.io/projected/1599f256-87d8-47f4-a6fb-6cea3b58b242-kube-api-access-5l724\") pod \"1599f256-87d8-47f4-a6fb-6cea3b58b242\" (UID: \"1599f256-87d8-47f4-a6fb-6cea3b58b242\") " Nov 21 20:10:20 crc kubenswrapper[4727]: I1121 20:10:20.681439 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1599f256-87d8-47f4-a6fb-6cea3b58b242-audit-dir\") pod \"1599f256-87d8-47f4-a6fb-6cea3b58b242\" (UID: \"1599f256-87d8-47f4-a6fb-6cea3b58b242\") " Nov 21 20:10:20 crc kubenswrapper[4727]: I1121 20:10:20.681473 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1599f256-87d8-47f4-a6fb-6cea3b58b242-v4-0-config-system-trusted-ca-bundle\") pod 
\"1599f256-87d8-47f4-a6fb-6cea3b58b242\" (UID: \"1599f256-87d8-47f4-a6fb-6cea3b58b242\") " Nov 21 20:10:20 crc kubenswrapper[4727]: I1121 20:10:20.681510 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1599f256-87d8-47f4-a6fb-6cea3b58b242-v4-0-config-system-service-ca\") pod \"1599f256-87d8-47f4-a6fb-6cea3b58b242\" (UID: \"1599f256-87d8-47f4-a6fb-6cea3b58b242\") " Nov 21 20:10:20 crc kubenswrapper[4727]: I1121 20:10:20.681557 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1599f256-87d8-47f4-a6fb-6cea3b58b242-v4-0-config-user-template-provider-selection\") pod \"1599f256-87d8-47f4-a6fb-6cea3b58b242\" (UID: \"1599f256-87d8-47f4-a6fb-6cea3b58b242\") " Nov 21 20:10:20 crc kubenswrapper[4727]: I1121 20:10:20.681587 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1599f256-87d8-47f4-a6fb-6cea3b58b242-v4-0-config-system-session\") pod \"1599f256-87d8-47f4-a6fb-6cea3b58b242\" (UID: \"1599f256-87d8-47f4-a6fb-6cea3b58b242\") " Nov 21 20:10:20 crc kubenswrapper[4727]: I1121 20:10:20.681628 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1599f256-87d8-47f4-a6fb-6cea3b58b242-v4-0-config-user-template-login\") pod \"1599f256-87d8-47f4-a6fb-6cea3b58b242\" (UID: \"1599f256-87d8-47f4-a6fb-6cea3b58b242\") " Nov 21 20:10:20 crc kubenswrapper[4727]: I1121 20:10:20.681668 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1599f256-87d8-47f4-a6fb-6cea3b58b242-v4-0-config-system-cliconfig\") pod \"1599f256-87d8-47f4-a6fb-6cea3b58b242\" (UID: 
\"1599f256-87d8-47f4-a6fb-6cea3b58b242\") " Nov 21 20:10:20 crc kubenswrapper[4727]: I1121 20:10:20.682373 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1599f256-87d8-47f4-a6fb-6cea3b58b242-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "1599f256-87d8-47f4-a6fb-6cea3b58b242" (UID: "1599f256-87d8-47f4-a6fb-6cea3b58b242"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 20:10:20 crc kubenswrapper[4727]: I1121 20:10:20.682919 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1599f256-87d8-47f4-a6fb-6cea3b58b242-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "1599f256-87d8-47f4-a6fb-6cea3b58b242" (UID: "1599f256-87d8-47f4-a6fb-6cea3b58b242"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:10:20 crc kubenswrapper[4727]: I1121 20:10:20.683105 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1599f256-87d8-47f4-a6fb-6cea3b58b242-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "1599f256-87d8-47f4-a6fb-6cea3b58b242" (UID: "1599f256-87d8-47f4-a6fb-6cea3b58b242"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:10:20 crc kubenswrapper[4727]: I1121 20:10:20.683537 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1599f256-87d8-47f4-a6fb-6cea3b58b242-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "1599f256-87d8-47f4-a6fb-6cea3b58b242" (UID: "1599f256-87d8-47f4-a6fb-6cea3b58b242"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:10:20 crc kubenswrapper[4727]: I1121 20:10:20.683848 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1599f256-87d8-47f4-a6fb-6cea3b58b242-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "1599f256-87d8-47f4-a6fb-6cea3b58b242" (UID: "1599f256-87d8-47f4-a6fb-6cea3b58b242"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:10:20 crc kubenswrapper[4727]: I1121 20:10:20.688645 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1599f256-87d8-47f4-a6fb-6cea3b58b242-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "1599f256-87d8-47f4-a6fb-6cea3b58b242" (UID: "1599f256-87d8-47f4-a6fb-6cea3b58b242"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:10:20 crc kubenswrapper[4727]: I1121 20:10:20.688921 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1599f256-87d8-47f4-a6fb-6cea3b58b242-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "1599f256-87d8-47f4-a6fb-6cea3b58b242" (UID: "1599f256-87d8-47f4-a6fb-6cea3b58b242"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:10:20 crc kubenswrapper[4727]: I1121 20:10:20.689242 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1599f256-87d8-47f4-a6fb-6cea3b58b242-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "1599f256-87d8-47f4-a6fb-6cea3b58b242" (UID: "1599f256-87d8-47f4-a6fb-6cea3b58b242"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:10:20 crc kubenswrapper[4727]: I1121 20:10:20.689799 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1599f256-87d8-47f4-a6fb-6cea3b58b242-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "1599f256-87d8-47f4-a6fb-6cea3b58b242" (UID: "1599f256-87d8-47f4-a6fb-6cea3b58b242"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:10:20 crc kubenswrapper[4727]: I1121 20:10:20.691127 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1599f256-87d8-47f4-a6fb-6cea3b58b242-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "1599f256-87d8-47f4-a6fb-6cea3b58b242" (UID: "1599f256-87d8-47f4-a6fb-6cea3b58b242"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:10:20 crc kubenswrapper[4727]: I1121 20:10:20.694207 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1599f256-87d8-47f4-a6fb-6cea3b58b242-kube-api-access-5l724" (OuterVolumeSpecName: "kube-api-access-5l724") pod "1599f256-87d8-47f4-a6fb-6cea3b58b242" (UID: "1599f256-87d8-47f4-a6fb-6cea3b58b242"). InnerVolumeSpecName "kube-api-access-5l724". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:10:20 crc kubenswrapper[4727]: I1121 20:10:20.695183 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1599f256-87d8-47f4-a6fb-6cea3b58b242-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "1599f256-87d8-47f4-a6fb-6cea3b58b242" (UID: "1599f256-87d8-47f4-a6fb-6cea3b58b242"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:10:20 crc kubenswrapper[4727]: I1121 20:10:20.695472 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1599f256-87d8-47f4-a6fb-6cea3b58b242-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "1599f256-87d8-47f4-a6fb-6cea3b58b242" (UID: "1599f256-87d8-47f4-a6fb-6cea3b58b242"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:10:20 crc kubenswrapper[4727]: I1121 20:10:20.696311 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1599f256-87d8-47f4-a6fb-6cea3b58b242-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "1599f256-87d8-47f4-a6fb-6cea3b58b242" (UID: "1599f256-87d8-47f4-a6fb-6cea3b58b242"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:10:20 crc kubenswrapper[4727]: I1121 20:10:20.783163 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1599f256-87d8-47f4-a6fb-6cea3b58b242-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 21 20:10:20 crc kubenswrapper[4727]: I1121 20:10:20.783221 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5l724\" (UniqueName: \"kubernetes.io/projected/1599f256-87d8-47f4-a6fb-6cea3b58b242-kube-api-access-5l724\") on node \"crc\" DevicePath \"\"" Nov 21 20:10:20 crc kubenswrapper[4727]: I1121 20:10:20.783242 4727 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1599f256-87d8-47f4-a6fb-6cea3b58b242-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 21 20:10:20 crc kubenswrapper[4727]: I1121 20:10:20.783262 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1599f256-87d8-47f4-a6fb-6cea3b58b242-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:10:20 crc kubenswrapper[4727]: I1121 20:10:20.783280 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1599f256-87d8-47f4-a6fb-6cea3b58b242-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 21 20:10:20 crc kubenswrapper[4727]: I1121 20:10:20.783300 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1599f256-87d8-47f4-a6fb-6cea3b58b242-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 21 20:10:20 crc kubenswrapper[4727]: I1121 20:10:20.783319 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" 
(UniqueName: \"kubernetes.io/secret/1599f256-87d8-47f4-a6fb-6cea3b58b242-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 21 20:10:20 crc kubenswrapper[4727]: I1121 20:10:20.783337 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1599f256-87d8-47f4-a6fb-6cea3b58b242-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 21 20:10:20 crc kubenswrapper[4727]: I1121 20:10:20.783355 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1599f256-87d8-47f4-a6fb-6cea3b58b242-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 21 20:10:20 crc kubenswrapper[4727]: I1121 20:10:20.783373 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1599f256-87d8-47f4-a6fb-6cea3b58b242-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 21 20:10:20 crc kubenswrapper[4727]: I1121 20:10:20.783400 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1599f256-87d8-47f4-a6fb-6cea3b58b242-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 21 20:10:20 crc kubenswrapper[4727]: I1121 20:10:20.783420 4727 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1599f256-87d8-47f4-a6fb-6cea3b58b242-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 20:10:20 crc kubenswrapper[4727]: I1121 20:10:20.783438 4727 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1599f256-87d8-47f4-a6fb-6cea3b58b242-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 21 20:10:20 crc kubenswrapper[4727]: I1121 20:10:20.783458 4727 reconciler_common.go:293] "Volume 
detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1599f256-87d8-47f4-a6fb-6cea3b58b242-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 21 20:10:21 crc kubenswrapper[4727]: I1121 20:10:21.048636 4727 generic.go:334] "Generic (PLEG): container finished" podID="1599f256-87d8-47f4-a6fb-6cea3b58b242" containerID="f7e1c2070c3fd44b9343c9313a06cef810fd449aab70f09882b5057b0a3f9255" exitCode=0 Nov 21 20:10:21 crc kubenswrapper[4727]: I1121 20:10:21.048707 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-zqbhr" event={"ID":"1599f256-87d8-47f4-a6fb-6cea3b58b242","Type":"ContainerDied","Data":"f7e1c2070c3fd44b9343c9313a06cef810fd449aab70f09882b5057b0a3f9255"} Nov 21 20:10:21 crc kubenswrapper[4727]: I1121 20:10:21.048762 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-zqbhr" event={"ID":"1599f256-87d8-47f4-a6fb-6cea3b58b242","Type":"ContainerDied","Data":"2cbde6e30d5ee8a1bbb0d07695b4e4389d6726fecd9f40b63aa1209742ded9e9"} Nov 21 20:10:21 crc kubenswrapper[4727]: I1121 20:10:21.048773 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-zqbhr" Nov 21 20:10:21 crc kubenswrapper[4727]: I1121 20:10:21.048788 4727 scope.go:117] "RemoveContainer" containerID="f7e1c2070c3fd44b9343c9313a06cef810fd449aab70f09882b5057b0a3f9255" Nov 21 20:10:21 crc kubenswrapper[4727]: I1121 20:10:21.082418 4727 scope.go:117] "RemoveContainer" containerID="f7e1c2070c3fd44b9343c9313a06cef810fd449aab70f09882b5057b0a3f9255" Nov 21 20:10:21 crc kubenswrapper[4727]: E1121 20:10:21.083169 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7e1c2070c3fd44b9343c9313a06cef810fd449aab70f09882b5057b0a3f9255\": container with ID starting with f7e1c2070c3fd44b9343c9313a06cef810fd449aab70f09882b5057b0a3f9255 not found: ID does not exist" containerID="f7e1c2070c3fd44b9343c9313a06cef810fd449aab70f09882b5057b0a3f9255" Nov 21 20:10:21 crc kubenswrapper[4727]: I1121 20:10:21.083212 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7e1c2070c3fd44b9343c9313a06cef810fd449aab70f09882b5057b0a3f9255"} err="failed to get container status \"f7e1c2070c3fd44b9343c9313a06cef810fd449aab70f09882b5057b0a3f9255\": rpc error: code = NotFound desc = could not find container \"f7e1c2070c3fd44b9343c9313a06cef810fd449aab70f09882b5057b0a3f9255\": container with ID starting with f7e1c2070c3fd44b9343c9313a06cef810fd449aab70f09882b5057b0a3f9255 not found: ID does not exist" Nov 21 20:10:21 crc kubenswrapper[4727]: I1121 20:10:21.098289 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-zqbhr"] Nov 21 20:10:21 crc kubenswrapper[4727]: I1121 20:10:21.103227 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-zqbhr"] Nov 21 20:10:21 crc kubenswrapper[4727]: I1121 20:10:21.509099 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1599f256-87d8-47f4-a6fb-6cea3b58b242" path="/var/lib/kubelet/pods/1599f256-87d8-47f4-a6fb-6cea3b58b242/volumes" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.659084 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7c4675448c-fz4sj"] Nov 21 20:10:25 crc kubenswrapper[4727]: E1121 20:10:25.659882 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1dab950-5149-4290-adfe-1924a0d5f745" containerName="registry-server" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.659904 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1dab950-5149-4290-adfe-1924a0d5f745" containerName="registry-server" Nov 21 20:10:25 crc kubenswrapper[4727]: E1121 20:10:25.659915 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f364beec-bdfd-41b4-9562-26e8a0a06e27" containerName="extract-utilities" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.659923 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="f364beec-bdfd-41b4-9562-26e8a0a06e27" containerName="extract-utilities" Nov 21 20:10:25 crc kubenswrapper[4727]: E1121 20:10:25.659932 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc237982-4faf-431c-8d35-3cc0801bfef7" containerName="extract-utilities" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.659939 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc237982-4faf-431c-8d35-3cc0801bfef7" containerName="extract-utilities" Nov 21 20:10:25 crc kubenswrapper[4727]: E1121 20:10:25.659975 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="489bab79-05a2-4b26-afe2-6fcc468e4e97" containerName="extract-content" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.659983 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="489bab79-05a2-4b26-afe2-6fcc468e4e97" containerName="extract-content" Nov 21 20:10:25 crc kubenswrapper[4727]: E1121 20:10:25.659997 4727 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="489bab79-05a2-4b26-afe2-6fcc468e4e97" containerName="extract-utilities" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.660004 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="489bab79-05a2-4b26-afe2-6fcc468e4e97" containerName="extract-utilities" Nov 21 20:10:25 crc kubenswrapper[4727]: E1121 20:10:25.660015 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f364beec-bdfd-41b4-9562-26e8a0a06e27" containerName="extract-content" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.660023 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="f364beec-bdfd-41b4-9562-26e8a0a06e27" containerName="extract-content" Nov 21 20:10:25 crc kubenswrapper[4727]: E1121 20:10:25.660037 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1dab950-5149-4290-adfe-1924a0d5f745" containerName="extract-content" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.660044 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1dab950-5149-4290-adfe-1924a0d5f745" containerName="extract-content" Nov 21 20:10:25 crc kubenswrapper[4727]: E1121 20:10:25.660055 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e21f0ea-ebbf-4575-88cb-cd19fccb688f" containerName="pruner" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.660062 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e21f0ea-ebbf-4575-88cb-cd19fccb688f" containerName="pruner" Nov 21 20:10:25 crc kubenswrapper[4727]: E1121 20:10:25.660074 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc237982-4faf-431c-8d35-3cc0801bfef7" containerName="extract-content" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.660085 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc237982-4faf-431c-8d35-3cc0801bfef7" containerName="extract-content" Nov 21 20:10:25 crc kubenswrapper[4727]: E1121 20:10:25.660095 4727 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="1599f256-87d8-47f4-a6fb-6cea3b58b242" containerName="oauth-openshift" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.660103 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="1599f256-87d8-47f4-a6fb-6cea3b58b242" containerName="oauth-openshift" Nov 21 20:10:25 crc kubenswrapper[4727]: E1121 20:10:25.660114 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1dab950-5149-4290-adfe-1924a0d5f745" containerName="extract-utilities" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.660122 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1dab950-5149-4290-adfe-1924a0d5f745" containerName="extract-utilities" Nov 21 20:10:25 crc kubenswrapper[4727]: E1121 20:10:25.660131 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc237982-4faf-431c-8d35-3cc0801bfef7" containerName="registry-server" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.660138 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc237982-4faf-431c-8d35-3cc0801bfef7" containerName="registry-server" Nov 21 20:10:25 crc kubenswrapper[4727]: E1121 20:10:25.660147 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="473b8084-b901-4f0b-9aef-74e59805f083" containerName="pruner" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.660154 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="473b8084-b901-4f0b-9aef-74e59805f083" containerName="pruner" Nov 21 20:10:25 crc kubenswrapper[4727]: E1121 20:10:25.660168 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="489bab79-05a2-4b26-afe2-6fcc468e4e97" containerName="registry-server" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.660175 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="489bab79-05a2-4b26-afe2-6fcc468e4e97" containerName="registry-server" Nov 21 20:10:25 crc kubenswrapper[4727]: E1121 20:10:25.660186 4727 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f364beec-bdfd-41b4-9562-26e8a0a06e27" containerName="registry-server" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.660194 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="f364beec-bdfd-41b4-9562-26e8a0a06e27" containerName="registry-server" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.660348 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="f364beec-bdfd-41b4-9562-26e8a0a06e27" containerName="registry-server" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.660362 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc237982-4faf-431c-8d35-3cc0801bfef7" containerName="registry-server" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.660375 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e21f0ea-ebbf-4575-88cb-cd19fccb688f" containerName="pruner" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.660388 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1dab950-5149-4290-adfe-1924a0d5f745" containerName="registry-server" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.660396 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="1599f256-87d8-47f4-a6fb-6cea3b58b242" containerName="oauth-openshift" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.660408 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="473b8084-b901-4f0b-9aef-74e59805f083" containerName="pruner" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.660418 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="489bab79-05a2-4b26-afe2-6fcc468e4e97" containerName="registry-server" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.660862 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-7c4675448c-fz4sj" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.663404 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.664134 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.664168 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.664385 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.664873 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.666084 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b84ea483-7909-4d3f-acae-58f241831e80-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7c4675448c-fz4sj\" (UID: \"b84ea483-7909-4d3f-acae-58f241831e80\") " pod="openshift-authentication/oauth-openshift-7c4675448c-fz4sj" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.666133 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b84ea483-7909-4d3f-acae-58f241831e80-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7c4675448c-fz4sj\" (UID: \"b84ea483-7909-4d3f-acae-58f241831e80\") " 
pod="openshift-authentication/oauth-openshift-7c4675448c-fz4sj" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.666170 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b84ea483-7909-4d3f-acae-58f241831e80-v4-0-config-system-router-certs\") pod \"oauth-openshift-7c4675448c-fz4sj\" (UID: \"b84ea483-7909-4d3f-acae-58f241831e80\") " pod="openshift-authentication/oauth-openshift-7c4675448c-fz4sj" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.666216 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b84ea483-7909-4d3f-acae-58f241831e80-v4-0-config-user-template-error\") pod \"oauth-openshift-7c4675448c-fz4sj\" (UID: \"b84ea483-7909-4d3f-acae-58f241831e80\") " pod="openshift-authentication/oauth-openshift-7c4675448c-fz4sj" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.666241 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b84ea483-7909-4d3f-acae-58f241831e80-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7c4675448c-fz4sj\" (UID: \"b84ea483-7909-4d3f-acae-58f241831e80\") " pod="openshift-authentication/oauth-openshift-7c4675448c-fz4sj" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.666295 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b84ea483-7909-4d3f-acae-58f241831e80-audit-policies\") pod \"oauth-openshift-7c4675448c-fz4sj\" (UID: \"b84ea483-7909-4d3f-acae-58f241831e80\") " pod="openshift-authentication/oauth-openshift-7c4675448c-fz4sj" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.666321 4727 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b84ea483-7909-4d3f-acae-58f241831e80-v4-0-config-user-template-login\") pod \"oauth-openshift-7c4675448c-fz4sj\" (UID: \"b84ea483-7909-4d3f-acae-58f241831e80\") " pod="openshift-authentication/oauth-openshift-7c4675448c-fz4sj" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.666341 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b84ea483-7909-4d3f-acae-58f241831e80-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7c4675448c-fz4sj\" (UID: \"b84ea483-7909-4d3f-acae-58f241831e80\") " pod="openshift-authentication/oauth-openshift-7c4675448c-fz4sj" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.666411 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b84ea483-7909-4d3f-acae-58f241831e80-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7c4675448c-fz4sj\" (UID: \"b84ea483-7909-4d3f-acae-58f241831e80\") " pod="openshift-authentication/oauth-openshift-7c4675448c-fz4sj" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.666447 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b84ea483-7909-4d3f-acae-58f241831e80-v4-0-config-system-session\") pod \"oauth-openshift-7c4675448c-fz4sj\" (UID: \"b84ea483-7909-4d3f-acae-58f241831e80\") " pod="openshift-authentication/oauth-openshift-7c4675448c-fz4sj" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.666468 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/b84ea483-7909-4d3f-acae-58f241831e80-v4-0-config-system-service-ca\") pod \"oauth-openshift-7c4675448c-fz4sj\" (UID: \"b84ea483-7909-4d3f-acae-58f241831e80\") " pod="openshift-authentication/oauth-openshift-7c4675448c-fz4sj" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.666496 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpss9\" (UniqueName: \"kubernetes.io/projected/b84ea483-7909-4d3f-acae-58f241831e80-kube-api-access-lpss9\") pod \"oauth-openshift-7c4675448c-fz4sj\" (UID: \"b84ea483-7909-4d3f-acae-58f241831e80\") " pod="openshift-authentication/oauth-openshift-7c4675448c-fz4sj" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.666528 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b84ea483-7909-4d3f-acae-58f241831e80-audit-dir\") pod \"oauth-openshift-7c4675448c-fz4sj\" (UID: \"b84ea483-7909-4d3f-acae-58f241831e80\") " pod="openshift-authentication/oauth-openshift-7c4675448c-fz4sj" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.666569 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b84ea483-7909-4d3f-acae-58f241831e80-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7c4675448c-fz4sj\" (UID: \"b84ea483-7909-4d3f-acae-58f241831e80\") " pod="openshift-authentication/oauth-openshift-7c4675448c-fz4sj" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.670233 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.670600 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 21 20:10:25 crc 
kubenswrapper[4727]: I1121 20:10:25.674929 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.675040 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.675093 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.675577 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.675905 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.686941 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7c4675448c-fz4sj"] Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.691166 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.698341 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.700286 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.767743 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b84ea483-7909-4d3f-acae-58f241831e80-v4-0-config-user-idp-0-file-data\") pod 
\"oauth-openshift-7c4675448c-fz4sj\" (UID: \"b84ea483-7909-4d3f-acae-58f241831e80\") " pod="openshift-authentication/oauth-openshift-7c4675448c-fz4sj" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.767780 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b84ea483-7909-4d3f-acae-58f241831e80-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7c4675448c-fz4sj\" (UID: \"b84ea483-7909-4d3f-acae-58f241831e80\") " pod="openshift-authentication/oauth-openshift-7c4675448c-fz4sj" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.767805 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b84ea483-7909-4d3f-acae-58f241831e80-v4-0-config-system-router-certs\") pod \"oauth-openshift-7c4675448c-fz4sj\" (UID: \"b84ea483-7909-4d3f-acae-58f241831e80\") " pod="openshift-authentication/oauth-openshift-7c4675448c-fz4sj" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.767829 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b84ea483-7909-4d3f-acae-58f241831e80-v4-0-config-user-template-error\") pod \"oauth-openshift-7c4675448c-fz4sj\" (UID: \"b84ea483-7909-4d3f-acae-58f241831e80\") " pod="openshift-authentication/oauth-openshift-7c4675448c-fz4sj" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.767846 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b84ea483-7909-4d3f-acae-58f241831e80-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7c4675448c-fz4sj\" (UID: \"b84ea483-7909-4d3f-acae-58f241831e80\") " pod="openshift-authentication/oauth-openshift-7c4675448c-fz4sj" Nov 21 20:10:25 crc 
kubenswrapper[4727]: I1121 20:10:25.767873 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b84ea483-7909-4d3f-acae-58f241831e80-audit-policies\") pod \"oauth-openshift-7c4675448c-fz4sj\" (UID: \"b84ea483-7909-4d3f-acae-58f241831e80\") " pod="openshift-authentication/oauth-openshift-7c4675448c-fz4sj" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.767890 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b84ea483-7909-4d3f-acae-58f241831e80-v4-0-config-user-template-login\") pod \"oauth-openshift-7c4675448c-fz4sj\" (UID: \"b84ea483-7909-4d3f-acae-58f241831e80\") " pod="openshift-authentication/oauth-openshift-7c4675448c-fz4sj" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.767906 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b84ea483-7909-4d3f-acae-58f241831e80-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7c4675448c-fz4sj\" (UID: \"b84ea483-7909-4d3f-acae-58f241831e80\") " pod="openshift-authentication/oauth-openshift-7c4675448c-fz4sj" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.767947 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b84ea483-7909-4d3f-acae-58f241831e80-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7c4675448c-fz4sj\" (UID: \"b84ea483-7909-4d3f-acae-58f241831e80\") " pod="openshift-authentication/oauth-openshift-7c4675448c-fz4sj" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.768251 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b84ea483-7909-4d3f-acae-58f241831e80-v4-0-config-system-session\") pod 
\"oauth-openshift-7c4675448c-fz4sj\" (UID: \"b84ea483-7909-4d3f-acae-58f241831e80\") " pod="openshift-authentication/oauth-openshift-7c4675448c-fz4sj" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.768309 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b84ea483-7909-4d3f-acae-58f241831e80-v4-0-config-system-service-ca\") pod \"oauth-openshift-7c4675448c-fz4sj\" (UID: \"b84ea483-7909-4d3f-acae-58f241831e80\") " pod="openshift-authentication/oauth-openshift-7c4675448c-fz4sj" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.768340 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpss9\" (UniqueName: \"kubernetes.io/projected/b84ea483-7909-4d3f-acae-58f241831e80-kube-api-access-lpss9\") pod \"oauth-openshift-7c4675448c-fz4sj\" (UID: \"b84ea483-7909-4d3f-acae-58f241831e80\") " pod="openshift-authentication/oauth-openshift-7c4675448c-fz4sj" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.768396 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b84ea483-7909-4d3f-acae-58f241831e80-audit-dir\") pod \"oauth-openshift-7c4675448c-fz4sj\" (UID: \"b84ea483-7909-4d3f-acae-58f241831e80\") " pod="openshift-authentication/oauth-openshift-7c4675448c-fz4sj" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.768553 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b84ea483-7909-4d3f-acae-58f241831e80-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7c4675448c-fz4sj\" (UID: \"b84ea483-7909-4d3f-acae-58f241831e80\") " pod="openshift-authentication/oauth-openshift-7c4675448c-fz4sj" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.769579 4727 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b84ea483-7909-4d3f-acae-58f241831e80-audit-policies\") pod \"oauth-openshift-7c4675448c-fz4sj\" (UID: \"b84ea483-7909-4d3f-acae-58f241831e80\") " pod="openshift-authentication/oauth-openshift-7c4675448c-fz4sj" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.769673 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b84ea483-7909-4d3f-acae-58f241831e80-audit-dir\") pod \"oauth-openshift-7c4675448c-fz4sj\" (UID: \"b84ea483-7909-4d3f-acae-58f241831e80\") " pod="openshift-authentication/oauth-openshift-7c4675448c-fz4sj" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.770674 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b84ea483-7909-4d3f-acae-58f241831e80-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7c4675448c-fz4sj\" (UID: \"b84ea483-7909-4d3f-acae-58f241831e80\") " pod="openshift-authentication/oauth-openshift-7c4675448c-fz4sj" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.771562 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b84ea483-7909-4d3f-acae-58f241831e80-v4-0-config-system-service-ca\") pod \"oauth-openshift-7c4675448c-fz4sj\" (UID: \"b84ea483-7909-4d3f-acae-58f241831e80\") " pod="openshift-authentication/oauth-openshift-7c4675448c-fz4sj" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.771647 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b84ea483-7909-4d3f-acae-58f241831e80-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7c4675448c-fz4sj\" (UID: \"b84ea483-7909-4d3f-acae-58f241831e80\") " pod="openshift-authentication/oauth-openshift-7c4675448c-fz4sj" 
Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.774034 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b84ea483-7909-4d3f-acae-58f241831e80-v4-0-config-user-template-error\") pod \"oauth-openshift-7c4675448c-fz4sj\" (UID: \"b84ea483-7909-4d3f-acae-58f241831e80\") " pod="openshift-authentication/oauth-openshift-7c4675448c-fz4sj" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.774240 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b84ea483-7909-4d3f-acae-58f241831e80-v4-0-config-system-session\") pod \"oauth-openshift-7c4675448c-fz4sj\" (UID: \"b84ea483-7909-4d3f-acae-58f241831e80\") " pod="openshift-authentication/oauth-openshift-7c4675448c-fz4sj" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.774342 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b84ea483-7909-4d3f-acae-58f241831e80-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7c4675448c-fz4sj\" (UID: \"b84ea483-7909-4d3f-acae-58f241831e80\") " pod="openshift-authentication/oauth-openshift-7c4675448c-fz4sj" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.774440 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b84ea483-7909-4d3f-acae-58f241831e80-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7c4675448c-fz4sj\" (UID: \"b84ea483-7909-4d3f-acae-58f241831e80\") " pod="openshift-authentication/oauth-openshift-7c4675448c-fz4sj" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.774698 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/b84ea483-7909-4d3f-acae-58f241831e80-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7c4675448c-fz4sj\" (UID: \"b84ea483-7909-4d3f-acae-58f241831e80\") " pod="openshift-authentication/oauth-openshift-7c4675448c-fz4sj" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.774774 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b84ea483-7909-4d3f-acae-58f241831e80-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7c4675448c-fz4sj\" (UID: \"b84ea483-7909-4d3f-acae-58f241831e80\") " pod="openshift-authentication/oauth-openshift-7c4675448c-fz4sj" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.775621 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b84ea483-7909-4d3f-acae-58f241831e80-v4-0-config-system-router-certs\") pod \"oauth-openshift-7c4675448c-fz4sj\" (UID: \"b84ea483-7909-4d3f-acae-58f241831e80\") " pod="openshift-authentication/oauth-openshift-7c4675448c-fz4sj" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.778348 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b84ea483-7909-4d3f-acae-58f241831e80-v4-0-config-user-template-login\") pod \"oauth-openshift-7c4675448c-fz4sj\" (UID: \"b84ea483-7909-4d3f-acae-58f241831e80\") " pod="openshift-authentication/oauth-openshift-7c4675448c-fz4sj" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.783514 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpss9\" (UniqueName: \"kubernetes.io/projected/b84ea483-7909-4d3f-acae-58f241831e80-kube-api-access-lpss9\") pod \"oauth-openshift-7c4675448c-fz4sj\" (UID: \"b84ea483-7909-4d3f-acae-58f241831e80\") " 
pod="openshift-authentication/oauth-openshift-7c4675448c-fz4sj" Nov 21 20:10:25 crc kubenswrapper[4727]: I1121 20:10:25.998208 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7c4675448c-fz4sj" Nov 21 20:10:26 crc kubenswrapper[4727]: I1121 20:10:26.228717 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7c4675448c-fz4sj"] Nov 21 20:10:27 crc kubenswrapper[4727]: I1121 20:10:27.086719 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7c4675448c-fz4sj" event={"ID":"b84ea483-7909-4d3f-acae-58f241831e80","Type":"ContainerStarted","Data":"0e0515a5b2d0c11ce6cb7d7c01f688c523cff04aa24b79dc945545820c5bd8c1"} Nov 21 20:10:27 crc kubenswrapper[4727]: I1121 20:10:27.088270 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7c4675448c-fz4sj" Nov 21 20:10:27 crc kubenswrapper[4727]: I1121 20:10:27.088375 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7c4675448c-fz4sj" event={"ID":"b84ea483-7909-4d3f-acae-58f241831e80","Type":"ContainerStarted","Data":"40bf13b7b967125b81605ad3e345ea405ce6cc60fb032f898868e2f5da5f2144"} Nov 21 20:10:27 crc kubenswrapper[4727]: I1121 20:10:27.098569 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7c4675448c-fz4sj" Nov 21 20:10:27 crc kubenswrapper[4727]: I1121 20:10:27.118158 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7c4675448c-fz4sj" podStartSLOduration=32.118134862 podStartE2EDuration="32.118134862s" podCreationTimestamp="2025-11-21 20:09:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:10:27.114125187 +0000 
UTC m=+232.300310311" watchObservedRunningTime="2025-11-21 20:10:27.118134862 +0000 UTC m=+232.304319916" Nov 21 20:10:44 crc kubenswrapper[4727]: I1121 20:10:44.813450 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6kxgt"] Nov 21 20:10:44 crc kubenswrapper[4727]: I1121 20:10:44.815618 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6kxgt" podUID="8bcc1896-81fc-4fce-bd16-aa2bc5350617" containerName="registry-server" containerID="cri-o://f64ac89c79a98a60dabd276a475a82308befa98d62ee24100515baf63df7afed" gracePeriod=30 Nov 21 20:10:44 crc kubenswrapper[4727]: I1121 20:10:44.826479 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ddqc7"] Nov 21 20:10:44 crc kubenswrapper[4727]: I1121 20:10:44.827054 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ddqc7" podUID="b7b729cf-df56-4bfd-ad9a-8beb0e56ad6d" containerName="registry-server" containerID="cri-o://40ceb9592a8a63f071b6efcf9b3a270296c482fda8bc51157dc05bae07a926d8" gracePeriod=30 Nov 21 20:10:44 crc kubenswrapper[4727]: I1121 20:10:44.829174 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fn2j6"] Nov 21 20:10:44 crc kubenswrapper[4727]: I1121 20:10:44.829423 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-fn2j6" podUID="42408e00-5bcd-4405-82fe-851c9b62b149" containerName="marketplace-operator" containerID="cri-o://de8be5db244198a1c714093616a8010bf778e13e40231fc9b8b2ae0fc1541d58" gracePeriod=30 Nov 21 20:10:44 crc kubenswrapper[4727]: I1121 20:10:44.845203 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lkvmp"] Nov 21 20:10:44 crc kubenswrapper[4727]: I1121 
20:10:44.846139 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-lkvmp" Nov 21 20:10:44 crc kubenswrapper[4727]: I1121 20:10:44.863810 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jhstm"] Nov 21 20:10:44 crc kubenswrapper[4727]: I1121 20:10:44.864119 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jhstm" podUID="0da39011-6adf-4e8c-81f1-7074d0a7e97b" containerName="registry-server" containerID="cri-o://28cb14c3bd2426a003ae85862cdd8c05e693bdd1d5b29f18ec03bb342a7939ee" gracePeriod=30 Nov 21 20:10:44 crc kubenswrapper[4727]: I1121 20:10:44.883200 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nz2zh"] Nov 21 20:10:44 crc kubenswrapper[4727]: I1121 20:10:44.883771 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nz2zh" podUID="55fe4653-9eee-4a78-8d87-f368aca698b6" containerName="registry-server" containerID="cri-o://fc54c76e98267fe962caf07649fef9adb1d4530f13a03c8b1bf02ac23e0a3a23" gracePeriod=30 Nov 21 20:10:44 crc kubenswrapper[4727]: I1121 20:10:44.912602 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lkvmp"] Nov 21 20:10:45 crc kubenswrapper[4727]: I1121 20:10:45.023864 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87d25\" (UniqueName: \"kubernetes.io/projected/ba6217c1-bde3-455b-a45d-bcf8001b7a16-kube-api-access-87d25\") pod \"marketplace-operator-79b997595-lkvmp\" (UID: \"ba6217c1-bde3-455b-a45d-bcf8001b7a16\") " pod="openshift-marketplace/marketplace-operator-79b997595-lkvmp" Nov 21 20:10:45 crc kubenswrapper[4727]: I1121 20:10:45.023916 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ba6217c1-bde3-455b-a45d-bcf8001b7a16-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-lkvmp\" (UID: \"ba6217c1-bde3-455b-a45d-bcf8001b7a16\") " pod="openshift-marketplace/marketplace-operator-79b997595-lkvmp" Nov 21 20:10:45 crc kubenswrapper[4727]: I1121 20:10:45.024024 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ba6217c1-bde3-455b-a45d-bcf8001b7a16-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-lkvmp\" (UID: \"ba6217c1-bde3-455b-a45d-bcf8001b7a16\") " pod="openshift-marketplace/marketplace-operator-79b997595-lkvmp" Nov 21 20:10:45 crc kubenswrapper[4727]: I1121 20:10:45.125255 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ba6217c1-bde3-455b-a45d-bcf8001b7a16-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-lkvmp\" (UID: \"ba6217c1-bde3-455b-a45d-bcf8001b7a16\") " pod="openshift-marketplace/marketplace-operator-79b997595-lkvmp" Nov 21 20:10:45 crc kubenswrapper[4727]: I1121 20:10:45.125306 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87d25\" (UniqueName: \"kubernetes.io/projected/ba6217c1-bde3-455b-a45d-bcf8001b7a16-kube-api-access-87d25\") pod \"marketplace-operator-79b997595-lkvmp\" (UID: \"ba6217c1-bde3-455b-a45d-bcf8001b7a16\") " pod="openshift-marketplace/marketplace-operator-79b997595-lkvmp" Nov 21 20:10:45 crc kubenswrapper[4727]: I1121 20:10:45.125330 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ba6217c1-bde3-455b-a45d-bcf8001b7a16-marketplace-operator-metrics\") pod 
\"marketplace-operator-79b997595-lkvmp\" (UID: \"ba6217c1-bde3-455b-a45d-bcf8001b7a16\") " pod="openshift-marketplace/marketplace-operator-79b997595-lkvmp" Nov 21 20:10:45 crc kubenswrapper[4727]: I1121 20:10:45.126830 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ba6217c1-bde3-455b-a45d-bcf8001b7a16-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-lkvmp\" (UID: \"ba6217c1-bde3-455b-a45d-bcf8001b7a16\") " pod="openshift-marketplace/marketplace-operator-79b997595-lkvmp" Nov 21 20:10:45 crc kubenswrapper[4727]: I1121 20:10:45.144244 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ba6217c1-bde3-455b-a45d-bcf8001b7a16-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-lkvmp\" (UID: \"ba6217c1-bde3-455b-a45d-bcf8001b7a16\") " pod="openshift-marketplace/marketplace-operator-79b997595-lkvmp" Nov 21 20:10:45 crc kubenswrapper[4727]: I1121 20:10:45.151051 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87d25\" (UniqueName: \"kubernetes.io/projected/ba6217c1-bde3-455b-a45d-bcf8001b7a16-kube-api-access-87d25\") pod \"marketplace-operator-79b997595-lkvmp\" (UID: \"ba6217c1-bde3-455b-a45d-bcf8001b7a16\") " pod="openshift-marketplace/marketplace-operator-79b997595-lkvmp" Nov 21 20:10:45 crc kubenswrapper[4727]: I1121 20:10:45.173114 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-lkvmp" Nov 21 20:10:45 crc kubenswrapper[4727]: I1121 20:10:45.188066 4727 generic.go:334] "Generic (PLEG): container finished" podID="0da39011-6adf-4e8c-81f1-7074d0a7e97b" containerID="28cb14c3bd2426a003ae85862cdd8c05e693bdd1d5b29f18ec03bb342a7939ee" exitCode=0 Nov 21 20:10:45 crc kubenswrapper[4727]: I1121 20:10:45.188164 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jhstm" event={"ID":"0da39011-6adf-4e8c-81f1-7074d0a7e97b","Type":"ContainerDied","Data":"28cb14c3bd2426a003ae85862cdd8c05e693bdd1d5b29f18ec03bb342a7939ee"} Nov 21 20:10:45 crc kubenswrapper[4727]: I1121 20:10:45.189339 4727 generic.go:334] "Generic (PLEG): container finished" podID="42408e00-5bcd-4405-82fe-851c9b62b149" containerID="de8be5db244198a1c714093616a8010bf778e13e40231fc9b8b2ae0fc1541d58" exitCode=0 Nov 21 20:10:45 crc kubenswrapper[4727]: I1121 20:10:45.189389 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fn2j6" event={"ID":"42408e00-5bcd-4405-82fe-851c9b62b149","Type":"ContainerDied","Data":"de8be5db244198a1c714093616a8010bf778e13e40231fc9b8b2ae0fc1541d58"} Nov 21 20:10:45 crc kubenswrapper[4727]: I1121 20:10:45.193303 4727 generic.go:334] "Generic (PLEG): container finished" podID="55fe4653-9eee-4a78-8d87-f368aca698b6" containerID="fc54c76e98267fe962caf07649fef9adb1d4530f13a03c8b1bf02ac23e0a3a23" exitCode=0 Nov 21 20:10:45 crc kubenswrapper[4727]: I1121 20:10:45.193405 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nz2zh" event={"ID":"55fe4653-9eee-4a78-8d87-f368aca698b6","Type":"ContainerDied","Data":"fc54c76e98267fe962caf07649fef9adb1d4530f13a03c8b1bf02ac23e0a3a23"} Nov 21 20:10:45 crc kubenswrapper[4727]: I1121 20:10:45.195208 4727 generic.go:334] "Generic (PLEG): container finished" podID="8bcc1896-81fc-4fce-bd16-aa2bc5350617" 
containerID="f64ac89c79a98a60dabd276a475a82308befa98d62ee24100515baf63df7afed" exitCode=0 Nov 21 20:10:45 crc kubenswrapper[4727]: I1121 20:10:45.195246 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6kxgt" event={"ID":"8bcc1896-81fc-4fce-bd16-aa2bc5350617","Type":"ContainerDied","Data":"f64ac89c79a98a60dabd276a475a82308befa98d62ee24100515baf63df7afed"} Nov 21 20:10:45 crc kubenswrapper[4727]: I1121 20:10:45.197145 4727 generic.go:334] "Generic (PLEG): container finished" podID="b7b729cf-df56-4bfd-ad9a-8beb0e56ad6d" containerID="40ceb9592a8a63f071b6efcf9b3a270296c482fda8bc51157dc05bae07a926d8" exitCode=0 Nov 21 20:10:45 crc kubenswrapper[4727]: I1121 20:10:45.197171 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ddqc7" event={"ID":"b7b729cf-df56-4bfd-ad9a-8beb0e56ad6d","Type":"ContainerDied","Data":"40ceb9592a8a63f071b6efcf9b3a270296c482fda8bc51157dc05bae07a926d8"} Nov 21 20:10:45 crc kubenswrapper[4727]: I1121 20:10:45.248681 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fn2j6" Nov 21 20:10:45 crc kubenswrapper[4727]: I1121 20:10:45.310930 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6kxgt" Nov 21 20:10:45 crc kubenswrapper[4727]: I1121 20:10:45.336401 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nz2zh" Nov 21 20:10:45 crc kubenswrapper[4727]: I1121 20:10:45.349932 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jhstm" Nov 21 20:10:45 crc kubenswrapper[4727]: I1121 20:10:45.353499 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ddqc7" Nov 21 20:10:45 crc kubenswrapper[4727]: I1121 20:10:45.431639 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92xx2\" (UniqueName: \"kubernetes.io/projected/8bcc1896-81fc-4fce-bd16-aa2bc5350617-kube-api-access-92xx2\") pod \"8bcc1896-81fc-4fce-bd16-aa2bc5350617\" (UID: \"8bcc1896-81fc-4fce-bd16-aa2bc5350617\") " Nov 21 20:10:45 crc kubenswrapper[4727]: I1121 20:10:45.431688 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bcc1896-81fc-4fce-bd16-aa2bc5350617-utilities\") pod \"8bcc1896-81fc-4fce-bd16-aa2bc5350617\" (UID: \"8bcc1896-81fc-4fce-bd16-aa2bc5350617\") " Nov 21 20:10:45 crc kubenswrapper[4727]: I1121 20:10:45.431733 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vx2fd\" (UniqueName: \"kubernetes.io/projected/42408e00-5bcd-4405-82fe-851c9b62b149-kube-api-access-vx2fd\") pod \"42408e00-5bcd-4405-82fe-851c9b62b149\" (UID: \"42408e00-5bcd-4405-82fe-851c9b62b149\") " Nov 21 20:10:45 crc kubenswrapper[4727]: I1121 20:10:45.431760 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bcc1896-81fc-4fce-bd16-aa2bc5350617-catalog-content\") pod \"8bcc1896-81fc-4fce-bd16-aa2bc5350617\" (UID: \"8bcc1896-81fc-4fce-bd16-aa2bc5350617\") " Nov 21 20:10:45 crc kubenswrapper[4727]: I1121 20:10:45.431785 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/42408e00-5bcd-4405-82fe-851c9b62b149-marketplace-operator-metrics\") pod \"42408e00-5bcd-4405-82fe-851c9b62b149\" (UID: \"42408e00-5bcd-4405-82fe-851c9b62b149\") " Nov 21 20:10:45 crc kubenswrapper[4727]: I1121 20:10:45.431818 4727 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/42408e00-5bcd-4405-82fe-851c9b62b149-marketplace-trusted-ca\") pod \"42408e00-5bcd-4405-82fe-851c9b62b149\" (UID: \"42408e00-5bcd-4405-82fe-851c9b62b149\") " Nov 21 20:10:45 crc kubenswrapper[4727]: I1121 20:10:45.431837 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55fe4653-9eee-4a78-8d87-f368aca698b6-catalog-content\") pod \"55fe4653-9eee-4a78-8d87-f368aca698b6\" (UID: \"55fe4653-9eee-4a78-8d87-f368aca698b6\") " Nov 21 20:10:45 crc kubenswrapper[4727]: I1121 20:10:45.431872 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55fe4653-9eee-4a78-8d87-f368aca698b6-utilities\") pod \"55fe4653-9eee-4a78-8d87-f368aca698b6\" (UID: \"55fe4653-9eee-4a78-8d87-f368aca698b6\") " Nov 21 20:10:45 crc kubenswrapper[4727]: I1121 20:10:45.431892 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8b5b\" (UniqueName: \"kubernetes.io/projected/55fe4653-9eee-4a78-8d87-f368aca698b6-kube-api-access-x8b5b\") pod \"55fe4653-9eee-4a78-8d87-f368aca698b6\" (UID: \"55fe4653-9eee-4a78-8d87-f368aca698b6\") " Nov 21 20:10:45 crc kubenswrapper[4727]: I1121 20:10:45.433106 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bcc1896-81fc-4fce-bd16-aa2bc5350617-utilities" (OuterVolumeSpecName: "utilities") pod "8bcc1896-81fc-4fce-bd16-aa2bc5350617" (UID: "8bcc1896-81fc-4fce-bd16-aa2bc5350617"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:10:45 crc kubenswrapper[4727]: I1121 20:10:45.433688 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42408e00-5bcd-4405-82fe-851c9b62b149-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "42408e00-5bcd-4405-82fe-851c9b62b149" (UID: "42408e00-5bcd-4405-82fe-851c9b62b149"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:10:45 crc kubenswrapper[4727]: I1121 20:10:45.434392 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55fe4653-9eee-4a78-8d87-f368aca698b6-utilities" (OuterVolumeSpecName: "utilities") pod "55fe4653-9eee-4a78-8d87-f368aca698b6" (UID: "55fe4653-9eee-4a78-8d87-f368aca698b6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:10:45 crc kubenswrapper[4727]: I1121 20:10:45.437717 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bcc1896-81fc-4fce-bd16-aa2bc5350617-kube-api-access-92xx2" (OuterVolumeSpecName: "kube-api-access-92xx2") pod "8bcc1896-81fc-4fce-bd16-aa2bc5350617" (UID: "8bcc1896-81fc-4fce-bd16-aa2bc5350617"). InnerVolumeSpecName "kube-api-access-92xx2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:10:45 crc kubenswrapper[4727]: I1121 20:10:45.437993 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42408e00-5bcd-4405-82fe-851c9b62b149-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "42408e00-5bcd-4405-82fe-851c9b62b149" (UID: "42408e00-5bcd-4405-82fe-851c9b62b149"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:10:45 crc kubenswrapper[4727]: I1121 20:10:45.438109 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42408e00-5bcd-4405-82fe-851c9b62b149-kube-api-access-vx2fd" (OuterVolumeSpecName: "kube-api-access-vx2fd") pod "42408e00-5bcd-4405-82fe-851c9b62b149" (UID: "42408e00-5bcd-4405-82fe-851c9b62b149"). InnerVolumeSpecName "kube-api-access-vx2fd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:10:45 crc kubenswrapper[4727]: I1121 20:10:45.438155 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55fe4653-9eee-4a78-8d87-f368aca698b6-kube-api-access-x8b5b" (OuterVolumeSpecName: "kube-api-access-x8b5b") pod "55fe4653-9eee-4a78-8d87-f368aca698b6" (UID: "55fe4653-9eee-4a78-8d87-f368aca698b6"). InnerVolumeSpecName "kube-api-access-x8b5b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:10:45 crc kubenswrapper[4727]: I1121 20:10:45.496585 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bcc1896-81fc-4fce-bd16-aa2bc5350617-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8bcc1896-81fc-4fce-bd16-aa2bc5350617" (UID: "8bcc1896-81fc-4fce-bd16-aa2bc5350617"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:10:45 crc kubenswrapper[4727]: I1121 20:10:45.525842 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55fe4653-9eee-4a78-8d87-f368aca698b6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "55fe4653-9eee-4a78-8d87-f368aca698b6" (UID: "55fe4653-9eee-4a78-8d87-f368aca698b6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:10:45 crc kubenswrapper[4727]: I1121 20:10:45.533167 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbcqk\" (UniqueName: \"kubernetes.io/projected/b7b729cf-df56-4bfd-ad9a-8beb0e56ad6d-kube-api-access-tbcqk\") pod \"b7b729cf-df56-4bfd-ad9a-8beb0e56ad6d\" (UID: \"b7b729cf-df56-4bfd-ad9a-8beb0e56ad6d\") " Nov 21 20:10:45 crc kubenswrapper[4727]: I1121 20:10:45.533234 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7b729cf-df56-4bfd-ad9a-8beb0e56ad6d-utilities\") pod \"b7b729cf-df56-4bfd-ad9a-8beb0e56ad6d\" (UID: \"b7b729cf-df56-4bfd-ad9a-8beb0e56ad6d\") " Nov 21 20:10:45 crc kubenswrapper[4727]: I1121 20:10:45.533282 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7b729cf-df56-4bfd-ad9a-8beb0e56ad6d-catalog-content\") pod \"b7b729cf-df56-4bfd-ad9a-8beb0e56ad6d\" (UID: \"b7b729cf-df56-4bfd-ad9a-8beb0e56ad6d\") " Nov 21 20:10:45 crc kubenswrapper[4727]: I1121 20:10:45.533307 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0da39011-6adf-4e8c-81f1-7074d0a7e97b-catalog-content\") pod \"0da39011-6adf-4e8c-81f1-7074d0a7e97b\" (UID: \"0da39011-6adf-4e8c-81f1-7074d0a7e97b\") " Nov 21 20:10:45 crc kubenswrapper[4727]: I1121 20:10:45.533331 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g46js\" (UniqueName: \"kubernetes.io/projected/0da39011-6adf-4e8c-81f1-7074d0a7e97b-kube-api-access-g46js\") pod \"0da39011-6adf-4e8c-81f1-7074d0a7e97b\" (UID: \"0da39011-6adf-4e8c-81f1-7074d0a7e97b\") " Nov 21 20:10:45 crc kubenswrapper[4727]: I1121 20:10:45.533427 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0da39011-6adf-4e8c-81f1-7074d0a7e97b-utilities\") pod \"0da39011-6adf-4e8c-81f1-7074d0a7e97b\" (UID: \"0da39011-6adf-4e8c-81f1-7074d0a7e97b\") " Nov 21 20:10:45 crc kubenswrapper[4727]: I1121 20:10:45.533695 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vx2fd\" (UniqueName: \"kubernetes.io/projected/42408e00-5bcd-4405-82fe-851c9b62b149-kube-api-access-vx2fd\") on node \"crc\" DevicePath \"\"" Nov 21 20:10:45 crc kubenswrapper[4727]: I1121 20:10:45.533717 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bcc1896-81fc-4fce-bd16-aa2bc5350617-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 20:10:45 crc kubenswrapper[4727]: I1121 20:10:45.533730 4727 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/42408e00-5bcd-4405-82fe-851c9b62b149-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 21 20:10:45 crc kubenswrapper[4727]: I1121 20:10:45.533747 4727 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/42408e00-5bcd-4405-82fe-851c9b62b149-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 21 20:10:45 crc kubenswrapper[4727]: I1121 20:10:45.533760 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55fe4653-9eee-4a78-8d87-f368aca698b6-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 20:10:45 crc kubenswrapper[4727]: I1121 20:10:45.533773 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55fe4653-9eee-4a78-8d87-f368aca698b6-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 20:10:45 crc kubenswrapper[4727]: I1121 20:10:45.533786 4727 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-x8b5b\" (UniqueName: \"kubernetes.io/projected/55fe4653-9eee-4a78-8d87-f368aca698b6-kube-api-access-x8b5b\") on node \"crc\" DevicePath \"\"" Nov 21 20:10:45 crc kubenswrapper[4727]: I1121 20:10:45.533798 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92xx2\" (UniqueName: \"kubernetes.io/projected/8bcc1896-81fc-4fce-bd16-aa2bc5350617-kube-api-access-92xx2\") on node \"crc\" DevicePath \"\"" Nov 21 20:10:45 crc kubenswrapper[4727]: I1121 20:10:45.533810 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bcc1896-81fc-4fce-bd16-aa2bc5350617-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 20:10:45 crc kubenswrapper[4727]: I1121 20:10:45.534286 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0da39011-6adf-4e8c-81f1-7074d0a7e97b-utilities" (OuterVolumeSpecName: "utilities") pod "0da39011-6adf-4e8c-81f1-7074d0a7e97b" (UID: "0da39011-6adf-4e8c-81f1-7074d0a7e97b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:10:45 crc kubenswrapper[4727]: I1121 20:10:45.534946 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7b729cf-df56-4bfd-ad9a-8beb0e56ad6d-utilities" (OuterVolumeSpecName: "utilities") pod "b7b729cf-df56-4bfd-ad9a-8beb0e56ad6d" (UID: "b7b729cf-df56-4bfd-ad9a-8beb0e56ad6d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:10:45 crc kubenswrapper[4727]: I1121 20:10:45.535551 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0da39011-6adf-4e8c-81f1-7074d0a7e97b-kube-api-access-g46js" (OuterVolumeSpecName: "kube-api-access-g46js") pod "0da39011-6adf-4e8c-81f1-7074d0a7e97b" (UID: "0da39011-6adf-4e8c-81f1-7074d0a7e97b"). InnerVolumeSpecName "kube-api-access-g46js". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:10:45 crc kubenswrapper[4727]: I1121 20:10:45.535637 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7b729cf-df56-4bfd-ad9a-8beb0e56ad6d-kube-api-access-tbcqk" (OuterVolumeSpecName: "kube-api-access-tbcqk") pod "b7b729cf-df56-4bfd-ad9a-8beb0e56ad6d" (UID: "b7b729cf-df56-4bfd-ad9a-8beb0e56ad6d"). InnerVolumeSpecName "kube-api-access-tbcqk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:10:45 crc kubenswrapper[4727]: I1121 20:10:45.553795 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0da39011-6adf-4e8c-81f1-7074d0a7e97b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0da39011-6adf-4e8c-81f1-7074d0a7e97b" (UID: "0da39011-6adf-4e8c-81f1-7074d0a7e97b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:10:45 crc kubenswrapper[4727]: I1121 20:10:45.596073 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7b729cf-df56-4bfd-ad9a-8beb0e56ad6d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b7b729cf-df56-4bfd-ad9a-8beb0e56ad6d" (UID: "b7b729cf-df56-4bfd-ad9a-8beb0e56ad6d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:10:45 crc kubenswrapper[4727]: I1121 20:10:45.600436 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lkvmp"] Nov 21 20:10:45 crc kubenswrapper[4727]: I1121 20:10:45.635090 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tbcqk\" (UniqueName: \"kubernetes.io/projected/b7b729cf-df56-4bfd-ad9a-8beb0e56ad6d-kube-api-access-tbcqk\") on node \"crc\" DevicePath \"\"" Nov 21 20:10:45 crc kubenswrapper[4727]: I1121 20:10:45.635121 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7b729cf-df56-4bfd-ad9a-8beb0e56ad6d-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 20:10:45 crc kubenswrapper[4727]: I1121 20:10:45.635133 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0da39011-6adf-4e8c-81f1-7074d0a7e97b-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 20:10:45 crc kubenswrapper[4727]: I1121 20:10:45.635143 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g46js\" (UniqueName: \"kubernetes.io/projected/0da39011-6adf-4e8c-81f1-7074d0a7e97b-kube-api-access-g46js\") on node \"crc\" DevicePath \"\"" Nov 21 20:10:45 crc kubenswrapper[4727]: I1121 20:10:45.635154 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7b729cf-df56-4bfd-ad9a-8beb0e56ad6d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 20:10:45 crc kubenswrapper[4727]: I1121 20:10:45.635162 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0da39011-6adf-4e8c-81f1-7074d0a7e97b-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 20:10:46 crc kubenswrapper[4727]: I1121 20:10:46.206082 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-ddqc7" event={"ID":"b7b729cf-df56-4bfd-ad9a-8beb0e56ad6d","Type":"ContainerDied","Data":"a452681e54fd32f0a2c011391e53da981feff77fc7aae36175d1095f3b7220eb"} Nov 21 20:10:46 crc kubenswrapper[4727]: I1121 20:10:46.206382 4727 scope.go:117] "RemoveContainer" containerID="40ceb9592a8a63f071b6efcf9b3a270296c482fda8bc51157dc05bae07a926d8" Nov 21 20:10:46 crc kubenswrapper[4727]: I1121 20:10:46.206110 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ddqc7" Nov 21 20:10:46 crc kubenswrapper[4727]: I1121 20:10:46.210923 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jhstm" Nov 21 20:10:46 crc kubenswrapper[4727]: I1121 20:10:46.211187 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jhstm" event={"ID":"0da39011-6adf-4e8c-81f1-7074d0a7e97b","Type":"ContainerDied","Data":"2c4fc03b9ce499dd11f021d13632f117918af37eefe03e1ece820772c43adc75"} Nov 21 20:10:46 crc kubenswrapper[4727]: I1121 20:10:46.214568 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fn2j6" event={"ID":"42408e00-5bcd-4405-82fe-851c9b62b149","Type":"ContainerDied","Data":"3560d57f28c940828a166713d6e04c7c0947819018eb906e1436740e91e41684"} Nov 21 20:10:46 crc kubenswrapper[4727]: I1121 20:10:46.214611 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fn2j6" Nov 21 20:10:46 crc kubenswrapper[4727]: I1121 20:10:46.219946 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nz2zh" Nov 21 20:10:46 crc kubenswrapper[4727]: I1121 20:10:46.219984 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nz2zh" event={"ID":"55fe4653-9eee-4a78-8d87-f368aca698b6","Type":"ContainerDied","Data":"c475a96626aaba17153d94cfca8b2242a30a418431dea9122e64cbd2bb962094"} Nov 21 20:10:46 crc kubenswrapper[4727]: I1121 20:10:46.222183 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-lkvmp" event={"ID":"ba6217c1-bde3-455b-a45d-bcf8001b7a16","Type":"ContainerStarted","Data":"7374dc135f274a1fe60df2f354b970b4670fabc05ed9f4bedfe3c654993ef4b4"} Nov 21 20:10:46 crc kubenswrapper[4727]: I1121 20:10:46.222212 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-lkvmp" event={"ID":"ba6217c1-bde3-455b-a45d-bcf8001b7a16","Type":"ContainerStarted","Data":"c3d6d8188e4563f5fb647d656789c084216a93cfe25896176400fe91fadc81da"} Nov 21 20:10:46 crc kubenswrapper[4727]: I1121 20:10:46.222527 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-lkvmp" Nov 21 20:10:46 crc kubenswrapper[4727]: I1121 20:10:46.224092 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6kxgt" event={"ID":"8bcc1896-81fc-4fce-bd16-aa2bc5350617","Type":"ContainerDied","Data":"a273d4aa1fb34135743936893565df0a66b20427813c89645935ae4ce06e5ad3"} Nov 21 20:10:46 crc kubenswrapper[4727]: I1121 20:10:46.224151 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6kxgt" Nov 21 20:10:46 crc kubenswrapper[4727]: I1121 20:10:46.225071 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-lkvmp" Nov 21 20:10:46 crc kubenswrapper[4727]: I1121 20:10:46.227769 4727 scope.go:117] "RemoveContainer" containerID="9a340769ddefdc3e725c6a3ab9e2469721a2784ec27fb820a0dd59a1a0a63443" Nov 21 20:10:46 crc kubenswrapper[4727]: I1121 20:10:46.237660 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-lkvmp" podStartSLOduration=2.237639302 podStartE2EDuration="2.237639302s" podCreationTimestamp="2025-11-21 20:10:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:10:46.235714252 +0000 UTC m=+251.421899296" watchObservedRunningTime="2025-11-21 20:10:46.237639302 +0000 UTC m=+251.423824346" Nov 21 20:10:46 crc kubenswrapper[4727]: I1121 20:10:46.257294 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ddqc7"] Nov 21 20:10:46 crc kubenswrapper[4727]: I1121 20:10:46.260551 4727 scope.go:117] "RemoveContainer" containerID="10d28d6fb66add7253d46853051cbdcbd00d647bee7099cc3d348d1e337cec09" Nov 21 20:10:46 crc kubenswrapper[4727]: I1121 20:10:46.261485 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ddqc7"] Nov 21 20:10:46 crc kubenswrapper[4727]: I1121 20:10:46.264581 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fn2j6"] Nov 21 20:10:46 crc kubenswrapper[4727]: I1121 20:10:46.269507 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fn2j6"] Nov 21 20:10:46 crc kubenswrapper[4727]: I1121 
20:10:46.280897 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jhstm"] Nov 21 20:10:46 crc kubenswrapper[4727]: I1121 20:10:46.283875 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jhstm"] Nov 21 20:10:46 crc kubenswrapper[4727]: I1121 20:10:46.296123 4727 scope.go:117] "RemoveContainer" containerID="28cb14c3bd2426a003ae85862cdd8c05e693bdd1d5b29f18ec03bb342a7939ee" Nov 21 20:10:46 crc kubenswrapper[4727]: I1121 20:10:46.304227 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nz2zh"] Nov 21 20:10:46 crc kubenswrapper[4727]: I1121 20:10:46.310378 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nz2zh"] Nov 21 20:10:46 crc kubenswrapper[4727]: I1121 20:10:46.315239 4727 scope.go:117] "RemoveContainer" containerID="218c9a24c65f014746dcb4cc70e33d6afac8711789741fe2d5b2429c701d924e" Nov 21 20:10:46 crc kubenswrapper[4727]: I1121 20:10:46.319510 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6kxgt"] Nov 21 20:10:46 crc kubenswrapper[4727]: I1121 20:10:46.324533 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6kxgt"] Nov 21 20:10:46 crc kubenswrapper[4727]: I1121 20:10:46.345311 4727 scope.go:117] "RemoveContainer" containerID="e7f27c3a6f029ba03160cef9f9f22c46194db68b89ede3a33e4f9619b9c5e965" Nov 21 20:10:46 crc kubenswrapper[4727]: I1121 20:10:46.371001 4727 scope.go:117] "RemoveContainer" containerID="de8be5db244198a1c714093616a8010bf778e13e40231fc9b8b2ae0fc1541d58" Nov 21 20:10:46 crc kubenswrapper[4727]: I1121 20:10:46.387289 4727 scope.go:117] "RemoveContainer" containerID="fc54c76e98267fe962caf07649fef9adb1d4530f13a03c8b1bf02ac23e0a3a23" Nov 21 20:10:46 crc kubenswrapper[4727]: I1121 20:10:46.401220 4727 scope.go:117] "RemoveContainer" 
containerID="c3ae56203c760b59ccb7a3aaffc351c6f8017d89aa7371e8480a058b9e28259f" Nov 21 20:10:46 crc kubenswrapper[4727]: I1121 20:10:46.415881 4727 scope.go:117] "RemoveContainer" containerID="6d5a383b906f9d6ea7f5a049215a770cf21c083a7ffd9adf0b40a65aa7b2d8a4" Nov 21 20:10:46 crc kubenswrapper[4727]: I1121 20:10:46.427947 4727 scope.go:117] "RemoveContainer" containerID="f64ac89c79a98a60dabd276a475a82308befa98d62ee24100515baf63df7afed" Nov 21 20:10:46 crc kubenswrapper[4727]: I1121 20:10:46.441204 4727 scope.go:117] "RemoveContainer" containerID="a878fb77411b2b97cf3168b3b947615aa9dcc32f74bc0668b733d050bd351a4d" Nov 21 20:10:46 crc kubenswrapper[4727]: I1121 20:10:46.454581 4727 scope.go:117] "RemoveContainer" containerID="6db4863caad19f090589f6239baa69b5ae46c4949f5e63de2437efa1b4b73183" Nov 21 20:10:47 crc kubenswrapper[4727]: I1121 20:10:47.022528 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mqhv2"] Nov 21 20:10:47 crc kubenswrapper[4727]: E1121 20:10:47.022788 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0da39011-6adf-4e8c-81f1-7074d0a7e97b" containerName="extract-content" Nov 21 20:10:47 crc kubenswrapper[4727]: I1121 20:10:47.022802 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="0da39011-6adf-4e8c-81f1-7074d0a7e97b" containerName="extract-content" Nov 21 20:10:47 crc kubenswrapper[4727]: E1121 20:10:47.022816 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55fe4653-9eee-4a78-8d87-f368aca698b6" containerName="extract-utilities" Nov 21 20:10:47 crc kubenswrapper[4727]: I1121 20:10:47.022823 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="55fe4653-9eee-4a78-8d87-f368aca698b6" containerName="extract-utilities" Nov 21 20:10:47 crc kubenswrapper[4727]: E1121 20:10:47.022834 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bcc1896-81fc-4fce-bd16-aa2bc5350617" containerName="extract-content" Nov 21 20:10:47 crc 
kubenswrapper[4727]: I1121 20:10:47.022842 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bcc1896-81fc-4fce-bd16-aa2bc5350617" containerName="extract-content" Nov 21 20:10:47 crc kubenswrapper[4727]: E1121 20:10:47.022850 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7b729cf-df56-4bfd-ad9a-8beb0e56ad6d" containerName="extract-content" Nov 21 20:10:47 crc kubenswrapper[4727]: I1121 20:10:47.022857 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7b729cf-df56-4bfd-ad9a-8beb0e56ad6d" containerName="extract-content" Nov 21 20:10:47 crc kubenswrapper[4727]: E1121 20:10:47.022871 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42408e00-5bcd-4405-82fe-851c9b62b149" containerName="marketplace-operator" Nov 21 20:10:47 crc kubenswrapper[4727]: I1121 20:10:47.022878 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="42408e00-5bcd-4405-82fe-851c9b62b149" containerName="marketplace-operator" Nov 21 20:10:47 crc kubenswrapper[4727]: E1121 20:10:47.022890 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bcc1896-81fc-4fce-bd16-aa2bc5350617" containerName="extract-utilities" Nov 21 20:10:47 crc kubenswrapper[4727]: I1121 20:10:47.022897 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bcc1896-81fc-4fce-bd16-aa2bc5350617" containerName="extract-utilities" Nov 21 20:10:47 crc kubenswrapper[4727]: E1121 20:10:47.022911 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bcc1896-81fc-4fce-bd16-aa2bc5350617" containerName="registry-server" Nov 21 20:10:47 crc kubenswrapper[4727]: I1121 20:10:47.022918 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bcc1896-81fc-4fce-bd16-aa2bc5350617" containerName="registry-server" Nov 21 20:10:47 crc kubenswrapper[4727]: E1121 20:10:47.022926 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55fe4653-9eee-4a78-8d87-f368aca698b6" containerName="registry-server" Nov 21 20:10:47 crc 
kubenswrapper[4727]: I1121 20:10:47.022933 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="55fe4653-9eee-4a78-8d87-f368aca698b6" containerName="registry-server" Nov 21 20:10:47 crc kubenswrapper[4727]: E1121 20:10:47.022947 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0da39011-6adf-4e8c-81f1-7074d0a7e97b" containerName="extract-utilities" Nov 21 20:10:47 crc kubenswrapper[4727]: I1121 20:10:47.022971 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="0da39011-6adf-4e8c-81f1-7074d0a7e97b" containerName="extract-utilities" Nov 21 20:10:47 crc kubenswrapper[4727]: E1121 20:10:47.022981 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7b729cf-df56-4bfd-ad9a-8beb0e56ad6d" containerName="registry-server" Nov 21 20:10:47 crc kubenswrapper[4727]: I1121 20:10:47.022988 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7b729cf-df56-4bfd-ad9a-8beb0e56ad6d" containerName="registry-server" Nov 21 20:10:47 crc kubenswrapper[4727]: E1121 20:10:47.022996 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0da39011-6adf-4e8c-81f1-7074d0a7e97b" containerName="registry-server" Nov 21 20:10:47 crc kubenswrapper[4727]: I1121 20:10:47.023003 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="0da39011-6adf-4e8c-81f1-7074d0a7e97b" containerName="registry-server" Nov 21 20:10:47 crc kubenswrapper[4727]: E1121 20:10:47.023013 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55fe4653-9eee-4a78-8d87-f368aca698b6" containerName="extract-content" Nov 21 20:10:47 crc kubenswrapper[4727]: I1121 20:10:47.023020 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="55fe4653-9eee-4a78-8d87-f368aca698b6" containerName="extract-content" Nov 21 20:10:47 crc kubenswrapper[4727]: E1121 20:10:47.023029 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7b729cf-df56-4bfd-ad9a-8beb0e56ad6d" containerName="extract-utilities" Nov 21 20:10:47 crc 
kubenswrapper[4727]: I1121 20:10:47.023037 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7b729cf-df56-4bfd-ad9a-8beb0e56ad6d" containerName="extract-utilities" Nov 21 20:10:47 crc kubenswrapper[4727]: I1121 20:10:47.023135 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bcc1896-81fc-4fce-bd16-aa2bc5350617" containerName="registry-server" Nov 21 20:10:47 crc kubenswrapper[4727]: I1121 20:10:47.023164 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="42408e00-5bcd-4405-82fe-851c9b62b149" containerName="marketplace-operator" Nov 21 20:10:47 crc kubenswrapper[4727]: I1121 20:10:47.023175 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="55fe4653-9eee-4a78-8d87-f368aca698b6" containerName="registry-server" Nov 21 20:10:47 crc kubenswrapper[4727]: I1121 20:10:47.023184 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="0da39011-6adf-4e8c-81f1-7074d0a7e97b" containerName="registry-server" Nov 21 20:10:47 crc kubenswrapper[4727]: I1121 20:10:47.023197 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7b729cf-df56-4bfd-ad9a-8beb0e56ad6d" containerName="registry-server" Nov 21 20:10:47 crc kubenswrapper[4727]: I1121 20:10:47.024016 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mqhv2" Nov 21 20:10:47 crc kubenswrapper[4727]: I1121 20:10:47.025720 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 21 20:10:47 crc kubenswrapper[4727]: I1121 20:10:47.033211 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mqhv2"] Nov 21 20:10:47 crc kubenswrapper[4727]: I1121 20:10:47.156113 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w26k8\" (UniqueName: \"kubernetes.io/projected/552d0b06-98da-4bb3-b86d-4a3ac341ad99-kube-api-access-w26k8\") pod \"community-operators-mqhv2\" (UID: \"552d0b06-98da-4bb3-b86d-4a3ac341ad99\") " pod="openshift-marketplace/community-operators-mqhv2" Nov 21 20:10:47 crc kubenswrapper[4727]: I1121 20:10:47.156163 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/552d0b06-98da-4bb3-b86d-4a3ac341ad99-utilities\") pod \"community-operators-mqhv2\" (UID: \"552d0b06-98da-4bb3-b86d-4a3ac341ad99\") " pod="openshift-marketplace/community-operators-mqhv2" Nov 21 20:10:47 crc kubenswrapper[4727]: I1121 20:10:47.156185 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/552d0b06-98da-4bb3-b86d-4a3ac341ad99-catalog-content\") pod \"community-operators-mqhv2\" (UID: \"552d0b06-98da-4bb3-b86d-4a3ac341ad99\") " pod="openshift-marketplace/community-operators-mqhv2" Nov 21 20:10:47 crc kubenswrapper[4727]: I1121 20:10:47.222571 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-n8vql"] Nov 21 20:10:47 crc kubenswrapper[4727]: I1121 20:10:47.223540 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-n8vql" Nov 21 20:10:47 crc kubenswrapper[4727]: I1121 20:10:47.225356 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 21 20:10:47 crc kubenswrapper[4727]: I1121 20:10:47.255921 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-n8vql"] Nov 21 20:10:47 crc kubenswrapper[4727]: I1121 20:10:47.257841 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w26k8\" (UniqueName: \"kubernetes.io/projected/552d0b06-98da-4bb3-b86d-4a3ac341ad99-kube-api-access-w26k8\") pod \"community-operators-mqhv2\" (UID: \"552d0b06-98da-4bb3-b86d-4a3ac341ad99\") " pod="openshift-marketplace/community-operators-mqhv2" Nov 21 20:10:47 crc kubenswrapper[4727]: I1121 20:10:47.257914 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/552d0b06-98da-4bb3-b86d-4a3ac341ad99-utilities\") pod \"community-operators-mqhv2\" (UID: \"552d0b06-98da-4bb3-b86d-4a3ac341ad99\") " pod="openshift-marketplace/community-operators-mqhv2" Nov 21 20:10:47 crc kubenswrapper[4727]: I1121 20:10:47.257948 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/552d0b06-98da-4bb3-b86d-4a3ac341ad99-catalog-content\") pod \"community-operators-mqhv2\" (UID: \"552d0b06-98da-4bb3-b86d-4a3ac341ad99\") " pod="openshift-marketplace/community-operators-mqhv2" Nov 21 20:10:47 crc kubenswrapper[4727]: I1121 20:10:47.258994 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/552d0b06-98da-4bb3-b86d-4a3ac341ad99-catalog-content\") pod \"community-operators-mqhv2\" (UID: \"552d0b06-98da-4bb3-b86d-4a3ac341ad99\") " 
pod="openshift-marketplace/community-operators-mqhv2" Nov 21 20:10:47 crc kubenswrapper[4727]: I1121 20:10:47.259128 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/552d0b06-98da-4bb3-b86d-4a3ac341ad99-utilities\") pod \"community-operators-mqhv2\" (UID: \"552d0b06-98da-4bb3-b86d-4a3ac341ad99\") " pod="openshift-marketplace/community-operators-mqhv2" Nov 21 20:10:47 crc kubenswrapper[4727]: I1121 20:10:47.281095 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w26k8\" (UniqueName: \"kubernetes.io/projected/552d0b06-98da-4bb3-b86d-4a3ac341ad99-kube-api-access-w26k8\") pod \"community-operators-mqhv2\" (UID: \"552d0b06-98da-4bb3-b86d-4a3ac341ad99\") " pod="openshift-marketplace/community-operators-mqhv2" Nov 21 20:10:47 crc kubenswrapper[4727]: I1121 20:10:47.339644 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mqhv2" Nov 21 20:10:47 crc kubenswrapper[4727]: I1121 20:10:47.358826 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xfx7\" (UniqueName: \"kubernetes.io/projected/7cbf0484-db14-4c19-8944-5da1652fe052-kube-api-access-9xfx7\") pod \"certified-operators-n8vql\" (UID: \"7cbf0484-db14-4c19-8944-5da1652fe052\") " pod="openshift-marketplace/certified-operators-n8vql" Nov 21 20:10:47 crc kubenswrapper[4727]: I1121 20:10:47.358873 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cbf0484-db14-4c19-8944-5da1652fe052-utilities\") pod \"certified-operators-n8vql\" (UID: \"7cbf0484-db14-4c19-8944-5da1652fe052\") " pod="openshift-marketplace/certified-operators-n8vql" Nov 21 20:10:47 crc kubenswrapper[4727]: I1121 20:10:47.358931 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cbf0484-db14-4c19-8944-5da1652fe052-catalog-content\") pod \"certified-operators-n8vql\" (UID: \"7cbf0484-db14-4c19-8944-5da1652fe052\") " pod="openshift-marketplace/certified-operators-n8vql" Nov 21 20:10:47 crc kubenswrapper[4727]: I1121 20:10:47.459830 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cbf0484-db14-4c19-8944-5da1652fe052-catalog-content\") pod \"certified-operators-n8vql\" (UID: \"7cbf0484-db14-4c19-8944-5da1652fe052\") " pod="openshift-marketplace/certified-operators-n8vql" Nov 21 20:10:47 crc kubenswrapper[4727]: I1121 20:10:47.459927 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xfx7\" (UniqueName: \"kubernetes.io/projected/7cbf0484-db14-4c19-8944-5da1652fe052-kube-api-access-9xfx7\") pod \"certified-operators-n8vql\" (UID: \"7cbf0484-db14-4c19-8944-5da1652fe052\") " pod="openshift-marketplace/certified-operators-n8vql" Nov 21 20:10:47 crc kubenswrapper[4727]: I1121 20:10:47.459981 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cbf0484-db14-4c19-8944-5da1652fe052-utilities\") pod \"certified-operators-n8vql\" (UID: \"7cbf0484-db14-4c19-8944-5da1652fe052\") " pod="openshift-marketplace/certified-operators-n8vql" Nov 21 20:10:47 crc kubenswrapper[4727]: I1121 20:10:47.460656 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cbf0484-db14-4c19-8944-5da1652fe052-catalog-content\") pod \"certified-operators-n8vql\" (UID: \"7cbf0484-db14-4c19-8944-5da1652fe052\") " pod="openshift-marketplace/certified-operators-n8vql" Nov 21 20:10:47 crc kubenswrapper[4727]: I1121 20:10:47.460727 4727 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cbf0484-db14-4c19-8944-5da1652fe052-utilities\") pod \"certified-operators-n8vql\" (UID: \"7cbf0484-db14-4c19-8944-5da1652fe052\") " pod="openshift-marketplace/certified-operators-n8vql" Nov 21 20:10:47 crc kubenswrapper[4727]: I1121 20:10:47.480620 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xfx7\" (UniqueName: \"kubernetes.io/projected/7cbf0484-db14-4c19-8944-5da1652fe052-kube-api-access-9xfx7\") pod \"certified-operators-n8vql\" (UID: \"7cbf0484-db14-4c19-8944-5da1652fe052\") " pod="openshift-marketplace/certified-operators-n8vql" Nov 21 20:10:47 crc kubenswrapper[4727]: I1121 20:10:47.508211 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0da39011-6adf-4e8c-81f1-7074d0a7e97b" path="/var/lib/kubelet/pods/0da39011-6adf-4e8c-81f1-7074d0a7e97b/volumes" Nov 21 20:10:47 crc kubenswrapper[4727]: I1121 20:10:47.508897 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42408e00-5bcd-4405-82fe-851c9b62b149" path="/var/lib/kubelet/pods/42408e00-5bcd-4405-82fe-851c9b62b149/volumes" Nov 21 20:10:47 crc kubenswrapper[4727]: I1121 20:10:47.509412 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55fe4653-9eee-4a78-8d87-f368aca698b6" path="/var/lib/kubelet/pods/55fe4653-9eee-4a78-8d87-f368aca698b6/volumes" Nov 21 20:10:47 crc kubenswrapper[4727]: I1121 20:10:47.510438 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bcc1896-81fc-4fce-bd16-aa2bc5350617" path="/var/lib/kubelet/pods/8bcc1896-81fc-4fce-bd16-aa2bc5350617/volumes" Nov 21 20:10:47 crc kubenswrapper[4727]: I1121 20:10:47.511048 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7b729cf-df56-4bfd-ad9a-8beb0e56ad6d" path="/var/lib/kubelet/pods/b7b729cf-df56-4bfd-ad9a-8beb0e56ad6d/volumes" Nov 21 20:10:47 crc kubenswrapper[4727]: I1121 20:10:47.546468 4727 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n8vql" Nov 21 20:10:47 crc kubenswrapper[4727]: I1121 20:10:47.721286 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-n8vql"] Nov 21 20:10:47 crc kubenswrapper[4727]: W1121 20:10:47.727588 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7cbf0484_db14_4c19_8944_5da1652fe052.slice/crio-a4c1728a15eae355c3dcc7f4b7b67b60805f1d4fb18ad2f1f834e683deccefcc WatchSource:0}: Error finding container a4c1728a15eae355c3dcc7f4b7b67b60805f1d4fb18ad2f1f834e683deccefcc: Status 404 returned error can't find the container with id a4c1728a15eae355c3dcc7f4b7b67b60805f1d4fb18ad2f1f834e683deccefcc Nov 21 20:10:47 crc kubenswrapper[4727]: I1121 20:10:47.736096 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mqhv2"] Nov 21 20:10:47 crc kubenswrapper[4727]: W1121 20:10:47.743985 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod552d0b06_98da_4bb3_b86d_4a3ac341ad99.slice/crio-f79113ebdc3d475deb5c67c8513a770d43489a5d2fa96db2a6bf694473138766 WatchSource:0}: Error finding container f79113ebdc3d475deb5c67c8513a770d43489a5d2fa96db2a6bf694473138766: Status 404 returned error can't find the container with id f79113ebdc3d475deb5c67c8513a770d43489a5d2fa96db2a6bf694473138766 Nov 21 20:10:48 crc kubenswrapper[4727]: I1121 20:10:48.271262 4727 generic.go:334] "Generic (PLEG): container finished" podID="552d0b06-98da-4bb3-b86d-4a3ac341ad99" containerID="e8abaadfb15b668f03c376f6de8e31b0f64b3232fdd096d22f00dcfa86a95523" exitCode=0 Nov 21 20:10:48 crc kubenswrapper[4727]: I1121 20:10:48.272001 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mqhv2" 
event={"ID":"552d0b06-98da-4bb3-b86d-4a3ac341ad99","Type":"ContainerDied","Data":"e8abaadfb15b668f03c376f6de8e31b0f64b3232fdd096d22f00dcfa86a95523"} Nov 21 20:10:48 crc kubenswrapper[4727]: I1121 20:10:48.272046 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mqhv2" event={"ID":"552d0b06-98da-4bb3-b86d-4a3ac341ad99","Type":"ContainerStarted","Data":"f79113ebdc3d475deb5c67c8513a770d43489a5d2fa96db2a6bf694473138766"} Nov 21 20:10:48 crc kubenswrapper[4727]: I1121 20:10:48.274388 4727 generic.go:334] "Generic (PLEG): container finished" podID="7cbf0484-db14-4c19-8944-5da1652fe052" containerID="627d265b199a679464ad975e20c618e2f063644ff0ccd9e806c112d7457ccf8e" exitCode=0 Nov 21 20:10:48 crc kubenswrapper[4727]: I1121 20:10:48.274488 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n8vql" event={"ID":"7cbf0484-db14-4c19-8944-5da1652fe052","Type":"ContainerDied","Data":"627d265b199a679464ad975e20c618e2f063644ff0ccd9e806c112d7457ccf8e"} Nov 21 20:10:48 crc kubenswrapper[4727]: I1121 20:10:48.274539 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n8vql" event={"ID":"7cbf0484-db14-4c19-8944-5da1652fe052","Type":"ContainerStarted","Data":"a4c1728a15eae355c3dcc7f4b7b67b60805f1d4fb18ad2f1f834e683deccefcc"} Nov 21 20:10:49 crc kubenswrapper[4727]: I1121 20:10:49.281454 4727 generic.go:334] "Generic (PLEG): container finished" podID="7cbf0484-db14-4c19-8944-5da1652fe052" containerID="cff82c1ec7d5b133f7b158c3835322cf3b72444ad5aae9f8768446622775f382" exitCode=0 Nov 21 20:10:49 crc kubenswrapper[4727]: I1121 20:10:49.281558 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n8vql" event={"ID":"7cbf0484-db14-4c19-8944-5da1652fe052","Type":"ContainerDied","Data":"cff82c1ec7d5b133f7b158c3835322cf3b72444ad5aae9f8768446622775f382"} Nov 21 20:10:49 crc kubenswrapper[4727]: I1121 
20:10:49.283835 4727 generic.go:334] "Generic (PLEG): container finished" podID="552d0b06-98da-4bb3-b86d-4a3ac341ad99" containerID="d8eaae92de7f4c1539ed53ed93b0931e5560e53112beb538695e8af5a6e09d42" exitCode=0 Nov 21 20:10:49 crc kubenswrapper[4727]: I1121 20:10:49.283876 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mqhv2" event={"ID":"552d0b06-98da-4bb3-b86d-4a3ac341ad99","Type":"ContainerDied","Data":"d8eaae92de7f4c1539ed53ed93b0931e5560e53112beb538695e8af5a6e09d42"} Nov 21 20:10:49 crc kubenswrapper[4727]: I1121 20:10:49.425071 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nzkrs"] Nov 21 20:10:49 crc kubenswrapper[4727]: I1121 20:10:49.427171 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nzkrs" Nov 21 20:10:49 crc kubenswrapper[4727]: I1121 20:10:49.429299 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 21 20:10:49 crc kubenswrapper[4727]: I1121 20:10:49.440581 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nzkrs"] Nov 21 20:10:49 crc kubenswrapper[4727]: I1121 20:10:49.585859 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8370af5d-5665-4627-98b2-f0df95797a4f-catalog-content\") pod \"redhat-marketplace-nzkrs\" (UID: \"8370af5d-5665-4627-98b2-f0df95797a4f\") " pod="openshift-marketplace/redhat-marketplace-nzkrs" Nov 21 20:10:49 crc kubenswrapper[4727]: I1121 20:10:49.586461 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8370af5d-5665-4627-98b2-f0df95797a4f-utilities\") pod \"redhat-marketplace-nzkrs\" (UID: 
\"8370af5d-5665-4627-98b2-f0df95797a4f\") " pod="openshift-marketplace/redhat-marketplace-nzkrs" Nov 21 20:10:49 crc kubenswrapper[4727]: I1121 20:10:49.586591 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kvbb\" (UniqueName: \"kubernetes.io/projected/8370af5d-5665-4627-98b2-f0df95797a4f-kube-api-access-7kvbb\") pod \"redhat-marketplace-nzkrs\" (UID: \"8370af5d-5665-4627-98b2-f0df95797a4f\") " pod="openshift-marketplace/redhat-marketplace-nzkrs" Nov 21 20:10:49 crc kubenswrapper[4727]: I1121 20:10:49.633156 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-x9vd7"] Nov 21 20:10:49 crc kubenswrapper[4727]: I1121 20:10:49.634573 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x9vd7" Nov 21 20:10:49 crc kubenswrapper[4727]: I1121 20:10:49.645161 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 21 20:10:49 crc kubenswrapper[4727]: I1121 20:10:49.661877 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x9vd7"] Nov 21 20:10:49 crc kubenswrapper[4727]: I1121 20:10:49.688394 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8370af5d-5665-4627-98b2-f0df95797a4f-utilities\") pod \"redhat-marketplace-nzkrs\" (UID: \"8370af5d-5665-4627-98b2-f0df95797a4f\") " pod="openshift-marketplace/redhat-marketplace-nzkrs" Nov 21 20:10:49 crc kubenswrapper[4727]: I1121 20:10:49.688452 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kvbb\" (UniqueName: \"kubernetes.io/projected/8370af5d-5665-4627-98b2-f0df95797a4f-kube-api-access-7kvbb\") pod \"redhat-marketplace-nzkrs\" (UID: \"8370af5d-5665-4627-98b2-f0df95797a4f\") " 
pod="openshift-marketplace/redhat-marketplace-nzkrs" Nov 21 20:10:49 crc kubenswrapper[4727]: I1121 20:10:49.688510 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8370af5d-5665-4627-98b2-f0df95797a4f-catalog-content\") pod \"redhat-marketplace-nzkrs\" (UID: \"8370af5d-5665-4627-98b2-f0df95797a4f\") " pod="openshift-marketplace/redhat-marketplace-nzkrs" Nov 21 20:10:49 crc kubenswrapper[4727]: I1121 20:10:49.689146 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8370af5d-5665-4627-98b2-f0df95797a4f-utilities\") pod \"redhat-marketplace-nzkrs\" (UID: \"8370af5d-5665-4627-98b2-f0df95797a4f\") " pod="openshift-marketplace/redhat-marketplace-nzkrs" Nov 21 20:10:49 crc kubenswrapper[4727]: I1121 20:10:49.689191 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8370af5d-5665-4627-98b2-f0df95797a4f-catalog-content\") pod \"redhat-marketplace-nzkrs\" (UID: \"8370af5d-5665-4627-98b2-f0df95797a4f\") " pod="openshift-marketplace/redhat-marketplace-nzkrs" Nov 21 20:10:49 crc kubenswrapper[4727]: I1121 20:10:49.735716 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kvbb\" (UniqueName: \"kubernetes.io/projected/8370af5d-5665-4627-98b2-f0df95797a4f-kube-api-access-7kvbb\") pod \"redhat-marketplace-nzkrs\" (UID: \"8370af5d-5665-4627-98b2-f0df95797a4f\") " pod="openshift-marketplace/redhat-marketplace-nzkrs" Nov 21 20:10:49 crc kubenswrapper[4727]: I1121 20:10:49.741646 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nzkrs" Nov 21 20:10:49 crc kubenswrapper[4727]: I1121 20:10:49.789890 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/285954a8-1cba-4390-bf20-4fdf85ba2b48-utilities\") pod \"redhat-operators-x9vd7\" (UID: \"285954a8-1cba-4390-bf20-4fdf85ba2b48\") " pod="openshift-marketplace/redhat-operators-x9vd7" Nov 21 20:10:49 crc kubenswrapper[4727]: I1121 20:10:49.789922 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/285954a8-1cba-4390-bf20-4fdf85ba2b48-catalog-content\") pod \"redhat-operators-x9vd7\" (UID: \"285954a8-1cba-4390-bf20-4fdf85ba2b48\") " pod="openshift-marketplace/redhat-operators-x9vd7" Nov 21 20:10:49 crc kubenswrapper[4727]: I1121 20:10:49.789975 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmzzg\" (UniqueName: \"kubernetes.io/projected/285954a8-1cba-4390-bf20-4fdf85ba2b48-kube-api-access-xmzzg\") pod \"redhat-operators-x9vd7\" (UID: \"285954a8-1cba-4390-bf20-4fdf85ba2b48\") " pod="openshift-marketplace/redhat-operators-x9vd7" Nov 21 20:10:49 crc kubenswrapper[4727]: I1121 20:10:49.891655 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmzzg\" (UniqueName: \"kubernetes.io/projected/285954a8-1cba-4390-bf20-4fdf85ba2b48-kube-api-access-xmzzg\") pod \"redhat-operators-x9vd7\" (UID: \"285954a8-1cba-4390-bf20-4fdf85ba2b48\") " pod="openshift-marketplace/redhat-operators-x9vd7" Nov 21 20:10:49 crc kubenswrapper[4727]: I1121 20:10:49.892039 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/285954a8-1cba-4390-bf20-4fdf85ba2b48-utilities\") pod \"redhat-operators-x9vd7\" (UID: 
\"285954a8-1cba-4390-bf20-4fdf85ba2b48\") " pod="openshift-marketplace/redhat-operators-x9vd7" Nov 21 20:10:49 crc kubenswrapper[4727]: I1121 20:10:49.892060 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/285954a8-1cba-4390-bf20-4fdf85ba2b48-catalog-content\") pod \"redhat-operators-x9vd7\" (UID: \"285954a8-1cba-4390-bf20-4fdf85ba2b48\") " pod="openshift-marketplace/redhat-operators-x9vd7" Nov 21 20:10:49 crc kubenswrapper[4727]: I1121 20:10:49.892467 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/285954a8-1cba-4390-bf20-4fdf85ba2b48-catalog-content\") pod \"redhat-operators-x9vd7\" (UID: \"285954a8-1cba-4390-bf20-4fdf85ba2b48\") " pod="openshift-marketplace/redhat-operators-x9vd7" Nov 21 20:10:49 crc kubenswrapper[4727]: I1121 20:10:49.892665 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/285954a8-1cba-4390-bf20-4fdf85ba2b48-utilities\") pod \"redhat-operators-x9vd7\" (UID: \"285954a8-1cba-4390-bf20-4fdf85ba2b48\") " pod="openshift-marketplace/redhat-operators-x9vd7" Nov 21 20:10:49 crc kubenswrapper[4727]: I1121 20:10:49.923910 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nzkrs"] Nov 21 20:10:49 crc kubenswrapper[4727]: I1121 20:10:49.924020 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmzzg\" (UniqueName: \"kubernetes.io/projected/285954a8-1cba-4390-bf20-4fdf85ba2b48-kube-api-access-xmzzg\") pod \"redhat-operators-x9vd7\" (UID: \"285954a8-1cba-4390-bf20-4fdf85ba2b48\") " pod="openshift-marketplace/redhat-operators-x9vd7" Nov 21 20:10:49 crc kubenswrapper[4727]: W1121 20:10:49.933811 4727 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8370af5d_5665_4627_98b2_f0df95797a4f.slice/crio-ef98dcf34ee08d8ca1b852c329e6862a0cd5c6234e569d34c85f9aedb25c78dd WatchSource:0}: Error finding container ef98dcf34ee08d8ca1b852c329e6862a0cd5c6234e569d34c85f9aedb25c78dd: Status 404 returned error can't find the container with id ef98dcf34ee08d8ca1b852c329e6862a0cd5c6234e569d34c85f9aedb25c78dd Nov 21 20:10:49 crc kubenswrapper[4727]: I1121 20:10:49.965475 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x9vd7" Nov 21 20:10:50 crc kubenswrapper[4727]: I1121 20:10:50.291553 4727 generic.go:334] "Generic (PLEG): container finished" podID="8370af5d-5665-4627-98b2-f0df95797a4f" containerID="d5ba637d655cef71c80e38a7f54a387f050047fdc2e96746323f8ceb11c1e34c" exitCode=0 Nov 21 20:10:50 crc kubenswrapper[4727]: I1121 20:10:50.291662 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nzkrs" event={"ID":"8370af5d-5665-4627-98b2-f0df95797a4f","Type":"ContainerDied","Data":"d5ba637d655cef71c80e38a7f54a387f050047fdc2e96746323f8ceb11c1e34c"} Nov 21 20:10:50 crc kubenswrapper[4727]: I1121 20:10:50.291983 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nzkrs" event={"ID":"8370af5d-5665-4627-98b2-f0df95797a4f","Type":"ContainerStarted","Data":"ef98dcf34ee08d8ca1b852c329e6862a0cd5c6234e569d34c85f9aedb25c78dd"} Nov 21 20:10:50 crc kubenswrapper[4727]: I1121 20:10:50.295207 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mqhv2" event={"ID":"552d0b06-98da-4bb3-b86d-4a3ac341ad99","Type":"ContainerStarted","Data":"adf7299b31a7c6dda069aaa0d02d3c663b26921f3bc9f9f61b50e4d63236484d"} Nov 21 20:10:50 crc kubenswrapper[4727]: I1121 20:10:50.298158 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n8vql" 
event={"ID":"7cbf0484-db14-4c19-8944-5da1652fe052","Type":"ContainerStarted","Data":"cf53616bbb2293a85ea984a4a14a896fef1c8657a92248cefd1429187b47d95e"} Nov 21 20:10:50 crc kubenswrapper[4727]: I1121 20:10:50.331790 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mqhv2" podStartSLOduration=1.8844897299999999 podStartE2EDuration="3.331767859s" podCreationTimestamp="2025-11-21 20:10:47 +0000 UTC" firstStartedPulling="2025-11-21 20:10:48.275201525 +0000 UTC m=+253.461386569" lastFinishedPulling="2025-11-21 20:10:49.722479654 +0000 UTC m=+254.908664698" observedRunningTime="2025-11-21 20:10:50.330919693 +0000 UTC m=+255.517104747" watchObservedRunningTime="2025-11-21 20:10:50.331767859 +0000 UTC m=+255.517952903" Nov 21 20:10:50 crc kubenswrapper[4727]: I1121 20:10:50.356452 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-n8vql" podStartSLOduration=1.936105709 podStartE2EDuration="3.356436818s" podCreationTimestamp="2025-11-21 20:10:47 +0000 UTC" firstStartedPulling="2025-11-21 20:10:48.275821654 +0000 UTC m=+253.462006728" lastFinishedPulling="2025-11-21 20:10:49.696152793 +0000 UTC m=+254.882337837" observedRunningTime="2025-11-21 20:10:50.354215698 +0000 UTC m=+255.540400752" watchObservedRunningTime="2025-11-21 20:10:50.356436818 +0000 UTC m=+255.542621862" Nov 21 20:10:50 crc kubenswrapper[4727]: I1121 20:10:50.403029 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x9vd7"] Nov 21 20:10:50 crc kubenswrapper[4727]: W1121 20:10:50.407328 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod285954a8_1cba_4390_bf20_4fdf85ba2b48.slice/crio-43e74c24a8ffe7590b4f5a1293c531c601a2d1f71d90878739a4b8619b5421a7 WatchSource:0}: Error finding container 
43e74c24a8ffe7590b4f5a1293c531c601a2d1f71d90878739a4b8619b5421a7: Status 404 returned error can't find the container with id 43e74c24a8ffe7590b4f5a1293c531c601a2d1f71d90878739a4b8619b5421a7 Nov 21 20:10:51 crc kubenswrapper[4727]: I1121 20:10:51.306646 4727 generic.go:334] "Generic (PLEG): container finished" podID="8370af5d-5665-4627-98b2-f0df95797a4f" containerID="af5f6b97a463ec9e0af109f19c1631c270465dc75a69cb1561ad7c28ea4ea2cc" exitCode=0 Nov 21 20:10:51 crc kubenswrapper[4727]: I1121 20:10:51.306750 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nzkrs" event={"ID":"8370af5d-5665-4627-98b2-f0df95797a4f","Type":"ContainerDied","Data":"af5f6b97a463ec9e0af109f19c1631c270465dc75a69cb1561ad7c28ea4ea2cc"} Nov 21 20:10:51 crc kubenswrapper[4727]: I1121 20:10:51.310316 4727 generic.go:334] "Generic (PLEG): container finished" podID="285954a8-1cba-4390-bf20-4fdf85ba2b48" containerID="87b4fdb956f1c21dd4f7ba4966f708a4513d73c747a657348281691bdbdeddf8" exitCode=0 Nov 21 20:10:51 crc kubenswrapper[4727]: I1121 20:10:51.310434 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x9vd7" event={"ID":"285954a8-1cba-4390-bf20-4fdf85ba2b48","Type":"ContainerDied","Data":"87b4fdb956f1c21dd4f7ba4966f708a4513d73c747a657348281691bdbdeddf8"} Nov 21 20:10:51 crc kubenswrapper[4727]: I1121 20:10:51.310480 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x9vd7" event={"ID":"285954a8-1cba-4390-bf20-4fdf85ba2b48","Type":"ContainerStarted","Data":"43e74c24a8ffe7590b4f5a1293c531c601a2d1f71d90878739a4b8619b5421a7"} Nov 21 20:10:52 crc kubenswrapper[4727]: I1121 20:10:52.319589 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x9vd7" event={"ID":"285954a8-1cba-4390-bf20-4fdf85ba2b48","Type":"ContainerStarted","Data":"6ef21f24d8202123a65ac68d5d742bfb4c3c9cbc11d44284f01532b6797211a8"} Nov 21 20:10:52 
crc kubenswrapper[4727]: I1121 20:10:52.322416 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nzkrs" event={"ID":"8370af5d-5665-4627-98b2-f0df95797a4f","Type":"ContainerStarted","Data":"cbb7b59e5cb9466eba11b247fa67b05e2902b3aec2d3b7a0c46916ef498a0d6b"} Nov 21 20:10:52 crc kubenswrapper[4727]: I1121 20:10:52.358245 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nzkrs" podStartSLOduration=1.8955963470000001 podStartE2EDuration="3.358217335s" podCreationTimestamp="2025-11-21 20:10:49 +0000 UTC" firstStartedPulling="2025-11-21 20:10:50.294108685 +0000 UTC m=+255.480293729" lastFinishedPulling="2025-11-21 20:10:51.756729673 +0000 UTC m=+256.942914717" observedRunningTime="2025-11-21 20:10:52.349256235 +0000 UTC m=+257.535441299" watchObservedRunningTime="2025-11-21 20:10:52.358217335 +0000 UTC m=+257.544402379" Nov 21 20:10:53 crc kubenswrapper[4727]: I1121 20:10:53.329441 4727 generic.go:334] "Generic (PLEG): container finished" podID="285954a8-1cba-4390-bf20-4fdf85ba2b48" containerID="6ef21f24d8202123a65ac68d5d742bfb4c3c9cbc11d44284f01532b6797211a8" exitCode=0 Nov 21 20:10:53 crc kubenswrapper[4727]: I1121 20:10:53.329550 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x9vd7" event={"ID":"285954a8-1cba-4390-bf20-4fdf85ba2b48","Type":"ContainerDied","Data":"6ef21f24d8202123a65ac68d5d742bfb4c3c9cbc11d44284f01532b6797211a8"} Nov 21 20:10:54 crc kubenswrapper[4727]: I1121 20:10:54.337083 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x9vd7" event={"ID":"285954a8-1cba-4390-bf20-4fdf85ba2b48","Type":"ContainerStarted","Data":"caf52f8883dbdb7358c829a52f49594101ae9d2cbadd26cf31b2efacfbf7c337"} Nov 21 20:10:54 crc kubenswrapper[4727]: I1121 20:10:54.351804 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-x9vd7" podStartSLOduration=2.553199779 podStartE2EDuration="5.351787905s" podCreationTimestamp="2025-11-21 20:10:49 +0000 UTC" firstStartedPulling="2025-11-21 20:10:51.312062921 +0000 UTC m=+256.498247965" lastFinishedPulling="2025-11-21 20:10:54.110651047 +0000 UTC m=+259.296836091" observedRunningTime="2025-11-21 20:10:54.350775044 +0000 UTC m=+259.536960098" watchObservedRunningTime="2025-11-21 20:10:54.351787905 +0000 UTC m=+259.537972939" Nov 21 20:10:57 crc kubenswrapper[4727]: I1121 20:10:57.339987 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mqhv2" Nov 21 20:10:57 crc kubenswrapper[4727]: I1121 20:10:57.340504 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mqhv2" Nov 21 20:10:57 crc kubenswrapper[4727]: I1121 20:10:57.392655 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mqhv2" Nov 21 20:10:57 crc kubenswrapper[4727]: I1121 20:10:57.433021 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mqhv2" Nov 21 20:10:57 crc kubenswrapper[4727]: I1121 20:10:57.546920 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-n8vql" Nov 21 20:10:57 crc kubenswrapper[4727]: I1121 20:10:57.546968 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-n8vql" Nov 21 20:10:57 crc kubenswrapper[4727]: I1121 20:10:57.585449 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-n8vql" Nov 21 20:10:58 crc kubenswrapper[4727]: I1121 20:10:58.403988 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-n8vql" Nov 21 20:10:59 crc kubenswrapper[4727]: I1121 20:10:59.742542 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nzkrs" Nov 21 20:10:59 crc kubenswrapper[4727]: I1121 20:10:59.742907 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nzkrs" Nov 21 20:10:59 crc kubenswrapper[4727]: I1121 20:10:59.790087 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nzkrs" Nov 21 20:10:59 crc kubenswrapper[4727]: I1121 20:10:59.966732 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-x9vd7" Nov 21 20:10:59 crc kubenswrapper[4727]: I1121 20:10:59.966896 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-x9vd7" Nov 21 20:11:00 crc kubenswrapper[4727]: I1121 20:11:00.003204 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-x9vd7" Nov 21 20:11:00 crc kubenswrapper[4727]: I1121 20:11:00.407903 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-x9vd7" Nov 21 20:11:00 crc kubenswrapper[4727]: I1121 20:11:00.417276 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nzkrs" Nov 21 20:11:15 crc kubenswrapper[4727]: I1121 20:11:15.138789 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-dnpch"] Nov 21 20:11:15 crc kubenswrapper[4727]: I1121 20:11:15.140045 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-dnpch" Nov 21 20:11:15 crc kubenswrapper[4727]: I1121 20:11:15.142585 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Nov 21 20:11:15 crc kubenswrapper[4727]: I1121 20:11:15.143293 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Nov 21 20:11:15 crc kubenswrapper[4727]: I1121 20:11:15.143478 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Nov 21 20:11:15 crc kubenswrapper[4727]: I1121 20:11:15.143519 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-dockercfg-wwt9l" Nov 21 20:11:15 crc kubenswrapper[4727]: I1121 20:11:15.143642 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Nov 21 20:11:15 crc kubenswrapper[4727]: I1121 20:11:15.149546 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-dnpch"] Nov 21 20:11:15 crc kubenswrapper[4727]: I1121 20:11:15.242178 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcqgg\" (UniqueName: \"kubernetes.io/projected/65f1dd22-d654-4c5e-8f7b-066d08e7075e-kube-api-access-rcqgg\") pod \"cluster-monitoring-operator-6d5b84845-dnpch\" (UID: \"65f1dd22-d654-4c5e-8f7b-066d08e7075e\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-dnpch" Nov 21 20:11:15 crc kubenswrapper[4727]: I1121 20:11:15.242251 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/65f1dd22-d654-4c5e-8f7b-066d08e7075e-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-dnpch\" 
(UID: \"65f1dd22-d654-4c5e-8f7b-066d08e7075e\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-dnpch" Nov 21 20:11:15 crc kubenswrapper[4727]: I1121 20:11:15.242366 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/65f1dd22-d654-4c5e-8f7b-066d08e7075e-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-dnpch\" (UID: \"65f1dd22-d654-4c5e-8f7b-066d08e7075e\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-dnpch" Nov 21 20:11:15 crc kubenswrapper[4727]: I1121 20:11:15.343588 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcqgg\" (UniqueName: \"kubernetes.io/projected/65f1dd22-d654-4c5e-8f7b-066d08e7075e-kube-api-access-rcqgg\") pod \"cluster-monitoring-operator-6d5b84845-dnpch\" (UID: \"65f1dd22-d654-4c5e-8f7b-066d08e7075e\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-dnpch" Nov 21 20:11:15 crc kubenswrapper[4727]: I1121 20:11:15.343643 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/65f1dd22-d654-4c5e-8f7b-066d08e7075e-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-dnpch\" (UID: \"65f1dd22-d654-4c5e-8f7b-066d08e7075e\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-dnpch" Nov 21 20:11:15 crc kubenswrapper[4727]: I1121 20:11:15.343713 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/65f1dd22-d654-4c5e-8f7b-066d08e7075e-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-dnpch\" (UID: \"65f1dd22-d654-4c5e-8f7b-066d08e7075e\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-dnpch" Nov 21 20:11:15 crc kubenswrapper[4727]: I1121 
20:11:15.345200 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/65f1dd22-d654-4c5e-8f7b-066d08e7075e-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-dnpch\" (UID: \"65f1dd22-d654-4c5e-8f7b-066d08e7075e\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-dnpch" Nov 21 20:11:15 crc kubenswrapper[4727]: I1121 20:11:15.349270 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/65f1dd22-d654-4c5e-8f7b-066d08e7075e-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-dnpch\" (UID: \"65f1dd22-d654-4c5e-8f7b-066d08e7075e\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-dnpch" Nov 21 20:11:15 crc kubenswrapper[4727]: I1121 20:11:15.360793 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcqgg\" (UniqueName: \"kubernetes.io/projected/65f1dd22-d654-4c5e-8f7b-066d08e7075e-kube-api-access-rcqgg\") pod \"cluster-monitoring-operator-6d5b84845-dnpch\" (UID: \"65f1dd22-d654-4c5e-8f7b-066d08e7075e\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-dnpch" Nov 21 20:11:15 crc kubenswrapper[4727]: I1121 20:11:15.457003 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-dnpch" Nov 21 20:11:15 crc kubenswrapper[4727]: I1121 20:11:15.837043 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-dnpch"] Nov 21 20:11:16 crc kubenswrapper[4727]: I1121 20:11:16.456142 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-dnpch" event={"ID":"65f1dd22-d654-4c5e-8f7b-066d08e7075e","Type":"ContainerStarted","Data":"df1feb668f8f8c2288bf0c87e7ed8b405db4357eb4419b5aa356fde13f91a737"} Nov 21 20:11:18 crc kubenswrapper[4727]: I1121 20:11:18.089522 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-zrgk2"] Nov 21 20:11:18 crc kubenswrapper[4727]: I1121 20:11:18.090831 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-zrgk2" Nov 21 20:11:18 crc kubenswrapper[4727]: I1121 20:11:18.103743 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-zrgk2"] Nov 21 20:11:18 crc kubenswrapper[4727]: I1121 20:11:18.212567 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-24hpt"] Nov 21 20:11:18 crc kubenswrapper[4727]: I1121 20:11:18.213320 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-24hpt" Nov 21 20:11:18 crc kubenswrapper[4727]: I1121 20:11:18.215870 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Nov 21 20:11:18 crc kubenswrapper[4727]: I1121 20:11:18.216154 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-pzgj7" Nov 21 20:11:18 crc kubenswrapper[4727]: I1121 20:11:18.217713 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-24hpt"] Nov 21 20:11:18 crc kubenswrapper[4727]: I1121 20:11:18.275846 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-zrgk2\" (UID: \"dc7961a5-baaf-4982-a362-fd4cba660bea\") " pod="openshift-image-registry/image-registry-66df7c8f76-zrgk2" Nov 21 20:11:18 crc kubenswrapper[4727]: I1121 20:11:18.275936 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pqkh\" (UniqueName: \"kubernetes.io/projected/dc7961a5-baaf-4982-a362-fd4cba660bea-kube-api-access-4pqkh\") pod \"image-registry-66df7c8f76-zrgk2\" (UID: \"dc7961a5-baaf-4982-a362-fd4cba660bea\") " pod="openshift-image-registry/image-registry-66df7c8f76-zrgk2" Nov 21 20:11:18 crc kubenswrapper[4727]: I1121 20:11:18.276045 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/dc7961a5-baaf-4982-a362-fd4cba660bea-installation-pull-secrets\") pod \"image-registry-66df7c8f76-zrgk2\" (UID: \"dc7961a5-baaf-4982-a362-fd4cba660bea\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-zrgk2" Nov 21 20:11:18 crc kubenswrapper[4727]: I1121 20:11:18.277194 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/dc7961a5-baaf-4982-a362-fd4cba660bea-ca-trust-extracted\") pod \"image-registry-66df7c8f76-zrgk2\" (UID: \"dc7961a5-baaf-4982-a362-fd4cba660bea\") " pod="openshift-image-registry/image-registry-66df7c8f76-zrgk2" Nov 21 20:11:18 crc kubenswrapper[4727]: I1121 20:11:18.277225 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/dc7961a5-baaf-4982-a362-fd4cba660bea-registry-certificates\") pod \"image-registry-66df7c8f76-zrgk2\" (UID: \"dc7961a5-baaf-4982-a362-fd4cba660bea\") " pod="openshift-image-registry/image-registry-66df7c8f76-zrgk2" Nov 21 20:11:18 crc kubenswrapper[4727]: I1121 20:11:18.277262 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dc7961a5-baaf-4982-a362-fd4cba660bea-trusted-ca\") pod \"image-registry-66df7c8f76-zrgk2\" (UID: \"dc7961a5-baaf-4982-a362-fd4cba660bea\") " pod="openshift-image-registry/image-registry-66df7c8f76-zrgk2" Nov 21 20:11:18 crc kubenswrapper[4727]: I1121 20:11:18.277301 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/dc7961a5-baaf-4982-a362-fd4cba660bea-registry-tls\") pod \"image-registry-66df7c8f76-zrgk2\" (UID: \"dc7961a5-baaf-4982-a362-fd4cba660bea\") " pod="openshift-image-registry/image-registry-66df7c8f76-zrgk2" Nov 21 20:11:18 crc kubenswrapper[4727]: I1121 20:11:18.277326 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/dc7961a5-baaf-4982-a362-fd4cba660bea-bound-sa-token\") pod \"image-registry-66df7c8f76-zrgk2\" (UID: \"dc7961a5-baaf-4982-a362-fd4cba660bea\") " pod="openshift-image-registry/image-registry-66df7c8f76-zrgk2" Nov 21 20:11:18 crc kubenswrapper[4727]: I1121 20:11:18.302468 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-zrgk2\" (UID: \"dc7961a5-baaf-4982-a362-fd4cba660bea\") " pod="openshift-image-registry/image-registry-66df7c8f76-zrgk2" Nov 21 20:11:18 crc kubenswrapper[4727]: I1121 20:11:18.378794 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pqkh\" (UniqueName: \"kubernetes.io/projected/dc7961a5-baaf-4982-a362-fd4cba660bea-kube-api-access-4pqkh\") pod \"image-registry-66df7c8f76-zrgk2\" (UID: \"dc7961a5-baaf-4982-a362-fd4cba660bea\") " pod="openshift-image-registry/image-registry-66df7c8f76-zrgk2" Nov 21 20:11:18 crc kubenswrapper[4727]: I1121 20:11:18.379665 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/dc7961a5-baaf-4982-a362-fd4cba660bea-installation-pull-secrets\") pod \"image-registry-66df7c8f76-zrgk2\" (UID: \"dc7961a5-baaf-4982-a362-fd4cba660bea\") " pod="openshift-image-registry/image-registry-66df7c8f76-zrgk2" Nov 21 20:11:18 crc kubenswrapper[4727]: I1121 20:11:18.380601 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/dc7961a5-baaf-4982-a362-fd4cba660bea-ca-trust-extracted\") pod \"image-registry-66df7c8f76-zrgk2\" (UID: \"dc7961a5-baaf-4982-a362-fd4cba660bea\") " pod="openshift-image-registry/image-registry-66df7c8f76-zrgk2" Nov 21 20:11:18 crc kubenswrapper[4727]: I1121 
20:11:18.380738 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/dc7961a5-baaf-4982-a362-fd4cba660bea-registry-certificates\") pod \"image-registry-66df7c8f76-zrgk2\" (UID: \"dc7961a5-baaf-4982-a362-fd4cba660bea\") " pod="openshift-image-registry/image-registry-66df7c8f76-zrgk2" Nov 21 20:11:18 crc kubenswrapper[4727]: I1121 20:11:18.380868 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dc7961a5-baaf-4982-a362-fd4cba660bea-trusted-ca\") pod \"image-registry-66df7c8f76-zrgk2\" (UID: \"dc7961a5-baaf-4982-a362-fd4cba660bea\") " pod="openshift-image-registry/image-registry-66df7c8f76-zrgk2" Nov 21 20:11:18 crc kubenswrapper[4727]: I1121 20:11:18.380999 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/dc7961a5-baaf-4982-a362-fd4cba660bea-registry-tls\") pod \"image-registry-66df7c8f76-zrgk2\" (UID: \"dc7961a5-baaf-4982-a362-fd4cba660bea\") " pod="openshift-image-registry/image-registry-66df7c8f76-zrgk2" Nov 21 20:11:18 crc kubenswrapper[4727]: I1121 20:11:18.381106 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dc7961a5-baaf-4982-a362-fd4cba660bea-bound-sa-token\") pod \"image-registry-66df7c8f76-zrgk2\" (UID: \"dc7961a5-baaf-4982-a362-fd4cba660bea\") " pod="openshift-image-registry/image-registry-66df7c8f76-zrgk2" Nov 21 20:11:18 crc kubenswrapper[4727]: I1121 20:11:18.381186 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/0dbfc629-6fb6-49a0-a834-28ab82b07c75-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-24hpt\" (UID: \"0dbfc629-6fb6-49a0-a834-28ab82b07c75\") " 
pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-24hpt" Nov 21 20:11:18 crc kubenswrapper[4727]: I1121 20:11:18.382239 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dc7961a5-baaf-4982-a362-fd4cba660bea-trusted-ca\") pod \"image-registry-66df7c8f76-zrgk2\" (UID: \"dc7961a5-baaf-4982-a362-fd4cba660bea\") " pod="openshift-image-registry/image-registry-66df7c8f76-zrgk2" Nov 21 20:11:18 crc kubenswrapper[4727]: I1121 20:11:18.382405 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/dc7961a5-baaf-4982-a362-fd4cba660bea-registry-certificates\") pod \"image-registry-66df7c8f76-zrgk2\" (UID: \"dc7961a5-baaf-4982-a362-fd4cba660bea\") " pod="openshift-image-registry/image-registry-66df7c8f76-zrgk2" Nov 21 20:11:18 crc kubenswrapper[4727]: I1121 20:11:18.382745 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/dc7961a5-baaf-4982-a362-fd4cba660bea-ca-trust-extracted\") pod \"image-registry-66df7c8f76-zrgk2\" (UID: \"dc7961a5-baaf-4982-a362-fd4cba660bea\") " pod="openshift-image-registry/image-registry-66df7c8f76-zrgk2" Nov 21 20:11:18 crc kubenswrapper[4727]: I1121 20:11:18.385394 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/dc7961a5-baaf-4982-a362-fd4cba660bea-installation-pull-secrets\") pod \"image-registry-66df7c8f76-zrgk2\" (UID: \"dc7961a5-baaf-4982-a362-fd4cba660bea\") " pod="openshift-image-registry/image-registry-66df7c8f76-zrgk2" Nov 21 20:11:18 crc kubenswrapper[4727]: I1121 20:11:18.386073 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/dc7961a5-baaf-4982-a362-fd4cba660bea-registry-tls\") pod 
\"image-registry-66df7c8f76-zrgk2\" (UID: \"dc7961a5-baaf-4982-a362-fd4cba660bea\") " pod="openshift-image-registry/image-registry-66df7c8f76-zrgk2" Nov 21 20:11:18 crc kubenswrapper[4727]: I1121 20:11:18.395728 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pqkh\" (UniqueName: \"kubernetes.io/projected/dc7961a5-baaf-4982-a362-fd4cba660bea-kube-api-access-4pqkh\") pod \"image-registry-66df7c8f76-zrgk2\" (UID: \"dc7961a5-baaf-4982-a362-fd4cba660bea\") " pod="openshift-image-registry/image-registry-66df7c8f76-zrgk2" Nov 21 20:11:18 crc kubenswrapper[4727]: I1121 20:11:18.401754 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dc7961a5-baaf-4982-a362-fd4cba660bea-bound-sa-token\") pod \"image-registry-66df7c8f76-zrgk2\" (UID: \"dc7961a5-baaf-4982-a362-fd4cba660bea\") " pod="openshift-image-registry/image-registry-66df7c8f76-zrgk2" Nov 21 20:11:18 crc kubenswrapper[4727]: I1121 20:11:18.406678 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-zrgk2" Nov 21 20:11:18 crc kubenswrapper[4727]: I1121 20:11:18.474926 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-dnpch" event={"ID":"65f1dd22-d654-4c5e-8f7b-066d08e7075e","Type":"ContainerStarted","Data":"cabdfe5358cb4c6cce828894ac997afeefbfd258f659c82d58a148e9b1c656e9"} Nov 21 20:11:18 crc kubenswrapper[4727]: I1121 20:11:18.482722 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/0dbfc629-6fb6-49a0-a834-28ab82b07c75-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-24hpt\" (UID: \"0dbfc629-6fb6-49a0-a834-28ab82b07c75\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-24hpt" Nov 21 20:11:18 crc kubenswrapper[4727]: I1121 20:11:18.489440 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/0dbfc629-6fb6-49a0-a834-28ab82b07c75-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-24hpt\" (UID: \"0dbfc629-6fb6-49a0-a834-28ab82b07c75\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-24hpt" Nov 21 20:11:18 crc kubenswrapper[4727]: I1121 20:11:18.494196 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-dnpch" podStartSLOduration=1.706424685 podStartE2EDuration="3.494178337s" podCreationTimestamp="2025-11-21 20:11:15 +0000 UTC" firstStartedPulling="2025-11-21 20:11:15.848730912 +0000 UTC m=+281.034915956" lastFinishedPulling="2025-11-21 20:11:17.636484564 +0000 UTC m=+282.822669608" observedRunningTime="2025-11-21 20:11:18.490892578 +0000 UTC m=+283.677077622" watchObservedRunningTime="2025-11-21 20:11:18.494178337 +0000 UTC m=+283.680363381" Nov 21 20:11:18 crc 
kubenswrapper[4727]: I1121 20:11:18.532502 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-24hpt" Nov 21 20:11:18 crc kubenswrapper[4727]: I1121 20:11:18.795080 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-zrgk2"] Nov 21 20:11:18 crc kubenswrapper[4727]: W1121 20:11:18.802681 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddc7961a5_baaf_4982_a362_fd4cba660bea.slice/crio-88271394bc5c4871035281757855831efd4d87c1112bd380230b494d5c94088e WatchSource:0}: Error finding container 88271394bc5c4871035281757855831efd4d87c1112bd380230b494d5c94088e: Status 404 returned error can't find the container with id 88271394bc5c4871035281757855831efd4d87c1112bd380230b494d5c94088e Nov 21 20:11:18 crc kubenswrapper[4727]: I1121 20:11:18.899197 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-24hpt"] Nov 21 20:11:19 crc kubenswrapper[4727]: I1121 20:11:19.480608 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-zrgk2" event={"ID":"dc7961a5-baaf-4982-a362-fd4cba660bea","Type":"ContainerStarted","Data":"126ed579e05830d3f307033dd8f1717f9a5afe6e973870e6914e7a4a397906e3"} Nov 21 20:11:19 crc kubenswrapper[4727]: I1121 20:11:19.481136 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-zrgk2" event={"ID":"dc7961a5-baaf-4982-a362-fd4cba660bea","Type":"ContainerStarted","Data":"88271394bc5c4871035281757855831efd4d87c1112bd380230b494d5c94088e"} Nov 21 20:11:19 crc kubenswrapper[4727]: I1121 20:11:19.481151 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-zrgk2" Nov 21 20:11:19 
crc kubenswrapper[4727]: I1121 20:11:19.482039 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-24hpt" event={"ID":"0dbfc629-6fb6-49a0-a834-28ab82b07c75","Type":"ContainerStarted","Data":"751c3d3441afbdb48a55828b61dcf7602dc556c8e4a1bd366690a93ef38be69e"} Nov 21 20:11:19 crc kubenswrapper[4727]: I1121 20:11:19.505379 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-zrgk2" podStartSLOduration=1.5051784019999999 podStartE2EDuration="1.505178402s" podCreationTimestamp="2025-11-21 20:11:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:11:19.498366468 +0000 UTC m=+284.684551512" watchObservedRunningTime="2025-11-21 20:11:19.505178402 +0000 UTC m=+284.691363456" Nov 21 20:11:20 crc kubenswrapper[4727]: I1121 20:11:20.490835 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-24hpt" event={"ID":"0dbfc629-6fb6-49a0-a834-28ab82b07c75","Type":"ContainerStarted","Data":"d963890d61226d7cbad1ca07f2b97f98558f76ef132faea3f33e41ffaa98b8d6"} Nov 21 20:11:20 crc kubenswrapper[4727]: I1121 20:11:20.504731 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-24hpt" podStartSLOduration=1.208979573 podStartE2EDuration="2.504715427s" podCreationTimestamp="2025-11-21 20:11:18 +0000 UTC" firstStartedPulling="2025-11-21 20:11:18.90537016 +0000 UTC m=+284.091555204" lastFinishedPulling="2025-11-21 20:11:20.201106014 +0000 UTC m=+285.387291058" observedRunningTime="2025-11-21 20:11:20.502363293 +0000 UTC m=+285.688548337" watchObservedRunningTime="2025-11-21 20:11:20.504715427 +0000 UTC m=+285.690900471" Nov 21 20:11:21 crc kubenswrapper[4727]: I1121 20:11:21.495358 
4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-24hpt" Nov 21 20:11:21 crc kubenswrapper[4727]: I1121 20:11:21.505108 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-24hpt" Nov 21 20:11:22 crc kubenswrapper[4727]: I1121 20:11:22.265038 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-4xzqq"] Nov 21 20:11:22 crc kubenswrapper[4727]: I1121 20:11:22.282886 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-4xzqq"] Nov 21 20:11:22 crc kubenswrapper[4727]: I1121 20:11:22.283004 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-4xzqq" Nov 21 20:11:22 crc kubenswrapper[4727]: I1121 20:11:22.288874 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Nov 21 20:11:22 crc kubenswrapper[4727]: I1121 20:11:22.288897 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-g4hdc" Nov 21 20:11:22 crc kubenswrapper[4727]: I1121 20:11:22.288897 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Nov 21 20:11:22 crc kubenswrapper[4727]: I1121 20:11:22.288986 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Nov 21 20:11:22 crc kubenswrapper[4727]: I1121 20:11:22.436792 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: 
\"kubernetes.io/secret/75a1ccac-2897-40b2-90ca-9c53750c1255-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-4xzqq\" (UID: \"75a1ccac-2897-40b2-90ca-9c53750c1255\") " pod="openshift-monitoring/prometheus-operator-db54df47d-4xzqq" Nov 21 20:11:22 crc kubenswrapper[4727]: I1121 20:11:22.436861 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/75a1ccac-2897-40b2-90ca-9c53750c1255-metrics-client-ca\") pod \"prometheus-operator-db54df47d-4xzqq\" (UID: \"75a1ccac-2897-40b2-90ca-9c53750c1255\") " pod="openshift-monitoring/prometheus-operator-db54df47d-4xzqq" Nov 21 20:11:22 crc kubenswrapper[4727]: I1121 20:11:22.436983 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/75a1ccac-2897-40b2-90ca-9c53750c1255-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-4xzqq\" (UID: \"75a1ccac-2897-40b2-90ca-9c53750c1255\") " pod="openshift-monitoring/prometheus-operator-db54df47d-4xzqq" Nov 21 20:11:22 crc kubenswrapper[4727]: I1121 20:11:22.437212 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7fsw\" (UniqueName: \"kubernetes.io/projected/75a1ccac-2897-40b2-90ca-9c53750c1255-kube-api-access-x7fsw\") pod \"prometheus-operator-db54df47d-4xzqq\" (UID: \"75a1ccac-2897-40b2-90ca-9c53750c1255\") " pod="openshift-monitoring/prometheus-operator-db54df47d-4xzqq" Nov 21 20:11:22 crc kubenswrapper[4727]: I1121 20:11:22.538726 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/75a1ccac-2897-40b2-90ca-9c53750c1255-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-4xzqq\" (UID: \"75a1ccac-2897-40b2-90ca-9c53750c1255\") 
" pod="openshift-monitoring/prometheus-operator-db54df47d-4xzqq" Nov 21 20:11:22 crc kubenswrapper[4727]: I1121 20:11:22.538794 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/75a1ccac-2897-40b2-90ca-9c53750c1255-metrics-client-ca\") pod \"prometheus-operator-db54df47d-4xzqq\" (UID: \"75a1ccac-2897-40b2-90ca-9c53750c1255\") " pod="openshift-monitoring/prometheus-operator-db54df47d-4xzqq" Nov 21 20:11:22 crc kubenswrapper[4727]: I1121 20:11:22.538840 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/75a1ccac-2897-40b2-90ca-9c53750c1255-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-4xzqq\" (UID: \"75a1ccac-2897-40b2-90ca-9c53750c1255\") " pod="openshift-monitoring/prometheus-operator-db54df47d-4xzqq" Nov 21 20:11:22 crc kubenswrapper[4727]: I1121 20:11:22.538983 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7fsw\" (UniqueName: \"kubernetes.io/projected/75a1ccac-2897-40b2-90ca-9c53750c1255-kube-api-access-x7fsw\") pod \"prometheus-operator-db54df47d-4xzqq\" (UID: \"75a1ccac-2897-40b2-90ca-9c53750c1255\") " pod="openshift-monitoring/prometheus-operator-db54df47d-4xzqq" Nov 21 20:11:22 crc kubenswrapper[4727]: I1121 20:11:22.540740 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/75a1ccac-2897-40b2-90ca-9c53750c1255-metrics-client-ca\") pod \"prometheus-operator-db54df47d-4xzqq\" (UID: \"75a1ccac-2897-40b2-90ca-9c53750c1255\") " pod="openshift-monitoring/prometheus-operator-db54df47d-4xzqq" Nov 21 20:11:22 crc kubenswrapper[4727]: I1121 20:11:22.546894 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: 
\"kubernetes.io/secret/75a1ccac-2897-40b2-90ca-9c53750c1255-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-4xzqq\" (UID: \"75a1ccac-2897-40b2-90ca-9c53750c1255\") " pod="openshift-monitoring/prometheus-operator-db54df47d-4xzqq" Nov 21 20:11:22 crc kubenswrapper[4727]: I1121 20:11:22.548059 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/75a1ccac-2897-40b2-90ca-9c53750c1255-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-4xzqq\" (UID: \"75a1ccac-2897-40b2-90ca-9c53750c1255\") " pod="openshift-monitoring/prometheus-operator-db54df47d-4xzqq" Nov 21 20:11:22 crc kubenswrapper[4727]: I1121 20:11:22.555349 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7fsw\" (UniqueName: \"kubernetes.io/projected/75a1ccac-2897-40b2-90ca-9c53750c1255-kube-api-access-x7fsw\") pod \"prometheus-operator-db54df47d-4xzqq\" (UID: \"75a1ccac-2897-40b2-90ca-9c53750c1255\") " pod="openshift-monitoring/prometheus-operator-db54df47d-4xzqq" Nov 21 20:11:22 crc kubenswrapper[4727]: I1121 20:11:22.604305 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-4xzqq" Nov 21 20:11:22 crc kubenswrapper[4727]: I1121 20:11:22.979886 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-4xzqq"] Nov 21 20:11:22 crc kubenswrapper[4727]: W1121 20:11:22.986521 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75a1ccac_2897_40b2_90ca_9c53750c1255.slice/crio-7b7ce110c209054e67ad33703feb8c9d406cbb73e71efd079acee8fe232486ed WatchSource:0}: Error finding container 7b7ce110c209054e67ad33703feb8c9d406cbb73e71efd079acee8fe232486ed: Status 404 returned error can't find the container with id 7b7ce110c209054e67ad33703feb8c9d406cbb73e71efd079acee8fe232486ed Nov 21 20:11:23 crc kubenswrapper[4727]: I1121 20:11:23.506083 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-4xzqq" event={"ID":"75a1ccac-2897-40b2-90ca-9c53750c1255","Type":"ContainerStarted","Data":"7b7ce110c209054e67ad33703feb8c9d406cbb73e71efd079acee8fe232486ed"} Nov 21 20:11:25 crc kubenswrapper[4727]: I1121 20:11:25.518989 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-4xzqq" event={"ID":"75a1ccac-2897-40b2-90ca-9c53750c1255","Type":"ContainerStarted","Data":"caf7fe1ad6aac074d89e68658b8b5244ae36b6bbd2f67eed49579e039fddf271"} Nov 21 20:11:25 crc kubenswrapper[4727]: I1121 20:11:25.519329 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-4xzqq" event={"ID":"75a1ccac-2897-40b2-90ca-9c53750c1255","Type":"ContainerStarted","Data":"51d9478ca4eca367abe63cff7d7ed371a451010ed7e41ab6b4ad1ac8748116f7"} Nov 21 20:11:25 crc kubenswrapper[4727]: I1121 20:11:25.537370 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-monitoring/prometheus-operator-db54df47d-4xzqq" podStartSLOduration=1.683496031 podStartE2EDuration="3.537354598s" podCreationTimestamp="2025-11-21 20:11:22 +0000 UTC" firstStartedPulling="2025-11-21 20:11:22.988573295 +0000 UTC m=+288.174758339" lastFinishedPulling="2025-11-21 20:11:24.842431862 +0000 UTC m=+290.028616906" observedRunningTime="2025-11-21 20:11:25.535580799 +0000 UTC m=+290.721765843" watchObservedRunningTime="2025-11-21 20:11:25.537354598 +0000 UTC m=+290.723539632" Nov 21 20:11:27 crc kubenswrapper[4727]: I1121 20:11:27.592333 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-d7742"] Nov 21 20:11:27 crc kubenswrapper[4727]: I1121 20:11:27.593658 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-d7742" Nov 21 20:11:27 crc kubenswrapper[4727]: I1121 20:11:27.596411 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-r8hhc" Nov 21 20:11:27 crc kubenswrapper[4727]: I1121 20:11:27.596879 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Nov 21 20:11:27 crc kubenswrapper[4727]: I1121 20:11:27.597469 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Nov 21 20:11:27 crc kubenswrapper[4727]: I1121 20:11:27.614243 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-d7742"] Nov 21 20:11:27 crc kubenswrapper[4727]: I1121 20:11:27.641347 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-m2pcp"] Nov 21 20:11:27 crc kubenswrapper[4727]: I1121 20:11:27.642776 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-m2pcp" Nov 21 20:11:27 crc kubenswrapper[4727]: I1121 20:11:27.648466 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-tks8d" Nov 21 20:11:27 crc kubenswrapper[4727]: I1121 20:11:27.648714 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Nov 21 20:11:27 crc kubenswrapper[4727]: I1121 20:11:27.648810 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Nov 21 20:11:27 crc kubenswrapper[4727]: I1121 20:11:27.655491 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-vd7bb"] Nov 21 20:11:27 crc kubenswrapper[4727]: I1121 20:11:27.656433 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-vd7bb" Nov 21 20:11:27 crc kubenswrapper[4727]: I1121 20:11:27.658429 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Nov 21 20:11:27 crc kubenswrapper[4727]: I1121 20:11:27.658475 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Nov 21 20:11:27 crc kubenswrapper[4727]: I1121 20:11:27.659764 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Nov 21 20:11:27 crc kubenswrapper[4727]: I1121 20:11:27.665409 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-pvrwh" Nov 21 20:11:27 crc kubenswrapper[4727]: I1121 20:11:27.686979 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-vd7bb"] Nov 21 20:11:27 crc kubenswrapper[4727]: 
I1121 20:11:27.706629 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/df689615-6b68-4fd1-8496-d65036ba207c-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-m2pcp\" (UID: \"df689615-6b68-4fd1-8496-d65036ba207c\") " pod="openshift-monitoring/node-exporter-m2pcp" Nov 21 20:11:27 crc kubenswrapper[4727]: I1121 20:11:27.706700 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcd48\" (UniqueName: \"kubernetes.io/projected/df689615-6b68-4fd1-8496-d65036ba207c-kube-api-access-dcd48\") pod \"node-exporter-m2pcp\" (UID: \"df689615-6b68-4fd1-8496-d65036ba207c\") " pod="openshift-monitoring/node-exporter-m2pcp" Nov 21 20:11:27 crc kubenswrapper[4727]: I1121 20:11:27.706759 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tql5k\" (UniqueName: \"kubernetes.io/projected/7f83e601-7cf0-451b-b823-f2a1aaaa76ad-kube-api-access-tql5k\") pod \"openshift-state-metrics-566fddb674-d7742\" (UID: \"7f83e601-7cf0-451b-b823-f2a1aaaa76ad\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-d7742" Nov 21 20:11:27 crc kubenswrapper[4727]: I1121 20:11:27.706787 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/7f83e601-7cf0-451b-b823-f2a1aaaa76ad-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-d7742\" (UID: \"7f83e601-7cf0-451b-b823-f2a1aaaa76ad\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-d7742" Nov 21 20:11:27 crc kubenswrapper[4727]: I1121 20:11:27.706842 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: 
\"kubernetes.io/empty-dir/df689615-6b68-4fd1-8496-d65036ba207c-node-exporter-textfile\") pod \"node-exporter-m2pcp\" (UID: \"df689615-6b68-4fd1-8496-d65036ba207c\") " pod="openshift-monitoring/node-exporter-m2pcp" Nov 21 20:11:27 crc kubenswrapper[4727]: I1121 20:11:27.706974 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/df689615-6b68-4fd1-8496-d65036ba207c-node-exporter-tls\") pod \"node-exporter-m2pcp\" (UID: \"df689615-6b68-4fd1-8496-d65036ba207c\") " pod="openshift-monitoring/node-exporter-m2pcp" Nov 21 20:11:27 crc kubenswrapper[4727]: I1121 20:11:27.707012 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/7f83e601-7cf0-451b-b823-f2a1aaaa76ad-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-d7742\" (UID: \"7f83e601-7cf0-451b-b823-f2a1aaaa76ad\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-d7742" Nov 21 20:11:27 crc kubenswrapper[4727]: I1121 20:11:27.707067 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/df689615-6b68-4fd1-8496-d65036ba207c-sys\") pod \"node-exporter-m2pcp\" (UID: \"df689615-6b68-4fd1-8496-d65036ba207c\") " pod="openshift-monitoring/node-exporter-m2pcp" Nov 21 20:11:27 crc kubenswrapper[4727]: I1121 20:11:27.707087 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/df689615-6b68-4fd1-8496-d65036ba207c-root\") pod \"node-exporter-m2pcp\" (UID: \"df689615-6b68-4fd1-8496-d65036ba207c\") " pod="openshift-monitoring/node-exporter-m2pcp" Nov 21 20:11:27 crc kubenswrapper[4727]: I1121 20:11:27.707132 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/df689615-6b68-4fd1-8496-d65036ba207c-metrics-client-ca\") pod \"node-exporter-m2pcp\" (UID: \"df689615-6b68-4fd1-8496-d65036ba207c\") " pod="openshift-monitoring/node-exporter-m2pcp" Nov 21 20:11:27 crc kubenswrapper[4727]: I1121 20:11:27.707164 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/df689615-6b68-4fd1-8496-d65036ba207c-node-exporter-wtmp\") pod \"node-exporter-m2pcp\" (UID: \"df689615-6b68-4fd1-8496-d65036ba207c\") " pod="openshift-monitoring/node-exporter-m2pcp" Nov 21 20:11:27 crc kubenswrapper[4727]: I1121 20:11:27.707220 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/7f83e601-7cf0-451b-b823-f2a1aaaa76ad-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-d7742\" (UID: \"7f83e601-7cf0-451b-b823-f2a1aaaa76ad\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-d7742" Nov 21 20:11:27 crc kubenswrapper[4727]: I1121 20:11:27.808161 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/df689615-6b68-4fd1-8496-d65036ba207c-node-exporter-tls\") pod \"node-exporter-m2pcp\" (UID: \"df689615-6b68-4fd1-8496-d65036ba207c\") " pod="openshift-monitoring/node-exporter-m2pcp" Nov 21 20:11:27 crc kubenswrapper[4727]: E1121 20:11:27.808332 4727 secret.go:188] Couldn't get secret openshift-monitoring/node-exporter-tls: secret "node-exporter-tls" not found Nov 21 20:11:27 crc kubenswrapper[4727]: I1121 20:11:27.808516 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/7f83e601-7cf0-451b-b823-f2a1aaaa76ad-openshift-state-metrics-tls\") pod 
\"openshift-state-metrics-566fddb674-d7742\" (UID: \"7f83e601-7cf0-451b-b823-f2a1aaaa76ad\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-d7742" Nov 21 20:11:27 crc kubenswrapper[4727]: E1121 20:11:27.808582 4727 secret.go:188] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: secret "openshift-state-metrics-tls" not found Nov 21 20:11:27 crc kubenswrapper[4727]: E1121 20:11:27.808584 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df689615-6b68-4fd1-8496-d65036ba207c-node-exporter-tls podName:df689615-6b68-4fd1-8496-d65036ba207c nodeName:}" failed. No retries permitted until 2025-11-21 20:11:28.308561213 +0000 UTC m=+293.494746267 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/df689615-6b68-4fd1-8496-d65036ba207c-node-exporter-tls") pod "node-exporter-m2pcp" (UID: "df689615-6b68-4fd1-8496-d65036ba207c") : secret "node-exporter-tls" not found Nov 21 20:11:27 crc kubenswrapper[4727]: I1121 20:11:27.808777 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/df689615-6b68-4fd1-8496-d65036ba207c-sys\") pod \"node-exporter-m2pcp\" (UID: \"df689615-6b68-4fd1-8496-d65036ba207c\") " pod="openshift-monitoring/node-exporter-m2pcp" Nov 21 20:11:27 crc kubenswrapper[4727]: I1121 20:11:27.808816 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/df689615-6b68-4fd1-8496-d65036ba207c-root\") pod \"node-exporter-m2pcp\" (UID: \"df689615-6b68-4fd1-8496-d65036ba207c\") " pod="openshift-monitoring/node-exporter-m2pcp" Nov 21 20:11:27 crc kubenswrapper[4727]: E1121 20:11:27.808842 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7f83e601-7cf0-451b-b823-f2a1aaaa76ad-openshift-state-metrics-tls podName:7f83e601-7cf0-451b-b823-f2a1aaaa76ad 
nodeName:}" failed. No retries permitted until 2025-11-21 20:11:28.30882871 +0000 UTC m=+293.495013814 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/7f83e601-7cf0-451b-b823-f2a1aaaa76ad-openshift-state-metrics-tls") pod "openshift-state-metrics-566fddb674-d7742" (UID: "7f83e601-7cf0-451b-b823-f2a1aaaa76ad") : secret "openshift-state-metrics-tls" not found Nov 21 20:11:27 crc kubenswrapper[4727]: I1121 20:11:27.808856 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/df689615-6b68-4fd1-8496-d65036ba207c-sys\") pod \"node-exporter-m2pcp\" (UID: \"df689615-6b68-4fd1-8496-d65036ba207c\") " pod="openshift-monitoring/node-exporter-m2pcp" Nov 21 20:11:27 crc kubenswrapper[4727]: I1121 20:11:27.808873 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/df689615-6b68-4fd1-8496-d65036ba207c-root\") pod \"node-exporter-m2pcp\" (UID: \"df689615-6b68-4fd1-8496-d65036ba207c\") " pod="openshift-monitoring/node-exporter-m2pcp" Nov 21 20:11:27 crc kubenswrapper[4727]: I1121 20:11:27.808880 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/df689615-6b68-4fd1-8496-d65036ba207c-metrics-client-ca\") pod \"node-exporter-m2pcp\" (UID: \"df689615-6b68-4fd1-8496-d65036ba207c\") " pod="openshift-monitoring/node-exporter-m2pcp" Nov 21 20:11:27 crc kubenswrapper[4727]: I1121 20:11:27.808929 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/df689615-6b68-4fd1-8496-d65036ba207c-node-exporter-wtmp\") pod \"node-exporter-m2pcp\" (UID: \"df689615-6b68-4fd1-8496-d65036ba207c\") " pod="openshift-monitoring/node-exporter-m2pcp" Nov 21 20:11:27 crc kubenswrapper[4727]: I1121 20:11:27.809000 
4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/7f83e601-7cf0-451b-b823-f2a1aaaa76ad-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-d7742\" (UID: \"7f83e601-7cf0-451b-b823-f2a1aaaa76ad\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-d7742" Nov 21 20:11:27 crc kubenswrapper[4727]: I1121 20:11:27.809049 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/df689615-6b68-4fd1-8496-d65036ba207c-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-m2pcp\" (UID: \"df689615-6b68-4fd1-8496-d65036ba207c\") " pod="openshift-monitoring/node-exporter-m2pcp" Nov 21 20:11:27 crc kubenswrapper[4727]: I1121 20:11:27.809096 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/dc6e2683-b34b-47a7-9f9f-35f7e948cbb2-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-vd7bb\" (UID: \"dc6e2683-b34b-47a7-9f9f-35f7e948cbb2\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-vd7bb" Nov 21 20:11:27 crc kubenswrapper[4727]: I1121 20:11:27.809135 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcd48\" (UniqueName: \"kubernetes.io/projected/df689615-6b68-4fd1-8496-d65036ba207c-kube-api-access-dcd48\") pod \"node-exporter-m2pcp\" (UID: \"df689615-6b68-4fd1-8496-d65036ba207c\") " pod="openshift-monitoring/node-exporter-m2pcp" Nov 21 20:11:27 crc kubenswrapper[4727]: I1121 20:11:27.809151 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/df689615-6b68-4fd1-8496-d65036ba207c-node-exporter-wtmp\") pod \"node-exporter-m2pcp\" (UID: \"df689615-6b68-4fd1-8496-d65036ba207c\") " 
pod="openshift-monitoring/node-exporter-m2pcp" Nov 21 20:11:27 crc kubenswrapper[4727]: I1121 20:11:27.809171 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/dc6e2683-b34b-47a7-9f9f-35f7e948cbb2-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-vd7bb\" (UID: \"dc6e2683-b34b-47a7-9f9f-35f7e948cbb2\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-vd7bb" Nov 21 20:11:27 crc kubenswrapper[4727]: I1121 20:11:27.809210 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tql5k\" (UniqueName: \"kubernetes.io/projected/7f83e601-7cf0-451b-b823-f2a1aaaa76ad-kube-api-access-tql5k\") pod \"openshift-state-metrics-566fddb674-d7742\" (UID: \"7f83e601-7cf0-451b-b823-f2a1aaaa76ad\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-d7742" Nov 21 20:11:27 crc kubenswrapper[4727]: I1121 20:11:27.809236 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q267v\" (UniqueName: \"kubernetes.io/projected/dc6e2683-b34b-47a7-9f9f-35f7e948cbb2-kube-api-access-q267v\") pod \"kube-state-metrics-777cb5bd5d-vd7bb\" (UID: \"dc6e2683-b34b-47a7-9f9f-35f7e948cbb2\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-vd7bb" Nov 21 20:11:27 crc kubenswrapper[4727]: I1121 20:11:27.809273 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/7f83e601-7cf0-451b-b823-f2a1aaaa76ad-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-d7742\" (UID: \"7f83e601-7cf0-451b-b823-f2a1aaaa76ad\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-d7742" Nov 21 20:11:27 crc kubenswrapper[4727]: I1121 20:11:27.809307 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/dc6e2683-b34b-47a7-9f9f-35f7e948cbb2-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-vd7bb\" (UID: \"dc6e2683-b34b-47a7-9f9f-35f7e948cbb2\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-vd7bb" Nov 21 20:11:27 crc kubenswrapper[4727]: I1121 20:11:27.809328 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/dc6e2683-b34b-47a7-9f9f-35f7e948cbb2-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-vd7bb\" (UID: \"dc6e2683-b34b-47a7-9f9f-35f7e948cbb2\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-vd7bb" Nov 21 20:11:27 crc kubenswrapper[4727]: I1121 20:11:27.809356 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/df689615-6b68-4fd1-8496-d65036ba207c-node-exporter-textfile\") pod \"node-exporter-m2pcp\" (UID: \"df689615-6b68-4fd1-8496-d65036ba207c\") " pod="openshift-monitoring/node-exporter-m2pcp" Nov 21 20:11:27 crc kubenswrapper[4727]: I1121 20:11:27.809640 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/df689615-6b68-4fd1-8496-d65036ba207c-metrics-client-ca\") pod \"node-exporter-m2pcp\" (UID: \"df689615-6b68-4fd1-8496-d65036ba207c\") " pod="openshift-monitoring/node-exporter-m2pcp" Nov 21 20:11:27 crc kubenswrapper[4727]: I1121 20:11:27.809873 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/df689615-6b68-4fd1-8496-d65036ba207c-node-exporter-textfile\") pod \"node-exporter-m2pcp\" (UID: \"df689615-6b68-4fd1-8496-d65036ba207c\") " pod="openshift-monitoring/node-exporter-m2pcp" Nov 21 20:11:27 crc kubenswrapper[4727]: 
I1121 20:11:27.809947 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/dc6e2683-b34b-47a7-9f9f-35f7e948cbb2-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-vd7bb\" (UID: \"dc6e2683-b34b-47a7-9f9f-35f7e948cbb2\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-vd7bb" Nov 21 20:11:27 crc kubenswrapper[4727]: I1121 20:11:27.810146 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/7f83e601-7cf0-451b-b823-f2a1aaaa76ad-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-d7742\" (UID: \"7f83e601-7cf0-451b-b823-f2a1aaaa76ad\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-d7742" Nov 21 20:11:27 crc kubenswrapper[4727]: I1121 20:11:27.814208 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/df689615-6b68-4fd1-8496-d65036ba207c-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-m2pcp\" (UID: \"df689615-6b68-4fd1-8496-d65036ba207c\") " pod="openshift-monitoring/node-exporter-m2pcp" Nov 21 20:11:27 crc kubenswrapper[4727]: I1121 20:11:27.817528 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/7f83e601-7cf0-451b-b823-f2a1aaaa76ad-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-d7742\" (UID: \"7f83e601-7cf0-451b-b823-f2a1aaaa76ad\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-d7742" Nov 21 20:11:27 crc kubenswrapper[4727]: I1121 20:11:27.829516 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcd48\" (UniqueName: 
\"kubernetes.io/projected/df689615-6b68-4fd1-8496-d65036ba207c-kube-api-access-dcd48\") pod \"node-exporter-m2pcp\" (UID: \"df689615-6b68-4fd1-8496-d65036ba207c\") " pod="openshift-monitoring/node-exporter-m2pcp" Nov 21 20:11:27 crc kubenswrapper[4727]: I1121 20:11:27.830192 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tql5k\" (UniqueName: \"kubernetes.io/projected/7f83e601-7cf0-451b-b823-f2a1aaaa76ad-kube-api-access-tql5k\") pod \"openshift-state-metrics-566fddb674-d7742\" (UID: \"7f83e601-7cf0-451b-b823-f2a1aaaa76ad\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-d7742" Nov 21 20:11:27 crc kubenswrapper[4727]: I1121 20:11:27.911653 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/dc6e2683-b34b-47a7-9f9f-35f7e948cbb2-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-vd7bb\" (UID: \"dc6e2683-b34b-47a7-9f9f-35f7e948cbb2\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-vd7bb" Nov 21 20:11:27 crc kubenswrapper[4727]: I1121 20:11:27.911704 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/dc6e2683-b34b-47a7-9f9f-35f7e948cbb2-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-vd7bb\" (UID: \"dc6e2683-b34b-47a7-9f9f-35f7e948cbb2\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-vd7bb" Nov 21 20:11:27 crc kubenswrapper[4727]: I1121 20:11:27.911726 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q267v\" (UniqueName: \"kubernetes.io/projected/dc6e2683-b34b-47a7-9f9f-35f7e948cbb2-kube-api-access-q267v\") pod \"kube-state-metrics-777cb5bd5d-vd7bb\" (UID: \"dc6e2683-b34b-47a7-9f9f-35f7e948cbb2\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-vd7bb" Nov 21 20:11:27 crc 
kubenswrapper[4727]: I1121 20:11:27.911747 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/dc6e2683-b34b-47a7-9f9f-35f7e948cbb2-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-vd7bb\" (UID: \"dc6e2683-b34b-47a7-9f9f-35f7e948cbb2\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-vd7bb" Nov 21 20:11:27 crc kubenswrapper[4727]: I1121 20:11:27.911767 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/dc6e2683-b34b-47a7-9f9f-35f7e948cbb2-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-vd7bb\" (UID: \"dc6e2683-b34b-47a7-9f9f-35f7e948cbb2\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-vd7bb" Nov 21 20:11:27 crc kubenswrapper[4727]: I1121 20:11:27.911791 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/dc6e2683-b34b-47a7-9f9f-35f7e948cbb2-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-vd7bb\" (UID: \"dc6e2683-b34b-47a7-9f9f-35f7e948cbb2\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-vd7bb" Nov 21 20:11:27 crc kubenswrapper[4727]: I1121 20:11:27.912310 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/dc6e2683-b34b-47a7-9f9f-35f7e948cbb2-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-vd7bb\" (UID: \"dc6e2683-b34b-47a7-9f9f-35f7e948cbb2\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-vd7bb" Nov 21 20:11:27 crc kubenswrapper[4727]: I1121 20:11:27.912621 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/dc6e2683-b34b-47a7-9f9f-35f7e948cbb2-metrics-client-ca\") pod 
\"kube-state-metrics-777cb5bd5d-vd7bb\" (UID: \"dc6e2683-b34b-47a7-9f9f-35f7e948cbb2\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-vd7bb" Nov 21 20:11:27 crc kubenswrapper[4727]: I1121 20:11:27.912748 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/dc6e2683-b34b-47a7-9f9f-35f7e948cbb2-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-vd7bb\" (UID: \"dc6e2683-b34b-47a7-9f9f-35f7e948cbb2\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-vd7bb" Nov 21 20:11:27 crc kubenswrapper[4727]: I1121 20:11:27.915152 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/dc6e2683-b34b-47a7-9f9f-35f7e948cbb2-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-vd7bb\" (UID: \"dc6e2683-b34b-47a7-9f9f-35f7e948cbb2\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-vd7bb" Nov 21 20:11:27 crc kubenswrapper[4727]: I1121 20:11:27.915826 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/dc6e2683-b34b-47a7-9f9f-35f7e948cbb2-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-vd7bb\" (UID: \"dc6e2683-b34b-47a7-9f9f-35f7e948cbb2\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-vd7bb" Nov 21 20:11:27 crc kubenswrapper[4727]: I1121 20:11:27.928017 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q267v\" (UniqueName: \"kubernetes.io/projected/dc6e2683-b34b-47a7-9f9f-35f7e948cbb2-kube-api-access-q267v\") pod \"kube-state-metrics-777cb5bd5d-vd7bb\" (UID: \"dc6e2683-b34b-47a7-9f9f-35f7e948cbb2\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-vd7bb" Nov 21 20:11:27 crc kubenswrapper[4727]: I1121 20:11:27.968927 4727 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-vd7bb" Nov 21 20:11:28 crc kubenswrapper[4727]: I1121 20:11:28.155329 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-vd7bb"] Nov 21 20:11:28 crc kubenswrapper[4727]: I1121 20:11:28.315887 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/7f83e601-7cf0-451b-b823-f2a1aaaa76ad-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-d7742\" (UID: \"7f83e601-7cf0-451b-b823-f2a1aaaa76ad\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-d7742" Nov 21 20:11:28 crc kubenswrapper[4727]: I1121 20:11:28.316244 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/df689615-6b68-4fd1-8496-d65036ba207c-node-exporter-tls\") pod \"node-exporter-m2pcp\" (UID: \"df689615-6b68-4fd1-8496-d65036ba207c\") " pod="openshift-monitoring/node-exporter-m2pcp" Nov 21 20:11:28 crc kubenswrapper[4727]: I1121 20:11:28.320651 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/df689615-6b68-4fd1-8496-d65036ba207c-node-exporter-tls\") pod \"node-exporter-m2pcp\" (UID: \"df689615-6b68-4fd1-8496-d65036ba207c\") " pod="openshift-monitoring/node-exporter-m2pcp" Nov 21 20:11:28 crc kubenswrapper[4727]: I1121 20:11:28.321233 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/7f83e601-7cf0-451b-b823-f2a1aaaa76ad-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-d7742\" (UID: \"7f83e601-7cf0-451b-b823-f2a1aaaa76ad\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-d7742" Nov 21 20:11:28 crc 
kubenswrapper[4727]: I1121 20:11:28.518456 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-d7742" Nov 21 20:11:28 crc kubenswrapper[4727]: I1121 20:11:28.539934 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-vd7bb" event={"ID":"dc6e2683-b34b-47a7-9f9f-35f7e948cbb2","Type":"ContainerStarted","Data":"3a4c103985649367f271aa2dca1fd92587760cc91bdfd6ce011edfdfe83e8dcb"} Nov 21 20:11:28 crc kubenswrapper[4727]: I1121 20:11:28.557444 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-m2pcp" Nov 21 20:11:28 crc kubenswrapper[4727]: W1121 20:11:28.610638 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf689615_6b68_4fd1_8496_d65036ba207c.slice/crio-f41300cce92d31111df452ae6779ee9446aabf21e92814f6f23068e1c0e632de WatchSource:0}: Error finding container f41300cce92d31111df452ae6779ee9446aabf21e92814f6f23068e1c0e632de: Status 404 returned error can't find the container with id f41300cce92d31111df452ae6779ee9446aabf21e92814f6f23068e1c0e632de Nov 21 20:11:28 crc kubenswrapper[4727]: I1121 20:11:28.733900 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Nov 21 20:11:28 crc kubenswrapper[4727]: I1121 20:11:28.738176 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Nov 21 20:11:28 crc kubenswrapper[4727]: I1121 20:11:28.755043 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Nov 21 20:11:28 crc kubenswrapper[4727]: I1121 20:11:28.755065 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Nov 21 20:11:28 crc kubenswrapper[4727]: I1121 20:11:28.755094 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Nov 21 20:11:28 crc kubenswrapper[4727]: I1121 20:11:28.755191 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Nov 21 20:11:28 crc kubenswrapper[4727]: I1121 20:11:28.755227 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Nov 21 20:11:28 crc kubenswrapper[4727]: I1121 20:11:28.755268 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-5gmtb" Nov 21 20:11:28 crc kubenswrapper[4727]: I1121 20:11:28.755303 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Nov 21 20:11:28 crc kubenswrapper[4727]: I1121 20:11:28.755357 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Nov 21 20:11:28 crc kubenswrapper[4727]: I1121 20:11:28.756257 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Nov 21 20:11:28 crc kubenswrapper[4727]: I1121 20:11:28.774848 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Nov 21 20:11:28 crc kubenswrapper[4727]: I1121 20:11:28.826291 4727 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/719c7426-3fe0-4b20-8926-f68a569e55c5-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"719c7426-3fe0-4b20-8926-f68a569e55c5\") " pod="openshift-monitoring/alertmanager-main-0" Nov 21 20:11:28 crc kubenswrapper[4727]: I1121 20:11:28.826327 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/719c7426-3fe0-4b20-8926-f68a569e55c5-config-out\") pod \"alertmanager-main-0\" (UID: \"719c7426-3fe0-4b20-8926-f68a569e55c5\") " pod="openshift-monitoring/alertmanager-main-0" Nov 21 20:11:28 crc kubenswrapper[4727]: I1121 20:11:28.826344 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/719c7426-3fe0-4b20-8926-f68a569e55c5-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"719c7426-3fe0-4b20-8926-f68a569e55c5\") " pod="openshift-monitoring/alertmanager-main-0" Nov 21 20:11:28 crc kubenswrapper[4727]: I1121 20:11:28.826369 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/719c7426-3fe0-4b20-8926-f68a569e55c5-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"719c7426-3fe0-4b20-8926-f68a569e55c5\") " pod="openshift-monitoring/alertmanager-main-0" Nov 21 20:11:28 crc kubenswrapper[4727]: I1121 20:11:28.826401 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/719c7426-3fe0-4b20-8926-f68a569e55c5-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"719c7426-3fe0-4b20-8926-f68a569e55c5\") " 
pod="openshift-monitoring/alertmanager-main-0" Nov 21 20:11:28 crc kubenswrapper[4727]: I1121 20:11:28.826422 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/719c7426-3fe0-4b20-8926-f68a569e55c5-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"719c7426-3fe0-4b20-8926-f68a569e55c5\") " pod="openshift-monitoring/alertmanager-main-0" Nov 21 20:11:28 crc kubenswrapper[4727]: I1121 20:11:28.826438 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/719c7426-3fe0-4b20-8926-f68a569e55c5-tls-assets\") pod \"alertmanager-main-0\" (UID: \"719c7426-3fe0-4b20-8926-f68a569e55c5\") " pod="openshift-monitoring/alertmanager-main-0" Nov 21 20:11:28 crc kubenswrapper[4727]: I1121 20:11:28.826463 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/719c7426-3fe0-4b20-8926-f68a569e55c5-web-config\") pod \"alertmanager-main-0\" (UID: \"719c7426-3fe0-4b20-8926-f68a569e55c5\") " pod="openshift-monitoring/alertmanager-main-0" Nov 21 20:11:28 crc kubenswrapper[4727]: I1121 20:11:28.826480 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/719c7426-3fe0-4b20-8926-f68a569e55c5-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"719c7426-3fe0-4b20-8926-f68a569e55c5\") " pod="openshift-monitoring/alertmanager-main-0" Nov 21 20:11:28 crc kubenswrapper[4727]: I1121 20:11:28.826505 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/secret/719c7426-3fe0-4b20-8926-f68a569e55c5-config-volume\") pod \"alertmanager-main-0\" (UID: \"719c7426-3fe0-4b20-8926-f68a569e55c5\") " pod="openshift-monitoring/alertmanager-main-0" Nov 21 20:11:28 crc kubenswrapper[4727]: I1121 20:11:28.826524 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pxb5\" (UniqueName: \"kubernetes.io/projected/719c7426-3fe0-4b20-8926-f68a569e55c5-kube-api-access-8pxb5\") pod \"alertmanager-main-0\" (UID: \"719c7426-3fe0-4b20-8926-f68a569e55c5\") " pod="openshift-monitoring/alertmanager-main-0" Nov 21 20:11:28 crc kubenswrapper[4727]: I1121 20:11:28.826539 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/719c7426-3fe0-4b20-8926-f68a569e55c5-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"719c7426-3fe0-4b20-8926-f68a569e55c5\") " pod="openshift-monitoring/alertmanager-main-0" Nov 21 20:11:28 crc kubenswrapper[4727]: I1121 20:11:28.872263 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-d7742"] Nov 21 20:11:28 crc kubenswrapper[4727]: I1121 20:11:28.928098 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/719c7426-3fe0-4b20-8926-f68a569e55c5-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"719c7426-3fe0-4b20-8926-f68a569e55c5\") " pod="openshift-monitoring/alertmanager-main-0" Nov 21 20:11:28 crc kubenswrapper[4727]: I1121 20:11:28.928164 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/719c7426-3fe0-4b20-8926-f68a569e55c5-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"719c7426-3fe0-4b20-8926-f68a569e55c5\") " 
pod="openshift-monitoring/alertmanager-main-0" Nov 21 20:11:28 crc kubenswrapper[4727]: I1121 20:11:28.928211 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/719c7426-3fe0-4b20-8926-f68a569e55c5-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"719c7426-3fe0-4b20-8926-f68a569e55c5\") " pod="openshift-monitoring/alertmanager-main-0" Nov 21 20:11:28 crc kubenswrapper[4727]: I1121 20:11:28.928241 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/719c7426-3fe0-4b20-8926-f68a569e55c5-tls-assets\") pod \"alertmanager-main-0\" (UID: \"719c7426-3fe0-4b20-8926-f68a569e55c5\") " pod="openshift-monitoring/alertmanager-main-0" Nov 21 20:11:28 crc kubenswrapper[4727]: I1121 20:11:28.928267 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/719c7426-3fe0-4b20-8926-f68a569e55c5-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"719c7426-3fe0-4b20-8926-f68a569e55c5\") " pod="openshift-monitoring/alertmanager-main-0" Nov 21 20:11:28 crc kubenswrapper[4727]: I1121 20:11:28.928307 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/719c7426-3fe0-4b20-8926-f68a569e55c5-web-config\") pod \"alertmanager-main-0\" (UID: \"719c7426-3fe0-4b20-8926-f68a569e55c5\") " pod="openshift-monitoring/alertmanager-main-0" Nov 21 20:11:28 crc kubenswrapper[4727]: I1121 20:11:28.928330 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/719c7426-3fe0-4b20-8926-f68a569e55c5-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: 
\"719c7426-3fe0-4b20-8926-f68a569e55c5\") " pod="openshift-monitoring/alertmanager-main-0" Nov 21 20:11:28 crc kubenswrapper[4727]: I1121 20:11:28.928371 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/719c7426-3fe0-4b20-8926-f68a569e55c5-config-volume\") pod \"alertmanager-main-0\" (UID: \"719c7426-3fe0-4b20-8926-f68a569e55c5\") " pod="openshift-monitoring/alertmanager-main-0" Nov 21 20:11:28 crc kubenswrapper[4727]: I1121 20:11:28.928399 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pxb5\" (UniqueName: \"kubernetes.io/projected/719c7426-3fe0-4b20-8926-f68a569e55c5-kube-api-access-8pxb5\") pod \"alertmanager-main-0\" (UID: \"719c7426-3fe0-4b20-8926-f68a569e55c5\") " pod="openshift-monitoring/alertmanager-main-0" Nov 21 20:11:28 crc kubenswrapper[4727]: I1121 20:11:28.928422 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/719c7426-3fe0-4b20-8926-f68a569e55c5-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"719c7426-3fe0-4b20-8926-f68a569e55c5\") " pod="openshift-monitoring/alertmanager-main-0" Nov 21 20:11:28 crc kubenswrapper[4727]: I1121 20:11:28.928473 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/719c7426-3fe0-4b20-8926-f68a569e55c5-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"719c7426-3fe0-4b20-8926-f68a569e55c5\") " pod="openshift-monitoring/alertmanager-main-0" Nov 21 20:11:28 crc kubenswrapper[4727]: I1121 20:11:28.928503 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/719c7426-3fe0-4b20-8926-f68a569e55c5-config-out\") pod \"alertmanager-main-0\" (UID: \"719c7426-3fe0-4b20-8926-f68a569e55c5\") " 
pod="openshift-monitoring/alertmanager-main-0" Nov 21 20:11:28 crc kubenswrapper[4727]: I1121 20:11:28.930204 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/719c7426-3fe0-4b20-8926-f68a569e55c5-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"719c7426-3fe0-4b20-8926-f68a569e55c5\") " pod="openshift-monitoring/alertmanager-main-0" Nov 21 20:11:28 crc kubenswrapper[4727]: I1121 20:11:28.930216 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/719c7426-3fe0-4b20-8926-f68a569e55c5-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"719c7426-3fe0-4b20-8926-f68a569e55c5\") " pod="openshift-monitoring/alertmanager-main-0" Nov 21 20:11:28 crc kubenswrapper[4727]: I1121 20:11:28.930565 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/719c7426-3fe0-4b20-8926-f68a569e55c5-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"719c7426-3fe0-4b20-8926-f68a569e55c5\") " pod="openshift-monitoring/alertmanager-main-0" Nov 21 20:11:28 crc kubenswrapper[4727]: I1121 20:11:28.932816 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/719c7426-3fe0-4b20-8926-f68a569e55c5-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"719c7426-3fe0-4b20-8926-f68a569e55c5\") " pod="openshift-monitoring/alertmanager-main-0" Nov 21 20:11:28 crc kubenswrapper[4727]: I1121 20:11:28.932820 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/719c7426-3fe0-4b20-8926-f68a569e55c5-web-config\") pod \"alertmanager-main-0\" (UID: \"719c7426-3fe0-4b20-8926-f68a569e55c5\") " pod="openshift-monitoring/alertmanager-main-0" Nov 21 20:11:28 
crc kubenswrapper[4727]: I1121 20:11:28.932808 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/719c7426-3fe0-4b20-8926-f68a569e55c5-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"719c7426-3fe0-4b20-8926-f68a569e55c5\") " pod="openshift-monitoring/alertmanager-main-0" Nov 21 20:11:28 crc kubenswrapper[4727]: I1121 20:11:28.933562 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/719c7426-3fe0-4b20-8926-f68a569e55c5-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"719c7426-3fe0-4b20-8926-f68a569e55c5\") " pod="openshift-monitoring/alertmanager-main-0" Nov 21 20:11:28 crc kubenswrapper[4727]: I1121 20:11:28.933557 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/719c7426-3fe0-4b20-8926-f68a569e55c5-config-out\") pod \"alertmanager-main-0\" (UID: \"719c7426-3fe0-4b20-8926-f68a569e55c5\") " pod="openshift-monitoring/alertmanager-main-0" Nov 21 20:11:28 crc kubenswrapper[4727]: I1121 20:11:28.934627 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/719c7426-3fe0-4b20-8926-f68a569e55c5-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"719c7426-3fe0-4b20-8926-f68a569e55c5\") " pod="openshift-monitoring/alertmanager-main-0" Nov 21 20:11:28 crc kubenswrapper[4727]: I1121 20:11:28.942128 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/719c7426-3fe0-4b20-8926-f68a569e55c5-config-volume\") pod \"alertmanager-main-0\" (UID: \"719c7426-3fe0-4b20-8926-f68a569e55c5\") " pod="openshift-monitoring/alertmanager-main-0" Nov 21 20:11:28 crc 
kubenswrapper[4727]: I1121 20:11:28.943572 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/719c7426-3fe0-4b20-8926-f68a569e55c5-tls-assets\") pod \"alertmanager-main-0\" (UID: \"719c7426-3fe0-4b20-8926-f68a569e55c5\") " pod="openshift-monitoring/alertmanager-main-0" Nov 21 20:11:28 crc kubenswrapper[4727]: I1121 20:11:28.957942 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pxb5\" (UniqueName: \"kubernetes.io/projected/719c7426-3fe0-4b20-8926-f68a569e55c5-kube-api-access-8pxb5\") pod \"alertmanager-main-0\" (UID: \"719c7426-3fe0-4b20-8926-f68a569e55c5\") " pod="openshift-monitoring/alertmanager-main-0" Nov 21 20:11:29 crc kubenswrapper[4727]: I1121 20:11:29.086414 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Nov 21 20:11:29 crc kubenswrapper[4727]: I1121 20:11:29.486320 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Nov 21 20:11:29 crc kubenswrapper[4727]: I1121 20:11:29.547280 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-d7742" event={"ID":"7f83e601-7cf0-451b-b823-f2a1aaaa76ad","Type":"ContainerStarted","Data":"a41a76aba96207f53f6cf6962fd802a942c8251a021ffcb4616c99762976ea90"} Nov 21 20:11:29 crc kubenswrapper[4727]: I1121 20:11:29.547319 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-d7742" event={"ID":"7f83e601-7cf0-451b-b823-f2a1aaaa76ad","Type":"ContainerStarted","Data":"7d4f3fb300e0c92f32f77d0ef028ad8b5e8f723cdfb9d260ecc1db7b9c91eddf"} Nov 21 20:11:29 crc kubenswrapper[4727]: I1121 20:11:29.547329 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-d7742" 
event={"ID":"7f83e601-7cf0-451b-b823-f2a1aaaa76ad","Type":"ContainerStarted","Data":"77e4e7a1869245ab9247072b2a424f8a13666b98282cc361e68be5a1a4546bcc"} Nov 21 20:11:29 crc kubenswrapper[4727]: I1121 20:11:29.549746 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-m2pcp" event={"ID":"df689615-6b68-4fd1-8496-d65036ba207c","Type":"ContainerStarted","Data":"f41300cce92d31111df452ae6779ee9446aabf21e92814f6f23068e1c0e632de"} Nov 21 20:11:29 crc kubenswrapper[4727]: W1121 20:11:29.659287 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod719c7426_3fe0_4b20_8926_f68a569e55c5.slice/crio-bc817b29c80ec95baf21dbb6c32aa170d5fb79e19c2c3d13a37749d2a706c469 WatchSource:0}: Error finding container bc817b29c80ec95baf21dbb6c32aa170d5fb79e19c2c3d13a37749d2a706c469: Status 404 returned error can't find the container with id bc817b29c80ec95baf21dbb6c32aa170d5fb79e19c2c3d13a37749d2a706c469 Nov 21 20:11:29 crc kubenswrapper[4727]: I1121 20:11:29.686094 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-865dccd599-858p9"] Nov 21 20:11:29 crc kubenswrapper[4727]: I1121 20:11:29.688193 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-865dccd599-858p9" Nov 21 20:11:29 crc kubenswrapper[4727]: I1121 20:11:29.691517 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Nov 21 20:11:29 crc kubenswrapper[4727]: I1121 20:11:29.691773 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Nov 21 20:11:29 crc kubenswrapper[4727]: I1121 20:11:29.691785 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Nov 21 20:11:29 crc kubenswrapper[4727]: I1121 20:11:29.691908 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Nov 21 20:11:29 crc kubenswrapper[4727]: I1121 20:11:29.691974 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-dockercfg-dnskb" Nov 21 20:11:29 crc kubenswrapper[4727]: I1121 20:11:29.692066 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Nov 21 20:11:29 crc kubenswrapper[4727]: I1121 20:11:29.692105 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-116ogp5f1j9n1" Nov 21 20:11:29 crc kubenswrapper[4727]: I1121 20:11:29.702452 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-865dccd599-858p9"] Nov 21 20:11:29 crc kubenswrapper[4727]: I1121 20:11:29.843331 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/821caed6-671a-4de1-8667-edb6f113bd5e-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-865dccd599-858p9\" (UID: \"821caed6-671a-4de1-8667-edb6f113bd5e\") 
" pod="openshift-monitoring/thanos-querier-865dccd599-858p9" Nov 21 20:11:29 crc kubenswrapper[4727]: I1121 20:11:29.843660 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/821caed6-671a-4de1-8667-edb6f113bd5e-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-865dccd599-858p9\" (UID: \"821caed6-671a-4de1-8667-edb6f113bd5e\") " pod="openshift-monitoring/thanos-querier-865dccd599-858p9" Nov 21 20:11:29 crc kubenswrapper[4727]: I1121 20:11:29.843684 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjz6t\" (UniqueName: \"kubernetes.io/projected/821caed6-671a-4de1-8667-edb6f113bd5e-kube-api-access-pjz6t\") pod \"thanos-querier-865dccd599-858p9\" (UID: \"821caed6-671a-4de1-8667-edb6f113bd5e\") " pod="openshift-monitoring/thanos-querier-865dccd599-858p9" Nov 21 20:11:29 crc kubenswrapper[4727]: I1121 20:11:29.843746 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/821caed6-671a-4de1-8667-edb6f113bd5e-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-865dccd599-858p9\" (UID: \"821caed6-671a-4de1-8667-edb6f113bd5e\") " pod="openshift-monitoring/thanos-querier-865dccd599-858p9" Nov 21 20:11:29 crc kubenswrapper[4727]: I1121 20:11:29.843771 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/821caed6-671a-4de1-8667-edb6f113bd5e-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-865dccd599-858p9\" (UID: \"821caed6-671a-4de1-8667-edb6f113bd5e\") " pod="openshift-monitoring/thanos-querier-865dccd599-858p9" Nov 21 20:11:29 crc kubenswrapper[4727]: I1121 20:11:29.843921 4727 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/821caed6-671a-4de1-8667-edb6f113bd5e-metrics-client-ca\") pod \"thanos-querier-865dccd599-858p9\" (UID: \"821caed6-671a-4de1-8667-edb6f113bd5e\") " pod="openshift-monitoring/thanos-querier-865dccd599-858p9" Nov 21 20:11:29 crc kubenswrapper[4727]: I1121 20:11:29.844055 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/821caed6-671a-4de1-8667-edb6f113bd5e-secret-thanos-querier-tls\") pod \"thanos-querier-865dccd599-858p9\" (UID: \"821caed6-671a-4de1-8667-edb6f113bd5e\") " pod="openshift-monitoring/thanos-querier-865dccd599-858p9" Nov 21 20:11:29 crc kubenswrapper[4727]: I1121 20:11:29.844115 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/821caed6-671a-4de1-8667-edb6f113bd5e-secret-grpc-tls\") pod \"thanos-querier-865dccd599-858p9\" (UID: \"821caed6-671a-4de1-8667-edb6f113bd5e\") " pod="openshift-monitoring/thanos-querier-865dccd599-858p9" Nov 21 20:11:29 crc kubenswrapper[4727]: I1121 20:11:29.945307 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/821caed6-671a-4de1-8667-edb6f113bd5e-secret-grpc-tls\") pod \"thanos-querier-865dccd599-858p9\" (UID: \"821caed6-671a-4de1-8667-edb6f113bd5e\") " pod="openshift-monitoring/thanos-querier-865dccd599-858p9" Nov 21 20:11:29 crc kubenswrapper[4727]: I1121 20:11:29.945383 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/821caed6-671a-4de1-8667-edb6f113bd5e-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-865dccd599-858p9\" (UID: 
\"821caed6-671a-4de1-8667-edb6f113bd5e\") " pod="openshift-monitoring/thanos-querier-865dccd599-858p9" Nov 21 20:11:29 crc kubenswrapper[4727]: I1121 20:11:29.945418 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/821caed6-671a-4de1-8667-edb6f113bd5e-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-865dccd599-858p9\" (UID: \"821caed6-671a-4de1-8667-edb6f113bd5e\") " pod="openshift-monitoring/thanos-querier-865dccd599-858p9" Nov 21 20:11:29 crc kubenswrapper[4727]: I1121 20:11:29.945443 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjz6t\" (UniqueName: \"kubernetes.io/projected/821caed6-671a-4de1-8667-edb6f113bd5e-kube-api-access-pjz6t\") pod \"thanos-querier-865dccd599-858p9\" (UID: \"821caed6-671a-4de1-8667-edb6f113bd5e\") " pod="openshift-monitoring/thanos-querier-865dccd599-858p9" Nov 21 20:11:29 crc kubenswrapper[4727]: I1121 20:11:29.945491 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/821caed6-671a-4de1-8667-edb6f113bd5e-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-865dccd599-858p9\" (UID: \"821caed6-671a-4de1-8667-edb6f113bd5e\") " pod="openshift-monitoring/thanos-querier-865dccd599-858p9" Nov 21 20:11:29 crc kubenswrapper[4727]: I1121 20:11:29.945518 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/821caed6-671a-4de1-8667-edb6f113bd5e-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-865dccd599-858p9\" (UID: \"821caed6-671a-4de1-8667-edb6f113bd5e\") " pod="openshift-monitoring/thanos-querier-865dccd599-858p9" Nov 21 20:11:29 crc kubenswrapper[4727]: I1121 20:11:29.945542 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/821caed6-671a-4de1-8667-edb6f113bd5e-metrics-client-ca\") pod \"thanos-querier-865dccd599-858p9\" (UID: \"821caed6-671a-4de1-8667-edb6f113bd5e\") " pod="openshift-monitoring/thanos-querier-865dccd599-858p9" Nov 21 20:11:29 crc kubenswrapper[4727]: I1121 20:11:29.945575 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/821caed6-671a-4de1-8667-edb6f113bd5e-secret-thanos-querier-tls\") pod \"thanos-querier-865dccd599-858p9\" (UID: \"821caed6-671a-4de1-8667-edb6f113bd5e\") " pod="openshift-monitoring/thanos-querier-865dccd599-858p9" Nov 21 20:11:29 crc kubenswrapper[4727]: I1121 20:11:29.947326 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/821caed6-671a-4de1-8667-edb6f113bd5e-metrics-client-ca\") pod \"thanos-querier-865dccd599-858p9\" (UID: \"821caed6-671a-4de1-8667-edb6f113bd5e\") " pod="openshift-monitoring/thanos-querier-865dccd599-858p9" Nov 21 20:11:29 crc kubenswrapper[4727]: I1121 20:11:29.951888 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/821caed6-671a-4de1-8667-edb6f113bd5e-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-865dccd599-858p9\" (UID: \"821caed6-671a-4de1-8667-edb6f113bd5e\") " pod="openshift-monitoring/thanos-querier-865dccd599-858p9" Nov 21 20:11:29 crc kubenswrapper[4727]: I1121 20:11:29.952242 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/821caed6-671a-4de1-8667-edb6f113bd5e-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-865dccd599-858p9\" (UID: \"821caed6-671a-4de1-8667-edb6f113bd5e\") " 
pod="openshift-monitoring/thanos-querier-865dccd599-858p9" Nov 21 20:11:29 crc kubenswrapper[4727]: I1121 20:11:29.952372 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/821caed6-671a-4de1-8667-edb6f113bd5e-secret-grpc-tls\") pod \"thanos-querier-865dccd599-858p9\" (UID: \"821caed6-671a-4de1-8667-edb6f113bd5e\") " pod="openshift-monitoring/thanos-querier-865dccd599-858p9" Nov 21 20:11:29 crc kubenswrapper[4727]: I1121 20:11:29.956523 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/821caed6-671a-4de1-8667-edb6f113bd5e-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-865dccd599-858p9\" (UID: \"821caed6-671a-4de1-8667-edb6f113bd5e\") " pod="openshift-monitoring/thanos-querier-865dccd599-858p9" Nov 21 20:11:29 crc kubenswrapper[4727]: I1121 20:11:29.961760 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/821caed6-671a-4de1-8667-edb6f113bd5e-secret-thanos-querier-tls\") pod \"thanos-querier-865dccd599-858p9\" (UID: \"821caed6-671a-4de1-8667-edb6f113bd5e\") " pod="openshift-monitoring/thanos-querier-865dccd599-858p9" Nov 21 20:11:29 crc kubenswrapper[4727]: I1121 20:11:29.963658 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjz6t\" (UniqueName: \"kubernetes.io/projected/821caed6-671a-4de1-8667-edb6f113bd5e-kube-api-access-pjz6t\") pod \"thanos-querier-865dccd599-858p9\" (UID: \"821caed6-671a-4de1-8667-edb6f113bd5e\") " pod="openshift-monitoring/thanos-querier-865dccd599-858p9" Nov 21 20:11:29 crc kubenswrapper[4727]: I1121 20:11:29.968171 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: 
\"kubernetes.io/secret/821caed6-671a-4de1-8667-edb6f113bd5e-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-865dccd599-858p9\" (UID: \"821caed6-671a-4de1-8667-edb6f113bd5e\") " pod="openshift-monitoring/thanos-querier-865dccd599-858p9" Nov 21 20:11:30 crc kubenswrapper[4727]: I1121 20:11:30.009481 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-865dccd599-858p9" Nov 21 20:11:30 crc kubenswrapper[4727]: I1121 20:11:30.256804 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-865dccd599-858p9"] Nov 21 20:11:30 crc kubenswrapper[4727]: W1121 20:11:30.444624 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod821caed6_671a_4de1_8667_edb6f113bd5e.slice/crio-13a31f0c9f8f9ed69b42e36aeb3788965ea18c0488e204700f9b1dc15e3e2e6a WatchSource:0}: Error finding container 13a31f0c9f8f9ed69b42e36aeb3788965ea18c0488e204700f9b1dc15e3e2e6a: Status 404 returned error can't find the container with id 13a31f0c9f8f9ed69b42e36aeb3788965ea18c0488e204700f9b1dc15e3e2e6a Nov 21 20:11:30 crc kubenswrapper[4727]: I1121 20:11:30.560815 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-865dccd599-858p9" event={"ID":"821caed6-671a-4de1-8667-edb6f113bd5e","Type":"ContainerStarted","Data":"13a31f0c9f8f9ed69b42e36aeb3788965ea18c0488e204700f9b1dc15e3e2e6a"} Nov 21 20:11:30 crc kubenswrapper[4727]: I1121 20:11:30.566355 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-vd7bb" event={"ID":"dc6e2683-b34b-47a7-9f9f-35f7e948cbb2","Type":"ContainerStarted","Data":"3514fdbb41e1e8370491cd9778f062bcfe74acc640056093d39823a63a5cfe61"} Nov 21 20:11:30 crc kubenswrapper[4727]: I1121 20:11:30.566435 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-vd7bb" event={"ID":"dc6e2683-b34b-47a7-9f9f-35f7e948cbb2","Type":"ContainerStarted","Data":"363eacf6b2cee1bef87d87237371058d3ca44fdb05a32f4b0d9041617576326e"} Nov 21 20:11:30 crc kubenswrapper[4727]: I1121 20:11:30.566450 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-vd7bb" event={"ID":"dc6e2683-b34b-47a7-9f9f-35f7e948cbb2","Type":"ContainerStarted","Data":"395fb29095d2dd425e0ee082b7829d9edf6dc52dbf90061136416669c57a8d02"} Nov 21 20:11:30 crc kubenswrapper[4727]: I1121 20:11:30.573153 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"719c7426-3fe0-4b20-8926-f68a569e55c5","Type":"ContainerStarted","Data":"bc817b29c80ec95baf21dbb6c32aa170d5fb79e19c2c3d13a37749d2a706c469"} Nov 21 20:11:30 crc kubenswrapper[4727]: I1121 20:11:30.594796 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-vd7bb" podStartSLOduration=2.050169698 podStartE2EDuration="3.59477083s" podCreationTimestamp="2025-11-21 20:11:27 +0000 UTC" firstStartedPulling="2025-11-21 20:11:28.159500689 +0000 UTC m=+293.345685733" lastFinishedPulling="2025-11-21 20:11:29.704101821 +0000 UTC m=+294.890286865" observedRunningTime="2025-11-21 20:11:30.588977023 +0000 UTC m=+295.775162067" watchObservedRunningTime="2025-11-21 20:11:30.59477083 +0000 UTC m=+295.780955894" Nov 21 20:11:31 crc kubenswrapper[4727]: I1121 20:11:31.582763 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-m2pcp" event={"ID":"df689615-6b68-4fd1-8496-d65036ba207c","Type":"ContainerStarted","Data":"6cd02c8e97392b1fb2b945ca2ae16bd586119d6b28e60a955182a5ca2e51ea1e"} Nov 21 20:11:31 crc kubenswrapper[4727]: I1121 20:11:31.585720 4727 generic.go:334] "Generic (PLEG): container finished" podID="719c7426-3fe0-4b20-8926-f68a569e55c5" 
containerID="2ffcc28ff18bff34bbb88f4eedc5a2e9248c818eaff2656f61fc43f1e58318fa" exitCode=0 Nov 21 20:11:31 crc kubenswrapper[4727]: I1121 20:11:31.585790 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"719c7426-3fe0-4b20-8926-f68a569e55c5","Type":"ContainerDied","Data":"2ffcc28ff18bff34bbb88f4eedc5a2e9248c818eaff2656f61fc43f1e58318fa"} Nov 21 20:11:31 crc kubenswrapper[4727]: I1121 20:11:31.590285 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-d7742" event={"ID":"7f83e601-7cf0-451b-b823-f2a1aaaa76ad","Type":"ContainerStarted","Data":"42f28d13e8434b99a0499e1630627f90138ce3f76bedf37239b26bc7f637ac97"} Nov 21 20:11:32 crc kubenswrapper[4727]: I1121 20:11:32.404855 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-566fddb674-d7742" podStartSLOduration=3.321605364 podStartE2EDuration="5.404836717s" podCreationTimestamp="2025-11-21 20:11:27 +0000 UTC" firstStartedPulling="2025-11-21 20:11:29.159712433 +0000 UTC m=+294.345897467" lastFinishedPulling="2025-11-21 20:11:31.242943776 +0000 UTC m=+296.429128820" observedRunningTime="2025-11-21 20:11:31.64720819 +0000 UTC m=+296.833393254" watchObservedRunningTime="2025-11-21 20:11:32.404836717 +0000 UTC m=+297.591021761" Nov 21 20:11:32 crc kubenswrapper[4727]: I1121 20:11:32.406564 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-78d55d4968-j8p59"] Nov 21 20:11:32 crc kubenswrapper[4727]: I1121 20:11:32.407422 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-78d55d4968-j8p59" Nov 21 20:11:32 crc kubenswrapper[4727]: I1121 20:11:32.417216 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-78d55d4968-j8p59"] Nov 21 20:11:32 crc kubenswrapper[4727]: I1121 20:11:32.489161 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c-service-ca\") pod \"console-78d55d4968-j8p59\" (UID: \"bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c\") " pod="openshift-console/console-78d55d4968-j8p59" Nov 21 20:11:32 crc kubenswrapper[4727]: I1121 20:11:32.489240 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c-console-oauth-config\") pod \"console-78d55d4968-j8p59\" (UID: \"bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c\") " pod="openshift-console/console-78d55d4968-j8p59" Nov 21 20:11:32 crc kubenswrapper[4727]: I1121 20:11:32.489281 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c-console-config\") pod \"console-78d55d4968-j8p59\" (UID: \"bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c\") " pod="openshift-console/console-78d55d4968-j8p59" Nov 21 20:11:32 crc kubenswrapper[4727]: I1121 20:11:32.489419 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c-oauth-serving-cert\") pod \"console-78d55d4968-j8p59\" (UID: \"bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c\") " pod="openshift-console/console-78d55d4968-j8p59" Nov 21 20:11:32 crc kubenswrapper[4727]: I1121 20:11:32.489510 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c-console-serving-cert\") pod \"console-78d55d4968-j8p59\" (UID: \"bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c\") " pod="openshift-console/console-78d55d4968-j8p59" Nov 21 20:11:32 crc kubenswrapper[4727]: I1121 20:11:32.489553 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrhw4\" (UniqueName: \"kubernetes.io/projected/bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c-kube-api-access-qrhw4\") pod \"console-78d55d4968-j8p59\" (UID: \"bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c\") " pod="openshift-console/console-78d55d4968-j8p59" Nov 21 20:11:32 crc kubenswrapper[4727]: I1121 20:11:32.489578 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c-trusted-ca-bundle\") pod \"console-78d55d4968-j8p59\" (UID: \"bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c\") " pod="openshift-console/console-78d55d4968-j8p59" Nov 21 20:11:32 crc kubenswrapper[4727]: I1121 20:11:32.591226 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c-trusted-ca-bundle\") pod \"console-78d55d4968-j8p59\" (UID: \"bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c\") " pod="openshift-console/console-78d55d4968-j8p59" Nov 21 20:11:32 crc kubenswrapper[4727]: I1121 20:11:32.591278 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c-service-ca\") pod \"console-78d55d4968-j8p59\" (UID: \"bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c\") " pod="openshift-console/console-78d55d4968-j8p59" Nov 21 20:11:32 crc kubenswrapper[4727]: I1121 20:11:32.591331 
4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c-console-oauth-config\") pod \"console-78d55d4968-j8p59\" (UID: \"bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c\") " pod="openshift-console/console-78d55d4968-j8p59" Nov 21 20:11:32 crc kubenswrapper[4727]: I1121 20:11:32.591349 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c-console-config\") pod \"console-78d55d4968-j8p59\" (UID: \"bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c\") " pod="openshift-console/console-78d55d4968-j8p59" Nov 21 20:11:32 crc kubenswrapper[4727]: I1121 20:11:32.591415 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c-oauth-serving-cert\") pod \"console-78d55d4968-j8p59\" (UID: \"bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c\") " pod="openshift-console/console-78d55d4968-j8p59" Nov 21 20:11:32 crc kubenswrapper[4727]: I1121 20:11:32.591435 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c-console-serving-cert\") pod \"console-78d55d4968-j8p59\" (UID: \"bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c\") " pod="openshift-console/console-78d55d4968-j8p59" Nov 21 20:11:32 crc kubenswrapper[4727]: I1121 20:11:32.591456 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrhw4\" (UniqueName: \"kubernetes.io/projected/bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c-kube-api-access-qrhw4\") pod \"console-78d55d4968-j8p59\" (UID: \"bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c\") " pod="openshift-console/console-78d55d4968-j8p59" Nov 21 20:11:32 crc kubenswrapper[4727]: I1121 20:11:32.592598 4727 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c-trusted-ca-bundle\") pod \"console-78d55d4968-j8p59\" (UID: \"bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c\") " pod="openshift-console/console-78d55d4968-j8p59" Nov 21 20:11:32 crc kubenswrapper[4727]: I1121 20:11:32.592832 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c-service-ca\") pod \"console-78d55d4968-j8p59\" (UID: \"bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c\") " pod="openshift-console/console-78d55d4968-j8p59" Nov 21 20:11:32 crc kubenswrapper[4727]: I1121 20:11:32.593391 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c-oauth-serving-cert\") pod \"console-78d55d4968-j8p59\" (UID: \"bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c\") " pod="openshift-console/console-78d55d4968-j8p59" Nov 21 20:11:32 crc kubenswrapper[4727]: I1121 20:11:32.593409 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c-console-config\") pod \"console-78d55d4968-j8p59\" (UID: \"bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c\") " pod="openshift-console/console-78d55d4968-j8p59" Nov 21 20:11:32 crc kubenswrapper[4727]: I1121 20:11:32.597232 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c-console-serving-cert\") pod \"console-78d55d4968-j8p59\" (UID: \"bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c\") " pod="openshift-console/console-78d55d4968-j8p59" Nov 21 20:11:32 crc kubenswrapper[4727]: I1121 20:11:32.597249 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c-console-oauth-config\") pod \"console-78d55d4968-j8p59\" (UID: \"bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c\") " pod="openshift-console/console-78d55d4968-j8p59" Nov 21 20:11:32 crc kubenswrapper[4727]: I1121 20:11:32.598548 4727 generic.go:334] "Generic (PLEG): container finished" podID="df689615-6b68-4fd1-8496-d65036ba207c" containerID="6cd02c8e97392b1fb2b945ca2ae16bd586119d6b28e60a955182a5ca2e51ea1e" exitCode=0 Nov 21 20:11:32 crc kubenswrapper[4727]: I1121 20:11:32.600131 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-m2pcp" event={"ID":"df689615-6b68-4fd1-8496-d65036ba207c","Type":"ContainerDied","Data":"6cd02c8e97392b1fb2b945ca2ae16bd586119d6b28e60a955182a5ca2e51ea1e"} Nov 21 20:11:32 crc kubenswrapper[4727]: I1121 20:11:32.610136 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrhw4\" (UniqueName: \"kubernetes.io/projected/bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c-kube-api-access-qrhw4\") pod \"console-78d55d4968-j8p59\" (UID: \"bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c\") " pod="openshift-console/console-78d55d4968-j8p59" Nov 21 20:11:32 crc kubenswrapper[4727]: I1121 20:11:32.724426 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-78d55d4968-j8p59" Nov 21 20:11:32 crc kubenswrapper[4727]: I1121 20:11:32.915107 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-78d55d4968-j8p59"] Nov 21 20:11:32 crc kubenswrapper[4727]: W1121 20:11:32.926837 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbcbd2628_65d1_48a7_a6d8_60b4b89c8e8c.slice/crio-c069c327dee45128dfae42b653c8150260b06636ad5678312977e602fe03bef5 WatchSource:0}: Error finding container c069c327dee45128dfae42b653c8150260b06636ad5678312977e602fe03bef5: Status 404 returned error can't find the container with id c069c327dee45128dfae42b653c8150260b06636ad5678312977e602fe03bef5 Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.009975 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-84c4f65785-dz4nz"] Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.010718 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-84c4f65785-dz4nz" Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.016231 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.016390 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.016492 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.016602 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.016734 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-q9cfj" Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.019940 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-pb77spcf0r3k" Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.021031 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-84c4f65785-dz4nz"] Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.101927 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/0764019c-5bd1-4d65-b810-37c03cc5de12-audit-log\") pod \"metrics-server-84c4f65785-dz4nz\" (UID: \"0764019c-5bd1-4d65-b810-37c03cc5de12\") " pod="openshift-monitoring/metrics-server-84c4f65785-dz4nz" Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.101990 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0764019c-5bd1-4d65-b810-37c03cc5de12-client-ca-bundle\") pod \"metrics-server-84c4f65785-dz4nz\" (UID: \"0764019c-5bd1-4d65-b810-37c03cc5de12\") " pod="openshift-monitoring/metrics-server-84c4f65785-dz4nz" Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.102031 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/0764019c-5bd1-4d65-b810-37c03cc5de12-secret-metrics-server-tls\") pod \"metrics-server-84c4f65785-dz4nz\" (UID: \"0764019c-5bd1-4d65-b810-37c03cc5de12\") " pod="openshift-monitoring/metrics-server-84c4f65785-dz4nz" Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.102059 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0764019c-5bd1-4d65-b810-37c03cc5de12-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-84c4f65785-dz4nz\" (UID: \"0764019c-5bd1-4d65-b810-37c03cc5de12\") " pod="openshift-monitoring/metrics-server-84c4f65785-dz4nz" Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.102077 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/0764019c-5bd1-4d65-b810-37c03cc5de12-secret-metrics-client-certs\") pod \"metrics-server-84c4f65785-dz4nz\" (UID: \"0764019c-5bd1-4d65-b810-37c03cc5de12\") " pod="openshift-monitoring/metrics-server-84c4f65785-dz4nz" Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.102125 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56cmt\" (UniqueName: \"kubernetes.io/projected/0764019c-5bd1-4d65-b810-37c03cc5de12-kube-api-access-56cmt\") pod \"metrics-server-84c4f65785-dz4nz\" (UID: \"0764019c-5bd1-4d65-b810-37c03cc5de12\") " 
pod="openshift-monitoring/metrics-server-84c4f65785-dz4nz" Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.102149 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/0764019c-5bd1-4d65-b810-37c03cc5de12-metrics-server-audit-profiles\") pod \"metrics-server-84c4f65785-dz4nz\" (UID: \"0764019c-5bd1-4d65-b810-37c03cc5de12\") " pod="openshift-monitoring/metrics-server-84c4f65785-dz4nz" Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.203749 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0764019c-5bd1-4d65-b810-37c03cc5de12-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-84c4f65785-dz4nz\" (UID: \"0764019c-5bd1-4d65-b810-37c03cc5de12\") " pod="openshift-monitoring/metrics-server-84c4f65785-dz4nz" Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.203834 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/0764019c-5bd1-4d65-b810-37c03cc5de12-secret-metrics-client-certs\") pod \"metrics-server-84c4f65785-dz4nz\" (UID: \"0764019c-5bd1-4d65-b810-37c03cc5de12\") " pod="openshift-monitoring/metrics-server-84c4f65785-dz4nz" Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.203897 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56cmt\" (UniqueName: \"kubernetes.io/projected/0764019c-5bd1-4d65-b810-37c03cc5de12-kube-api-access-56cmt\") pod \"metrics-server-84c4f65785-dz4nz\" (UID: \"0764019c-5bd1-4d65-b810-37c03cc5de12\") " pod="openshift-monitoring/metrics-server-84c4f65785-dz4nz" Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.203926 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" 
(UniqueName: \"kubernetes.io/configmap/0764019c-5bd1-4d65-b810-37c03cc5de12-metrics-server-audit-profiles\") pod \"metrics-server-84c4f65785-dz4nz\" (UID: \"0764019c-5bd1-4d65-b810-37c03cc5de12\") " pod="openshift-monitoring/metrics-server-84c4f65785-dz4nz" Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.204610 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/0764019c-5bd1-4d65-b810-37c03cc5de12-audit-log\") pod \"metrics-server-84c4f65785-dz4nz\" (UID: \"0764019c-5bd1-4d65-b810-37c03cc5de12\") " pod="openshift-monitoring/metrics-server-84c4f65785-dz4nz" Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.204820 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0764019c-5bd1-4d65-b810-37c03cc5de12-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-84c4f65785-dz4nz\" (UID: \"0764019c-5bd1-4d65-b810-37c03cc5de12\") " pod="openshift-monitoring/metrics-server-84c4f65785-dz4nz" Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.205171 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/0764019c-5bd1-4d65-b810-37c03cc5de12-audit-log\") pod \"metrics-server-84c4f65785-dz4nz\" (UID: \"0764019c-5bd1-4d65-b810-37c03cc5de12\") " pod="openshift-monitoring/metrics-server-84c4f65785-dz4nz" Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.205295 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/0764019c-5bd1-4d65-b810-37c03cc5de12-metrics-server-audit-profiles\") pod \"metrics-server-84c4f65785-dz4nz\" (UID: \"0764019c-5bd1-4d65-b810-37c03cc5de12\") " pod="openshift-monitoring/metrics-server-84c4f65785-dz4nz" Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.205297 4727 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0764019c-5bd1-4d65-b810-37c03cc5de12-client-ca-bundle\") pod \"metrics-server-84c4f65785-dz4nz\" (UID: \"0764019c-5bd1-4d65-b810-37c03cc5de12\") " pod="openshift-monitoring/metrics-server-84c4f65785-dz4nz" Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.205376 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/0764019c-5bd1-4d65-b810-37c03cc5de12-secret-metrics-server-tls\") pod \"metrics-server-84c4f65785-dz4nz\" (UID: \"0764019c-5bd1-4d65-b810-37c03cc5de12\") " pod="openshift-monitoring/metrics-server-84c4f65785-dz4nz" Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.211183 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/0764019c-5bd1-4d65-b810-37c03cc5de12-secret-metrics-server-tls\") pod \"metrics-server-84c4f65785-dz4nz\" (UID: \"0764019c-5bd1-4d65-b810-37c03cc5de12\") " pod="openshift-monitoring/metrics-server-84c4f65785-dz4nz" Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.212192 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0764019c-5bd1-4d65-b810-37c03cc5de12-client-ca-bundle\") pod \"metrics-server-84c4f65785-dz4nz\" (UID: \"0764019c-5bd1-4d65-b810-37c03cc5de12\") " pod="openshift-monitoring/metrics-server-84c4f65785-dz4nz" Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.214898 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/0764019c-5bd1-4d65-b810-37c03cc5de12-secret-metrics-client-certs\") pod \"metrics-server-84c4f65785-dz4nz\" (UID: \"0764019c-5bd1-4d65-b810-37c03cc5de12\") " 
pod="openshift-monitoring/metrics-server-84c4f65785-dz4nz" Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.222166 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56cmt\" (UniqueName: \"kubernetes.io/projected/0764019c-5bd1-4d65-b810-37c03cc5de12-kube-api-access-56cmt\") pod \"metrics-server-84c4f65785-dz4nz\" (UID: \"0764019c-5bd1-4d65-b810-37c03cc5de12\") " pod="openshift-monitoring/metrics-server-84c4f65785-dz4nz" Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.350439 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-84c4f65785-dz4nz" Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.376698 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-7b54c9d587-8t26q"] Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.381284 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-7b54c9d587-8t26q" Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.384905 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.385250 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-6tstp" Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.397551 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-7b54c9d587-8t26q"] Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.509347 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/4da34acf-87d4-4a02-b599-e6c12d5e91ca-monitoring-plugin-cert\") pod \"monitoring-plugin-7b54c9d587-8t26q\" (UID: \"4da34acf-87d4-4a02-b599-e6c12d5e91ca\") " 
pod="openshift-monitoring/monitoring-plugin-7b54c9d587-8t26q" Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.606217 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-m2pcp" event={"ID":"df689615-6b68-4fd1-8496-d65036ba207c","Type":"ContainerStarted","Data":"fa783f7e2260daa53ca69aab0cd1554da6bd3fca2ad527de26f8a5e17c59a22f"} Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.606270 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-m2pcp" event={"ID":"df689615-6b68-4fd1-8496-d65036ba207c","Type":"ContainerStarted","Data":"5a24d7d2268c314d2a3a8aaea90c66189c6ae0f885e5d1554a372e6c34ce473c"} Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.608700 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-78d55d4968-j8p59" event={"ID":"bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c","Type":"ContainerStarted","Data":"1fa60f9f927940d5dbb6d9d50e04ee19779bd15e479eb9c0ad858c59183afef8"} Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.608738 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-78d55d4968-j8p59" event={"ID":"bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c","Type":"ContainerStarted","Data":"c069c327dee45128dfae42b653c8150260b06636ad5678312977e602fe03bef5"} Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.610002 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/4da34acf-87d4-4a02-b599-e6c12d5e91ca-monitoring-plugin-cert\") pod \"monitoring-plugin-7b54c9d587-8t26q\" (UID: \"4da34acf-87d4-4a02-b599-e6c12d5e91ca\") " pod="openshift-monitoring/monitoring-plugin-7b54c9d587-8t26q" Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.613700 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-865dccd599-858p9" 
event={"ID":"821caed6-671a-4de1-8667-edb6f113bd5e","Type":"ContainerStarted","Data":"8839285f935d3d8e74f970c108c1bca539d6d5f09beffa7cc32a870692be46bd"} Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.613992 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-865dccd599-858p9" event={"ID":"821caed6-671a-4de1-8667-edb6f113bd5e","Type":"ContainerStarted","Data":"570ddb90b1e9aff185678ed0938185812f97324f2dea73606fbde22c83d92397"} Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.614005 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-865dccd599-858p9" event={"ID":"821caed6-671a-4de1-8667-edb6f113bd5e","Type":"ContainerStarted","Data":"2eba236e9ecb0c333e3f331327f28b968e78e2e8b13721389876cb01b3935802"} Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.614587 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/4da34acf-87d4-4a02-b599-e6c12d5e91ca-monitoring-plugin-cert\") pod \"monitoring-plugin-7b54c9d587-8t26q\" (UID: \"4da34acf-87d4-4a02-b599-e6c12d5e91ca\") " pod="openshift-monitoring/monitoring-plugin-7b54c9d587-8t26q" Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.629626 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-m2pcp" podStartSLOduration=3.999548508 podStartE2EDuration="6.629612036s" podCreationTimestamp="2025-11-21 20:11:27 +0000 UTC" firstStartedPulling="2025-11-21 20:11:28.612859917 +0000 UTC m=+293.799044961" lastFinishedPulling="2025-11-21 20:11:31.242923445 +0000 UTC m=+296.429108489" observedRunningTime="2025-11-21 20:11:33.622989996 +0000 UTC m=+298.809175050" watchObservedRunningTime="2025-11-21 20:11:33.629612036 +0000 UTC m=+298.815797080" Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.644543 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-console/console-78d55d4968-j8p59" podStartSLOduration=1.6445264210000001 podStartE2EDuration="1.644526421s" podCreationTimestamp="2025-11-21 20:11:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:11:33.642691942 +0000 UTC m=+298.828876996" watchObservedRunningTime="2025-11-21 20:11:33.644526421 +0000 UTC m=+298.830711455"
Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.699516 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-7b54c9d587-8t26q"
Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.846092 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.848339 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.861254 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web"
Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.861390 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s"
Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.861631 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config"
Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.861732 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy"
Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.861884 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle"
Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.862608 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls"
Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.864393 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-e2jlbjqh441pk"
Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.866165 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file"
Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.866484 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0"
Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.866608 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls"
Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.868026 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-jfsk4"
Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.868103 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0"
Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.868911 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle"
Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.871242 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.921164 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3e50e912-934a-4c9a-8456-4885eb88d025-web-config\") pod \"prometheus-k8s-0\" (UID: \"3e50e912-934a-4c9a-8456-4885eb88d025\") " pod="openshift-monitoring/prometheus-k8s-0"
Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.921226 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/3e50e912-934a-4c9a-8456-4885eb88d025-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"3e50e912-934a-4c9a-8456-4885eb88d025\") " pod="openshift-monitoring/prometheus-k8s-0"
Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.921354 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/3e50e912-934a-4c9a-8456-4885eb88d025-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"3e50e912-934a-4c9a-8456-4885eb88d025\") " pod="openshift-monitoring/prometheus-k8s-0"
Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.921409 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3e50e912-934a-4c9a-8456-4885eb88d025-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"3e50e912-934a-4c9a-8456-4885eb88d025\") " pod="openshift-monitoring/prometheus-k8s-0"
Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.921482 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/3e50e912-934a-4c9a-8456-4885eb88d025-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"3e50e912-934a-4c9a-8456-4885eb88d025\") " pod="openshift-monitoring/prometheus-k8s-0"
Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.921527 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/3e50e912-934a-4c9a-8456-4885eb88d025-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"3e50e912-934a-4c9a-8456-4885eb88d025\") " pod="openshift-monitoring/prometheus-k8s-0"
Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.921563 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2xhz\" (UniqueName: \"kubernetes.io/projected/3e50e912-934a-4c9a-8456-4885eb88d025-kube-api-access-m2xhz\") pod \"prometheus-k8s-0\" (UID: \"3e50e912-934a-4c9a-8456-4885eb88d025\") " pod="openshift-monitoring/prometheus-k8s-0"
Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.921579 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3e50e912-934a-4c9a-8456-4885eb88d025-config\") pod \"prometheus-k8s-0\" (UID: \"3e50e912-934a-4c9a-8456-4885eb88d025\") " pod="openshift-monitoring/prometheus-k8s-0"
Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.921648 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3e50e912-934a-4c9a-8456-4885eb88d025-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"3e50e912-934a-4c9a-8456-4885eb88d025\") " pod="openshift-monitoring/prometheus-k8s-0"
Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.921664 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3e50e912-934a-4c9a-8456-4885eb88d025-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"3e50e912-934a-4c9a-8456-4885eb88d025\") " pod="openshift-monitoring/prometheus-k8s-0"
Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.921692 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/3e50e912-934a-4c9a-8456-4885eb88d025-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"3e50e912-934a-4c9a-8456-4885eb88d025\") " pod="openshift-monitoring/prometheus-k8s-0"
Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.921725 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/3e50e912-934a-4c9a-8456-4885eb88d025-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"3e50e912-934a-4c9a-8456-4885eb88d025\") " pod="openshift-monitoring/prometheus-k8s-0"
Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.921756 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3e50e912-934a-4c9a-8456-4885eb88d025-config-out\") pod \"prometheus-k8s-0\" (UID: \"3e50e912-934a-4c9a-8456-4885eb88d025\") " pod="openshift-monitoring/prometheus-k8s-0"
Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.921799 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e50e912-934a-4c9a-8456-4885eb88d025-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"3e50e912-934a-4c9a-8456-4885eb88d025\") " pod="openshift-monitoring/prometheus-k8s-0"
Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.921918 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/3e50e912-934a-4c9a-8456-4885eb88d025-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"3e50e912-934a-4c9a-8456-4885eb88d025\") " pod="openshift-monitoring/prometheus-k8s-0"
Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.922013 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/3e50e912-934a-4c9a-8456-4885eb88d025-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"3e50e912-934a-4c9a-8456-4885eb88d025\") " pod="openshift-monitoring/prometheus-k8s-0"
Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.922062 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e50e912-934a-4c9a-8456-4885eb88d025-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"3e50e912-934a-4c9a-8456-4885eb88d025\") " pod="openshift-monitoring/prometheus-k8s-0"
Nov 21 20:11:33 crc kubenswrapper[4727]: I1121 20:11:33.922088 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e50e912-934a-4c9a-8456-4885eb88d025-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"3e50e912-934a-4c9a-8456-4885eb88d025\") " pod="openshift-monitoring/prometheus-k8s-0"
Nov 21 20:11:34 crc kubenswrapper[4727]: I1121 20:11:34.023339 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/3e50e912-934a-4c9a-8456-4885eb88d025-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"3e50e912-934a-4c9a-8456-4885eb88d025\") " pod="openshift-monitoring/prometheus-k8s-0"
Nov 21 20:11:34 crc kubenswrapper[4727]: I1121 20:11:34.024473 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/3e50e912-934a-4c9a-8456-4885eb88d025-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"3e50e912-934a-4c9a-8456-4885eb88d025\") " pod="openshift-monitoring/prometheus-k8s-0"
Nov 21 20:11:34 crc kubenswrapper[4727]: I1121 20:11:34.024466 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3e50e912-934a-4c9a-8456-4885eb88d025-config-out\") pod \"prometheus-k8s-0\" (UID: \"3e50e912-934a-4c9a-8456-4885eb88d025\") " pod="openshift-monitoring/prometheus-k8s-0"
Nov 21 20:11:34 crc kubenswrapper[4727]: I1121 20:11:34.024575 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e50e912-934a-4c9a-8456-4885eb88d025-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"3e50e912-934a-4c9a-8456-4885eb88d025\") " pod="openshift-monitoring/prometheus-k8s-0"
Nov 21 20:11:34 crc kubenswrapper[4727]: I1121 20:11:34.024615 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/3e50e912-934a-4c9a-8456-4885eb88d025-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"3e50e912-934a-4c9a-8456-4885eb88d025\") " pod="openshift-monitoring/prometheus-k8s-0"
Nov 21 20:11:34 crc kubenswrapper[4727]: I1121 20:11:34.024646 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/3e50e912-934a-4c9a-8456-4885eb88d025-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"3e50e912-934a-4c9a-8456-4885eb88d025\") " pod="openshift-monitoring/prometheus-k8s-0"
Nov 21 20:11:34 crc kubenswrapper[4727]: I1121 20:11:34.024745 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e50e912-934a-4c9a-8456-4885eb88d025-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"3e50e912-934a-4c9a-8456-4885eb88d025\") " pod="openshift-monitoring/prometheus-k8s-0"
Nov 21 20:11:34 crc kubenswrapper[4727]: I1121 20:11:34.024787 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e50e912-934a-4c9a-8456-4885eb88d025-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"3e50e912-934a-4c9a-8456-4885eb88d025\") " pod="openshift-monitoring/prometheus-k8s-0"
Nov 21 20:11:34 crc kubenswrapper[4727]: I1121 20:11:34.024809 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3e50e912-934a-4c9a-8456-4885eb88d025-web-config\") pod \"prometheus-k8s-0\" (UID: \"3e50e912-934a-4c9a-8456-4885eb88d025\") " pod="openshift-monitoring/prometheus-k8s-0"
Nov 21 20:11:34 crc kubenswrapper[4727]: I1121 20:11:34.024823 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/3e50e912-934a-4c9a-8456-4885eb88d025-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"3e50e912-934a-4c9a-8456-4885eb88d025\") " pod="openshift-monitoring/prometheus-k8s-0"
Nov 21 20:11:34 crc kubenswrapper[4727]: I1121 20:11:34.024860 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/3e50e912-934a-4c9a-8456-4885eb88d025-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"3e50e912-934a-4c9a-8456-4885eb88d025\") " pod="openshift-monitoring/prometheus-k8s-0"
Nov 21 20:11:34 crc kubenswrapper[4727]: I1121 20:11:34.024887 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3e50e912-934a-4c9a-8456-4885eb88d025-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"3e50e912-934a-4c9a-8456-4885eb88d025\") " pod="openshift-monitoring/prometheus-k8s-0"
Nov 21 20:11:34 crc kubenswrapper[4727]: I1121 20:11:34.024932 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/3e50e912-934a-4c9a-8456-4885eb88d025-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"3e50e912-934a-4c9a-8456-4885eb88d025\") " pod="openshift-monitoring/prometheus-k8s-0"
Nov 21 20:11:34 crc kubenswrapper[4727]: I1121 20:11:34.024995 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/3e50e912-934a-4c9a-8456-4885eb88d025-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"3e50e912-934a-4c9a-8456-4885eb88d025\") " pod="openshift-monitoring/prometheus-k8s-0"
Nov 21 20:11:34 crc kubenswrapper[4727]: I1121 20:11:34.025029 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2xhz\" (UniqueName: \"kubernetes.io/projected/3e50e912-934a-4c9a-8456-4885eb88d025-kube-api-access-m2xhz\") pod \"prometheus-k8s-0\" (UID: \"3e50e912-934a-4c9a-8456-4885eb88d025\") " pod="openshift-monitoring/prometheus-k8s-0"
Nov 21 20:11:34 crc kubenswrapper[4727]: I1121 20:11:34.025045 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3e50e912-934a-4c9a-8456-4885eb88d025-config\") pod \"prometheus-k8s-0\" (UID: \"3e50e912-934a-4c9a-8456-4885eb88d025\") " pod="openshift-monitoring/prometheus-k8s-0"
Nov 21 20:11:34 crc kubenswrapper[4727]: I1121 20:11:34.025111 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3e50e912-934a-4c9a-8456-4885eb88d025-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"3e50e912-934a-4c9a-8456-4885eb88d025\") " pod="openshift-monitoring/prometheus-k8s-0"
Nov 21 20:11:34 crc kubenswrapper[4727]: I1121 20:11:34.025127 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3e50e912-934a-4c9a-8456-4885eb88d025-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"3e50e912-934a-4c9a-8456-4885eb88d025\") " pod="openshift-monitoring/prometheus-k8s-0"
Nov 21 20:11:34 crc kubenswrapper[4727]: I1121 20:11:34.025158 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/3e50e912-934a-4c9a-8456-4885eb88d025-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"3e50e912-934a-4c9a-8456-4885eb88d025\") " pod="openshift-monitoring/prometheus-k8s-0"
Nov 21 20:11:34 crc kubenswrapper[4727]: I1121 20:11:34.025545 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e50e912-934a-4c9a-8456-4885eb88d025-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"3e50e912-934a-4c9a-8456-4885eb88d025\") " pod="openshift-monitoring/prometheus-k8s-0"
Nov 21 20:11:34 crc kubenswrapper[4727]: I1121 20:11:34.025701 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e50e912-934a-4c9a-8456-4885eb88d025-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"3e50e912-934a-4c9a-8456-4885eb88d025\") " pod="openshift-monitoring/prometheus-k8s-0"
Nov 21 20:11:34 crc kubenswrapper[4727]: I1121 20:11:34.027807 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3e50e912-934a-4c9a-8456-4885eb88d025-config-out\") pod \"prometheus-k8s-0\" (UID: \"3e50e912-934a-4c9a-8456-4885eb88d025\") " pod="openshift-monitoring/prometheus-k8s-0"
Nov 21 20:11:34 crc kubenswrapper[4727]: I1121 20:11:34.028881 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3e50e912-934a-4c9a-8456-4885eb88d025-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"3e50e912-934a-4c9a-8456-4885eb88d025\") " pod="openshift-monitoring/prometheus-k8s-0"
Nov 21 20:11:34 crc kubenswrapper[4727]: I1121 20:11:34.029055 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e50e912-934a-4c9a-8456-4885eb88d025-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"3e50e912-934a-4c9a-8456-4885eb88d025\") " pod="openshift-monitoring/prometheus-k8s-0"
Nov 21 20:11:34 crc kubenswrapper[4727]: I1121 20:11:34.029147 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/3e50e912-934a-4c9a-8456-4885eb88d025-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"3e50e912-934a-4c9a-8456-4885eb88d025\") " pod="openshift-monitoring/prometheus-k8s-0"
Nov 21 20:11:34 crc kubenswrapper[4727]: I1121 20:11:34.029896 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/3e50e912-934a-4c9a-8456-4885eb88d025-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"3e50e912-934a-4c9a-8456-4885eb88d025\") " pod="openshift-monitoring/prometheus-k8s-0"
Nov 21 20:11:34 crc kubenswrapper[4727]: I1121 20:11:34.032482 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/3e50e912-934a-4c9a-8456-4885eb88d025-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"3e50e912-934a-4c9a-8456-4885eb88d025\") " pod="openshift-monitoring/prometheus-k8s-0"
Nov 21 20:11:34 crc kubenswrapper[4727]: I1121 20:11:34.032951 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3e50e912-934a-4c9a-8456-4885eb88d025-web-config\") pod \"prometheus-k8s-0\" (UID: \"3e50e912-934a-4c9a-8456-4885eb88d025\") " pod="openshift-monitoring/prometheus-k8s-0"
Nov 21 20:11:34 crc kubenswrapper[4727]: I1121 20:11:34.033070 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/3e50e912-934a-4c9a-8456-4885eb88d025-config\") pod \"prometheus-k8s-0\" (UID: \"3e50e912-934a-4c9a-8456-4885eb88d025\") " pod="openshift-monitoring/prometheus-k8s-0"
Nov 21 20:11:34 crc kubenswrapper[4727]: I1121 20:11:34.033566 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/3e50e912-934a-4c9a-8456-4885eb88d025-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"3e50e912-934a-4c9a-8456-4885eb88d025\") " pod="openshift-monitoring/prometheus-k8s-0"
Nov 21 20:11:34 crc kubenswrapper[4727]: I1121 20:11:34.034510 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/3e50e912-934a-4c9a-8456-4885eb88d025-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"3e50e912-934a-4c9a-8456-4885eb88d025\") " pod="openshift-monitoring/prometheus-k8s-0"
Nov 21 20:11:34 crc kubenswrapper[4727]: I1121 20:11:34.035655 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/3e50e912-934a-4c9a-8456-4885eb88d025-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"3e50e912-934a-4c9a-8456-4885eb88d025\") " pod="openshift-monitoring/prometheus-k8s-0"
Nov 21 20:11:34 crc kubenswrapper[4727]: I1121 20:11:34.038748 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3e50e912-934a-4c9a-8456-4885eb88d025-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"3e50e912-934a-4c9a-8456-4885eb88d025\") " pod="openshift-monitoring/prometheus-k8s-0"
Nov 21 20:11:34 crc kubenswrapper[4727]: I1121 20:11:34.046072 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/3e50e912-934a-4c9a-8456-4885eb88d025-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"3e50e912-934a-4c9a-8456-4885eb88d025\") " pod="openshift-monitoring/prometheus-k8s-0"
Nov 21 20:11:34 crc kubenswrapper[4727]: I1121 20:11:34.046138 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3e50e912-934a-4c9a-8456-4885eb88d025-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"3e50e912-934a-4c9a-8456-4885eb88d025\") " pod="openshift-monitoring/prometheus-k8s-0"
Nov 21 20:11:34 crc kubenswrapper[4727]: I1121 20:11:34.052782 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2xhz\" (UniqueName: \"kubernetes.io/projected/3e50e912-934a-4c9a-8456-4885eb88d025-kube-api-access-m2xhz\") pod \"prometheus-k8s-0\" (UID: \"3e50e912-934a-4c9a-8456-4885eb88d025\") " pod="openshift-monitoring/prometheus-k8s-0"
Nov 21 20:11:34 crc kubenswrapper[4727]: I1121 20:11:34.199291 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Nov 21 20:11:34 crc kubenswrapper[4727]: I1121 20:11:34.275444 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-7b54c9d587-8t26q"]
Nov 21 20:11:34 crc kubenswrapper[4727]: I1121 20:11:34.321388 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-84c4f65785-dz4nz"]
Nov 21 20:11:34 crc kubenswrapper[4727]: I1121 20:11:34.625304 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-84c4f65785-dz4nz" event={"ID":"0764019c-5bd1-4d65-b810-37c03cc5de12","Type":"ContainerStarted","Data":"f583c38c1ae97d447986b17fae5ba2d42ed7f2d39e650ed677bad31d6dcde77e"}
Nov 21 20:11:34 crc kubenswrapper[4727]: I1121 20:11:34.630548 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"719c7426-3fe0-4b20-8926-f68a569e55c5","Type":"ContainerStarted","Data":"29b656c09dd9f4df5df3509f50378282592a9dc9bf5db580b4ceed99c51adbd3"}
Nov 21 20:11:34 crc kubenswrapper[4727]: I1121 20:11:34.630599 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"719c7426-3fe0-4b20-8926-f68a569e55c5","Type":"ContainerStarted","Data":"91771411b088a0eecf448f2d384588348ef3581d8ac23d77475aad1af532d9c3"}
Nov 21 20:11:34 crc kubenswrapper[4727]: I1121 20:11:34.630612 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"719c7426-3fe0-4b20-8926-f68a569e55c5","Type":"ContainerStarted","Data":"dc3f3d03a027cd6b4421d3344500664c96f7a98fcc85869d6b8de68cd20d2bb6"}
Nov 21 20:11:34 crc kubenswrapper[4727]: I1121 20:11:34.635660 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-7b54c9d587-8t26q" event={"ID":"4da34acf-87d4-4a02-b599-e6c12d5e91ca","Type":"ContainerStarted","Data":"90a0a5bc9cddd3b50a3f3a3cbfea74a8cab3594b68ad34c642e00d01f1913d6b"}
Nov 21 20:11:34 crc kubenswrapper[4727]: I1121 20:11:34.636870 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Nov 21 20:11:34 crc kubenswrapper[4727]: W1121 20:11:34.643667 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e50e912_934a_4c9a_8456_4885eb88d025.slice/crio-d05bae495ba3c26ae92b6711eb35e5df727160a09f8abec3bfa91c68c54554fb WatchSource:0}: Error finding container d05bae495ba3c26ae92b6711eb35e5df727160a09f8abec3bfa91c68c54554fb: Status 404 returned error can't find the container with id d05bae495ba3c26ae92b6711eb35e5df727160a09f8abec3bfa91c68c54554fb
Nov 21 20:11:35 crc kubenswrapper[4727]: I1121 20:11:35.290049 4727 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials
Nov 21 20:11:35 crc kubenswrapper[4727]: I1121 20:11:35.644407 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"719c7426-3fe0-4b20-8926-f68a569e55c5","Type":"ContainerStarted","Data":"a0cf1bf3c0014ad2a673ef87b06565a0fd3370a2f5f81972e5211ca907935e99"}
Nov 21 20:11:35 crc kubenswrapper[4727]: I1121 20:11:35.644456 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"719c7426-3fe0-4b20-8926-f68a569e55c5","Type":"ContainerStarted","Data":"4ff51dbd5f1313127c761ab7d0bff7fe9390664ea7c595e1323c232246e9f109"}
Nov 21 20:11:35 crc kubenswrapper[4727]: I1121 20:11:35.644471 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"719c7426-3fe0-4b20-8926-f68a569e55c5","Type":"ContainerStarted","Data":"594d60e2665ab915bfb10e3349c1a1ffb159a83c9583bf0f841497090937c8f2"}
Nov 21 20:11:35 crc kubenswrapper[4727]: I1121 20:11:35.648006 4727 generic.go:334] "Generic (PLEG): container finished" podID="3e50e912-934a-4c9a-8456-4885eb88d025" containerID="0f28514354fb9a1d215047251a12e7a8962b224e7a340e308ed254299ba6c8f2" exitCode=0
Nov 21 20:11:35 crc kubenswrapper[4727]: I1121 20:11:35.648079 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"3e50e912-934a-4c9a-8456-4885eb88d025","Type":"ContainerDied","Data":"0f28514354fb9a1d215047251a12e7a8962b224e7a340e308ed254299ba6c8f2"}
Nov 21 20:11:35 crc kubenswrapper[4727]: I1121 20:11:35.648103 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"3e50e912-934a-4c9a-8456-4885eb88d025","Type":"ContainerStarted","Data":"d05bae495ba3c26ae92b6711eb35e5df727160a09f8abec3bfa91c68c54554fb"}
Nov 21 20:11:35 crc kubenswrapper[4727]: I1121 20:11:35.654208 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-865dccd599-858p9" event={"ID":"821caed6-671a-4de1-8667-edb6f113bd5e","Type":"ContainerStarted","Data":"315ad8b52c8371f447d160f5832b4bd9e7d58674231ef7d9c8454bec328e82e8"}
Nov 21 20:11:35 crc kubenswrapper[4727]: I1121 20:11:35.654268 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-865dccd599-858p9" event={"ID":"821caed6-671a-4de1-8667-edb6f113bd5e","Type":"ContainerStarted","Data":"306ae113e493bdee406da901189d6b7a7c3c2839ab907a83c690af1352e83c9d"}
Nov 21 20:11:35 crc kubenswrapper[4727]: I1121 20:11:35.654282 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-865dccd599-858p9" event={"ID":"821caed6-671a-4de1-8667-edb6f113bd5e","Type":"ContainerStarted","Data":"9623131d4bb9aa05fb9a8392f6627a5fa5147988875b4a24b8da94e6c956f95c"}
Nov 21 20:11:35 crc kubenswrapper[4727]: I1121 20:11:35.654333 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-865dccd599-858p9"
Nov 21 20:11:35 crc kubenswrapper[4727]: I1121 20:11:35.686870 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=3.367831747 podStartE2EDuration="7.686844154s" podCreationTimestamp="2025-11-21 20:11:28 +0000 UTC" firstStartedPulling="2025-11-21 20:11:29.665132853 +0000 UTC m=+294.851317897" lastFinishedPulling="2025-11-21 20:11:33.98414526 +0000 UTC m=+299.170330304" observedRunningTime="2025-11-21 20:11:35.66830553 +0000 UTC m=+300.854490574" watchObservedRunningTime="2025-11-21 20:11:35.686844154 +0000 UTC m=+300.873029198"
Nov 21 20:11:35 crc kubenswrapper[4727]: I1121 20:11:35.698258 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-865dccd599-858p9" podStartSLOduration=2.519533514 podStartE2EDuration="6.698242084s" podCreationTimestamp="2025-11-21 20:11:29 +0000 UTC" firstStartedPulling="2025-11-21 20:11:30.448846288 +0000 UTC m=+295.635031332" lastFinishedPulling="2025-11-21 20:11:34.627554848 +0000 UTC m=+299.813739902" observedRunningTime="2025-11-21 20:11:35.695817358 +0000 UTC m=+300.882002402" watchObservedRunningTime="2025-11-21 20:11:35.698242084 +0000 UTC m=+300.884427128"
Nov 21 20:11:36 crc kubenswrapper[4727]: I1121 20:11:36.663441 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-84c4f65785-dz4nz" event={"ID":"0764019c-5bd1-4d65-b810-37c03cc5de12","Type":"ContainerStarted","Data":"5568f884155ad9be7bc6e05145b9f4aef26fd417663519926bee75190d497e01"}
Nov 21 20:11:36 crc kubenswrapper[4727]: I1121 20:11:36.666679 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-7b54c9d587-8t26q" event={"ID":"4da34acf-87d4-4a02-b599-e6c12d5e91ca","Type":"ContainerStarted","Data":"a6fbb40e708a274123574f5558a4b2c4cbabad4f1585fdb643d6947ecb3db160"}
Nov 21 20:11:36 crc kubenswrapper[4727]: I1121 20:11:36.667330 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-7b54c9d587-8t26q"
Nov 21 20:11:36 crc kubenswrapper[4727]: I1121 20:11:36.678195 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-7b54c9d587-8t26q"
Nov 21 20:11:36 crc kubenswrapper[4727]: I1121 20:11:36.682216 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-84c4f65785-dz4nz" podStartSLOduration=2.817301768 podStartE2EDuration="4.682193264s" podCreationTimestamp="2025-11-21 20:11:32 +0000 UTC" firstStartedPulling="2025-11-21 20:11:34.336025014 +0000 UTC m=+299.522210058" lastFinishedPulling="2025-11-21 20:11:36.20091651 +0000 UTC m=+301.387101554" observedRunningTime="2025-11-21 20:11:36.679580644 +0000 UTC m=+301.865765698" watchObservedRunningTime="2025-11-21 20:11:36.682193264 +0000 UTC m=+301.868378318"
Nov 21 20:11:36 crc kubenswrapper[4727]: I1121 20:11:36.698804 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-7b54c9d587-8t26q" podStartSLOduration=1.794306734 podStartE2EDuration="3.698780185s" podCreationTimestamp="2025-11-21 20:11:33 +0000 UTC" firstStartedPulling="2025-11-21 20:11:34.298109544 +0000 UTC m=+299.484294588" lastFinishedPulling="2025-11-21 20:11:36.202582995 +0000 UTC m=+301.388768039" observedRunningTime="2025-11-21 20:11:36.696834382 +0000 UTC m=+301.883019426" watchObservedRunningTime="2025-11-21 20:11:36.698780185 +0000 UTC m=+301.884965229"
Nov 21 20:11:38 crc kubenswrapper[4727]: I1121 20:11:38.413283 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-zrgk2"
Nov 21 20:11:38 crc kubenswrapper[4727]: I1121 20:11:38.464769 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-tlqhq"]
Nov 21 20:11:39 crc kubenswrapper[4727]: I1121 20:11:39.683612 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"3e50e912-934a-4c9a-8456-4885eb88d025","Type":"ContainerStarted","Data":"262f6c04f846206a278ddc8d5748119f2372a748bb9fd127f4cb4638083c2134"}
Nov 21 20:11:39 crc kubenswrapper[4727]: I1121 20:11:39.684202 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"3e50e912-934a-4c9a-8456-4885eb88d025","Type":"ContainerStarted","Data":"f5db6815ed7f1aad8ba3665ad5b69ef16d861d3fe489657531e5792a6a92ac73"}
Nov 21 20:11:39 crc kubenswrapper[4727]: I1121 20:11:39.684215 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"3e50e912-934a-4c9a-8456-4885eb88d025","Type":"ContainerStarted","Data":"9bc38009d2a9e41e5a54999309d4c81999cc6b1df5807edc0d0f26e63a03d7ab"}
Nov 21 20:11:39 crc kubenswrapper[4727]: I1121 20:11:39.684224 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"3e50e912-934a-4c9a-8456-4885eb88d025","Type":"ContainerStarted","Data":"6a0aacc31956f3f49fe74c292f51211968c46f7cb0b0e71ba05f5c054694642e"}
Nov 21 20:11:39 crc kubenswrapper[4727]: I1121 20:11:39.684232 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"3e50e912-934a-4c9a-8456-4885eb88d025","Type":"ContainerStarted","Data":"5fa79176017f974520c24cd4c7fc52d00aa78f28e2948fe040326e940af998e5"}
Nov 21 20:11:39 crc kubenswrapper[4727]: I1121 20:11:39.684242 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"3e50e912-934a-4c9a-8456-4885eb88d025","Type":"ContainerStarted","Data":"61dab826993129ec5fe783c8f9e3939c099c43910a5c6f342cd807c4a3fe03f6"}
Nov 21 20:11:39 crc kubenswrapper[4727]: I1121 20:11:39.709523 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=3.69008337 podStartE2EDuration="6.709506407s" podCreationTimestamp="2025-11-21 20:11:33 +0000 UTC" firstStartedPulling="2025-11-21 20:11:35.650496218 +0000 UTC m=+300.836681252" lastFinishedPulling="2025-11-21 20:11:38.669919245 +0000 UTC m=+303.856104289" observedRunningTime="2025-11-21 20:11:39.707620666 +0000 UTC m=+304.893805720" watchObservedRunningTime="2025-11-21 20:11:39.709506407 +0000 UTC m=+304.895691451"
Nov 21 20:11:40 crc kubenswrapper[4727]: I1121 20:11:40.023545 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-865dccd599-858p9"
Nov 21 20:11:42 crc kubenswrapper[4727]: I1121 20:11:42.725418 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-78d55d4968-j8p59"
Nov 21 20:11:42 crc kubenswrapper[4727]: I1121 20:11:42.725773 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-78d55d4968-j8p59"
Nov 21 20:11:42 crc kubenswrapper[4727]: I1121 20:11:42.730360 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-78d55d4968-j8p59"
Nov 21 20:11:43 crc kubenswrapper[4727]: I1121 20:11:43.710626 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-78d55d4968-j8p59"
Nov 21 20:11:43 crc kubenswrapper[4727]: I1121 20:11:43.773655 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-fj2k4"]
Nov 21 20:11:44 crc kubenswrapper[4727]: I1121 20:11:44.200137 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0"
Nov 21 20:11:53 crc kubenswrapper[4727]: I1121 20:11:53.350986 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy"
pod="openshift-monitoring/metrics-server-84c4f65785-dz4nz" Nov 21 20:11:53 crc kubenswrapper[4727]: I1121 20:11:53.352078 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-84c4f65785-dz4nz" Nov 21 20:12:03 crc kubenswrapper[4727]: I1121 20:12:03.506761 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-tlqhq" podUID="36c9326c-5a9b-4e19-a0a7-047289e45c01" containerName="registry" containerID="cri-o://0aa40116b6d28e2aa93bd5a4026e59a68a715fb45ef82a9a3705d6776243593e" gracePeriod=30 Nov 21 20:12:03 crc kubenswrapper[4727]: I1121 20:12:03.838279 4727 generic.go:334] "Generic (PLEG): container finished" podID="36c9326c-5a9b-4e19-a0a7-047289e45c01" containerID="0aa40116b6d28e2aa93bd5a4026e59a68a715fb45ef82a9a3705d6776243593e" exitCode=0 Nov 21 20:12:03 crc kubenswrapper[4727]: I1121 20:12:03.838364 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-tlqhq" event={"ID":"36c9326c-5a9b-4e19-a0a7-047289e45c01","Type":"ContainerDied","Data":"0aa40116b6d28e2aa93bd5a4026e59a68a715fb45ef82a9a3705d6776243593e"} Nov 21 20:12:03 crc kubenswrapper[4727]: I1121 20:12:03.893846 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-tlqhq" Nov 21 20:12:03 crc kubenswrapper[4727]: I1121 20:12:03.956470 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/36c9326c-5a9b-4e19-a0a7-047289e45c01-bound-sa-token\") pod \"36c9326c-5a9b-4e19-a0a7-047289e45c01\" (UID: \"36c9326c-5a9b-4e19-a0a7-047289e45c01\") " Nov 21 20:12:03 crc kubenswrapper[4727]: I1121 20:12:03.956537 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/36c9326c-5a9b-4e19-a0a7-047289e45c01-installation-pull-secrets\") pod \"36c9326c-5a9b-4e19-a0a7-047289e45c01\" (UID: \"36c9326c-5a9b-4e19-a0a7-047289e45c01\") " Nov 21 20:12:03 crc kubenswrapper[4727]: I1121 20:12:03.956695 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"36c9326c-5a9b-4e19-a0a7-047289e45c01\" (UID: \"36c9326c-5a9b-4e19-a0a7-047289e45c01\") " Nov 21 20:12:03 crc kubenswrapper[4727]: I1121 20:12:03.956726 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/36c9326c-5a9b-4e19-a0a7-047289e45c01-ca-trust-extracted\") pod \"36c9326c-5a9b-4e19-a0a7-047289e45c01\" (UID: \"36c9326c-5a9b-4e19-a0a7-047289e45c01\") " Nov 21 20:12:03 crc kubenswrapper[4727]: I1121 20:12:03.957918 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36c9326c-5a9b-4e19-a0a7-047289e45c01-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "36c9326c-5a9b-4e19-a0a7-047289e45c01" (UID: "36c9326c-5a9b-4e19-a0a7-047289e45c01"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:12:03 crc kubenswrapper[4727]: I1121 20:12:03.958932 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/36c9326c-5a9b-4e19-a0a7-047289e45c01-registry-certificates\") pod \"36c9326c-5a9b-4e19-a0a7-047289e45c01\" (UID: \"36c9326c-5a9b-4e19-a0a7-047289e45c01\") " Nov 21 20:12:03 crc kubenswrapper[4727]: I1121 20:12:03.959276 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/36c9326c-5a9b-4e19-a0a7-047289e45c01-registry-tls\") pod \"36c9326c-5a9b-4e19-a0a7-047289e45c01\" (UID: \"36c9326c-5a9b-4e19-a0a7-047289e45c01\") " Nov 21 20:12:03 crc kubenswrapper[4727]: I1121 20:12:03.959356 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ql77s\" (UniqueName: \"kubernetes.io/projected/36c9326c-5a9b-4e19-a0a7-047289e45c01-kube-api-access-ql77s\") pod \"36c9326c-5a9b-4e19-a0a7-047289e45c01\" (UID: \"36c9326c-5a9b-4e19-a0a7-047289e45c01\") " Nov 21 20:12:03 crc kubenswrapper[4727]: I1121 20:12:03.959461 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/36c9326c-5a9b-4e19-a0a7-047289e45c01-trusted-ca\") pod \"36c9326c-5a9b-4e19-a0a7-047289e45c01\" (UID: \"36c9326c-5a9b-4e19-a0a7-047289e45c01\") " Nov 21 20:12:03 crc kubenswrapper[4727]: I1121 20:12:03.960031 4727 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/36c9326c-5a9b-4e19-a0a7-047289e45c01-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 21 20:12:03 crc kubenswrapper[4727]: I1121 20:12:03.960531 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36c9326c-5a9b-4e19-a0a7-047289e45c01-trusted-ca" 
(OuterVolumeSpecName: "trusted-ca") pod "36c9326c-5a9b-4e19-a0a7-047289e45c01" (UID: "36c9326c-5a9b-4e19-a0a7-047289e45c01"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:12:03 crc kubenswrapper[4727]: I1121 20:12:03.964376 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36c9326c-5a9b-4e19-a0a7-047289e45c01-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "36c9326c-5a9b-4e19-a0a7-047289e45c01" (UID: "36c9326c-5a9b-4e19-a0a7-047289e45c01"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:12:03 crc kubenswrapper[4727]: I1121 20:12:03.964753 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36c9326c-5a9b-4e19-a0a7-047289e45c01-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "36c9326c-5a9b-4e19-a0a7-047289e45c01" (UID: "36c9326c-5a9b-4e19-a0a7-047289e45c01"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:12:03 crc kubenswrapper[4727]: I1121 20:12:03.965290 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36c9326c-5a9b-4e19-a0a7-047289e45c01-kube-api-access-ql77s" (OuterVolumeSpecName: "kube-api-access-ql77s") pod "36c9326c-5a9b-4e19-a0a7-047289e45c01" (UID: "36c9326c-5a9b-4e19-a0a7-047289e45c01"). InnerVolumeSpecName "kube-api-access-ql77s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:12:03 crc kubenswrapper[4727]: I1121 20:12:03.968075 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36c9326c-5a9b-4e19-a0a7-047289e45c01-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "36c9326c-5a9b-4e19-a0a7-047289e45c01" (UID: "36c9326c-5a9b-4e19-a0a7-047289e45c01"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:12:03 crc kubenswrapper[4727]: I1121 20:12:03.968701 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "36c9326c-5a9b-4e19-a0a7-047289e45c01" (UID: "36c9326c-5a9b-4e19-a0a7-047289e45c01"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 21 20:12:03 crc kubenswrapper[4727]: I1121 20:12:03.981884 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36c9326c-5a9b-4e19-a0a7-047289e45c01-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "36c9326c-5a9b-4e19-a0a7-047289e45c01" (UID: "36c9326c-5a9b-4e19-a0a7-047289e45c01"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:12:04 crc kubenswrapper[4727]: I1121 20:12:04.061659 4727 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/36c9326c-5a9b-4e19-a0a7-047289e45c01-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 21 20:12:04 crc kubenswrapper[4727]: I1121 20:12:04.061695 4727 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/36c9326c-5a9b-4e19-a0a7-047289e45c01-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 21 20:12:04 crc kubenswrapper[4727]: I1121 20:12:04.061705 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ql77s\" (UniqueName: \"kubernetes.io/projected/36c9326c-5a9b-4e19-a0a7-047289e45c01-kube-api-access-ql77s\") on node \"crc\" DevicePath \"\"" Nov 21 20:12:04 crc kubenswrapper[4727]: I1121 20:12:04.061715 4727 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/36c9326c-5a9b-4e19-a0a7-047289e45c01-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 21 20:12:04 crc kubenswrapper[4727]: I1121 20:12:04.061724 4727 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/36c9326c-5a9b-4e19-a0a7-047289e45c01-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 21 20:12:04 crc kubenswrapper[4727]: I1121 20:12:04.061734 4727 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/36c9326c-5a9b-4e19-a0a7-047289e45c01-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 21 20:12:04 crc kubenswrapper[4727]: I1121 20:12:04.845166 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-tlqhq" event={"ID":"36c9326c-5a9b-4e19-a0a7-047289e45c01","Type":"ContainerDied","Data":"4707057b6c765dabfd24dc2bcab297ebb1ce74edb5c77511c78d3c1f09b0b8da"} Nov 21 20:12:04 crc kubenswrapper[4727]: I1121 20:12:04.845478 4727 scope.go:117] "RemoveContainer" containerID="0aa40116b6d28e2aa93bd5a4026e59a68a715fb45ef82a9a3705d6776243593e" Nov 21 20:12:04 crc kubenswrapper[4727]: I1121 20:12:04.845213 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-tlqhq" Nov 21 20:12:04 crc kubenswrapper[4727]: I1121 20:12:04.891855 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-tlqhq"] Nov 21 20:12:04 crc kubenswrapper[4727]: I1121 20:12:04.895852 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-tlqhq"] Nov 21 20:12:05 crc kubenswrapper[4727]: I1121 20:12:05.507128 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36c9326c-5a9b-4e19-a0a7-047289e45c01" path="/var/lib/kubelet/pods/36c9326c-5a9b-4e19-a0a7-047289e45c01/volumes" Nov 21 20:12:08 crc kubenswrapper[4727]: I1121 20:12:08.810689 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-fj2k4" podUID="dacc1ea4-7062-46ad-a784-70537e92dc51" containerName="console" containerID="cri-o://590f893083f244aa1c0154143ec681283d6b0dbcdd1008ae35254125f20e9794" gracePeriod=15 Nov 21 20:12:09 crc kubenswrapper[4727]: I1121 20:12:09.151270 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-fj2k4_dacc1ea4-7062-46ad-a784-70537e92dc51/console/0.log" Nov 21 20:12:09 crc kubenswrapper[4727]: I1121 20:12:09.151666 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-fj2k4" Nov 21 20:12:09 crc kubenswrapper[4727]: I1121 20:12:09.224978 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/dacc1ea4-7062-46ad-a784-70537e92dc51-console-oauth-config\") pod \"dacc1ea4-7062-46ad-a784-70537e92dc51\" (UID: \"dacc1ea4-7062-46ad-a784-70537e92dc51\") " Nov 21 20:12:09 crc kubenswrapper[4727]: I1121 20:12:09.225038 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dacc1ea4-7062-46ad-a784-70537e92dc51-service-ca\") pod \"dacc1ea4-7062-46ad-a784-70537e92dc51\" (UID: \"dacc1ea4-7062-46ad-a784-70537e92dc51\") " Nov 21 20:12:09 crc kubenswrapper[4727]: I1121 20:12:09.225079 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/dacc1ea4-7062-46ad-a784-70537e92dc51-console-config\") pod \"dacc1ea4-7062-46ad-a784-70537e92dc51\" (UID: \"dacc1ea4-7062-46ad-a784-70537e92dc51\") " Nov 21 20:12:09 crc kubenswrapper[4727]: I1121 20:12:09.225115 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/dacc1ea4-7062-46ad-a784-70537e92dc51-console-serving-cert\") pod \"dacc1ea4-7062-46ad-a784-70537e92dc51\" (UID: \"dacc1ea4-7062-46ad-a784-70537e92dc51\") " Nov 21 20:12:09 crc kubenswrapper[4727]: I1121 20:12:09.225154 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/dacc1ea4-7062-46ad-a784-70537e92dc51-oauth-serving-cert\") pod \"dacc1ea4-7062-46ad-a784-70537e92dc51\" (UID: \"dacc1ea4-7062-46ad-a784-70537e92dc51\") " Nov 21 20:12:09 crc kubenswrapper[4727]: I1121 20:12:09.225203 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-r9fkv\" (UniqueName: \"kubernetes.io/projected/dacc1ea4-7062-46ad-a784-70537e92dc51-kube-api-access-r9fkv\") pod \"dacc1ea4-7062-46ad-a784-70537e92dc51\" (UID: \"dacc1ea4-7062-46ad-a784-70537e92dc51\") " Nov 21 20:12:09 crc kubenswrapper[4727]: I1121 20:12:09.225240 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dacc1ea4-7062-46ad-a784-70537e92dc51-trusted-ca-bundle\") pod \"dacc1ea4-7062-46ad-a784-70537e92dc51\" (UID: \"dacc1ea4-7062-46ad-a784-70537e92dc51\") " Nov 21 20:12:09 crc kubenswrapper[4727]: I1121 20:12:09.225817 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dacc1ea4-7062-46ad-a784-70537e92dc51-console-config" (OuterVolumeSpecName: "console-config") pod "dacc1ea4-7062-46ad-a784-70537e92dc51" (UID: "dacc1ea4-7062-46ad-a784-70537e92dc51"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:12:09 crc kubenswrapper[4727]: I1121 20:12:09.225887 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dacc1ea4-7062-46ad-a784-70537e92dc51-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "dacc1ea4-7062-46ad-a784-70537e92dc51" (UID: "dacc1ea4-7062-46ad-a784-70537e92dc51"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:12:09 crc kubenswrapper[4727]: I1121 20:12:09.226081 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dacc1ea4-7062-46ad-a784-70537e92dc51-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "dacc1ea4-7062-46ad-a784-70537e92dc51" (UID: "dacc1ea4-7062-46ad-a784-70537e92dc51"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:12:09 crc kubenswrapper[4727]: I1121 20:12:09.226181 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dacc1ea4-7062-46ad-a784-70537e92dc51-service-ca" (OuterVolumeSpecName: "service-ca") pod "dacc1ea4-7062-46ad-a784-70537e92dc51" (UID: "dacc1ea4-7062-46ad-a784-70537e92dc51"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:12:09 crc kubenswrapper[4727]: I1121 20:12:09.226411 4727 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dacc1ea4-7062-46ad-a784-70537e92dc51-service-ca\") on node \"crc\" DevicePath \"\"" Nov 21 20:12:09 crc kubenswrapper[4727]: I1121 20:12:09.226438 4727 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/dacc1ea4-7062-46ad-a784-70537e92dc51-console-config\") on node \"crc\" DevicePath \"\"" Nov 21 20:12:09 crc kubenswrapper[4727]: I1121 20:12:09.226451 4727 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/dacc1ea4-7062-46ad-a784-70537e92dc51-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 20:12:09 crc kubenswrapper[4727]: I1121 20:12:09.226477 4727 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dacc1ea4-7062-46ad-a784-70537e92dc51-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:12:09 crc kubenswrapper[4727]: I1121 20:12:09.230107 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dacc1ea4-7062-46ad-a784-70537e92dc51-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "dacc1ea4-7062-46ad-a784-70537e92dc51" (UID: "dacc1ea4-7062-46ad-a784-70537e92dc51"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:12:09 crc kubenswrapper[4727]: I1121 20:12:09.230224 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dacc1ea4-7062-46ad-a784-70537e92dc51-kube-api-access-r9fkv" (OuterVolumeSpecName: "kube-api-access-r9fkv") pod "dacc1ea4-7062-46ad-a784-70537e92dc51" (UID: "dacc1ea4-7062-46ad-a784-70537e92dc51"). InnerVolumeSpecName "kube-api-access-r9fkv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:12:09 crc kubenswrapper[4727]: I1121 20:12:09.230527 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dacc1ea4-7062-46ad-a784-70537e92dc51-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "dacc1ea4-7062-46ad-a784-70537e92dc51" (UID: "dacc1ea4-7062-46ad-a784-70537e92dc51"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:12:09 crc kubenswrapper[4727]: I1121 20:12:09.327123 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9fkv\" (UniqueName: \"kubernetes.io/projected/dacc1ea4-7062-46ad-a784-70537e92dc51-kube-api-access-r9fkv\") on node \"crc\" DevicePath \"\"" Nov 21 20:12:09 crc kubenswrapper[4727]: I1121 20:12:09.327160 4727 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/dacc1ea4-7062-46ad-a784-70537e92dc51-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 21 20:12:09 crc kubenswrapper[4727]: I1121 20:12:09.327173 4727 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/dacc1ea4-7062-46ad-a784-70537e92dc51-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 20:12:09 crc kubenswrapper[4727]: I1121 20:12:09.871887 4727 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-console_console-f9d7485db-fj2k4_dacc1ea4-7062-46ad-a784-70537e92dc51/console/0.log" Nov 21 20:12:09 crc kubenswrapper[4727]: I1121 20:12:09.871944 4727 generic.go:334] "Generic (PLEG): container finished" podID="dacc1ea4-7062-46ad-a784-70537e92dc51" containerID="590f893083f244aa1c0154143ec681283d6b0dbcdd1008ae35254125f20e9794" exitCode=2 Nov 21 20:12:09 crc kubenswrapper[4727]: I1121 20:12:09.872009 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-fj2k4" event={"ID":"dacc1ea4-7062-46ad-a784-70537e92dc51","Type":"ContainerDied","Data":"590f893083f244aa1c0154143ec681283d6b0dbcdd1008ae35254125f20e9794"} Nov 21 20:12:09 crc kubenswrapper[4727]: I1121 20:12:09.872039 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-fj2k4" Nov 21 20:12:09 crc kubenswrapper[4727]: I1121 20:12:09.872059 4727 scope.go:117] "RemoveContainer" containerID="590f893083f244aa1c0154143ec681283d6b0dbcdd1008ae35254125f20e9794" Nov 21 20:12:09 crc kubenswrapper[4727]: I1121 20:12:09.872045 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-fj2k4" event={"ID":"dacc1ea4-7062-46ad-a784-70537e92dc51","Type":"ContainerDied","Data":"233e0e1e4bfa2dcebc533459f09fea730eb2a4ae2ad3ffab0239a3d785ad7f9e"} Nov 21 20:12:09 crc kubenswrapper[4727]: I1121 20:12:09.890432 4727 scope.go:117] "RemoveContainer" containerID="590f893083f244aa1c0154143ec681283d6b0dbcdd1008ae35254125f20e9794" Nov 21 20:12:09 crc kubenswrapper[4727]: E1121 20:12:09.890797 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"590f893083f244aa1c0154143ec681283d6b0dbcdd1008ae35254125f20e9794\": container with ID starting with 590f893083f244aa1c0154143ec681283d6b0dbcdd1008ae35254125f20e9794 not found: ID does not exist" 
containerID="590f893083f244aa1c0154143ec681283d6b0dbcdd1008ae35254125f20e9794" Nov 21 20:12:09 crc kubenswrapper[4727]: I1121 20:12:09.890840 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"590f893083f244aa1c0154143ec681283d6b0dbcdd1008ae35254125f20e9794"} err="failed to get container status \"590f893083f244aa1c0154143ec681283d6b0dbcdd1008ae35254125f20e9794\": rpc error: code = NotFound desc = could not find container \"590f893083f244aa1c0154143ec681283d6b0dbcdd1008ae35254125f20e9794\": container with ID starting with 590f893083f244aa1c0154143ec681283d6b0dbcdd1008ae35254125f20e9794 not found: ID does not exist" Nov 21 20:12:09 crc kubenswrapper[4727]: I1121 20:12:09.896768 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-fj2k4"] Nov 21 20:12:09 crc kubenswrapper[4727]: I1121 20:12:09.902634 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-fj2k4"] Nov 21 20:12:11 crc kubenswrapper[4727]: I1121 20:12:11.506983 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dacc1ea4-7062-46ad-a784-70537e92dc51" path="/var/lib/kubelet/pods/dacc1ea4-7062-46ad-a784-70537e92dc51/volumes" Nov 21 20:12:13 crc kubenswrapper[4727]: I1121 20:12:13.356375 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-84c4f65785-dz4nz" Nov 21 20:12:13 crc kubenswrapper[4727]: I1121 20:12:13.359871 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-84c4f65785-dz4nz" Nov 21 20:12:34 crc kubenswrapper[4727]: I1121 20:12:34.200314 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Nov 21 20:12:34 crc kubenswrapper[4727]: I1121 20:12:34.233869 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-monitoring/prometheus-k8s-0" Nov 21 20:12:35 crc kubenswrapper[4727]: I1121 20:12:35.090072 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Nov 21 20:12:43 crc kubenswrapper[4727]: I1121 20:12:43.335572 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 20:12:43 crc kubenswrapper[4727]: I1121 20:12:43.336106 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 20:13:13 crc kubenswrapper[4727]: I1121 20:13:13.335815 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 20:13:13 crc kubenswrapper[4727]: I1121 20:13:13.336486 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 20:13:13 crc kubenswrapper[4727]: I1121 20:13:13.801218 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5bfbd5755b-phmrq"] Nov 21 20:13:13 crc kubenswrapper[4727]: E1121 20:13:13.801867 4727 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="dacc1ea4-7062-46ad-a784-70537e92dc51" containerName="console" Nov 21 20:13:13 crc kubenswrapper[4727]: I1121 20:13:13.801891 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="dacc1ea4-7062-46ad-a784-70537e92dc51" containerName="console" Nov 21 20:13:13 crc kubenswrapper[4727]: E1121 20:13:13.801925 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36c9326c-5a9b-4e19-a0a7-047289e45c01" containerName="registry" Nov 21 20:13:13 crc kubenswrapper[4727]: I1121 20:13:13.801935 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="36c9326c-5a9b-4e19-a0a7-047289e45c01" containerName="registry" Nov 21 20:13:13 crc kubenswrapper[4727]: I1121 20:13:13.802112 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="dacc1ea4-7062-46ad-a784-70537e92dc51" containerName="console" Nov 21 20:13:13 crc kubenswrapper[4727]: I1121 20:13:13.802129 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="36c9326c-5a9b-4e19-a0a7-047289e45c01" containerName="registry" Nov 21 20:13:13 crc kubenswrapper[4727]: I1121 20:13:13.802675 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5bfbd5755b-phmrq" Nov 21 20:13:13 crc kubenswrapper[4727]: I1121 20:13:13.842864 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3606aeab-9115-4b44-9cfb-c9d7ac34e711-trusted-ca-bundle\") pod \"console-5bfbd5755b-phmrq\" (UID: \"3606aeab-9115-4b44-9cfb-c9d7ac34e711\") " pod="openshift-console/console-5bfbd5755b-phmrq" Nov 21 20:13:13 crc kubenswrapper[4727]: I1121 20:13:13.842935 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3606aeab-9115-4b44-9cfb-c9d7ac34e711-service-ca\") pod \"console-5bfbd5755b-phmrq\" (UID: \"3606aeab-9115-4b44-9cfb-c9d7ac34e711\") " pod="openshift-console/console-5bfbd5755b-phmrq" Nov 21 20:13:13 crc kubenswrapper[4727]: I1121 20:13:13.843156 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpjdk\" (UniqueName: \"kubernetes.io/projected/3606aeab-9115-4b44-9cfb-c9d7ac34e711-kube-api-access-bpjdk\") pod \"console-5bfbd5755b-phmrq\" (UID: \"3606aeab-9115-4b44-9cfb-c9d7ac34e711\") " pod="openshift-console/console-5bfbd5755b-phmrq" Nov 21 20:13:13 crc kubenswrapper[4727]: I1121 20:13:13.843218 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3606aeab-9115-4b44-9cfb-c9d7ac34e711-console-oauth-config\") pod \"console-5bfbd5755b-phmrq\" (UID: \"3606aeab-9115-4b44-9cfb-c9d7ac34e711\") " pod="openshift-console/console-5bfbd5755b-phmrq" Nov 21 20:13:13 crc kubenswrapper[4727]: I1121 20:13:13.843444 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3606aeab-9115-4b44-9cfb-c9d7ac34e711-console-serving-cert\") pod \"console-5bfbd5755b-phmrq\" (UID: \"3606aeab-9115-4b44-9cfb-c9d7ac34e711\") " pod="openshift-console/console-5bfbd5755b-phmrq" Nov 21 20:13:13 crc kubenswrapper[4727]: I1121 20:13:13.843486 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3606aeab-9115-4b44-9cfb-c9d7ac34e711-console-config\") pod \"console-5bfbd5755b-phmrq\" (UID: \"3606aeab-9115-4b44-9cfb-c9d7ac34e711\") " pod="openshift-console/console-5bfbd5755b-phmrq" Nov 21 20:13:13 crc kubenswrapper[4727]: I1121 20:13:13.843532 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3606aeab-9115-4b44-9cfb-c9d7ac34e711-oauth-serving-cert\") pod \"console-5bfbd5755b-phmrq\" (UID: \"3606aeab-9115-4b44-9cfb-c9d7ac34e711\") " pod="openshift-console/console-5bfbd5755b-phmrq" Nov 21 20:13:13 crc kubenswrapper[4727]: I1121 20:13:13.862706 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5bfbd5755b-phmrq"] Nov 21 20:13:13 crc kubenswrapper[4727]: I1121 20:13:13.944253 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3606aeab-9115-4b44-9cfb-c9d7ac34e711-console-serving-cert\") pod \"console-5bfbd5755b-phmrq\" (UID: \"3606aeab-9115-4b44-9cfb-c9d7ac34e711\") " pod="openshift-console/console-5bfbd5755b-phmrq" Nov 21 20:13:13 crc kubenswrapper[4727]: I1121 20:13:13.944296 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3606aeab-9115-4b44-9cfb-c9d7ac34e711-console-config\") pod \"console-5bfbd5755b-phmrq\" (UID: \"3606aeab-9115-4b44-9cfb-c9d7ac34e711\") " pod="openshift-console/console-5bfbd5755b-phmrq" Nov 
21 20:13:13 crc kubenswrapper[4727]: I1121 20:13:13.944327 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3606aeab-9115-4b44-9cfb-c9d7ac34e711-oauth-serving-cert\") pod \"console-5bfbd5755b-phmrq\" (UID: \"3606aeab-9115-4b44-9cfb-c9d7ac34e711\") " pod="openshift-console/console-5bfbd5755b-phmrq" Nov 21 20:13:13 crc kubenswrapper[4727]: I1121 20:13:13.944353 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3606aeab-9115-4b44-9cfb-c9d7ac34e711-trusted-ca-bundle\") pod \"console-5bfbd5755b-phmrq\" (UID: \"3606aeab-9115-4b44-9cfb-c9d7ac34e711\") " pod="openshift-console/console-5bfbd5755b-phmrq" Nov 21 20:13:13 crc kubenswrapper[4727]: I1121 20:13:13.944395 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3606aeab-9115-4b44-9cfb-c9d7ac34e711-service-ca\") pod \"console-5bfbd5755b-phmrq\" (UID: \"3606aeab-9115-4b44-9cfb-c9d7ac34e711\") " pod="openshift-console/console-5bfbd5755b-phmrq" Nov 21 20:13:13 crc kubenswrapper[4727]: I1121 20:13:13.944417 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpjdk\" (UniqueName: \"kubernetes.io/projected/3606aeab-9115-4b44-9cfb-c9d7ac34e711-kube-api-access-bpjdk\") pod \"console-5bfbd5755b-phmrq\" (UID: \"3606aeab-9115-4b44-9cfb-c9d7ac34e711\") " pod="openshift-console/console-5bfbd5755b-phmrq" Nov 21 20:13:13 crc kubenswrapper[4727]: I1121 20:13:13.944437 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3606aeab-9115-4b44-9cfb-c9d7ac34e711-console-oauth-config\") pod \"console-5bfbd5755b-phmrq\" (UID: \"3606aeab-9115-4b44-9cfb-c9d7ac34e711\") " pod="openshift-console/console-5bfbd5755b-phmrq" Nov 21 20:13:13 crc 
kubenswrapper[4727]: I1121 20:13:13.945616 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3606aeab-9115-4b44-9cfb-c9d7ac34e711-trusted-ca-bundle\") pod \"console-5bfbd5755b-phmrq\" (UID: \"3606aeab-9115-4b44-9cfb-c9d7ac34e711\") " pod="openshift-console/console-5bfbd5755b-phmrq" Nov 21 20:13:13 crc kubenswrapper[4727]: I1121 20:13:13.945649 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3606aeab-9115-4b44-9cfb-c9d7ac34e711-console-config\") pod \"console-5bfbd5755b-phmrq\" (UID: \"3606aeab-9115-4b44-9cfb-c9d7ac34e711\") " pod="openshift-console/console-5bfbd5755b-phmrq" Nov 21 20:13:13 crc kubenswrapper[4727]: I1121 20:13:13.945895 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3606aeab-9115-4b44-9cfb-c9d7ac34e711-service-ca\") pod \"console-5bfbd5755b-phmrq\" (UID: \"3606aeab-9115-4b44-9cfb-c9d7ac34e711\") " pod="openshift-console/console-5bfbd5755b-phmrq" Nov 21 20:13:13 crc kubenswrapper[4727]: I1121 20:13:13.946048 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3606aeab-9115-4b44-9cfb-c9d7ac34e711-oauth-serving-cert\") pod \"console-5bfbd5755b-phmrq\" (UID: \"3606aeab-9115-4b44-9cfb-c9d7ac34e711\") " pod="openshift-console/console-5bfbd5755b-phmrq" Nov 21 20:13:13 crc kubenswrapper[4727]: I1121 20:13:13.951462 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3606aeab-9115-4b44-9cfb-c9d7ac34e711-console-serving-cert\") pod \"console-5bfbd5755b-phmrq\" (UID: \"3606aeab-9115-4b44-9cfb-c9d7ac34e711\") " pod="openshift-console/console-5bfbd5755b-phmrq" Nov 21 20:13:13 crc kubenswrapper[4727]: I1121 20:13:13.955860 4727 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3606aeab-9115-4b44-9cfb-c9d7ac34e711-console-oauth-config\") pod \"console-5bfbd5755b-phmrq\" (UID: \"3606aeab-9115-4b44-9cfb-c9d7ac34e711\") " pod="openshift-console/console-5bfbd5755b-phmrq" Nov 21 20:13:13 crc kubenswrapper[4727]: I1121 20:13:13.959168 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpjdk\" (UniqueName: \"kubernetes.io/projected/3606aeab-9115-4b44-9cfb-c9d7ac34e711-kube-api-access-bpjdk\") pod \"console-5bfbd5755b-phmrq\" (UID: \"3606aeab-9115-4b44-9cfb-c9d7ac34e711\") " pod="openshift-console/console-5bfbd5755b-phmrq" Nov 21 20:13:14 crc kubenswrapper[4727]: I1121 20:13:14.132627 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5bfbd5755b-phmrq" Nov 21 20:13:14 crc kubenswrapper[4727]: I1121 20:13:14.338246 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5bfbd5755b-phmrq"] Nov 21 20:13:14 crc kubenswrapper[4727]: W1121 20:13:14.341881 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3606aeab_9115_4b44_9cfb_c9d7ac34e711.slice/crio-88165e32d6a70810d696b5f374d5e27d55950d16fb05f260c85ea43ea9fa086f WatchSource:0}: Error finding container 88165e32d6a70810d696b5f374d5e27d55950d16fb05f260c85ea43ea9fa086f: Status 404 returned error can't find the container with id 88165e32d6a70810d696b5f374d5e27d55950d16fb05f260c85ea43ea9fa086f Nov 21 20:13:15 crc kubenswrapper[4727]: I1121 20:13:15.311110 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5bfbd5755b-phmrq" event={"ID":"3606aeab-9115-4b44-9cfb-c9d7ac34e711","Type":"ContainerStarted","Data":"57086e3860d2faed48290428443e34277dffc02f36fa0b44f1df1de26ee6a582"} Nov 21 20:13:15 crc kubenswrapper[4727]: I1121 20:13:15.311399 4727 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-console/console-5bfbd5755b-phmrq" event={"ID":"3606aeab-9115-4b44-9cfb-c9d7ac34e711","Type":"ContainerStarted","Data":"88165e32d6a70810d696b5f374d5e27d55950d16fb05f260c85ea43ea9fa086f"} Nov 21 20:13:15 crc kubenswrapper[4727]: I1121 20:13:15.336324 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5bfbd5755b-phmrq" podStartSLOduration=2.336309139 podStartE2EDuration="2.336309139s" podCreationTimestamp="2025-11-21 20:13:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:13:15.334575901 +0000 UTC m=+400.520760945" watchObservedRunningTime="2025-11-21 20:13:15.336309139 +0000 UTC m=+400.522494183" Nov 21 20:13:24 crc kubenswrapper[4727]: I1121 20:13:24.133313 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5bfbd5755b-phmrq" Nov 21 20:13:24 crc kubenswrapper[4727]: I1121 20:13:24.133808 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5bfbd5755b-phmrq" Nov 21 20:13:24 crc kubenswrapper[4727]: I1121 20:13:24.137617 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5bfbd5755b-phmrq" Nov 21 20:13:24 crc kubenswrapper[4727]: I1121 20:13:24.361516 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5bfbd5755b-phmrq" Nov 21 20:13:24 crc kubenswrapper[4727]: I1121 20:13:24.408440 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-78d55d4968-j8p59"] Nov 21 20:13:43 crc kubenswrapper[4727]: I1121 20:13:43.336039 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 20:13:43 crc kubenswrapper[4727]: I1121 20:13:43.336599 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 20:13:43 crc kubenswrapper[4727]: I1121 20:13:43.336652 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" Nov 21 20:13:43 crc kubenswrapper[4727]: I1121 20:13:43.337342 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7b64e1aad32276f0362b5ee96e2453c18f2a0e0c6f7ada508ab81eab2ebb4231"} pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 20:13:43 crc kubenswrapper[4727]: I1121 20:13:43.337452 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" containerID="cri-o://7b64e1aad32276f0362b5ee96e2453c18f2a0e0c6f7ada508ab81eab2ebb4231" gracePeriod=600 Nov 21 20:13:43 crc kubenswrapper[4727]: I1121 20:13:43.473103 4727 generic.go:334] "Generic (PLEG): container finished" podID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerID="7b64e1aad32276f0362b5ee96e2453c18f2a0e0c6f7ada508ab81eab2ebb4231" exitCode=0 Nov 21 20:13:43 crc kubenswrapper[4727]: I1121 20:13:43.473144 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" 
event={"ID":"b58aef8f-f223-47d8-a2e6-4a80aeeeec42","Type":"ContainerDied","Data":"7b64e1aad32276f0362b5ee96e2453c18f2a0e0c6f7ada508ab81eab2ebb4231"} Nov 21 20:13:43 crc kubenswrapper[4727]: I1121 20:13:43.473221 4727 scope.go:117] "RemoveContainer" containerID="443c4ad2ecea2e9d360b9300cb0c384fb5afc27e6ac44394f985adbeab354a9c" Nov 21 20:13:44 crc kubenswrapper[4727]: I1121 20:13:44.479579 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" event={"ID":"b58aef8f-f223-47d8-a2e6-4a80aeeeec42","Type":"ContainerStarted","Data":"31ac78ad9674edf30089e0f6eccedebce8b14f0ff1682633a29c88cf87b54891"} Nov 21 20:13:49 crc kubenswrapper[4727]: I1121 20:13:49.443283 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-78d55d4968-j8p59" podUID="bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c" containerName="console" containerID="cri-o://1fa60f9f927940d5dbb6d9d50e04ee19779bd15e479eb9c0ad858c59183afef8" gracePeriod=15 Nov 21 20:13:49 crc kubenswrapper[4727]: I1121 20:13:49.781300 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-78d55d4968-j8p59_bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c/console/0.log" Nov 21 20:13:49 crc kubenswrapper[4727]: I1121 20:13:49.781577 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-78d55d4968-j8p59" Nov 21 20:13:49 crc kubenswrapper[4727]: I1121 20:13:49.818394 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c-console-serving-cert\") pod \"bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c\" (UID: \"bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c\") " Nov 21 20:13:49 crc kubenswrapper[4727]: I1121 20:13:49.818641 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c-service-ca\") pod \"bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c\" (UID: \"bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c\") " Nov 21 20:13:49 crc kubenswrapper[4727]: I1121 20:13:49.818676 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c-oauth-serving-cert\") pod \"bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c\" (UID: \"bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c\") " Nov 21 20:13:49 crc kubenswrapper[4727]: I1121 20:13:49.818723 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c-console-config\") pod \"bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c\" (UID: \"bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c\") " Nov 21 20:13:49 crc kubenswrapper[4727]: I1121 20:13:49.818799 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c-trusted-ca-bundle\") pod \"bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c\" (UID: \"bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c\") " Nov 21 20:13:49 crc kubenswrapper[4727]: I1121 20:13:49.819078 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c-console-oauth-config\") pod \"bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c\" (UID: \"bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c\") " Nov 21 20:13:49 crc kubenswrapper[4727]: I1121 20:13:49.819116 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrhw4\" (UniqueName: \"kubernetes.io/projected/bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c-kube-api-access-qrhw4\") pod \"bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c\" (UID: \"bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c\") " Nov 21 20:13:49 crc kubenswrapper[4727]: I1121 20:13:49.819534 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c-service-ca" (OuterVolumeSpecName: "service-ca") pod "bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c" (UID: "bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:13:49 crc kubenswrapper[4727]: I1121 20:13:49.819552 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c" (UID: "bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:13:49 crc kubenswrapper[4727]: I1121 20:13:49.819579 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c" (UID: "bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:13:49 crc kubenswrapper[4727]: I1121 20:13:49.819769 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c-console-config" (OuterVolumeSpecName: "console-config") pod "bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c" (UID: "bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:13:49 crc kubenswrapper[4727]: I1121 20:13:49.824737 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c" (UID: "bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:13:49 crc kubenswrapper[4727]: I1121 20:13:49.824741 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c-kube-api-access-qrhw4" (OuterVolumeSpecName: "kube-api-access-qrhw4") pod "bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c" (UID: "bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c"). InnerVolumeSpecName "kube-api-access-qrhw4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:13:49 crc kubenswrapper[4727]: I1121 20:13:49.828073 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c" (UID: "bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:13:49 crc kubenswrapper[4727]: I1121 20:13:49.920341 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrhw4\" (UniqueName: \"kubernetes.io/projected/bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c-kube-api-access-qrhw4\") on node \"crc\" DevicePath \"\"" Nov 21 20:13:49 crc kubenswrapper[4727]: I1121 20:13:49.920376 4727 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 20:13:49 crc kubenswrapper[4727]: I1121 20:13:49.920385 4727 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c-service-ca\") on node \"crc\" DevicePath \"\"" Nov 21 20:13:49 crc kubenswrapper[4727]: I1121 20:13:49.920396 4727 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 20:13:49 crc kubenswrapper[4727]: I1121 20:13:49.920404 4727 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c-console-config\") on node \"crc\" DevicePath \"\"" Nov 21 20:13:49 crc kubenswrapper[4727]: I1121 20:13:49.920412 4727 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:13:49 crc kubenswrapper[4727]: I1121 20:13:49.920419 4727 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 21 20:13:50 crc 
kubenswrapper[4727]: I1121 20:13:50.518218 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-78d55d4968-j8p59_bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c/console/0.log" Nov 21 20:13:50 crc kubenswrapper[4727]: I1121 20:13:50.518272 4727 generic.go:334] "Generic (PLEG): container finished" podID="bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c" containerID="1fa60f9f927940d5dbb6d9d50e04ee19779bd15e479eb9c0ad858c59183afef8" exitCode=2 Nov 21 20:13:50 crc kubenswrapper[4727]: I1121 20:13:50.518307 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-78d55d4968-j8p59" event={"ID":"bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c","Type":"ContainerDied","Data":"1fa60f9f927940d5dbb6d9d50e04ee19779bd15e479eb9c0ad858c59183afef8"} Nov 21 20:13:50 crc kubenswrapper[4727]: I1121 20:13:50.518335 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-78d55d4968-j8p59" event={"ID":"bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c","Type":"ContainerDied","Data":"c069c327dee45128dfae42b653c8150260b06636ad5678312977e602fe03bef5"} Nov 21 20:13:50 crc kubenswrapper[4727]: I1121 20:13:50.518352 4727 scope.go:117] "RemoveContainer" containerID="1fa60f9f927940d5dbb6d9d50e04ee19779bd15e479eb9c0ad858c59183afef8" Nov 21 20:13:50 crc kubenswrapper[4727]: I1121 20:13:50.518475 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-78d55d4968-j8p59" Nov 21 20:13:50 crc kubenswrapper[4727]: I1121 20:13:50.565243 4727 scope.go:117] "RemoveContainer" containerID="1fa60f9f927940d5dbb6d9d50e04ee19779bd15e479eb9c0ad858c59183afef8" Nov 21 20:13:50 crc kubenswrapper[4727]: E1121 20:13:50.565633 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1fa60f9f927940d5dbb6d9d50e04ee19779bd15e479eb9c0ad858c59183afef8\": container with ID starting with 1fa60f9f927940d5dbb6d9d50e04ee19779bd15e479eb9c0ad858c59183afef8 not found: ID does not exist" containerID="1fa60f9f927940d5dbb6d9d50e04ee19779bd15e479eb9c0ad858c59183afef8" Nov 21 20:13:50 crc kubenswrapper[4727]: I1121 20:13:50.565672 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fa60f9f927940d5dbb6d9d50e04ee19779bd15e479eb9c0ad858c59183afef8"} err="failed to get container status \"1fa60f9f927940d5dbb6d9d50e04ee19779bd15e479eb9c0ad858c59183afef8\": rpc error: code = NotFound desc = could not find container \"1fa60f9f927940d5dbb6d9d50e04ee19779bd15e479eb9c0ad858c59183afef8\": container with ID starting with 1fa60f9f927940d5dbb6d9d50e04ee19779bd15e479eb9c0ad858c59183afef8 not found: ID does not exist" Nov 21 20:13:50 crc kubenswrapper[4727]: I1121 20:13:50.572206 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-78d55d4968-j8p59"] Nov 21 20:13:50 crc kubenswrapper[4727]: I1121 20:13:50.578903 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-78d55d4968-j8p59"] Nov 21 20:13:51 crc kubenswrapper[4727]: I1121 20:13:51.507090 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c" path="/var/lib/kubelet/pods/bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c/volumes" Nov 21 20:15:00 crc kubenswrapper[4727]: I1121 20:15:00.143895 4727 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395935-8m9mp"] Nov 21 20:15:00 crc kubenswrapper[4727]: E1121 20:15:00.144810 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c" containerName="console" Nov 21 20:15:00 crc kubenswrapper[4727]: I1121 20:15:00.144822 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c" containerName="console" Nov 21 20:15:00 crc kubenswrapper[4727]: I1121 20:15:00.144940 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcbd2628-65d1-48a7-a6d8-60b4b89c8e8c" containerName="console" Nov 21 20:15:00 crc kubenswrapper[4727]: I1121 20:15:00.145343 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395935-8m9mp" Nov 21 20:15:00 crc kubenswrapper[4727]: I1121 20:15:00.147800 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 21 20:15:00 crc kubenswrapper[4727]: I1121 20:15:00.147833 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 21 20:15:00 crc kubenswrapper[4727]: I1121 20:15:00.160660 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/455875e8-9e5e-4129-b084-a4f48b8def31-config-volume\") pod \"collect-profiles-29395935-8m9mp\" (UID: \"455875e8-9e5e-4129-b084-a4f48b8def31\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395935-8m9mp" Nov 21 20:15:00 crc kubenswrapper[4727]: I1121 20:15:00.160807 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cnxq\" (UniqueName: 
\"kubernetes.io/projected/455875e8-9e5e-4129-b084-a4f48b8def31-kube-api-access-2cnxq\") pod \"collect-profiles-29395935-8m9mp\" (UID: \"455875e8-9e5e-4129-b084-a4f48b8def31\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395935-8m9mp" Nov 21 20:15:00 crc kubenswrapper[4727]: I1121 20:15:00.160859 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/455875e8-9e5e-4129-b084-a4f48b8def31-secret-volume\") pod \"collect-profiles-29395935-8m9mp\" (UID: \"455875e8-9e5e-4129-b084-a4f48b8def31\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395935-8m9mp" Nov 21 20:15:00 crc kubenswrapper[4727]: I1121 20:15:00.168512 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395935-8m9mp"] Nov 21 20:15:00 crc kubenswrapper[4727]: I1121 20:15:00.262194 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cnxq\" (UniqueName: \"kubernetes.io/projected/455875e8-9e5e-4129-b084-a4f48b8def31-kube-api-access-2cnxq\") pod \"collect-profiles-29395935-8m9mp\" (UID: \"455875e8-9e5e-4129-b084-a4f48b8def31\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395935-8m9mp" Nov 21 20:15:00 crc kubenswrapper[4727]: I1121 20:15:00.262573 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/455875e8-9e5e-4129-b084-a4f48b8def31-secret-volume\") pod \"collect-profiles-29395935-8m9mp\" (UID: \"455875e8-9e5e-4129-b084-a4f48b8def31\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395935-8m9mp" Nov 21 20:15:00 crc kubenswrapper[4727]: I1121 20:15:00.262613 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/455875e8-9e5e-4129-b084-a4f48b8def31-config-volume\") pod 
\"collect-profiles-29395935-8m9mp\" (UID: \"455875e8-9e5e-4129-b084-a4f48b8def31\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395935-8m9mp" Nov 21 20:15:00 crc kubenswrapper[4727]: I1121 20:15:00.263581 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/455875e8-9e5e-4129-b084-a4f48b8def31-config-volume\") pod \"collect-profiles-29395935-8m9mp\" (UID: \"455875e8-9e5e-4129-b084-a4f48b8def31\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395935-8m9mp" Nov 21 20:15:00 crc kubenswrapper[4727]: I1121 20:15:00.277809 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/455875e8-9e5e-4129-b084-a4f48b8def31-secret-volume\") pod \"collect-profiles-29395935-8m9mp\" (UID: \"455875e8-9e5e-4129-b084-a4f48b8def31\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395935-8m9mp" Nov 21 20:15:00 crc kubenswrapper[4727]: I1121 20:15:00.279324 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cnxq\" (UniqueName: \"kubernetes.io/projected/455875e8-9e5e-4129-b084-a4f48b8def31-kube-api-access-2cnxq\") pod \"collect-profiles-29395935-8m9mp\" (UID: \"455875e8-9e5e-4129-b084-a4f48b8def31\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395935-8m9mp" Nov 21 20:15:00 crc kubenswrapper[4727]: I1121 20:15:00.471769 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395935-8m9mp" Nov 21 20:15:00 crc kubenswrapper[4727]: I1121 20:15:00.707779 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395935-8m9mp"] Nov 21 20:15:00 crc kubenswrapper[4727]: I1121 20:15:00.961627 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395935-8m9mp" event={"ID":"455875e8-9e5e-4129-b084-a4f48b8def31","Type":"ContainerStarted","Data":"69f571e50f603390842540bc0b42f1436667183bdf982d2d4f86c787b0fe2bc0"} Nov 21 20:15:00 crc kubenswrapper[4727]: I1121 20:15:00.961676 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395935-8m9mp" event={"ID":"455875e8-9e5e-4129-b084-a4f48b8def31","Type":"ContainerStarted","Data":"2ba72455950b712fb4579a4e1c00dba3d926c94dbe5e13ec2417f9e952916d8c"} Nov 21 20:15:00 crc kubenswrapper[4727]: I1121 20:15:00.984402 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29395935-8m9mp" podStartSLOduration=0.984372019 podStartE2EDuration="984.372019ms" podCreationTimestamp="2025-11-21 20:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:15:00.976592841 +0000 UTC m=+506.162777935" watchObservedRunningTime="2025-11-21 20:15:00.984372019 +0000 UTC m=+506.170557103" Nov 21 20:15:01 crc kubenswrapper[4727]: I1121 20:15:01.968827 4727 generic.go:334] "Generic (PLEG): container finished" podID="455875e8-9e5e-4129-b084-a4f48b8def31" containerID="69f571e50f603390842540bc0b42f1436667183bdf982d2d4f86c787b0fe2bc0" exitCode=0 Nov 21 20:15:01 crc kubenswrapper[4727]: I1121 20:15:01.968889 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29395935-8m9mp" event={"ID":"455875e8-9e5e-4129-b084-a4f48b8def31","Type":"ContainerDied","Data":"69f571e50f603390842540bc0b42f1436667183bdf982d2d4f86c787b0fe2bc0"} Nov 21 20:15:03 crc kubenswrapper[4727]: I1121 20:15:03.250832 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395935-8m9mp" Nov 21 20:15:03 crc kubenswrapper[4727]: I1121 20:15:03.307098 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2cnxq\" (UniqueName: \"kubernetes.io/projected/455875e8-9e5e-4129-b084-a4f48b8def31-kube-api-access-2cnxq\") pod \"455875e8-9e5e-4129-b084-a4f48b8def31\" (UID: \"455875e8-9e5e-4129-b084-a4f48b8def31\") " Nov 21 20:15:03 crc kubenswrapper[4727]: I1121 20:15:03.307159 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/455875e8-9e5e-4129-b084-a4f48b8def31-config-volume\") pod \"455875e8-9e5e-4129-b084-a4f48b8def31\" (UID: \"455875e8-9e5e-4129-b084-a4f48b8def31\") " Nov 21 20:15:03 crc kubenswrapper[4727]: I1121 20:15:03.307222 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/455875e8-9e5e-4129-b084-a4f48b8def31-secret-volume\") pod \"455875e8-9e5e-4129-b084-a4f48b8def31\" (UID: \"455875e8-9e5e-4129-b084-a4f48b8def31\") " Nov 21 20:15:03 crc kubenswrapper[4727]: I1121 20:15:03.308257 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/455875e8-9e5e-4129-b084-a4f48b8def31-config-volume" (OuterVolumeSpecName: "config-volume") pod "455875e8-9e5e-4129-b084-a4f48b8def31" (UID: "455875e8-9e5e-4129-b084-a4f48b8def31"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:15:03 crc kubenswrapper[4727]: I1121 20:15:03.314106 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/455875e8-9e5e-4129-b084-a4f48b8def31-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "455875e8-9e5e-4129-b084-a4f48b8def31" (UID: "455875e8-9e5e-4129-b084-a4f48b8def31"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:15:03 crc kubenswrapper[4727]: I1121 20:15:03.314307 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/455875e8-9e5e-4129-b084-a4f48b8def31-kube-api-access-2cnxq" (OuterVolumeSpecName: "kube-api-access-2cnxq") pod "455875e8-9e5e-4129-b084-a4f48b8def31" (UID: "455875e8-9e5e-4129-b084-a4f48b8def31"). InnerVolumeSpecName "kube-api-access-2cnxq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:15:03 crc kubenswrapper[4727]: I1121 20:15:03.408810 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2cnxq\" (UniqueName: \"kubernetes.io/projected/455875e8-9e5e-4129-b084-a4f48b8def31-kube-api-access-2cnxq\") on node \"crc\" DevicePath \"\"" Nov 21 20:15:03 crc kubenswrapper[4727]: I1121 20:15:03.408858 4727 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/455875e8-9e5e-4129-b084-a4f48b8def31-config-volume\") on node \"crc\" DevicePath \"\"" Nov 21 20:15:03 crc kubenswrapper[4727]: I1121 20:15:03.408871 4727 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/455875e8-9e5e-4129-b084-a4f48b8def31-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 21 20:15:03 crc kubenswrapper[4727]: I1121 20:15:03.981272 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395935-8m9mp" 
event={"ID":"455875e8-9e5e-4129-b084-a4f48b8def31","Type":"ContainerDied","Data":"2ba72455950b712fb4579a4e1c00dba3d926c94dbe5e13ec2417f9e952916d8c"} Nov 21 20:15:03 crc kubenswrapper[4727]: I1121 20:15:03.981343 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ba72455950b712fb4579a4e1c00dba3d926c94dbe5e13ec2417f9e952916d8c" Nov 21 20:15:03 crc kubenswrapper[4727]: I1121 20:15:03.981316 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395935-8m9mp" Nov 21 20:15:43 crc kubenswrapper[4727]: I1121 20:15:43.334888 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 20:15:43 crc kubenswrapper[4727]: I1121 20:15:43.335477 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 20:15:58 crc kubenswrapper[4727]: I1121 20:15:58.908323 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b5lxh"] Nov 21 20:15:58 crc kubenswrapper[4727]: E1121 20:15:58.909120 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="455875e8-9e5e-4129-b084-a4f48b8def31" containerName="collect-profiles" Nov 21 20:15:58 crc kubenswrapper[4727]: I1121 20:15:58.909136 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="455875e8-9e5e-4129-b084-a4f48b8def31" containerName="collect-profiles" Nov 21 20:15:58 crc kubenswrapper[4727]: I1121 
20:15:58.909295 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="455875e8-9e5e-4129-b084-a4f48b8def31" containerName="collect-profiles" Nov 21 20:15:58 crc kubenswrapper[4727]: I1121 20:15:58.910346 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b5lxh" Nov 21 20:15:58 crc kubenswrapper[4727]: I1121 20:15:58.912666 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 21 20:15:58 crc kubenswrapper[4727]: I1121 20:15:58.918656 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b5lxh"] Nov 21 20:15:58 crc kubenswrapper[4727]: I1121 20:15:58.954510 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8ab1bf80-b740-45ee-b703-4190578fcf3e-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b5lxh\" (UID: \"8ab1bf80-b740-45ee-b703-4190578fcf3e\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b5lxh" Nov 21 20:15:58 crc kubenswrapper[4727]: I1121 20:15:58.954617 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77ds9\" (UniqueName: \"kubernetes.io/projected/8ab1bf80-b740-45ee-b703-4190578fcf3e-kube-api-access-77ds9\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b5lxh\" (UID: \"8ab1bf80-b740-45ee-b703-4190578fcf3e\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b5lxh" Nov 21 20:15:58 crc kubenswrapper[4727]: I1121 20:15:58.954810 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/8ab1bf80-b740-45ee-b703-4190578fcf3e-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b5lxh\" (UID: \"8ab1bf80-b740-45ee-b703-4190578fcf3e\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b5lxh" Nov 21 20:15:59 crc kubenswrapper[4727]: I1121 20:15:59.055729 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8ab1bf80-b740-45ee-b703-4190578fcf3e-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b5lxh\" (UID: \"8ab1bf80-b740-45ee-b703-4190578fcf3e\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b5lxh" Nov 21 20:15:59 crc kubenswrapper[4727]: I1121 20:15:59.055775 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8ab1bf80-b740-45ee-b703-4190578fcf3e-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b5lxh\" (UID: \"8ab1bf80-b740-45ee-b703-4190578fcf3e\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b5lxh" Nov 21 20:15:59 crc kubenswrapper[4727]: I1121 20:15:59.055832 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77ds9\" (UniqueName: \"kubernetes.io/projected/8ab1bf80-b740-45ee-b703-4190578fcf3e-kube-api-access-77ds9\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b5lxh\" (UID: \"8ab1bf80-b740-45ee-b703-4190578fcf3e\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b5lxh" Nov 21 20:15:59 crc kubenswrapper[4727]: I1121 20:15:59.056193 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8ab1bf80-b740-45ee-b703-4190578fcf3e-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b5lxh\" (UID: 
\"8ab1bf80-b740-45ee-b703-4190578fcf3e\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b5lxh" Nov 21 20:15:59 crc kubenswrapper[4727]: I1121 20:15:59.056220 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8ab1bf80-b740-45ee-b703-4190578fcf3e-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b5lxh\" (UID: \"8ab1bf80-b740-45ee-b703-4190578fcf3e\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b5lxh" Nov 21 20:15:59 crc kubenswrapper[4727]: I1121 20:15:59.075324 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77ds9\" (UniqueName: \"kubernetes.io/projected/8ab1bf80-b740-45ee-b703-4190578fcf3e-kube-api-access-77ds9\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b5lxh\" (UID: \"8ab1bf80-b740-45ee-b703-4190578fcf3e\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b5lxh" Nov 21 20:15:59 crc kubenswrapper[4727]: I1121 20:15:59.228192 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b5lxh" Nov 21 20:15:59 crc kubenswrapper[4727]: I1121 20:15:59.464341 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b5lxh"] Nov 21 20:16:00 crc kubenswrapper[4727]: I1121 20:16:00.372540 4727 generic.go:334] "Generic (PLEG): container finished" podID="8ab1bf80-b740-45ee-b703-4190578fcf3e" containerID="d837014188d88542fdef84c22701b1949daee257953bb1dc35ca8f0b2b91cd31" exitCode=0 Nov 21 20:16:00 crc kubenswrapper[4727]: I1121 20:16:00.372586 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b5lxh" event={"ID":"8ab1bf80-b740-45ee-b703-4190578fcf3e","Type":"ContainerDied","Data":"d837014188d88542fdef84c22701b1949daee257953bb1dc35ca8f0b2b91cd31"} Nov 21 20:16:00 crc kubenswrapper[4727]: I1121 20:16:00.372874 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b5lxh" event={"ID":"8ab1bf80-b740-45ee-b703-4190578fcf3e","Type":"ContainerStarted","Data":"aaf45d43e24835a62b0c43112ef55b8d5cc2ddda4845ac9924e60c53e52f7773"} Nov 21 20:16:00 crc kubenswrapper[4727]: I1121 20:16:00.374539 4727 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 21 20:16:02 crc kubenswrapper[4727]: I1121 20:16:02.385741 4727 generic.go:334] "Generic (PLEG): container finished" podID="8ab1bf80-b740-45ee-b703-4190578fcf3e" containerID="2621b4a9f65a360b8e6824f69d690d59218c796f45a40c6413d74803b1c67ecf" exitCode=0 Nov 21 20:16:02 crc kubenswrapper[4727]: I1121 20:16:02.385778 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b5lxh" 
event={"ID":"8ab1bf80-b740-45ee-b703-4190578fcf3e","Type":"ContainerDied","Data":"2621b4a9f65a360b8e6824f69d690d59218c796f45a40c6413d74803b1c67ecf"} Nov 21 20:16:03 crc kubenswrapper[4727]: I1121 20:16:03.394045 4727 generic.go:334] "Generic (PLEG): container finished" podID="8ab1bf80-b740-45ee-b703-4190578fcf3e" containerID="d10cbdb012bf514ab6a0784998a01a4eb9971a5f9a238bc939964054e4e51a45" exitCode=0 Nov 21 20:16:03 crc kubenswrapper[4727]: I1121 20:16:03.394159 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b5lxh" event={"ID":"8ab1bf80-b740-45ee-b703-4190578fcf3e","Type":"ContainerDied","Data":"d10cbdb012bf514ab6a0784998a01a4eb9971a5f9a238bc939964054e4e51a45"} Nov 21 20:16:04 crc kubenswrapper[4727]: I1121 20:16:04.659130 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b5lxh" Nov 21 20:16:04 crc kubenswrapper[4727]: I1121 20:16:04.836426 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8ab1bf80-b740-45ee-b703-4190578fcf3e-bundle\") pod \"8ab1bf80-b740-45ee-b703-4190578fcf3e\" (UID: \"8ab1bf80-b740-45ee-b703-4190578fcf3e\") " Nov 21 20:16:04 crc kubenswrapper[4727]: I1121 20:16:04.836811 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8ab1bf80-b740-45ee-b703-4190578fcf3e-util\") pod \"8ab1bf80-b740-45ee-b703-4190578fcf3e\" (UID: \"8ab1bf80-b740-45ee-b703-4190578fcf3e\") " Nov 21 20:16:04 crc kubenswrapper[4727]: I1121 20:16:04.836929 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77ds9\" (UniqueName: \"kubernetes.io/projected/8ab1bf80-b740-45ee-b703-4190578fcf3e-kube-api-access-77ds9\") pod \"8ab1bf80-b740-45ee-b703-4190578fcf3e\" (UID: 
\"8ab1bf80-b740-45ee-b703-4190578fcf3e\") " Nov 21 20:16:04 crc kubenswrapper[4727]: I1121 20:16:04.838746 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ab1bf80-b740-45ee-b703-4190578fcf3e-bundle" (OuterVolumeSpecName: "bundle") pod "8ab1bf80-b740-45ee-b703-4190578fcf3e" (UID: "8ab1bf80-b740-45ee-b703-4190578fcf3e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:16:04 crc kubenswrapper[4727]: I1121 20:16:04.845260 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ab1bf80-b740-45ee-b703-4190578fcf3e-kube-api-access-77ds9" (OuterVolumeSpecName: "kube-api-access-77ds9") pod "8ab1bf80-b740-45ee-b703-4190578fcf3e" (UID: "8ab1bf80-b740-45ee-b703-4190578fcf3e"). InnerVolumeSpecName "kube-api-access-77ds9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:16:04 crc kubenswrapper[4727]: I1121 20:16:04.851256 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ab1bf80-b740-45ee-b703-4190578fcf3e-util" (OuterVolumeSpecName: "util") pod "8ab1bf80-b740-45ee-b703-4190578fcf3e" (UID: "8ab1bf80-b740-45ee-b703-4190578fcf3e"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:16:04 crc kubenswrapper[4727]: I1121 20:16:04.939029 4727 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8ab1bf80-b740-45ee-b703-4190578fcf3e-util\") on node \"crc\" DevicePath \"\"" Nov 21 20:16:04 crc kubenswrapper[4727]: I1121 20:16:04.939442 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77ds9\" (UniqueName: \"kubernetes.io/projected/8ab1bf80-b740-45ee-b703-4190578fcf3e-kube-api-access-77ds9\") on node \"crc\" DevicePath \"\"" Nov 21 20:16:04 crc kubenswrapper[4727]: I1121 20:16:04.939593 4727 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8ab1bf80-b740-45ee-b703-4190578fcf3e-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:16:05 crc kubenswrapper[4727]: I1121 20:16:05.405423 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b5lxh" event={"ID":"8ab1bf80-b740-45ee-b703-4190578fcf3e","Type":"ContainerDied","Data":"aaf45d43e24835a62b0c43112ef55b8d5cc2ddda4845ac9924e60c53e52f7773"} Nov 21 20:16:05 crc kubenswrapper[4727]: I1121 20:16:05.405677 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aaf45d43e24835a62b0c43112ef55b8d5cc2ddda4845ac9924e60c53e52f7773" Nov 21 20:16:05 crc kubenswrapper[4727]: I1121 20:16:05.405471 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b5lxh" Nov 21 20:16:10 crc kubenswrapper[4727]: I1121 20:16:10.319504 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-tfd4j"] Nov 21 20:16:10 crc kubenswrapper[4727]: I1121 20:16:10.320483 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" podUID="70d2ca13-a8f7-43dc-8ad0-142d99ccde18" containerName="ovn-controller" containerID="cri-o://be1909b9178d7fb508200d34ad33f7d2c2aa2ae9756b15c17fd7c49bb019cc65" gracePeriod=30 Nov 21 20:16:10 crc kubenswrapper[4727]: I1121 20:16:10.320532 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" podUID="70d2ca13-a8f7-43dc-8ad0-142d99ccde18" containerName="sbdb" containerID="cri-o://68979c9b5ab69852e6cc4293ef0e77454dcbb7fdaeb61b7b6af5c1a3ac101e47" gracePeriod=30 Nov 21 20:16:10 crc kubenswrapper[4727]: I1121 20:16:10.320573 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" podUID="70d2ca13-a8f7-43dc-8ad0-142d99ccde18" containerName="nbdb" containerID="cri-o://f04b4d4857a8309f30f22774c8a2faeb752998b094d70898a986e04038771b01" gracePeriod=30 Nov 21 20:16:10 crc kubenswrapper[4727]: I1121 20:16:10.320613 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" podUID="70d2ca13-a8f7-43dc-8ad0-142d99ccde18" containerName="ovn-acl-logging" containerID="cri-o://cf216dac56f09909db88c9ea0ef9d0d4fb8bc2e63a02fd5631c7d4369080f88f" gracePeriod=30 Nov 21 20:16:10 crc kubenswrapper[4727]: I1121 20:16:10.320666 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" podUID="70d2ca13-a8f7-43dc-8ad0-142d99ccde18" 
containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://060f5b533d83fc7cf02d24bf7b12e2a4972ce1d7ed4e62c4c7788467ae2229c3" gracePeriod=30 Nov 21 20:16:10 crc kubenswrapper[4727]: I1121 20:16:10.320539 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" podUID="70d2ca13-a8f7-43dc-8ad0-142d99ccde18" containerName="northd" containerID="cri-o://08f02a4cae48dc4ac9d7da3d367fe5137fb012ee8a28f45c9cf4af098f1d93ce" gracePeriod=30 Nov 21 20:16:10 crc kubenswrapper[4727]: I1121 20:16:10.320603 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" podUID="70d2ca13-a8f7-43dc-8ad0-142d99ccde18" containerName="kube-rbac-proxy-node" containerID="cri-o://8fae7d81bbd4cc7c05885d66b30b4a4f5422c74aecc5439700c1fe7eb3c28a4b" gracePeriod=30 Nov 21 20:16:10 crc kubenswrapper[4727]: I1121 20:16:10.359519 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" podUID="70d2ca13-a8f7-43dc-8ad0-142d99ccde18" containerName="ovnkube-controller" containerID="cri-o://debe3de5c2229e322e3a60e6669b1769dcb37d47d93bdd8da29c3c465418f88c" gracePeriod=30 Nov 21 20:16:10 crc kubenswrapper[4727]: I1121 20:16:10.447248 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7rvdc_07dba644-eb6f-45c3-b373-7a1610c569aa/kube-multus/2.log" Nov 21 20:16:10 crc kubenswrapper[4727]: I1121 20:16:10.448033 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7rvdc_07dba644-eb6f-45c3-b373-7a1610c569aa/kube-multus/1.log" Nov 21 20:16:10 crc kubenswrapper[4727]: I1121 20:16:10.448095 4727 generic.go:334] "Generic (PLEG): container finished" podID="07dba644-eb6f-45c3-b373-7a1610c569aa" containerID="c0d56f4e8a0bf7ace78cc9404a73f24132ec2d0b20654b1aa5db4cab0db74936" exitCode=2 Nov 21 20:16:10 crc kubenswrapper[4727]: I1121 20:16:10.448172 4727 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7rvdc" event={"ID":"07dba644-eb6f-45c3-b373-7a1610c569aa","Type":"ContainerDied","Data":"c0d56f4e8a0bf7ace78cc9404a73f24132ec2d0b20654b1aa5db4cab0db74936"} Nov 21 20:16:10 crc kubenswrapper[4727]: I1121 20:16:10.448243 4727 scope.go:117] "RemoveContainer" containerID="aec58b6ba56dadda6cbc97446bc117681e597f80615120f259760193be269bfb" Nov 21 20:16:10 crc kubenswrapper[4727]: I1121 20:16:10.448753 4727 scope.go:117] "RemoveContainer" containerID="c0d56f4e8a0bf7ace78cc9404a73f24132ec2d0b20654b1aa5db4cab0db74936" Nov 21 20:16:10 crc kubenswrapper[4727]: E1121 20:16:10.450148 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-7rvdc_openshift-multus(07dba644-eb6f-45c3-b373-7a1610c569aa)\"" pod="openshift-multus/multus-7rvdc" podUID="07dba644-eb6f-45c3-b373-7a1610c569aa" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.463575 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tfd4j_70d2ca13-a8f7-43dc-8ad0-142d99ccde18/ovnkube-controller/3.log" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.469828 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tfd4j_70d2ca13-a8f7-43dc-8ad0-142d99ccde18/ovn-acl-logging/0.log" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.470415 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tfd4j_70d2ca13-a8f7-43dc-8ad0-142d99ccde18/ovn-controller/0.log" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.470900 4727 generic.go:334] "Generic (PLEG): container finished" podID="70d2ca13-a8f7-43dc-8ad0-142d99ccde18" containerID="debe3de5c2229e322e3a60e6669b1769dcb37d47d93bdd8da29c3c465418f88c" exitCode=0 Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 
20:16:11.470975 4727 generic.go:334] "Generic (PLEG): container finished" podID="70d2ca13-a8f7-43dc-8ad0-142d99ccde18" containerID="68979c9b5ab69852e6cc4293ef0e77454dcbb7fdaeb61b7b6af5c1a3ac101e47" exitCode=0 Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.470986 4727 generic.go:334] "Generic (PLEG): container finished" podID="70d2ca13-a8f7-43dc-8ad0-142d99ccde18" containerID="f04b4d4857a8309f30f22774c8a2faeb752998b094d70898a986e04038771b01" exitCode=0 Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.470997 4727 generic.go:334] "Generic (PLEG): container finished" podID="70d2ca13-a8f7-43dc-8ad0-142d99ccde18" containerID="08f02a4cae48dc4ac9d7da3d367fe5137fb012ee8a28f45c9cf4af098f1d93ce" exitCode=0 Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.471007 4727 generic.go:334] "Generic (PLEG): container finished" podID="70d2ca13-a8f7-43dc-8ad0-142d99ccde18" containerID="060f5b533d83fc7cf02d24bf7b12e2a4972ce1d7ed4e62c4c7788467ae2229c3" exitCode=0 Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.470998 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" event={"ID":"70d2ca13-a8f7-43dc-8ad0-142d99ccde18","Type":"ContainerDied","Data":"debe3de5c2229e322e3a60e6669b1769dcb37d47d93bdd8da29c3c465418f88c"} Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.471062 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" event={"ID":"70d2ca13-a8f7-43dc-8ad0-142d99ccde18","Type":"ContainerDied","Data":"68979c9b5ab69852e6cc4293ef0e77454dcbb7fdaeb61b7b6af5c1a3ac101e47"} Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.471077 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" event={"ID":"70d2ca13-a8f7-43dc-8ad0-142d99ccde18","Type":"ContainerDied","Data":"f04b4d4857a8309f30f22774c8a2faeb752998b094d70898a986e04038771b01"} Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.471088 4727 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" event={"ID":"70d2ca13-a8f7-43dc-8ad0-142d99ccde18","Type":"ContainerDied","Data":"08f02a4cae48dc4ac9d7da3d367fe5137fb012ee8a28f45c9cf4af098f1d93ce"} Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.471098 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" event={"ID":"70d2ca13-a8f7-43dc-8ad0-142d99ccde18","Type":"ContainerDied","Data":"060f5b533d83fc7cf02d24bf7b12e2a4972ce1d7ed4e62c4c7788467ae2229c3"} Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.471108 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" event={"ID":"70d2ca13-a8f7-43dc-8ad0-142d99ccde18","Type":"ContainerDied","Data":"8fae7d81bbd4cc7c05885d66b30b4a4f5422c74aecc5439700c1fe7eb3c28a4b"} Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.471016 4727 generic.go:334] "Generic (PLEG): container finished" podID="70d2ca13-a8f7-43dc-8ad0-142d99ccde18" containerID="8fae7d81bbd4cc7c05885d66b30b4a4f5422c74aecc5439700c1fe7eb3c28a4b" exitCode=0 Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.471127 4727 generic.go:334] "Generic (PLEG): container finished" podID="70d2ca13-a8f7-43dc-8ad0-142d99ccde18" containerID="cf216dac56f09909db88c9ea0ef9d0d4fb8bc2e63a02fd5631c7d4369080f88f" exitCode=143 Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.471141 4727 generic.go:334] "Generic (PLEG): container finished" podID="70d2ca13-a8f7-43dc-8ad0-142d99ccde18" containerID="be1909b9178d7fb508200d34ad33f7d2c2aa2ae9756b15c17fd7c49bb019cc65" exitCode=143 Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.471166 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" event={"ID":"70d2ca13-a8f7-43dc-8ad0-142d99ccde18","Type":"ContainerDied","Data":"cf216dac56f09909db88c9ea0ef9d0d4fb8bc2e63a02fd5631c7d4369080f88f"} Nov 21 20:16:11 crc 
kubenswrapper[4727]: I1121 20:16:11.471130 4727 scope.go:117] "RemoveContainer" containerID="1aea392a365aa66ac35b5cd36cd1aad398890771de41b8dc248e2f98a171f4ca" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.471219 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" event={"ID":"70d2ca13-a8f7-43dc-8ad0-142d99ccde18","Type":"ContainerDied","Data":"be1909b9178d7fb508200d34ad33f7d2c2aa2ae9756b15c17fd7c49bb019cc65"} Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.474891 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7rvdc_07dba644-eb6f-45c3-b373-7a1610c569aa/kube-multus/2.log" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.546287 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tfd4j_70d2ca13-a8f7-43dc-8ad0-142d99ccde18/ovn-acl-logging/0.log" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.546987 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tfd4j_70d2ca13-a8f7-43dc-8ad0-142d99ccde18/ovn-controller/0.log" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.547337 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.611676 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-k9qh6"] Nov 21 20:16:11 crc kubenswrapper[4727]: E1121 20:16:11.611946 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70d2ca13-a8f7-43dc-8ad0-142d99ccde18" containerName="northd" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.611982 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="70d2ca13-a8f7-43dc-8ad0-142d99ccde18" containerName="northd" Nov 21 20:16:11 crc kubenswrapper[4727]: E1121 20:16:11.611996 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70d2ca13-a8f7-43dc-8ad0-142d99ccde18" containerName="ovnkube-controller" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.612006 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="70d2ca13-a8f7-43dc-8ad0-142d99ccde18" containerName="ovnkube-controller" Nov 21 20:16:11 crc kubenswrapper[4727]: E1121 20:16:11.612016 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70d2ca13-a8f7-43dc-8ad0-142d99ccde18" containerName="kubecfg-setup" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.612026 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="70d2ca13-a8f7-43dc-8ad0-142d99ccde18" containerName="kubecfg-setup" Nov 21 20:16:11 crc kubenswrapper[4727]: E1121 20:16:11.612034 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70d2ca13-a8f7-43dc-8ad0-142d99ccde18" containerName="ovnkube-controller" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.612041 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="70d2ca13-a8f7-43dc-8ad0-142d99ccde18" containerName="ovnkube-controller" Nov 21 20:16:11 crc kubenswrapper[4727]: E1121 20:16:11.612055 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70d2ca13-a8f7-43dc-8ad0-142d99ccde18" 
containerName="ovn-controller" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.612061 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="70d2ca13-a8f7-43dc-8ad0-142d99ccde18" containerName="ovn-controller" Nov 21 20:16:11 crc kubenswrapper[4727]: E1121 20:16:11.612070 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70d2ca13-a8f7-43dc-8ad0-142d99ccde18" containerName="ovnkube-controller" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.612077 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="70d2ca13-a8f7-43dc-8ad0-142d99ccde18" containerName="ovnkube-controller" Nov 21 20:16:11 crc kubenswrapper[4727]: E1121 20:16:11.612088 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70d2ca13-a8f7-43dc-8ad0-142d99ccde18" containerName="ovn-acl-logging" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.612095 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="70d2ca13-a8f7-43dc-8ad0-142d99ccde18" containerName="ovn-acl-logging" Nov 21 20:16:11 crc kubenswrapper[4727]: E1121 20:16:11.612106 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70d2ca13-a8f7-43dc-8ad0-142d99ccde18" containerName="kube-rbac-proxy-node" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.612125 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="70d2ca13-a8f7-43dc-8ad0-142d99ccde18" containerName="kube-rbac-proxy-node" Nov 21 20:16:11 crc kubenswrapper[4727]: E1121 20:16:11.612134 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70d2ca13-a8f7-43dc-8ad0-142d99ccde18" containerName="kube-rbac-proxy-ovn-metrics" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.612142 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="70d2ca13-a8f7-43dc-8ad0-142d99ccde18" containerName="kube-rbac-proxy-ovn-metrics" Nov 21 20:16:11 crc kubenswrapper[4727]: E1121 20:16:11.612155 4727 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="70d2ca13-a8f7-43dc-8ad0-142d99ccde18" containerName="sbdb" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.612162 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="70d2ca13-a8f7-43dc-8ad0-142d99ccde18" containerName="sbdb" Nov 21 20:16:11 crc kubenswrapper[4727]: E1121 20:16:11.612174 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ab1bf80-b740-45ee-b703-4190578fcf3e" containerName="pull" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.612180 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ab1bf80-b740-45ee-b703-4190578fcf3e" containerName="pull" Nov 21 20:16:11 crc kubenswrapper[4727]: E1121 20:16:11.612189 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70d2ca13-a8f7-43dc-8ad0-142d99ccde18" containerName="ovnkube-controller" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.612196 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="70d2ca13-a8f7-43dc-8ad0-142d99ccde18" containerName="ovnkube-controller" Nov 21 20:16:11 crc kubenswrapper[4727]: E1121 20:16:11.612205 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ab1bf80-b740-45ee-b703-4190578fcf3e" containerName="extract" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.612212 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ab1bf80-b740-45ee-b703-4190578fcf3e" containerName="extract" Nov 21 20:16:11 crc kubenswrapper[4727]: E1121 20:16:11.612224 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70d2ca13-a8f7-43dc-8ad0-142d99ccde18" containerName="nbdb" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.612261 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="70d2ca13-a8f7-43dc-8ad0-142d99ccde18" containerName="nbdb" Nov 21 20:16:11 crc kubenswrapper[4727]: E1121 20:16:11.612272 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ab1bf80-b740-45ee-b703-4190578fcf3e" containerName="util" Nov 21 20:16:11 crc 
kubenswrapper[4727]: I1121 20:16:11.612280 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ab1bf80-b740-45ee-b703-4190578fcf3e" containerName="util" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.612413 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="70d2ca13-a8f7-43dc-8ad0-142d99ccde18" containerName="ovnkube-controller" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.612423 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="70d2ca13-a8f7-43dc-8ad0-142d99ccde18" containerName="kube-rbac-proxy-node" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.612437 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="70d2ca13-a8f7-43dc-8ad0-142d99ccde18" containerName="ovn-controller" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.612446 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="70d2ca13-a8f7-43dc-8ad0-142d99ccde18" containerName="ovn-acl-logging" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.612457 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="70d2ca13-a8f7-43dc-8ad0-142d99ccde18" containerName="ovnkube-controller" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.612466 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ab1bf80-b740-45ee-b703-4190578fcf3e" containerName="extract" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.612473 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="70d2ca13-a8f7-43dc-8ad0-142d99ccde18" containerName="nbdb" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.612484 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="70d2ca13-a8f7-43dc-8ad0-142d99ccde18" containerName="ovnkube-controller" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.612493 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="70d2ca13-a8f7-43dc-8ad0-142d99ccde18" containerName="ovnkube-controller" Nov 21 
20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.612501 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="70d2ca13-a8f7-43dc-8ad0-142d99ccde18" containerName="sbdb" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.612511 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="70d2ca13-a8f7-43dc-8ad0-142d99ccde18" containerName="kube-rbac-proxy-ovn-metrics" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.612520 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="70d2ca13-a8f7-43dc-8ad0-142d99ccde18" containerName="northd" Nov 21 20:16:11 crc kubenswrapper[4727]: E1121 20:16:11.612658 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70d2ca13-a8f7-43dc-8ad0-142d99ccde18" containerName="ovnkube-controller" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.612668 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="70d2ca13-a8f7-43dc-8ad0-142d99ccde18" containerName="ovnkube-controller" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.612794 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="70d2ca13-a8f7-43dc-8ad0-142d99ccde18" containerName="ovnkube-controller" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.615061 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.642004 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-var-lib-openvswitch\") pod \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.642057 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-log-socket\") pod \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.642120 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-run-systemd\") pod \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.642142 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-run-ovn\") pod \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.642157 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-host-run-netns\") pod \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.642176 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" 
(UniqueName: \"kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-host-cni-bin\") pod \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.642201 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-host-kubelet\") pod \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.642214 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-etc-openvswitch\") pod \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.642228 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-node-log\") pod \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.642243 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-run-openvswitch\") pod \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.642266 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rx4mt\" (UniqueName: \"kubernetes.io/projected/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-kube-api-access-rx4mt\") pod \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " Nov 21 20:16:11 crc kubenswrapper[4727]: 
I1121 20:16:11.642290 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-host-cni-netd\") pod \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.642314 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-systemd-units\") pod \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.642330 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-host-var-lib-cni-networks-ovn-kubernetes\") pod \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.642348 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-env-overrides\") pod \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.642385 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-host-slash\") pod \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.642409 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-ovnkube-config\") pod \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.642430 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-host-run-ovn-kubernetes\") pod \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.642448 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-ovn-node-metrics-cert\") pod \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.642471 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-ovnkube-script-lib\") pod \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\" (UID: \"70d2ca13-a8f7-43dc-8ad0-142d99ccde18\") " Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.642641 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/30dcc4a5-e602-4b92-be89-b83fc65e94b0-run-openvswitch\") pod \"ovnkube-node-k9qh6\" (UID: \"30dcc4a5-e602-4b92-be89-b83fc65e94b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.642666 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/30dcc4a5-e602-4b92-be89-b83fc65e94b0-ovnkube-script-lib\") pod \"ovnkube-node-k9qh6\" (UID: 
\"30dcc4a5-e602-4b92-be89-b83fc65e94b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.642683 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/30dcc4a5-e602-4b92-be89-b83fc65e94b0-log-socket\") pod \"ovnkube-node-k9qh6\" (UID: \"30dcc4a5-e602-4b92-be89-b83fc65e94b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.642702 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/30dcc4a5-e602-4b92-be89-b83fc65e94b0-systemd-units\") pod \"ovnkube-node-k9qh6\" (UID: \"30dcc4a5-e602-4b92-be89-b83fc65e94b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.642721 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/30dcc4a5-e602-4b92-be89-b83fc65e94b0-host-cni-bin\") pod \"ovnkube-node-k9qh6\" (UID: \"30dcc4a5-e602-4b92-be89-b83fc65e94b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.642747 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/30dcc4a5-e602-4b92-be89-b83fc65e94b0-run-ovn\") pod \"ovnkube-node-k9qh6\" (UID: \"30dcc4a5-e602-4b92-be89-b83fc65e94b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.642760 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgngh\" (UniqueName: \"kubernetes.io/projected/30dcc4a5-e602-4b92-be89-b83fc65e94b0-kube-api-access-zgngh\") pod \"ovnkube-node-k9qh6\" 
(UID: \"30dcc4a5-e602-4b92-be89-b83fc65e94b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.642776 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/30dcc4a5-e602-4b92-be89-b83fc65e94b0-host-run-ovn-kubernetes\") pod \"ovnkube-node-k9qh6\" (UID: \"30dcc4a5-e602-4b92-be89-b83fc65e94b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.642797 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/30dcc4a5-e602-4b92-be89-b83fc65e94b0-etc-openvswitch\") pod \"ovnkube-node-k9qh6\" (UID: \"30dcc4a5-e602-4b92-be89-b83fc65e94b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.642815 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/30dcc4a5-e602-4b92-be89-b83fc65e94b0-run-systemd\") pod \"ovnkube-node-k9qh6\" (UID: \"30dcc4a5-e602-4b92-be89-b83fc65e94b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.642848 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/30dcc4a5-e602-4b92-be89-b83fc65e94b0-node-log\") pod \"ovnkube-node-k9qh6\" (UID: \"30dcc4a5-e602-4b92-be89-b83fc65e94b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.642868 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/30dcc4a5-e602-4b92-be89-b83fc65e94b0-var-lib-openvswitch\") 
pod \"ovnkube-node-k9qh6\" (UID: \"30dcc4a5-e602-4b92-be89-b83fc65e94b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.642889 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/30dcc4a5-e602-4b92-be89-b83fc65e94b0-host-kubelet\") pod \"ovnkube-node-k9qh6\" (UID: \"30dcc4a5-e602-4b92-be89-b83fc65e94b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.642916 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/30dcc4a5-e602-4b92-be89-b83fc65e94b0-ovn-node-metrics-cert\") pod \"ovnkube-node-k9qh6\" (UID: \"30dcc4a5-e602-4b92-be89-b83fc65e94b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.642941 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/30dcc4a5-e602-4b92-be89-b83fc65e94b0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-k9qh6\" (UID: \"30dcc4a5-e602-4b92-be89-b83fc65e94b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.642963 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/30dcc4a5-e602-4b92-be89-b83fc65e94b0-host-cni-netd\") pod \"ovnkube-node-k9qh6\" (UID: \"30dcc4a5-e602-4b92-be89-b83fc65e94b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.643000 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/30dcc4a5-e602-4b92-be89-b83fc65e94b0-env-overrides\") pod \"ovnkube-node-k9qh6\" (UID: \"30dcc4a5-e602-4b92-be89-b83fc65e94b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.643023 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/30dcc4a5-e602-4b92-be89-b83fc65e94b0-ovnkube-config\") pod \"ovnkube-node-k9qh6\" (UID: \"30dcc4a5-e602-4b92-be89-b83fc65e94b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.643044 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/30dcc4a5-e602-4b92-be89-b83fc65e94b0-host-run-netns\") pod \"ovnkube-node-k9qh6\" (UID: \"30dcc4a5-e602-4b92-be89-b83fc65e94b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.643061 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/30dcc4a5-e602-4b92-be89-b83fc65e94b0-host-slash\") pod \"ovnkube-node-k9qh6\" (UID: \"30dcc4a5-e602-4b92-be89-b83fc65e94b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.643180 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "70d2ca13-a8f7-43dc-8ad0-142d99ccde18" (UID: "70d2ca13-a8f7-43dc-8ad0-142d99ccde18"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.643209 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-log-socket" (OuterVolumeSpecName: "log-socket") pod "70d2ca13-a8f7-43dc-8ad0-142d99ccde18" (UID: "70d2ca13-a8f7-43dc-8ad0-142d99ccde18"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.643699 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "70d2ca13-a8f7-43dc-8ad0-142d99ccde18" (UID: "70d2ca13-a8f7-43dc-8ad0-142d99ccde18"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.643776 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "70d2ca13-a8f7-43dc-8ad0-142d99ccde18" (UID: "70d2ca13-a8f7-43dc-8ad0-142d99ccde18"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.643808 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "70d2ca13-a8f7-43dc-8ad0-142d99ccde18" (UID: "70d2ca13-a8f7-43dc-8ad0-142d99ccde18"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.643829 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "70d2ca13-a8f7-43dc-8ad0-142d99ccde18" (UID: "70d2ca13-a8f7-43dc-8ad0-142d99ccde18"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.643840 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-node-log" (OuterVolumeSpecName: "node-log") pod "70d2ca13-a8f7-43dc-8ad0-142d99ccde18" (UID: "70d2ca13-a8f7-43dc-8ad0-142d99ccde18"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.643857 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "70d2ca13-a8f7-43dc-8ad0-142d99ccde18" (UID: "70d2ca13-a8f7-43dc-8ad0-142d99ccde18"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.643884 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "70d2ca13-a8f7-43dc-8ad0-142d99ccde18" (UID: "70d2ca13-a8f7-43dc-8ad0-142d99ccde18"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.643904 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "70d2ca13-a8f7-43dc-8ad0-142d99ccde18" (UID: "70d2ca13-a8f7-43dc-8ad0-142d99ccde18"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.643927 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-host-slash" (OuterVolumeSpecName: "host-slash") pod "70d2ca13-a8f7-43dc-8ad0-142d99ccde18" (UID: "70d2ca13-a8f7-43dc-8ad0-142d99ccde18"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.643913 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "70d2ca13-a8f7-43dc-8ad0-142d99ccde18" (UID: "70d2ca13-a8f7-43dc-8ad0-142d99ccde18"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.643947 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "70d2ca13-a8f7-43dc-8ad0-142d99ccde18" (UID: "70d2ca13-a8f7-43dc-8ad0-142d99ccde18"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.643922 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "70d2ca13-a8f7-43dc-8ad0-142d99ccde18" (UID: "70d2ca13-a8f7-43dc-8ad0-142d99ccde18"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.644210 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "70d2ca13-a8f7-43dc-8ad0-142d99ccde18" (UID: "70d2ca13-a8f7-43dc-8ad0-142d99ccde18"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.644294 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "70d2ca13-a8f7-43dc-8ad0-142d99ccde18" (UID: "70d2ca13-a8f7-43dc-8ad0-142d99ccde18"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.644647 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "70d2ca13-a8f7-43dc-8ad0-142d99ccde18" (UID: "70d2ca13-a8f7-43dc-8ad0-142d99ccde18"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.661710 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "70d2ca13-a8f7-43dc-8ad0-142d99ccde18" (UID: "70d2ca13-a8f7-43dc-8ad0-142d99ccde18"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.662193 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-kube-api-access-rx4mt" (OuterVolumeSpecName: "kube-api-access-rx4mt") pod "70d2ca13-a8f7-43dc-8ad0-142d99ccde18" (UID: "70d2ca13-a8f7-43dc-8ad0-142d99ccde18"). InnerVolumeSpecName "kube-api-access-rx4mt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.666379 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "70d2ca13-a8f7-43dc-8ad0-142d99ccde18" (UID: "70d2ca13-a8f7-43dc-8ad0-142d99ccde18"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.744233 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/30dcc4a5-e602-4b92-be89-b83fc65e94b0-run-ovn\") pod \"ovnkube-node-k9qh6\" (UID: \"30dcc4a5-e602-4b92-be89-b83fc65e94b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.744286 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgngh\" (UniqueName: \"kubernetes.io/projected/30dcc4a5-e602-4b92-be89-b83fc65e94b0-kube-api-access-zgngh\") pod \"ovnkube-node-k9qh6\" (UID: \"30dcc4a5-e602-4b92-be89-b83fc65e94b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.744398 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/30dcc4a5-e602-4b92-be89-b83fc65e94b0-host-run-ovn-kubernetes\") pod \"ovnkube-node-k9qh6\" (UID: \"30dcc4a5-e602-4b92-be89-b83fc65e94b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.744419 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/30dcc4a5-e602-4b92-be89-b83fc65e94b0-etc-openvswitch\") pod \"ovnkube-node-k9qh6\" (UID: \"30dcc4a5-e602-4b92-be89-b83fc65e94b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.744439 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/30dcc4a5-e602-4b92-be89-b83fc65e94b0-run-systemd\") pod \"ovnkube-node-k9qh6\" (UID: \"30dcc4a5-e602-4b92-be89-b83fc65e94b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" Nov 21 
20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.744466 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/30dcc4a5-e602-4b92-be89-b83fc65e94b0-node-log\") pod \"ovnkube-node-k9qh6\" (UID: \"30dcc4a5-e602-4b92-be89-b83fc65e94b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.744499 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/30dcc4a5-e602-4b92-be89-b83fc65e94b0-host-run-ovn-kubernetes\") pod \"ovnkube-node-k9qh6\" (UID: \"30dcc4a5-e602-4b92-be89-b83fc65e94b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.744515 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/30dcc4a5-e602-4b92-be89-b83fc65e94b0-var-lib-openvswitch\") pod \"ovnkube-node-k9qh6\" (UID: \"30dcc4a5-e602-4b92-be89-b83fc65e94b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.744533 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/30dcc4a5-e602-4b92-be89-b83fc65e94b0-host-kubelet\") pod \"ovnkube-node-k9qh6\" (UID: \"30dcc4a5-e602-4b92-be89-b83fc65e94b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.744553 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/30dcc4a5-e602-4b92-be89-b83fc65e94b0-ovn-node-metrics-cert\") pod \"ovnkube-node-k9qh6\" (UID: \"30dcc4a5-e602-4b92-be89-b83fc65e94b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.744559 4727 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/30dcc4a5-e602-4b92-be89-b83fc65e94b0-node-log\") pod \"ovnkube-node-k9qh6\" (UID: \"30dcc4a5-e602-4b92-be89-b83fc65e94b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.744574 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/30dcc4a5-e602-4b92-be89-b83fc65e94b0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-k9qh6\" (UID: \"30dcc4a5-e602-4b92-be89-b83fc65e94b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.744590 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/30dcc4a5-e602-4b92-be89-b83fc65e94b0-host-cni-netd\") pod \"ovnkube-node-k9qh6\" (UID: \"30dcc4a5-e602-4b92-be89-b83fc65e94b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.744590 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/30dcc4a5-e602-4b92-be89-b83fc65e94b0-host-kubelet\") pod \"ovnkube-node-k9qh6\" (UID: \"30dcc4a5-e602-4b92-be89-b83fc65e94b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.744603 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/30dcc4a5-e602-4b92-be89-b83fc65e94b0-env-overrides\") pod \"ovnkube-node-k9qh6\" (UID: \"30dcc4a5-e602-4b92-be89-b83fc65e94b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.744595 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/30dcc4a5-e602-4b92-be89-b83fc65e94b0-etc-openvswitch\") pod \"ovnkube-node-k9qh6\" (UID: \"30dcc4a5-e602-4b92-be89-b83fc65e94b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.744609 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/30dcc4a5-e602-4b92-be89-b83fc65e94b0-run-systemd\") pod \"ovnkube-node-k9qh6\" (UID: \"30dcc4a5-e602-4b92-be89-b83fc65e94b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.744656 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/30dcc4a5-e602-4b92-be89-b83fc65e94b0-host-cni-netd\") pod \"ovnkube-node-k9qh6\" (UID: \"30dcc4a5-e602-4b92-be89-b83fc65e94b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.744623 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/30dcc4a5-e602-4b92-be89-b83fc65e94b0-var-lib-openvswitch\") pod \"ovnkube-node-k9qh6\" (UID: \"30dcc4a5-e602-4b92-be89-b83fc65e94b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.744623 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/30dcc4a5-e602-4b92-be89-b83fc65e94b0-ovnkube-config\") pod \"ovnkube-node-k9qh6\" (UID: \"30dcc4a5-e602-4b92-be89-b83fc65e94b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.744702 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/30dcc4a5-e602-4b92-be89-b83fc65e94b0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-k9qh6\" (UID: \"30dcc4a5-e602-4b92-be89-b83fc65e94b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.744785 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/30dcc4a5-e602-4b92-be89-b83fc65e94b0-host-run-netns\") pod \"ovnkube-node-k9qh6\" (UID: \"30dcc4a5-e602-4b92-be89-b83fc65e94b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.744823 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/30dcc4a5-e602-4b92-be89-b83fc65e94b0-host-slash\") pod \"ovnkube-node-k9qh6\" (UID: \"30dcc4a5-e602-4b92-be89-b83fc65e94b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.744913 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/30dcc4a5-e602-4b92-be89-b83fc65e94b0-run-openvswitch\") pod \"ovnkube-node-k9qh6\" (UID: \"30dcc4a5-e602-4b92-be89-b83fc65e94b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.744938 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/30dcc4a5-e602-4b92-be89-b83fc65e94b0-ovnkube-script-lib\") pod \"ovnkube-node-k9qh6\" (UID: \"30dcc4a5-e602-4b92-be89-b83fc65e94b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.744984 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/30dcc4a5-e602-4b92-be89-b83fc65e94b0-log-socket\") pod \"ovnkube-node-k9qh6\" (UID: \"30dcc4a5-e602-4b92-be89-b83fc65e94b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.745033 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/30dcc4a5-e602-4b92-be89-b83fc65e94b0-systemd-units\") pod \"ovnkube-node-k9qh6\" (UID: \"30dcc4a5-e602-4b92-be89-b83fc65e94b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.745088 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/30dcc4a5-e602-4b92-be89-b83fc65e94b0-host-cni-bin\") pod \"ovnkube-node-k9qh6\" (UID: \"30dcc4a5-e602-4b92-be89-b83fc65e94b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.745206 4727 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-run-systemd\") on node \"crc\" DevicePath \"\"" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.745224 4727 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.745236 4727 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-host-run-netns\") on node \"crc\" DevicePath \"\"" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.745248 4727 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-host-cni-bin\") on node \"crc\" 
DevicePath \"\"" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.745262 4727 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-host-kubelet\") on node \"crc\" DevicePath \"\"" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.745273 4727 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-node-log\") on node \"crc\" DevicePath \"\"" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.745285 4727 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-run-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.745296 4727 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.745309 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rx4mt\" (UniqueName: \"kubernetes.io/projected/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-kube-api-access-rx4mt\") on node \"crc\" DevicePath \"\"" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.745320 4727 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-host-cni-netd\") on node \"crc\" DevicePath \"\"" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.745332 4727 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-systemd-units\") on node \"crc\" DevicePath \"\"" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.745338 4727 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/30dcc4a5-e602-4b92-be89-b83fc65e94b0-env-overrides\") pod \"ovnkube-node-k9qh6\" (UID: \"30dcc4a5-e602-4b92-be89-b83fc65e94b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.745344 4727 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.745358 4727 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.745373 4727 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-host-slash\") on node \"crc\" DevicePath \"\"" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.745377 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/30dcc4a5-e602-4b92-be89-b83fc65e94b0-run-openvswitch\") pod \"ovnkube-node-k9qh6\" (UID: \"30dcc4a5-e602-4b92-be89-b83fc65e94b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.745386 4727 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.745399 4727 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.745414 4727 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.745420 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/30dcc4a5-e602-4b92-be89-b83fc65e94b0-ovnkube-config\") pod \"ovnkube-node-k9qh6\" (UID: \"30dcc4a5-e602-4b92-be89-b83fc65e94b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.745428 4727 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.745422 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/30dcc4a5-e602-4b92-be89-b83fc65e94b0-host-slash\") pod \"ovnkube-node-k9qh6\" (UID: \"30dcc4a5-e602-4b92-be89-b83fc65e94b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.745441 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/30dcc4a5-e602-4b92-be89-b83fc65e94b0-log-socket\") pod \"ovnkube-node-k9qh6\" (UID: \"30dcc4a5-e602-4b92-be89-b83fc65e94b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.745485 4727 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.745499 4727 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/70d2ca13-a8f7-43dc-8ad0-142d99ccde18-log-socket\") on node \"crc\" DevicePath \"\"" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.745403 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/30dcc4a5-e602-4b92-be89-b83fc65e94b0-host-run-netns\") pod \"ovnkube-node-k9qh6\" (UID: \"30dcc4a5-e602-4b92-be89-b83fc65e94b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.745517 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/30dcc4a5-e602-4b92-be89-b83fc65e94b0-host-cni-bin\") pod \"ovnkube-node-k9qh6\" (UID: \"30dcc4a5-e602-4b92-be89-b83fc65e94b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.745552 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/30dcc4a5-e602-4b92-be89-b83fc65e94b0-systemd-units\") pod \"ovnkube-node-k9qh6\" (UID: \"30dcc4a5-e602-4b92-be89-b83fc65e94b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.745679 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/30dcc4a5-e602-4b92-be89-b83fc65e94b0-run-ovn\") pod \"ovnkube-node-k9qh6\" (UID: \"30dcc4a5-e602-4b92-be89-b83fc65e94b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.745920 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/30dcc4a5-e602-4b92-be89-b83fc65e94b0-ovnkube-script-lib\") pod \"ovnkube-node-k9qh6\" (UID: \"30dcc4a5-e602-4b92-be89-b83fc65e94b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.750565 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/30dcc4a5-e602-4b92-be89-b83fc65e94b0-ovn-node-metrics-cert\") pod \"ovnkube-node-k9qh6\" (UID: \"30dcc4a5-e602-4b92-be89-b83fc65e94b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.782033 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgngh\" (UniqueName: \"kubernetes.io/projected/30dcc4a5-e602-4b92-be89-b83fc65e94b0-kube-api-access-zgngh\") pod \"ovnkube-node-k9qh6\" (UID: \"30dcc4a5-e602-4b92-be89-b83fc65e94b0\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" Nov 21 20:16:11 crc kubenswrapper[4727]: I1121 20:16:11.930392 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" Nov 21 20:16:12 crc kubenswrapper[4727]: I1121 20:16:12.480636 4727 generic.go:334] "Generic (PLEG): container finished" podID="30dcc4a5-e602-4b92-be89-b83fc65e94b0" containerID="9afd916f9473d0f6527feb014cfdbc01af6bcbb02c87a0bc355c1a84f3672857" exitCode=0 Nov 21 20:16:12 crc kubenswrapper[4727]: I1121 20:16:12.480700 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" event={"ID":"30dcc4a5-e602-4b92-be89-b83fc65e94b0","Type":"ContainerDied","Data":"9afd916f9473d0f6527feb014cfdbc01af6bcbb02c87a0bc355c1a84f3672857"} Nov 21 20:16:12 crc kubenswrapper[4727]: I1121 20:16:12.480729 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" event={"ID":"30dcc4a5-e602-4b92-be89-b83fc65e94b0","Type":"ContainerStarted","Data":"847101a8251a0d18bf0190ebcfacc96cd323ddf4e92aef14a8ee993bcb7e0e12"} Nov 21 20:16:12 crc kubenswrapper[4727]: I1121 20:16:12.486574 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tfd4j_70d2ca13-a8f7-43dc-8ad0-142d99ccde18/ovn-acl-logging/0.log" Nov 21 20:16:12 crc kubenswrapper[4727]: I1121 20:16:12.487091 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tfd4j_70d2ca13-a8f7-43dc-8ad0-142d99ccde18/ovn-controller/0.log" Nov 21 20:16:12 crc kubenswrapper[4727]: I1121 20:16:12.487424 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" event={"ID":"70d2ca13-a8f7-43dc-8ad0-142d99ccde18","Type":"ContainerDied","Data":"427acc50520025071d33ce155ca67808c9d7cf83b3761db2a5f2fc6e0bb22af9"} Nov 21 20:16:12 crc kubenswrapper[4727]: I1121 20:16:12.487470 4727 scope.go:117] "RemoveContainer" containerID="debe3de5c2229e322e3a60e6669b1769dcb37d47d93bdd8da29c3c465418f88c" Nov 21 20:16:12 crc kubenswrapper[4727]: I1121 20:16:12.487531 4727 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-tfd4j" Nov 21 20:16:12 crc kubenswrapper[4727]: I1121 20:16:12.530362 4727 scope.go:117] "RemoveContainer" containerID="68979c9b5ab69852e6cc4293ef0e77454dcbb7fdaeb61b7b6af5c1a3ac101e47" Nov 21 20:16:12 crc kubenswrapper[4727]: I1121 20:16:12.570327 4727 scope.go:117] "RemoveContainer" containerID="f04b4d4857a8309f30f22774c8a2faeb752998b094d70898a986e04038771b01" Nov 21 20:16:12 crc kubenswrapper[4727]: I1121 20:16:12.599843 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-tfd4j"] Nov 21 20:16:12 crc kubenswrapper[4727]: I1121 20:16:12.605259 4727 scope.go:117] "RemoveContainer" containerID="08f02a4cae48dc4ac9d7da3d367fe5137fb012ee8a28f45c9cf4af098f1d93ce" Nov 21 20:16:12 crc kubenswrapper[4727]: I1121 20:16:12.609402 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-tfd4j"] Nov 21 20:16:12 crc kubenswrapper[4727]: I1121 20:16:12.626466 4727 scope.go:117] "RemoveContainer" containerID="060f5b533d83fc7cf02d24bf7b12e2a4972ce1d7ed4e62c4c7788467ae2229c3" Nov 21 20:16:12 crc kubenswrapper[4727]: I1121 20:16:12.645935 4727 scope.go:117] "RemoveContainer" containerID="8fae7d81bbd4cc7c05885d66b30b4a4f5422c74aecc5439700c1fe7eb3c28a4b" Nov 21 20:16:12 crc kubenswrapper[4727]: I1121 20:16:12.690316 4727 scope.go:117] "RemoveContainer" containerID="cf216dac56f09909db88c9ea0ef9d0d4fb8bc2e63a02fd5631c7d4369080f88f" Nov 21 20:16:12 crc kubenswrapper[4727]: I1121 20:16:12.716603 4727 scope.go:117] "RemoveContainer" containerID="be1909b9178d7fb508200d34ad33f7d2c2aa2ae9756b15c17fd7c49bb019cc65" Nov 21 20:16:12 crc kubenswrapper[4727]: I1121 20:16:12.748606 4727 scope.go:117] "RemoveContainer" containerID="f5fa8be5e6ea2c20dd597fb7ef2fdd3355045b310d1a99bb1a89a8207e505577" Nov 21 20:16:13 crc kubenswrapper[4727]: I1121 20:16:13.335110 4727 patch_prober.go:28] interesting 
pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 20:16:13 crc kubenswrapper[4727]: I1121 20:16:13.335174 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 20:16:13 crc kubenswrapper[4727]: I1121 20:16:13.504602 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70d2ca13-a8f7-43dc-8ad0-142d99ccde18" path="/var/lib/kubelet/pods/70d2ca13-a8f7-43dc-8ad0-142d99ccde18/volumes" Nov 21 20:16:13 crc kubenswrapper[4727]: I1121 20:16:13.505674 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" event={"ID":"30dcc4a5-e602-4b92-be89-b83fc65e94b0","Type":"ContainerStarted","Data":"dde9c89e620432537e0355a655aa8618848644157f566394712c78c66f8b1045"} Nov 21 20:16:13 crc kubenswrapper[4727]: I1121 20:16:13.505704 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" event={"ID":"30dcc4a5-e602-4b92-be89-b83fc65e94b0","Type":"ContainerStarted","Data":"e801dac225e2f834d7347ca501191da3781c7aa2581b1ef0930a1fb540ab5630"} Nov 21 20:16:13 crc kubenswrapper[4727]: I1121 20:16:13.505715 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" event={"ID":"30dcc4a5-e602-4b92-be89-b83fc65e94b0","Type":"ContainerStarted","Data":"8cb979450ca040a32b67eb0d87ca9b72e0c8dce764ef7c6fb25c08d01a05fb03"} Nov 21 20:16:13 crc kubenswrapper[4727]: I1121 20:16:13.505724 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" event={"ID":"30dcc4a5-e602-4b92-be89-b83fc65e94b0","Type":"ContainerStarted","Data":"f2ef125fce8b7f7ea0cee7310b4facb9dc9d17b67ce60d420bf8a32c84c4d7be"} Nov 21 20:16:13 crc kubenswrapper[4727]: I1121 20:16:13.505733 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" event={"ID":"30dcc4a5-e602-4b92-be89-b83fc65e94b0","Type":"ContainerStarted","Data":"da9d8cd0212403bad417370a597bb2fc4ba5b6cfd0914395264929434997211f"} Nov 21 20:16:13 crc kubenswrapper[4727]: I1121 20:16:13.505742 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" event={"ID":"30dcc4a5-e602-4b92-be89-b83fc65e94b0","Type":"ContainerStarted","Data":"e5e3f822d597e9502321ef899b74e41696e63990544485d7b2e305e29d9764dd"} Nov 21 20:16:16 crc kubenswrapper[4727]: I1121 20:16:16.466680 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-rxhl2"] Nov 21 20:16:16 crc kubenswrapper[4727]: I1121 20:16:16.467691 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-rxhl2" Nov 21 20:16:16 crc kubenswrapper[4727]: I1121 20:16:16.471004 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Nov 21 20:16:16 crc kubenswrapper[4727]: I1121 20:16:16.471038 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-hl22r" Nov 21 20:16:16 crc kubenswrapper[4727]: I1121 20:16:16.471138 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Nov 21 20:16:16 crc kubenswrapper[4727]: I1121 20:16:16.507206 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lgwx\" (UniqueName: \"kubernetes.io/projected/b7c4b477-dfcc-4cf5-ac76-eef0917d866c-kube-api-access-2lgwx\") pod \"obo-prometheus-operator-668cf9dfbb-rxhl2\" (UID: \"b7c4b477-dfcc-4cf5-ac76-eef0917d866c\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-rxhl2" Nov 21 20:16:16 crc kubenswrapper[4727]: I1121 20:16:16.515800 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" event={"ID":"30dcc4a5-e602-4b92-be89-b83fc65e94b0","Type":"ContainerStarted","Data":"7959d40126067887096bda18ab26270c9f530d47f78eea983242199f91245f1c"} Nov 21 20:16:16 crc kubenswrapper[4727]: I1121 20:16:16.601700 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-77844f949f-jx5h8"] Nov 21 20:16:16 crc kubenswrapper[4727]: I1121 20:16:16.602458 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77844f949f-jx5h8" Nov 21 20:16:16 crc kubenswrapper[4727]: I1121 20:16:16.604365 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Nov 21 20:16:16 crc kubenswrapper[4727]: I1121 20:16:16.604705 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-kp2l5" Nov 21 20:16:16 crc kubenswrapper[4727]: I1121 20:16:16.607994 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lgwx\" (UniqueName: \"kubernetes.io/projected/b7c4b477-dfcc-4cf5-ac76-eef0917d866c-kube-api-access-2lgwx\") pod \"obo-prometheus-operator-668cf9dfbb-rxhl2\" (UID: \"b7c4b477-dfcc-4cf5-ac76-eef0917d866c\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-rxhl2" Nov 21 20:16:16 crc kubenswrapper[4727]: I1121 20:16:16.611755 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-77844f949f-d9l9z"] Nov 21 20:16:16 crc kubenswrapper[4727]: I1121 20:16:16.612456 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77844f949f-d9l9z" Nov 21 20:16:16 crc kubenswrapper[4727]: I1121 20:16:16.657430 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lgwx\" (UniqueName: \"kubernetes.io/projected/b7c4b477-dfcc-4cf5-ac76-eef0917d866c-kube-api-access-2lgwx\") pod \"obo-prometheus-operator-668cf9dfbb-rxhl2\" (UID: \"b7c4b477-dfcc-4cf5-ac76-eef0917d866c\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-rxhl2" Nov 21 20:16:16 crc kubenswrapper[4727]: I1121 20:16:16.709408 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f2258805-8f2d-44e0-adc5-18e68c485378-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-77844f949f-jx5h8\" (UID: \"f2258805-8f2d-44e0-adc5-18e68c485378\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-77844f949f-jx5h8" Nov 21 20:16:16 crc kubenswrapper[4727]: I1121 20:16:16.709478 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a9691dad-91e9-4701-bc56-f92b96693c15-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-77844f949f-d9l9z\" (UID: \"a9691dad-91e9-4701-bc56-f92b96693c15\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-77844f949f-d9l9z" Nov 21 20:16:16 crc kubenswrapper[4727]: I1121 20:16:16.709501 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a9691dad-91e9-4701-bc56-f92b96693c15-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-77844f949f-d9l9z\" (UID: \"a9691dad-91e9-4701-bc56-f92b96693c15\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-77844f949f-d9l9z" Nov 21 20:16:16 crc 
kubenswrapper[4727]: I1121 20:16:16.709623 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f2258805-8f2d-44e0-adc5-18e68c485378-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-77844f949f-jx5h8\" (UID: \"f2258805-8f2d-44e0-adc5-18e68c485378\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-77844f949f-jx5h8" Nov 21 20:16:16 crc kubenswrapper[4727]: I1121 20:16:16.784222 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-rxhl2" Nov 21 20:16:16 crc kubenswrapper[4727]: I1121 20:16:16.806609 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-8tk4t"] Nov 21 20:16:16 crc kubenswrapper[4727]: I1121 20:16:16.807441 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-8tk4t" Nov 21 20:16:16 crc kubenswrapper[4727]: I1121 20:16:16.810218 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-956x7" Nov 21 20:16:16 crc kubenswrapper[4727]: I1121 20:16:16.810387 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Nov 21 20:16:16 crc kubenswrapper[4727]: I1121 20:16:16.810544 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a9691dad-91e9-4701-bc56-f92b96693c15-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-77844f949f-d9l9z\" (UID: \"a9691dad-91e9-4701-bc56-f92b96693c15\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-77844f949f-d9l9z" Nov 21 20:16:16 crc kubenswrapper[4727]: I1121 20:16:16.810594 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a9691dad-91e9-4701-bc56-f92b96693c15-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-77844f949f-d9l9z\" (UID: \"a9691dad-91e9-4701-bc56-f92b96693c15\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-77844f949f-d9l9z" Nov 21 20:16:16 crc kubenswrapper[4727]: I1121 20:16:16.810617 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f2258805-8f2d-44e0-adc5-18e68c485378-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-77844f949f-jx5h8\" (UID: \"f2258805-8f2d-44e0-adc5-18e68c485378\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-77844f949f-jx5h8" Nov 21 20:16:16 crc kubenswrapper[4727]: I1121 20:16:16.810650 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dtn2\" (UniqueName: \"kubernetes.io/projected/2092c64d-e6a4-4d8a-9e92-65ea330e7ef0-kube-api-access-6dtn2\") pod \"observability-operator-d8bb48f5d-8tk4t\" (UID: \"2092c64d-e6a4-4d8a-9e92-65ea330e7ef0\") " pod="openshift-operators/observability-operator-d8bb48f5d-8tk4t" Nov 21 20:16:16 crc kubenswrapper[4727]: I1121 20:16:16.810724 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/2092c64d-e6a4-4d8a-9e92-65ea330e7ef0-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-8tk4t\" (UID: \"2092c64d-e6a4-4d8a-9e92-65ea330e7ef0\") " pod="openshift-operators/observability-operator-d8bb48f5d-8tk4t" Nov 21 20:16:16 crc kubenswrapper[4727]: I1121 20:16:16.810745 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f2258805-8f2d-44e0-adc5-18e68c485378-apiservice-cert\") pod 
\"obo-prometheus-operator-admission-webhook-77844f949f-jx5h8\" (UID: \"f2258805-8f2d-44e0-adc5-18e68c485378\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-77844f949f-jx5h8" Nov 21 20:16:16 crc kubenswrapper[4727]: I1121 20:16:16.819375 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f2258805-8f2d-44e0-adc5-18e68c485378-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-77844f949f-jx5h8\" (UID: \"f2258805-8f2d-44e0-adc5-18e68c485378\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-77844f949f-jx5h8" Nov 21 20:16:16 crc kubenswrapper[4727]: I1121 20:16:16.820993 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a9691dad-91e9-4701-bc56-f92b96693c15-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-77844f949f-d9l9z\" (UID: \"a9691dad-91e9-4701-bc56-f92b96693c15\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-77844f949f-d9l9z" Nov 21 20:16:16 crc kubenswrapper[4727]: I1121 20:16:16.825379 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f2258805-8f2d-44e0-adc5-18e68c485378-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-77844f949f-jx5h8\" (UID: \"f2258805-8f2d-44e0-adc5-18e68c485378\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-77844f949f-jx5h8" Nov 21 20:16:16 crc kubenswrapper[4727]: I1121 20:16:16.825458 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a9691dad-91e9-4701-bc56-f92b96693c15-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-77844f949f-d9l9z\" (UID: \"a9691dad-91e9-4701-bc56-f92b96693c15\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-77844f949f-d9l9z" Nov 21 
20:16:16 crc kubenswrapper[4727]: E1121 20:16:16.828799 4727 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-rxhl2_openshift-operators_b7c4b477-dfcc-4cf5-ac76-eef0917d866c_0(5130a8848f6622df6f4a50dccdae207caeba9b9bedf698907d1eea62302b846e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 21 20:16:16 crc kubenswrapper[4727]: E1121 20:16:16.828924 4727 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-rxhl2_openshift-operators_b7c4b477-dfcc-4cf5-ac76-eef0917d866c_0(5130a8848f6622df6f4a50dccdae207caeba9b9bedf698907d1eea62302b846e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-rxhl2" Nov 21 20:16:16 crc kubenswrapper[4727]: E1121 20:16:16.829075 4727 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-rxhl2_openshift-operators_b7c4b477-dfcc-4cf5-ac76-eef0917d866c_0(5130a8848f6622df6f4a50dccdae207caeba9b9bedf698907d1eea62302b846e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-rxhl2" Nov 21 20:16:16 crc kubenswrapper[4727]: E1121 20:16:16.829186 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-668cf9dfbb-rxhl2_openshift-operators(b7c4b477-dfcc-4cf5-ac76-eef0917d866c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-668cf9dfbb-rxhl2_openshift-operators(b7c4b477-dfcc-4cf5-ac76-eef0917d866c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-rxhl2_openshift-operators_b7c4b477-dfcc-4cf5-ac76-eef0917d866c_0(5130a8848f6622df6f4a50dccdae207caeba9b9bedf698907d1eea62302b846e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-rxhl2" podUID="b7c4b477-dfcc-4cf5-ac76-eef0917d866c" Nov 21 20:16:16 crc kubenswrapper[4727]: I1121 20:16:16.912652 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dtn2\" (UniqueName: \"kubernetes.io/projected/2092c64d-e6a4-4d8a-9e92-65ea330e7ef0-kube-api-access-6dtn2\") pod \"observability-operator-d8bb48f5d-8tk4t\" (UID: \"2092c64d-e6a4-4d8a-9e92-65ea330e7ef0\") " pod="openshift-operators/observability-operator-d8bb48f5d-8tk4t" Nov 21 20:16:16 crc kubenswrapper[4727]: I1121 20:16:16.912734 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/2092c64d-e6a4-4d8a-9e92-65ea330e7ef0-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-8tk4t\" (UID: \"2092c64d-e6a4-4d8a-9e92-65ea330e7ef0\") " pod="openshift-operators/observability-operator-d8bb48f5d-8tk4t" Nov 21 20:16:16 crc kubenswrapper[4727]: I1121 20:16:16.916017 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77844f949f-jx5h8" Nov 21 20:16:16 crc kubenswrapper[4727]: I1121 20:16:16.916444 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/2092c64d-e6a4-4d8a-9e92-65ea330e7ef0-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-8tk4t\" (UID: \"2092c64d-e6a4-4d8a-9e92-65ea330e7ef0\") " pod="openshift-operators/observability-operator-d8bb48f5d-8tk4t" Nov 21 20:16:16 crc kubenswrapper[4727]: I1121 20:16:16.928437 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77844f949f-d9l9z" Nov 21 20:16:16 crc kubenswrapper[4727]: I1121 20:16:16.936157 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dtn2\" (UniqueName: \"kubernetes.io/projected/2092c64d-e6a4-4d8a-9e92-65ea330e7ef0-kube-api-access-6dtn2\") pod \"observability-operator-d8bb48f5d-8tk4t\" (UID: \"2092c64d-e6a4-4d8a-9e92-65ea330e7ef0\") " pod="openshift-operators/observability-operator-d8bb48f5d-8tk4t" Nov 21 20:16:16 crc kubenswrapper[4727]: E1121 20:16:16.956949 4727 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-77844f949f-jx5h8_openshift-operators_f2258805-8f2d-44e0-adc5-18e68c485378_0(dbba623e4a3ab84856d9d0c6d6ecd23ef480c148e0530a9fb76528e3a91f23af): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Nov 21 20:16:16 crc kubenswrapper[4727]: E1121 20:16:16.957115 4727 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-77844f949f-jx5h8_openshift-operators_f2258805-8f2d-44e0-adc5-18e68c485378_0(dbba623e4a3ab84856d9d0c6d6ecd23ef480c148e0530a9fb76528e3a91f23af): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77844f949f-jx5h8" Nov 21 20:16:16 crc kubenswrapper[4727]: E1121 20:16:16.957198 4727 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-77844f949f-jx5h8_openshift-operators_f2258805-8f2d-44e0-adc5-18e68c485378_0(dbba623e4a3ab84856d9d0c6d6ecd23ef480c148e0530a9fb76528e3a91f23af): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77844f949f-jx5h8" Nov 21 20:16:16 crc kubenswrapper[4727]: E1121 20:16:16.957309 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-77844f949f-jx5h8_openshift-operators(f2258805-8f2d-44e0-adc5-18e68c485378)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-77844f949f-jx5h8_openshift-operators(f2258805-8f2d-44e0-adc5-18e68c485378)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-77844f949f-jx5h8_openshift-operators_f2258805-8f2d-44e0-adc5-18e68c485378_0(dbba623e4a3ab84856d9d0c6d6ecd23ef480c148e0530a9fb76528e3a91f23af): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77844f949f-jx5h8" podUID="f2258805-8f2d-44e0-adc5-18e68c485378" Nov 21 20:16:16 crc kubenswrapper[4727]: E1121 20:16:16.969588 4727 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-77844f949f-d9l9z_openshift-operators_a9691dad-91e9-4701-bc56-f92b96693c15_0(5015fc9294251ce86b4648ec157e5158ea5cfefea1a7544949a79dd1913d1c5a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 21 20:16:16 crc kubenswrapper[4727]: E1121 20:16:16.969658 4727 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-77844f949f-d9l9z_openshift-operators_a9691dad-91e9-4701-bc56-f92b96693c15_0(5015fc9294251ce86b4648ec157e5158ea5cfefea1a7544949a79dd1913d1c5a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77844f949f-d9l9z" Nov 21 20:16:16 crc kubenswrapper[4727]: E1121 20:16:16.969679 4727 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-77844f949f-d9l9z_openshift-operators_a9691dad-91e9-4701-bc56-f92b96693c15_0(5015fc9294251ce86b4648ec157e5158ea5cfefea1a7544949a79dd1913d1c5a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-77844f949f-d9l9z" Nov 21 20:16:16 crc kubenswrapper[4727]: E1121 20:16:16.969726 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-77844f949f-d9l9z_openshift-operators(a9691dad-91e9-4701-bc56-f92b96693c15)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-77844f949f-d9l9z_openshift-operators(a9691dad-91e9-4701-bc56-f92b96693c15)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-77844f949f-d9l9z_openshift-operators_a9691dad-91e9-4701-bc56-f92b96693c15_0(5015fc9294251ce86b4648ec157e5158ea5cfefea1a7544949a79dd1913d1c5a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77844f949f-d9l9z" podUID="a9691dad-91e9-4701-bc56-f92b96693c15" Nov 21 20:16:17 crc kubenswrapper[4727]: I1121 20:16:17.000990 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5446b9c989-c9l74"] Nov 21 20:16:17 crc kubenswrapper[4727]: I1121 20:16:17.001749 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-c9l74" Nov 21 20:16:17 crc kubenswrapper[4727]: I1121 20:16:17.004429 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-chblt" Nov 21 20:16:17 crc kubenswrapper[4727]: I1121 20:16:17.036599 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/30b54f9a-8377-4b99-92fe-ccbef59d7c7a-openshift-service-ca\") pod \"perses-operator-5446b9c989-c9l74\" (UID: \"30b54f9a-8377-4b99-92fe-ccbef59d7c7a\") " pod="openshift-operators/perses-operator-5446b9c989-c9l74" Nov 21 20:16:17 crc kubenswrapper[4727]: I1121 20:16:17.036659 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5crc6\" (UniqueName: \"kubernetes.io/projected/30b54f9a-8377-4b99-92fe-ccbef59d7c7a-kube-api-access-5crc6\") pod \"perses-operator-5446b9c989-c9l74\" (UID: \"30b54f9a-8377-4b99-92fe-ccbef59d7c7a\") " pod="openshift-operators/perses-operator-5446b9c989-c9l74" Nov 21 20:16:17 crc kubenswrapper[4727]: I1121 20:16:17.137834 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/30b54f9a-8377-4b99-92fe-ccbef59d7c7a-openshift-service-ca\") pod \"perses-operator-5446b9c989-c9l74\" (UID: \"30b54f9a-8377-4b99-92fe-ccbef59d7c7a\") " pod="openshift-operators/perses-operator-5446b9c989-c9l74" Nov 21 20:16:17 crc kubenswrapper[4727]: I1121 20:16:17.137912 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5crc6\" (UniqueName: \"kubernetes.io/projected/30b54f9a-8377-4b99-92fe-ccbef59d7c7a-kube-api-access-5crc6\") pod \"perses-operator-5446b9c989-c9l74\" (UID: \"30b54f9a-8377-4b99-92fe-ccbef59d7c7a\") " pod="openshift-operators/perses-operator-5446b9c989-c9l74" Nov 21 
20:16:17 crc kubenswrapper[4727]: I1121 20:16:17.138946 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/30b54f9a-8377-4b99-92fe-ccbef59d7c7a-openshift-service-ca\") pod \"perses-operator-5446b9c989-c9l74\" (UID: \"30b54f9a-8377-4b99-92fe-ccbef59d7c7a\") " pod="openshift-operators/perses-operator-5446b9c989-c9l74" Nov 21 20:16:17 crc kubenswrapper[4727]: I1121 20:16:17.158840 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5crc6\" (UniqueName: \"kubernetes.io/projected/30b54f9a-8377-4b99-92fe-ccbef59d7c7a-kube-api-access-5crc6\") pod \"perses-operator-5446b9c989-c9l74\" (UID: \"30b54f9a-8377-4b99-92fe-ccbef59d7c7a\") " pod="openshift-operators/perses-operator-5446b9c989-c9l74" Nov 21 20:16:17 crc kubenswrapper[4727]: I1121 20:16:17.176514 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-8tk4t" Nov 21 20:16:17 crc kubenswrapper[4727]: E1121 20:16:17.215555 4727 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-8tk4t_openshift-operators_2092c64d-e6a4-4d8a-9e92-65ea330e7ef0_0(31f1108b3c303bc4f20287bbe84622c1b6669c3323eb7352a18714de42a5b5c7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 21 20:16:17 crc kubenswrapper[4727]: E1121 20:16:17.215725 4727 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-8tk4t_openshift-operators_2092c64d-e6a4-4d8a-9e92-65ea330e7ef0_0(31f1108b3c303bc4f20287bbe84622c1b6669c3323eb7352a18714de42a5b5c7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-d8bb48f5d-8tk4t" Nov 21 20:16:17 crc kubenswrapper[4727]: E1121 20:16:17.215814 4727 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-8tk4t_openshift-operators_2092c64d-e6a4-4d8a-9e92-65ea330e7ef0_0(31f1108b3c303bc4f20287bbe84622c1b6669c3323eb7352a18714de42a5b5c7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-8tk4t" Nov 21 20:16:17 crc kubenswrapper[4727]: E1121 20:16:17.215930 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-d8bb48f5d-8tk4t_openshift-operators(2092c64d-e6a4-4d8a-9e92-65ea330e7ef0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-d8bb48f5d-8tk4t_openshift-operators(2092c64d-e6a4-4d8a-9e92-65ea330e7ef0)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-8tk4t_openshift-operators_2092c64d-e6a4-4d8a-9e92-65ea330e7ef0_0(31f1108b3c303bc4f20287bbe84622c1b6669c3323eb7352a18714de42a5b5c7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-d8bb48f5d-8tk4t" podUID="2092c64d-e6a4-4d8a-9e92-65ea330e7ef0" Nov 21 20:16:17 crc kubenswrapper[4727]: I1121 20:16:17.353334 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-c9l74" Nov 21 20:16:17 crc kubenswrapper[4727]: E1121 20:16:17.377171 4727 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-c9l74_openshift-operators_30b54f9a-8377-4b99-92fe-ccbef59d7c7a_0(1087eb4de8a51328b88cf3574c49e5593b3438c6a056242c52b972a681cbd169): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 21 20:16:17 crc kubenswrapper[4727]: E1121 20:16:17.377404 4727 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-c9l74_openshift-operators_30b54f9a-8377-4b99-92fe-ccbef59d7c7a_0(1087eb4de8a51328b88cf3574c49e5593b3438c6a056242c52b972a681cbd169): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-c9l74" Nov 21 20:16:17 crc kubenswrapper[4727]: E1121 20:16:17.377501 4727 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-c9l74_openshift-operators_30b54f9a-8377-4b99-92fe-ccbef59d7c7a_0(1087eb4de8a51328b88cf3574c49e5593b3438c6a056242c52b972a681cbd169): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5446b9c989-c9l74" Nov 21 20:16:17 crc kubenswrapper[4727]: E1121 20:16:17.377630 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5446b9c989-c9l74_openshift-operators(30b54f9a-8377-4b99-92fe-ccbef59d7c7a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5446b9c989-c9l74_openshift-operators(30b54f9a-8377-4b99-92fe-ccbef59d7c7a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-c9l74_openshift-operators_30b54f9a-8377-4b99-92fe-ccbef59d7c7a_0(1087eb4de8a51328b88cf3574c49e5593b3438c6a056242c52b972a681cbd169): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5446b9c989-c9l74" podUID="30b54f9a-8377-4b99-92fe-ccbef59d7c7a" Nov 21 20:16:18 crc kubenswrapper[4727]: I1121 20:16:18.529892 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" event={"ID":"30dcc4a5-e602-4b92-be89-b83fc65e94b0","Type":"ContainerStarted","Data":"4df7444edd63d7c8a34a9048d7db6bd028c1b191a7136d9be695133645dcf894"} Nov 21 20:16:18 crc kubenswrapper[4727]: I1121 20:16:18.530449 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" Nov 21 20:16:18 crc kubenswrapper[4727]: I1121 20:16:18.530461 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" Nov 21 20:16:18 crc kubenswrapper[4727]: I1121 20:16:18.530470 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" Nov 21 20:16:18 crc kubenswrapper[4727]: I1121 20:16:18.563255 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" Nov 21 20:16:18 crc 
kubenswrapper[4727]: I1121 20:16:18.565260 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" podStartSLOduration=7.565243214 podStartE2EDuration="7.565243214s" podCreationTimestamp="2025-11-21 20:16:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:16:18.561105619 +0000 UTC m=+583.747290673" watchObservedRunningTime="2025-11-21 20:16:18.565243214 +0000 UTC m=+583.751428258" Nov 21 20:16:18 crc kubenswrapper[4727]: I1121 20:16:18.565428 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" Nov 21 20:16:18 crc kubenswrapper[4727]: I1121 20:16:18.919654 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-77844f949f-d9l9z"] Nov 21 20:16:18 crc kubenswrapper[4727]: I1121 20:16:18.919813 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77844f949f-d9l9z" Nov 21 20:16:18 crc kubenswrapper[4727]: I1121 20:16:18.920265 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77844f949f-d9l9z" Nov 21 20:16:18 crc kubenswrapper[4727]: I1121 20:16:18.941076 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-rxhl2"] Nov 21 20:16:18 crc kubenswrapper[4727]: I1121 20:16:18.941178 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-rxhl2" Nov 21 20:16:18 crc kubenswrapper[4727]: I1121 20:16:18.941546 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-rxhl2" Nov 21 20:16:18 crc kubenswrapper[4727]: I1121 20:16:18.991571 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-c9l74"] Nov 21 20:16:18 crc kubenswrapper[4727]: I1121 20:16:18.991694 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-c9l74" Nov 21 20:16:18 crc kubenswrapper[4727]: I1121 20:16:18.992151 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-c9l74" Nov 21 20:16:19 crc kubenswrapper[4727]: I1121 20:16:19.005007 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-8tk4t"] Nov 21 20:16:19 crc kubenswrapper[4727]: I1121 20:16:19.005326 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-8tk4t" Nov 21 20:16:19 crc kubenswrapper[4727]: I1121 20:16:19.005884 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-8tk4t" Nov 21 20:16:19 crc kubenswrapper[4727]: I1121 20:16:19.016229 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-77844f949f-jx5h8"] Nov 21 20:16:19 crc kubenswrapper[4727]: I1121 20:16:19.016342 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77844f949f-jx5h8" Nov 21 20:16:19 crc kubenswrapper[4727]: I1121 20:16:19.016752 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77844f949f-jx5h8" Nov 21 20:16:19 crc kubenswrapper[4727]: E1121 20:16:19.054124 4727 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-77844f949f-d9l9z_openshift-operators_a9691dad-91e9-4701-bc56-f92b96693c15_0(f24fbbfd3a9e4b05a4667da7f174b4bb2a2a16d932246f2271eb0de97d790480): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 21 20:16:19 crc kubenswrapper[4727]: E1121 20:16:19.054204 4727 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-77844f949f-d9l9z_openshift-operators_a9691dad-91e9-4701-bc56-f92b96693c15_0(f24fbbfd3a9e4b05a4667da7f174b4bb2a2a16d932246f2271eb0de97d790480): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77844f949f-d9l9z" Nov 21 20:16:19 crc kubenswrapper[4727]: E1121 20:16:19.054233 4727 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-77844f949f-d9l9z_openshift-operators_a9691dad-91e9-4701-bc56-f92b96693c15_0(f24fbbfd3a9e4b05a4667da7f174b4bb2a2a16d932246f2271eb0de97d790480): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-77844f949f-d9l9z" Nov 21 20:16:19 crc kubenswrapper[4727]: E1121 20:16:19.054290 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-77844f949f-d9l9z_openshift-operators(a9691dad-91e9-4701-bc56-f92b96693c15)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-77844f949f-d9l9z_openshift-operators(a9691dad-91e9-4701-bc56-f92b96693c15)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-77844f949f-d9l9z_openshift-operators_a9691dad-91e9-4701-bc56-f92b96693c15_0(f24fbbfd3a9e4b05a4667da7f174b4bb2a2a16d932246f2271eb0de97d790480): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77844f949f-d9l9z" podUID="a9691dad-91e9-4701-bc56-f92b96693c15" Nov 21 20:16:19 crc kubenswrapper[4727]: E1121 20:16:19.120550 4727 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-rxhl2_openshift-operators_b7c4b477-dfcc-4cf5-ac76-eef0917d866c_0(a03ca0ff7a3718714983339badcc60ea10a84942fc3c0330a75dc021525b3456): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 21 20:16:19 crc kubenswrapper[4727]: E1121 20:16:19.120627 4727 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-rxhl2_openshift-operators_b7c4b477-dfcc-4cf5-ac76-eef0917d866c_0(a03ca0ff7a3718714983339badcc60ea10a84942fc3c0330a75dc021525b3456): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-rxhl2" Nov 21 20:16:19 crc kubenswrapper[4727]: E1121 20:16:19.120654 4727 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-rxhl2_openshift-operators_b7c4b477-dfcc-4cf5-ac76-eef0917d866c_0(a03ca0ff7a3718714983339badcc60ea10a84942fc3c0330a75dc021525b3456): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-rxhl2" Nov 21 20:16:19 crc kubenswrapper[4727]: E1121 20:16:19.120715 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-668cf9dfbb-rxhl2_openshift-operators(b7c4b477-dfcc-4cf5-ac76-eef0917d866c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-668cf9dfbb-rxhl2_openshift-operators(b7c4b477-dfcc-4cf5-ac76-eef0917d866c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-rxhl2_openshift-operators_b7c4b477-dfcc-4cf5-ac76-eef0917d866c_0(a03ca0ff7a3718714983339badcc60ea10a84942fc3c0330a75dc021525b3456): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-rxhl2" podUID="b7c4b477-dfcc-4cf5-ac76-eef0917d866c" Nov 21 20:16:19 crc kubenswrapper[4727]: E1121 20:16:19.127351 4727 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-c9l74_openshift-operators_30b54f9a-8377-4b99-92fe-ccbef59d7c7a_0(fb57e00634ee1d66ac4d023a42dcfbc8aeafb47756712b22efd2fda97f6a4067): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Nov 21 20:16:19 crc kubenswrapper[4727]: E1121 20:16:19.127881 4727 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-c9l74_openshift-operators_30b54f9a-8377-4b99-92fe-ccbef59d7c7a_0(fb57e00634ee1d66ac4d023a42dcfbc8aeafb47756712b22efd2fda97f6a4067): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-c9l74" Nov 21 20:16:19 crc kubenswrapper[4727]: E1121 20:16:19.128034 4727 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-c9l74_openshift-operators_30b54f9a-8377-4b99-92fe-ccbef59d7c7a_0(fb57e00634ee1d66ac4d023a42dcfbc8aeafb47756712b22efd2fda97f6a4067): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-c9l74" Nov 21 20:16:19 crc kubenswrapper[4727]: E1121 20:16:19.128257 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5446b9c989-c9l74_openshift-operators(30b54f9a-8377-4b99-92fe-ccbef59d7c7a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5446b9c989-c9l74_openshift-operators(30b54f9a-8377-4b99-92fe-ccbef59d7c7a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-c9l74_openshift-operators_30b54f9a-8377-4b99-92fe-ccbef59d7c7a_0(fb57e00634ee1d66ac4d023a42dcfbc8aeafb47756712b22efd2fda97f6a4067): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/perses-operator-5446b9c989-c9l74" podUID="30b54f9a-8377-4b99-92fe-ccbef59d7c7a" Nov 21 20:16:19 crc kubenswrapper[4727]: E1121 20:16:19.133115 4727 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-8tk4t_openshift-operators_2092c64d-e6a4-4d8a-9e92-65ea330e7ef0_0(1dab602a9a6a8d78805241cfca6404280d5fd4afea41e4f3b765ceb4ea8a906a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 21 20:16:19 crc kubenswrapper[4727]: E1121 20:16:19.133172 4727 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-8tk4t_openshift-operators_2092c64d-e6a4-4d8a-9e92-65ea330e7ef0_0(1dab602a9a6a8d78805241cfca6404280d5fd4afea41e4f3b765ceb4ea8a906a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-8tk4t" Nov 21 20:16:19 crc kubenswrapper[4727]: E1121 20:16:19.133196 4727 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-8tk4t_openshift-operators_2092c64d-e6a4-4d8a-9e92-65ea330e7ef0_0(1dab602a9a6a8d78805241cfca6404280d5fd4afea41e4f3b765ceb4ea8a906a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-d8bb48f5d-8tk4t" Nov 21 20:16:19 crc kubenswrapper[4727]: E1121 20:16:19.133240 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-d8bb48f5d-8tk4t_openshift-operators(2092c64d-e6a4-4d8a-9e92-65ea330e7ef0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-d8bb48f5d-8tk4t_openshift-operators(2092c64d-e6a4-4d8a-9e92-65ea330e7ef0)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-8tk4t_openshift-operators_2092c64d-e6a4-4d8a-9e92-65ea330e7ef0_0(1dab602a9a6a8d78805241cfca6404280d5fd4afea41e4f3b765ceb4ea8a906a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-d8bb48f5d-8tk4t" podUID="2092c64d-e6a4-4d8a-9e92-65ea330e7ef0" Nov 21 20:16:19 crc kubenswrapper[4727]: E1121 20:16:19.139474 4727 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-77844f949f-jx5h8_openshift-operators_f2258805-8f2d-44e0-adc5-18e68c485378_0(3501e32715246fb5f5eee10cc0e633996f2adb50e1e06cf6332aae3c4e968c5f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 21 20:16:19 crc kubenswrapper[4727]: E1121 20:16:19.139638 4727 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-77844f949f-jx5h8_openshift-operators_f2258805-8f2d-44e0-adc5-18e68c485378_0(3501e32715246fb5f5eee10cc0e633996f2adb50e1e06cf6332aae3c4e968c5f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-77844f949f-jx5h8" Nov 21 20:16:19 crc kubenswrapper[4727]: E1121 20:16:19.139746 4727 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-77844f949f-jx5h8_openshift-operators_f2258805-8f2d-44e0-adc5-18e68c485378_0(3501e32715246fb5f5eee10cc0e633996f2adb50e1e06cf6332aae3c4e968c5f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77844f949f-jx5h8" Nov 21 20:16:19 crc kubenswrapper[4727]: E1121 20:16:19.139879 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-77844f949f-jx5h8_openshift-operators(f2258805-8f2d-44e0-adc5-18e68c485378)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-77844f949f-jx5h8_openshift-operators(f2258805-8f2d-44e0-adc5-18e68c485378)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-77844f949f-jx5h8_openshift-operators_f2258805-8f2d-44e0-adc5-18e68c485378_0(3501e32715246fb5f5eee10cc0e633996f2adb50e1e06cf6332aae3c4e968c5f): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77844f949f-jx5h8" podUID="f2258805-8f2d-44e0-adc5-18e68c485378" Nov 21 20:16:23 crc kubenswrapper[4727]: I1121 20:16:23.499184 4727 scope.go:117] "RemoveContainer" containerID="c0d56f4e8a0bf7ace78cc9404a73f24132ec2d0b20654b1aa5db4cab0db74936" Nov 21 20:16:23 crc kubenswrapper[4727]: E1121 20:16:23.499830 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-7rvdc_openshift-multus(07dba644-eb6f-45c3-b373-7a1610c569aa)\"" pod="openshift-multus/multus-7rvdc" podUID="07dba644-eb6f-45c3-b373-7a1610c569aa" Nov 21 20:16:29 crc kubenswrapper[4727]: I1121 20:16:29.499085 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-c9l74" Nov 21 20:16:29 crc kubenswrapper[4727]: I1121 20:16:29.500116 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-c9l74" Nov 21 20:16:29 crc kubenswrapper[4727]: E1121 20:16:29.522398 4727 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-c9l74_openshift-operators_30b54f9a-8377-4b99-92fe-ccbef59d7c7a_0(6e295146b1c6ab9a7bc4e1cbf7faae77d234afeacb19ca4afa21d67511eb0cf6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Nov 21 20:16:29 crc kubenswrapper[4727]: E1121 20:16:29.522462 4727 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-c9l74_openshift-operators_30b54f9a-8377-4b99-92fe-ccbef59d7c7a_0(6e295146b1c6ab9a7bc4e1cbf7faae77d234afeacb19ca4afa21d67511eb0cf6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-c9l74" Nov 21 20:16:29 crc kubenswrapper[4727]: E1121 20:16:29.522483 4727 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-c9l74_openshift-operators_30b54f9a-8377-4b99-92fe-ccbef59d7c7a_0(6e295146b1c6ab9a7bc4e1cbf7faae77d234afeacb19ca4afa21d67511eb0cf6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-c9l74" Nov 21 20:16:29 crc kubenswrapper[4727]: E1121 20:16:29.522528 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5446b9c989-c9l74_openshift-operators(30b54f9a-8377-4b99-92fe-ccbef59d7c7a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5446b9c989-c9l74_openshift-operators(30b54f9a-8377-4b99-92fe-ccbef59d7c7a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-c9l74_openshift-operators_30b54f9a-8377-4b99-92fe-ccbef59d7c7a_0(6e295146b1c6ab9a7bc4e1cbf7faae77d234afeacb19ca4afa21d67511eb0cf6): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/perses-operator-5446b9c989-c9l74" podUID="30b54f9a-8377-4b99-92fe-ccbef59d7c7a" Nov 21 20:16:30 crc kubenswrapper[4727]: I1121 20:16:30.498933 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77844f949f-jx5h8" Nov 21 20:16:30 crc kubenswrapper[4727]: I1121 20:16:30.499943 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77844f949f-jx5h8" Nov 21 20:16:30 crc kubenswrapper[4727]: E1121 20:16:30.526882 4727 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-77844f949f-jx5h8_openshift-operators_f2258805-8f2d-44e0-adc5-18e68c485378_0(cdf4fec63da3f5391fcc0b86ce615afdf02b5f4eb9d44d6fe5d6f323d1384b5e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 21 20:16:30 crc kubenswrapper[4727]: E1121 20:16:30.527000 4727 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-77844f949f-jx5h8_openshift-operators_f2258805-8f2d-44e0-adc5-18e68c485378_0(cdf4fec63da3f5391fcc0b86ce615afdf02b5f4eb9d44d6fe5d6f323d1384b5e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-77844f949f-jx5h8" Nov 21 20:16:30 crc kubenswrapper[4727]: E1121 20:16:30.527035 4727 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-77844f949f-jx5h8_openshift-operators_f2258805-8f2d-44e0-adc5-18e68c485378_0(cdf4fec63da3f5391fcc0b86ce615afdf02b5f4eb9d44d6fe5d6f323d1384b5e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77844f949f-jx5h8" Nov 21 20:16:30 crc kubenswrapper[4727]: E1121 20:16:30.527102 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-77844f949f-jx5h8_openshift-operators(f2258805-8f2d-44e0-adc5-18e68c485378)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-77844f949f-jx5h8_openshift-operators(f2258805-8f2d-44e0-adc5-18e68c485378)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-77844f949f-jx5h8_openshift-operators_f2258805-8f2d-44e0-adc5-18e68c485378_0(cdf4fec63da3f5391fcc0b86ce615afdf02b5f4eb9d44d6fe5d6f323d1384b5e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77844f949f-jx5h8" podUID="f2258805-8f2d-44e0-adc5-18e68c485378" Nov 21 20:16:33 crc kubenswrapper[4727]: I1121 20:16:33.498499 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77844f949f-d9l9z" Nov 21 20:16:33 crc kubenswrapper[4727]: I1121 20:16:33.499414 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77844f949f-d9l9z" Nov 21 20:16:33 crc kubenswrapper[4727]: E1121 20:16:33.530498 4727 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-77844f949f-d9l9z_openshift-operators_a9691dad-91e9-4701-bc56-f92b96693c15_0(158679497dbae0ab1620d53576b72483de12dbb50fd6c43da291313b277239c6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 21 20:16:33 crc kubenswrapper[4727]: E1121 20:16:33.530915 4727 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-77844f949f-d9l9z_openshift-operators_a9691dad-91e9-4701-bc56-f92b96693c15_0(158679497dbae0ab1620d53576b72483de12dbb50fd6c43da291313b277239c6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77844f949f-d9l9z" Nov 21 20:16:33 crc kubenswrapper[4727]: E1121 20:16:33.531017 4727 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-77844f949f-d9l9z_openshift-operators_a9691dad-91e9-4701-bc56-f92b96693c15_0(158679497dbae0ab1620d53576b72483de12dbb50fd6c43da291313b277239c6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-77844f949f-d9l9z" Nov 21 20:16:33 crc kubenswrapper[4727]: E1121 20:16:33.531084 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-77844f949f-d9l9z_openshift-operators(a9691dad-91e9-4701-bc56-f92b96693c15)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-77844f949f-d9l9z_openshift-operators(a9691dad-91e9-4701-bc56-f92b96693c15)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-77844f949f-d9l9z_openshift-operators_a9691dad-91e9-4701-bc56-f92b96693c15_0(158679497dbae0ab1620d53576b72483de12dbb50fd6c43da291313b277239c6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77844f949f-d9l9z" podUID="a9691dad-91e9-4701-bc56-f92b96693c15" Nov 21 20:16:34 crc kubenswrapper[4727]: I1121 20:16:34.498415 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-8tk4t" Nov 21 20:16:34 crc kubenswrapper[4727]: I1121 20:16:34.498542 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-rxhl2" Nov 21 20:16:34 crc kubenswrapper[4727]: I1121 20:16:34.498927 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-8tk4t" Nov 21 20:16:34 crc kubenswrapper[4727]: I1121 20:16:34.499467 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-rxhl2" Nov 21 20:16:34 crc kubenswrapper[4727]: E1121 20:16:34.548977 4727 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-8tk4t_openshift-operators_2092c64d-e6a4-4d8a-9e92-65ea330e7ef0_0(ca945e7f65bcb4f3496b8145eda92c344356d3bb2cb0b02736ae9920ba16627a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 21 20:16:34 crc kubenswrapper[4727]: E1121 20:16:34.549062 4727 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-8tk4t_openshift-operators_2092c64d-e6a4-4d8a-9e92-65ea330e7ef0_0(ca945e7f65bcb4f3496b8145eda92c344356d3bb2cb0b02736ae9920ba16627a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-8tk4t" Nov 21 20:16:34 crc kubenswrapper[4727]: E1121 20:16:34.549103 4727 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-8tk4t_openshift-operators_2092c64d-e6a4-4d8a-9e92-65ea330e7ef0_0(ca945e7f65bcb4f3496b8145eda92c344356d3bb2cb0b02736ae9920ba16627a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-d8bb48f5d-8tk4t" Nov 21 20:16:34 crc kubenswrapper[4727]: E1121 20:16:34.549168 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-d8bb48f5d-8tk4t_openshift-operators(2092c64d-e6a4-4d8a-9e92-65ea330e7ef0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-d8bb48f5d-8tk4t_openshift-operators(2092c64d-e6a4-4d8a-9e92-65ea330e7ef0)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-8tk4t_openshift-operators_2092c64d-e6a4-4d8a-9e92-65ea330e7ef0_0(ca945e7f65bcb4f3496b8145eda92c344356d3bb2cb0b02736ae9920ba16627a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-d8bb48f5d-8tk4t" podUID="2092c64d-e6a4-4d8a-9e92-65ea330e7ef0" Nov 21 20:16:34 crc kubenswrapper[4727]: E1121 20:16:34.557141 4727 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-rxhl2_openshift-operators_b7c4b477-dfcc-4cf5-ac76-eef0917d866c_0(2724d24b1f0390633607d4406da8649b374420191261b54535c7d35371f67c0e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 21 20:16:34 crc kubenswrapper[4727]: E1121 20:16:34.557203 4727 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-rxhl2_openshift-operators_b7c4b477-dfcc-4cf5-ac76-eef0917d866c_0(2724d24b1f0390633607d4406da8649b374420191261b54535c7d35371f67c0e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-rxhl2" Nov 21 20:16:34 crc kubenswrapper[4727]: E1121 20:16:34.557227 4727 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-rxhl2_openshift-operators_b7c4b477-dfcc-4cf5-ac76-eef0917d866c_0(2724d24b1f0390633607d4406da8649b374420191261b54535c7d35371f67c0e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-rxhl2" Nov 21 20:16:34 crc kubenswrapper[4727]: E1121 20:16:34.557274 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-668cf9dfbb-rxhl2_openshift-operators(b7c4b477-dfcc-4cf5-ac76-eef0917d866c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-668cf9dfbb-rxhl2_openshift-operators(b7c4b477-dfcc-4cf5-ac76-eef0917d866c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-rxhl2_openshift-operators_b7c4b477-dfcc-4cf5-ac76-eef0917d866c_0(2724d24b1f0390633607d4406da8649b374420191261b54535c7d35371f67c0e): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-rxhl2" podUID="b7c4b477-dfcc-4cf5-ac76-eef0917d866c" Nov 21 20:16:38 crc kubenswrapper[4727]: I1121 20:16:38.499910 4727 scope.go:117] "RemoveContainer" containerID="c0d56f4e8a0bf7ace78cc9404a73f24132ec2d0b20654b1aa5db4cab0db74936" Nov 21 20:16:39 crc kubenswrapper[4727]: I1121 20:16:39.664910 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7rvdc_07dba644-eb6f-45c3-b373-7a1610c569aa/kube-multus/2.log" Nov 21 20:16:39 crc kubenswrapper[4727]: I1121 20:16:39.665268 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7rvdc" event={"ID":"07dba644-eb6f-45c3-b373-7a1610c569aa","Type":"ContainerStarted","Data":"ebb302ef02d38ae48d9eb279d4ed831dbf9a3fde14f7dc3068a1d24311db47c5"} Nov 21 20:16:40 crc kubenswrapper[4727]: I1121 20:16:40.499013 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-c9l74" Nov 21 20:16:40 crc kubenswrapper[4727]: I1121 20:16:40.500995 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-c9l74" Nov 21 20:16:40 crc kubenswrapper[4727]: I1121 20:16:40.955804 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-c9l74"] Nov 21 20:16:40 crc kubenswrapper[4727]: W1121 20:16:40.961176 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30b54f9a_8377_4b99_92fe_ccbef59d7c7a.slice/crio-76878b70da722a3b4c3add3c8e6c05c3aa37c47ff88a9fb5ca6961af1176c56c WatchSource:0}: Error finding container 76878b70da722a3b4c3add3c8e6c05c3aa37c47ff88a9fb5ca6961af1176c56c: Status 404 returned error can't find the container with id 76878b70da722a3b4c3add3c8e6c05c3aa37c47ff88a9fb5ca6961af1176c56c Nov 21 20:16:41 crc kubenswrapper[4727]: I1121 20:16:41.677323 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-c9l74" event={"ID":"30b54f9a-8377-4b99-92fe-ccbef59d7c7a","Type":"ContainerStarted","Data":"76878b70da722a3b4c3add3c8e6c05c3aa37c47ff88a9fb5ca6961af1176c56c"} Nov 21 20:16:41 crc kubenswrapper[4727]: I1121 20:16:41.952288 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-k9qh6" Nov 21 20:16:42 crc kubenswrapper[4727]: I1121 20:16:42.499126 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77844f949f-jx5h8" Nov 21 20:16:42 crc kubenswrapper[4727]: I1121 20:16:42.499845 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77844f949f-jx5h8" Nov 21 20:16:42 crc kubenswrapper[4727]: I1121 20:16:42.899950 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-77844f949f-jx5h8"] Nov 21 20:16:42 crc kubenswrapper[4727]: W1121 20:16:42.904480 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2258805_8f2d_44e0_adc5_18e68c485378.slice/crio-1b44db8714a46c8a93abeef4fabcc68958d3898cf68ba38e46505c3315a041ed WatchSource:0}: Error finding container 1b44db8714a46c8a93abeef4fabcc68958d3898cf68ba38e46505c3315a041ed: Status 404 returned error can't find the container with id 1b44db8714a46c8a93abeef4fabcc68958d3898cf68ba38e46505c3315a041ed Nov 21 20:16:43 crc kubenswrapper[4727]: I1121 20:16:43.335718 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 20:16:43 crc kubenswrapper[4727]: I1121 20:16:43.335789 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 20:16:43 crc kubenswrapper[4727]: I1121 20:16:43.335855 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" Nov 21 20:16:43 crc kubenswrapper[4727]: I1121 20:16:43.337237 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"31ac78ad9674edf30089e0f6eccedebce8b14f0ff1682633a29c88cf87b54891"} pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 20:16:43 crc kubenswrapper[4727]: I1121 20:16:43.337341 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" containerID="cri-o://31ac78ad9674edf30089e0f6eccedebce8b14f0ff1682633a29c88cf87b54891" gracePeriod=600 Nov 21 20:16:43 crc kubenswrapper[4727]: I1121 20:16:43.697579 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77844f949f-jx5h8" event={"ID":"f2258805-8f2d-44e0-adc5-18e68c485378","Type":"ContainerStarted","Data":"1b44db8714a46c8a93abeef4fabcc68958d3898cf68ba38e46505c3315a041ed"} Nov 21 20:16:43 crc kubenswrapper[4727]: I1121 20:16:43.701644 4727 generic.go:334] "Generic (PLEG): container finished" podID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerID="31ac78ad9674edf30089e0f6eccedebce8b14f0ff1682633a29c88cf87b54891" exitCode=0 Nov 21 20:16:43 crc kubenswrapper[4727]: I1121 20:16:43.701688 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" event={"ID":"b58aef8f-f223-47d8-a2e6-4a80aeeeec42","Type":"ContainerDied","Data":"31ac78ad9674edf30089e0f6eccedebce8b14f0ff1682633a29c88cf87b54891"} Nov 21 20:16:43 crc kubenswrapper[4727]: I1121 20:16:43.701716 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" event={"ID":"b58aef8f-f223-47d8-a2e6-4a80aeeeec42","Type":"ContainerStarted","Data":"8107c47f566f1faaec586576a9a03e2bdc957a4e69dfa96e7c5b9dd43a6f4ab5"} Nov 21 20:16:43 crc kubenswrapper[4727]: I1121 
20:16:43.701732 4727 scope.go:117] "RemoveContainer" containerID="7b64e1aad32276f0362b5ee96e2453c18f2a0e0c6f7ada508ab81eab2ebb4231" Nov 21 20:16:45 crc kubenswrapper[4727]: I1121 20:16:45.500719 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-rxhl2" Nov 21 20:16:45 crc kubenswrapper[4727]: I1121 20:16:45.503336 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-rxhl2" Nov 21 20:16:47 crc kubenswrapper[4727]: I1121 20:16:47.040513 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-rxhl2"] Nov 21 20:16:47 crc kubenswrapper[4727]: W1121 20:16:47.289597 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb7c4b477_dfcc_4cf5_ac76_eef0917d866c.slice/crio-197e46aa0b3b85a99cb31d82c31f7d91000f98a229d39000f44f640ec81a2f0d WatchSource:0}: Error finding container 197e46aa0b3b85a99cb31d82c31f7d91000f98a229d39000f44f640ec81a2f0d: Status 404 returned error can't find the container with id 197e46aa0b3b85a99cb31d82c31f7d91000f98a229d39000f44f640ec81a2f0d Nov 21 20:16:47 crc kubenswrapper[4727]: I1121 20:16:47.499125 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-8tk4t" Nov 21 20:16:47 crc kubenswrapper[4727]: I1121 20:16:47.499418 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77844f949f-d9l9z" Nov 21 20:16:47 crc kubenswrapper[4727]: I1121 20:16:47.499642 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77844f949f-d9l9z" Nov 21 20:16:47 crc kubenswrapper[4727]: I1121 20:16:47.499847 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-8tk4t" Nov 21 20:16:47 crc kubenswrapper[4727]: I1121 20:16:47.753213 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-rxhl2" event={"ID":"b7c4b477-dfcc-4cf5-ac76-eef0917d866c","Type":"ContainerStarted","Data":"197e46aa0b3b85a99cb31d82c31f7d91000f98a229d39000f44f640ec81a2f0d"} Nov 21 20:16:47 crc kubenswrapper[4727]: I1121 20:16:47.755572 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77844f949f-jx5h8" event={"ID":"f2258805-8f2d-44e0-adc5-18e68c485378","Type":"ContainerStarted","Data":"830f779e50c7ecaec5c1d107f229d478665786274841d752536b533a0eed34d1"} Nov 21 20:16:47 crc kubenswrapper[4727]: I1121 20:16:47.761040 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-c9l74" event={"ID":"30b54f9a-8377-4b99-92fe-ccbef59d7c7a","Type":"ContainerStarted","Data":"635bdb97b1b92464ce9a65a54f81ce2ef3c61f8e4d1fd13a55b11facf6a2ed30"} Nov 21 20:16:47 crc kubenswrapper[4727]: I1121 20:16:47.761687 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5446b9c989-c9l74" Nov 21 20:16:47 crc kubenswrapper[4727]: I1121 20:16:47.780409 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77844f949f-jx5h8" podStartSLOduration=27.334643271 podStartE2EDuration="31.78039291s" podCreationTimestamp="2025-11-21 20:16:16 +0000 UTC" firstStartedPulling="2025-11-21 20:16:42.908134854 +0000 UTC m=+608.094319888" lastFinishedPulling="2025-11-21 20:16:47.353884473 
+0000 UTC m=+612.540069527" observedRunningTime="2025-11-21 20:16:47.776355704 +0000 UTC m=+612.962540748" watchObservedRunningTime="2025-11-21 20:16:47.78039291 +0000 UTC m=+612.966577954" Nov 21 20:16:47 crc kubenswrapper[4727]: I1121 20:16:47.840774 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5446b9c989-c9l74" podStartSLOduration=25.456024855 podStartE2EDuration="31.84075389s" podCreationTimestamp="2025-11-21 20:16:16 +0000 UTC" firstStartedPulling="2025-11-21 20:16:40.964091465 +0000 UTC m=+606.150276509" lastFinishedPulling="2025-11-21 20:16:47.34882049 +0000 UTC m=+612.535005544" observedRunningTime="2025-11-21 20:16:47.803038025 +0000 UTC m=+612.989223069" watchObservedRunningTime="2025-11-21 20:16:47.84075389 +0000 UTC m=+613.026938934" Nov 21 20:16:47 crc kubenswrapper[4727]: I1121 20:16:47.841693 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-8tk4t"] Nov 21 20:16:47 crc kubenswrapper[4727]: W1121 20:16:47.855245 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2092c64d_e6a4_4d8a_9e92_65ea330e7ef0.slice/crio-40c8c80314108cb4b482b50a7a8f99a9071244e5e297c60bc23c9dc5fe8f49cb WatchSource:0}: Error finding container 40c8c80314108cb4b482b50a7a8f99a9071244e5e297c60bc23c9dc5fe8f49cb: Status 404 returned error can't find the container with id 40c8c80314108cb4b482b50a7a8f99a9071244e5e297c60bc23c9dc5fe8f49cb Nov 21 20:16:48 crc kubenswrapper[4727]: I1121 20:16:48.096756 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-77844f949f-d9l9z"] Nov 21 20:16:48 crc kubenswrapper[4727]: I1121 20:16:48.766655 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77844f949f-d9l9z" 
event={"ID":"a9691dad-91e9-4701-bc56-f92b96693c15","Type":"ContainerStarted","Data":"2fe96bf482b90188bd2a16e501be6d676afc6817e6f0e55041904fb5d4bab657"} Nov 21 20:16:48 crc kubenswrapper[4727]: I1121 20:16:48.766990 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77844f949f-d9l9z" event={"ID":"a9691dad-91e9-4701-bc56-f92b96693c15","Type":"ContainerStarted","Data":"8be6f9addb4c201cab3a9e56b94e6c3d69e05f2cf8082dc8c5ab0993c5c72077"} Nov 21 20:16:48 crc kubenswrapper[4727]: I1121 20:16:48.768167 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-8tk4t" event={"ID":"2092c64d-e6a4-4d8a-9e92-65ea330e7ef0","Type":"ContainerStarted","Data":"40c8c80314108cb4b482b50a7a8f99a9071244e5e297c60bc23c9dc5fe8f49cb"} Nov 21 20:16:48 crc kubenswrapper[4727]: I1121 20:16:48.792683 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77844f949f-d9l9z" podStartSLOduration=32.792660056 podStartE2EDuration="32.792660056s" podCreationTimestamp="2025-11-21 20:16:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:16:48.779927959 +0000 UTC m=+613.966113013" watchObservedRunningTime="2025-11-21 20:16:48.792660056 +0000 UTC m=+613.978845110" Nov 21 20:16:50 crc kubenswrapper[4727]: I1121 20:16:50.781503 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-rxhl2" event={"ID":"b7c4b477-dfcc-4cf5-ac76-eef0917d866c","Type":"ContainerStarted","Data":"436151acad0a75541e54e394d8f001d9f91ae89fa087ea9c6fe060733f15d289"} Nov 21 20:16:50 crc kubenswrapper[4727]: I1121 20:16:50.801842 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-rxhl2" 
podStartSLOduration=32.571439339 podStartE2EDuration="34.801826099s" podCreationTimestamp="2025-11-21 20:16:16 +0000 UTC" firstStartedPulling="2025-11-21 20:16:47.292597976 +0000 UTC m=+612.478783020" lastFinishedPulling="2025-11-21 20:16:49.522984736 +0000 UTC m=+614.709169780" observedRunningTime="2025-11-21 20:16:50.798151941 +0000 UTC m=+615.984336985" watchObservedRunningTime="2025-11-21 20:16:50.801826099 +0000 UTC m=+615.988011143" Nov 21 20:16:52 crc kubenswrapper[4727]: I1121 20:16:52.815275 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-8tk4t" event={"ID":"2092c64d-e6a4-4d8a-9e92-65ea330e7ef0","Type":"ContainerStarted","Data":"120d813488988229ef3f0ccbb9651ed124336228730dc3cd58176ff4d4c52328"} Nov 21 20:16:52 crc kubenswrapper[4727]: I1121 20:16:52.815840 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-d8bb48f5d-8tk4t" Nov 21 20:16:52 crc kubenswrapper[4727]: I1121 20:16:52.834904 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-d8bb48f5d-8tk4t" podStartSLOduration=32.419990441 podStartE2EDuration="36.834886112s" podCreationTimestamp="2025-11-21 20:16:16 +0000 UTC" firstStartedPulling="2025-11-21 20:16:47.862215239 +0000 UTC m=+613.048400283" lastFinishedPulling="2025-11-21 20:16:52.27711091 +0000 UTC m=+617.463295954" observedRunningTime="2025-11-21 20:16:52.832617509 +0000 UTC m=+618.018802573" watchObservedRunningTime="2025-11-21 20:16:52.834886112 +0000 UTC m=+618.021071156" Nov 21 20:16:52 crc kubenswrapper[4727]: I1121 20:16:52.889692 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-d8bb48f5d-8tk4t" Nov 21 20:16:57 crc kubenswrapper[4727]: I1121 20:16:57.356197 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-operators/perses-operator-5446b9c989-c9l74" Nov 21 20:17:02 crc kubenswrapper[4727]: I1121 20:17:02.751382 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-7mdx6"] Nov 21 20:17:02 crc kubenswrapper[4727]: I1121 20:17:02.757458 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-7mdx6" Nov 21 20:17:02 crc kubenswrapper[4727]: I1121 20:17:02.763072 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-dbtqb"] Nov 21 20:17:02 crc kubenswrapper[4727]: I1121 20:17:02.764292 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-dbtqb" Nov 21 20:17:02 crc kubenswrapper[4727]: I1121 20:17:02.765379 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-7mdx6"] Nov 21 20:17:02 crc kubenswrapper[4727]: I1121 20:17:02.768675 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Nov 21 20:17:02 crc kubenswrapper[4727]: I1121 20:17:02.771648 4727 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-wtbdd" Nov 21 20:17:02 crc kubenswrapper[4727]: I1121 20:17:02.771893 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Nov 21 20:17:02 crc kubenswrapper[4727]: I1121 20:17:02.772974 4727 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-zfqbn" Nov 21 20:17:02 crc kubenswrapper[4727]: I1121 20:17:02.773064 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-rttbv"] Nov 21 20:17:02 crc kubenswrapper[4727]: I1121 20:17:02.773778 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-rttbv" Nov 21 20:17:02 crc kubenswrapper[4727]: I1121 20:17:02.778179 4727 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-7w59q" Nov 21 20:17:02 crc kubenswrapper[4727]: I1121 20:17:02.788202 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-dbtqb"] Nov 21 20:17:02 crc kubenswrapper[4727]: I1121 20:17:02.797530 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-rttbv"] Nov 21 20:17:02 crc kubenswrapper[4727]: I1121 20:17:02.858587 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrhzr\" (UniqueName: \"kubernetes.io/projected/73762fdf-4a73-497f-b183-fb15b1c7e8b5-kube-api-access-jrhzr\") pod \"cert-manager-5b446d88c5-dbtqb\" (UID: \"73762fdf-4a73-497f-b183-fb15b1c7e8b5\") " pod="cert-manager/cert-manager-5b446d88c5-dbtqb" Nov 21 20:17:02 crc kubenswrapper[4727]: I1121 20:17:02.858699 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbfk2\" (UniqueName: \"kubernetes.io/projected/b2bd8576-daa6-4408-a4e1-e9b4824db5ff-kube-api-access-sbfk2\") pod \"cert-manager-cainjector-7f985d654d-7mdx6\" (UID: \"b2bd8576-daa6-4408-a4e1-e9b4824db5ff\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-7mdx6" Nov 21 20:17:02 crc kubenswrapper[4727]: I1121 20:17:02.960228 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrhzr\" (UniqueName: \"kubernetes.io/projected/73762fdf-4a73-497f-b183-fb15b1c7e8b5-kube-api-access-jrhzr\") pod \"cert-manager-5b446d88c5-dbtqb\" (UID: \"73762fdf-4a73-497f-b183-fb15b1c7e8b5\") " pod="cert-manager/cert-manager-5b446d88c5-dbtqb" Nov 21 20:17:02 crc kubenswrapper[4727]: I1121 20:17:02.960291 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-sbfk2\" (UniqueName: \"kubernetes.io/projected/b2bd8576-daa6-4408-a4e1-e9b4824db5ff-kube-api-access-sbfk2\") pod \"cert-manager-cainjector-7f985d654d-7mdx6\" (UID: \"b2bd8576-daa6-4408-a4e1-e9b4824db5ff\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-7mdx6" Nov 21 20:17:02 crc kubenswrapper[4727]: I1121 20:17:02.960352 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbxtr\" (UniqueName: \"kubernetes.io/projected/5ec4f5f1-9157-49cf-bbea-d8d215df5440-kube-api-access-nbxtr\") pod \"cert-manager-webhook-5655c58dd6-rttbv\" (UID: \"5ec4f5f1-9157-49cf-bbea-d8d215df5440\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-rttbv" Nov 21 20:17:02 crc kubenswrapper[4727]: I1121 20:17:02.978621 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrhzr\" (UniqueName: \"kubernetes.io/projected/73762fdf-4a73-497f-b183-fb15b1c7e8b5-kube-api-access-jrhzr\") pod \"cert-manager-5b446d88c5-dbtqb\" (UID: \"73762fdf-4a73-497f-b183-fb15b1c7e8b5\") " pod="cert-manager/cert-manager-5b446d88c5-dbtqb" Nov 21 20:17:02 crc kubenswrapper[4727]: I1121 20:17:02.982641 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbfk2\" (UniqueName: \"kubernetes.io/projected/b2bd8576-daa6-4408-a4e1-e9b4824db5ff-kube-api-access-sbfk2\") pod \"cert-manager-cainjector-7f985d654d-7mdx6\" (UID: \"b2bd8576-daa6-4408-a4e1-e9b4824db5ff\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-7mdx6" Nov 21 20:17:03 crc kubenswrapper[4727]: I1121 20:17:03.061497 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbxtr\" (UniqueName: \"kubernetes.io/projected/5ec4f5f1-9157-49cf-bbea-d8d215df5440-kube-api-access-nbxtr\") pod \"cert-manager-webhook-5655c58dd6-rttbv\" (UID: \"5ec4f5f1-9157-49cf-bbea-d8d215df5440\") " 
pod="cert-manager/cert-manager-webhook-5655c58dd6-rttbv" Nov 21 20:17:03 crc kubenswrapper[4727]: I1121 20:17:03.083576 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbxtr\" (UniqueName: \"kubernetes.io/projected/5ec4f5f1-9157-49cf-bbea-d8d215df5440-kube-api-access-nbxtr\") pod \"cert-manager-webhook-5655c58dd6-rttbv\" (UID: \"5ec4f5f1-9157-49cf-bbea-d8d215df5440\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-rttbv" Nov 21 20:17:03 crc kubenswrapper[4727]: I1121 20:17:03.100237 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-dbtqb" Nov 21 20:17:03 crc kubenswrapper[4727]: I1121 20:17:03.111884 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-7mdx6" Nov 21 20:17:03 crc kubenswrapper[4727]: I1121 20:17:03.127061 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-rttbv" Nov 21 20:17:03 crc kubenswrapper[4727]: I1121 20:17:03.381074 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-rttbv"] Nov 21 20:17:03 crc kubenswrapper[4727]: W1121 20:17:03.385095 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ec4f5f1_9157_49cf_bbea_d8d215df5440.slice/crio-347657ce881edea6ccf875834f855ef39280149da61be3cb81ca3514784316db WatchSource:0}: Error finding container 347657ce881edea6ccf875834f855ef39280149da61be3cb81ca3514784316db: Status 404 returned error can't find the container with id 347657ce881edea6ccf875834f855ef39280149da61be3cb81ca3514784316db Nov 21 20:17:03 crc kubenswrapper[4727]: W1121 20:17:03.637129 4727 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2bd8576_daa6_4408_a4e1_e9b4824db5ff.slice/crio-73933d512142c25642da4270ec00407b6fbd4614cc6aa6113dc9c9cd817c325b WatchSource:0}: Error finding container 73933d512142c25642da4270ec00407b6fbd4614cc6aa6113dc9c9cd817c325b: Status 404 returned error can't find the container with id 73933d512142c25642da4270ec00407b6fbd4614cc6aa6113dc9c9cd817c325b Nov 21 20:17:03 crc kubenswrapper[4727]: I1121 20:17:03.640869 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-7mdx6"] Nov 21 20:17:03 crc kubenswrapper[4727]: I1121 20:17:03.645923 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-dbtqb"] Nov 21 20:17:03 crc kubenswrapper[4727]: I1121 20:17:03.878524 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-rttbv" event={"ID":"5ec4f5f1-9157-49cf-bbea-d8d215df5440","Type":"ContainerStarted","Data":"347657ce881edea6ccf875834f855ef39280149da61be3cb81ca3514784316db"} Nov 21 20:17:03 crc kubenswrapper[4727]: I1121 20:17:03.879975 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-dbtqb" event={"ID":"73762fdf-4a73-497f-b183-fb15b1c7e8b5","Type":"ContainerStarted","Data":"1f5ee00e1bb8a51ad725a44f2cdc8afb05639e30ad58f43c06886d1113aa1503"} Nov 21 20:17:03 crc kubenswrapper[4727]: I1121 20:17:03.880870 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-7mdx6" event={"ID":"b2bd8576-daa6-4408-a4e1-e9b4824db5ff","Type":"ContainerStarted","Data":"73933d512142c25642da4270ec00407b6fbd4614cc6aa6113dc9c9cd817c325b"} Nov 21 20:17:05 crc kubenswrapper[4727]: I1121 20:17:05.906705 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-rttbv" 
event={"ID":"5ec4f5f1-9157-49cf-bbea-d8d215df5440","Type":"ContainerStarted","Data":"eb2a115cb88aeebd24502acf00dbcd962fbb91c3e15a98f8d2ecc25799e0a00d"} Nov 21 20:17:05 crc kubenswrapper[4727]: I1121 20:17:05.907924 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-rttbv" Nov 21 20:17:05 crc kubenswrapper[4727]: I1121 20:17:05.933994 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-rttbv" podStartSLOduration=1.861368443 podStartE2EDuration="3.933929406s" podCreationTimestamp="2025-11-21 20:17:02 +0000 UTC" firstStartedPulling="2025-11-21 20:17:03.387274197 +0000 UTC m=+628.573459241" lastFinishedPulling="2025-11-21 20:17:05.45983516 +0000 UTC m=+630.646020204" observedRunningTime="2025-11-21 20:17:05.923289891 +0000 UTC m=+631.109474935" watchObservedRunningTime="2025-11-21 20:17:05.933929406 +0000 UTC m=+631.120114450" Nov 21 20:17:07 crc kubenswrapper[4727]: I1121 20:17:07.922458 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-dbtqb" event={"ID":"73762fdf-4a73-497f-b183-fb15b1c7e8b5","Type":"ContainerStarted","Data":"b552bbba3adb5d6d651d542063648eaf64a29e03041f276eefe877eb43cf61f5"} Nov 21 20:17:07 crc kubenswrapper[4727]: I1121 20:17:07.926521 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-7mdx6" event={"ID":"b2bd8576-daa6-4408-a4e1-e9b4824db5ff","Type":"ContainerStarted","Data":"4ba203d6569e735c366d45f3d52171b99b976bc5c382574590f5c1edb002ad30"} Nov 21 20:17:07 crc kubenswrapper[4727]: I1121 20:17:07.941288 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-dbtqb" podStartSLOduration=2.363487234 podStartE2EDuration="5.941255051s" podCreationTimestamp="2025-11-21 20:17:02 +0000 UTC" firstStartedPulling="2025-11-21 20:17:03.642076202 +0000 UTC 
m=+628.828261246" lastFinishedPulling="2025-11-21 20:17:07.219844019 +0000 UTC m=+632.406029063" observedRunningTime="2025-11-21 20:17:07.940264067 +0000 UTC m=+633.126449131" watchObservedRunningTime="2025-11-21 20:17:07.941255051 +0000 UTC m=+633.127440135" Nov 21 20:17:07 crc kubenswrapper[4727]: I1121 20:17:07.957067 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-7mdx6" podStartSLOduration=2.37999816 podStartE2EDuration="5.95703624s" podCreationTimestamp="2025-11-21 20:17:02 +0000 UTC" firstStartedPulling="2025-11-21 20:17:03.638756612 +0000 UTC m=+628.824941656" lastFinishedPulling="2025-11-21 20:17:07.215794692 +0000 UTC m=+632.401979736" observedRunningTime="2025-11-21 20:17:07.955663897 +0000 UTC m=+633.141848961" watchObservedRunningTime="2025-11-21 20:17:07.95703624 +0000 UTC m=+633.143221314" Nov 21 20:17:13 crc kubenswrapper[4727]: I1121 20:17:13.131370 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-rttbv" Nov 21 20:17:35 crc kubenswrapper[4727]: I1121 20:17:35.712679 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fqpfxz"] Nov 21 20:17:35 crc kubenswrapper[4727]: I1121 20:17:35.714494 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fqpfxz" Nov 21 20:17:35 crc kubenswrapper[4727]: I1121 20:17:35.718616 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 21 20:17:35 crc kubenswrapper[4727]: I1121 20:17:35.751057 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fqpfxz"] Nov 21 20:17:35 crc kubenswrapper[4727]: I1121 20:17:35.886518 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d9e94391-8dbd-4d4b-b330-dd6fc13f91c4-bundle\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fqpfxz\" (UID: \"d9e94391-8dbd-4d4b-b330-dd6fc13f91c4\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fqpfxz" Nov 21 20:17:35 crc kubenswrapper[4727]: I1121 20:17:35.886585 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zmpg\" (UniqueName: \"kubernetes.io/projected/d9e94391-8dbd-4d4b-b330-dd6fc13f91c4-kube-api-access-9zmpg\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fqpfxz\" (UID: \"d9e94391-8dbd-4d4b-b330-dd6fc13f91c4\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fqpfxz" Nov 21 20:17:35 crc kubenswrapper[4727]: I1121 20:17:35.887045 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d9e94391-8dbd-4d4b-b330-dd6fc13f91c4-util\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fqpfxz\" (UID: \"d9e94391-8dbd-4d4b-b330-dd6fc13f91c4\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fqpfxz" Nov 21 20:17:35 crc kubenswrapper[4727]: 
I1121 20:17:35.906467 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8h5fhr"] Nov 21 20:17:35 crc kubenswrapper[4727]: I1121 20:17:35.907795 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8h5fhr" Nov 21 20:17:35 crc kubenswrapper[4727]: I1121 20:17:35.918796 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8h5fhr"] Nov 21 20:17:35 crc kubenswrapper[4727]: I1121 20:17:35.988762 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d9e94391-8dbd-4d4b-b330-dd6fc13f91c4-util\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fqpfxz\" (UID: \"d9e94391-8dbd-4d4b-b330-dd6fc13f91c4\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fqpfxz" Nov 21 20:17:35 crc kubenswrapper[4727]: I1121 20:17:35.988807 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4944f157-e2ee-453c-bac8-aee27615a833-util\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8h5fhr\" (UID: \"4944f157-e2ee-453c-bac8-aee27615a833\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8h5fhr" Nov 21 20:17:35 crc kubenswrapper[4727]: I1121 20:17:35.988850 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d9e94391-8dbd-4d4b-b330-dd6fc13f91c4-bundle\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fqpfxz\" (UID: \"d9e94391-8dbd-4d4b-b330-dd6fc13f91c4\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fqpfxz" Nov 21 
20:17:35 crc kubenswrapper[4727]: I1121 20:17:35.988882 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4944f157-e2ee-453c-bac8-aee27615a833-bundle\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8h5fhr\" (UID: \"4944f157-e2ee-453c-bac8-aee27615a833\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8h5fhr" Nov 21 20:17:35 crc kubenswrapper[4727]: I1121 20:17:35.988905 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zmpg\" (UniqueName: \"kubernetes.io/projected/d9e94391-8dbd-4d4b-b330-dd6fc13f91c4-kube-api-access-9zmpg\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fqpfxz\" (UID: \"d9e94391-8dbd-4d4b-b330-dd6fc13f91c4\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fqpfxz" Nov 21 20:17:35 crc kubenswrapper[4727]: I1121 20:17:35.988930 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frxbw\" (UniqueName: \"kubernetes.io/projected/4944f157-e2ee-453c-bac8-aee27615a833-kube-api-access-frxbw\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8h5fhr\" (UID: \"4944f157-e2ee-453c-bac8-aee27615a833\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8h5fhr" Nov 21 20:17:35 crc kubenswrapper[4727]: I1121 20:17:35.989373 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d9e94391-8dbd-4d4b-b330-dd6fc13f91c4-util\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fqpfxz\" (UID: \"d9e94391-8dbd-4d4b-b330-dd6fc13f91c4\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fqpfxz" Nov 21 20:17:35 crc kubenswrapper[4727]: I1121 20:17:35.989397 4727 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d9e94391-8dbd-4d4b-b330-dd6fc13f91c4-bundle\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fqpfxz\" (UID: \"d9e94391-8dbd-4d4b-b330-dd6fc13f91c4\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fqpfxz" Nov 21 20:17:36 crc kubenswrapper[4727]: I1121 20:17:36.006955 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zmpg\" (UniqueName: \"kubernetes.io/projected/d9e94391-8dbd-4d4b-b330-dd6fc13f91c4-kube-api-access-9zmpg\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fqpfxz\" (UID: \"d9e94391-8dbd-4d4b-b330-dd6fc13f91c4\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fqpfxz" Nov 21 20:17:36 crc kubenswrapper[4727]: I1121 20:17:36.045136 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fqpfxz" Nov 21 20:17:36 crc kubenswrapper[4727]: I1121 20:17:36.090017 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4944f157-e2ee-453c-bac8-aee27615a833-util\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8h5fhr\" (UID: \"4944f157-e2ee-453c-bac8-aee27615a833\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8h5fhr" Nov 21 20:17:36 crc kubenswrapper[4727]: I1121 20:17:36.090685 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4944f157-e2ee-453c-bac8-aee27615a833-bundle\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8h5fhr\" (UID: \"4944f157-e2ee-453c-bac8-aee27615a833\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8h5fhr" Nov 21 20:17:36 crc 
kubenswrapper[4727]: I1121 20:17:36.090776 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frxbw\" (UniqueName: \"kubernetes.io/projected/4944f157-e2ee-453c-bac8-aee27615a833-kube-api-access-frxbw\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8h5fhr\" (UID: \"4944f157-e2ee-453c-bac8-aee27615a833\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8h5fhr" Nov 21 20:17:36 crc kubenswrapper[4727]: I1121 20:17:36.090545 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4944f157-e2ee-453c-bac8-aee27615a833-util\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8h5fhr\" (UID: \"4944f157-e2ee-453c-bac8-aee27615a833\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8h5fhr" Nov 21 20:17:36 crc kubenswrapper[4727]: I1121 20:17:36.091524 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4944f157-e2ee-453c-bac8-aee27615a833-bundle\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8h5fhr\" (UID: \"4944f157-e2ee-453c-bac8-aee27615a833\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8h5fhr" Nov 21 20:17:36 crc kubenswrapper[4727]: I1121 20:17:36.108039 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frxbw\" (UniqueName: \"kubernetes.io/projected/4944f157-e2ee-453c-bac8-aee27615a833-kube-api-access-frxbw\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8h5fhr\" (UID: \"4944f157-e2ee-453c-bac8-aee27615a833\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8h5fhr" Nov 21 20:17:36 crc kubenswrapper[4727]: I1121 20:17:36.223010 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8h5fhr" Nov 21 20:17:36 crc kubenswrapper[4727]: I1121 20:17:36.243256 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fqpfxz"] Nov 21 20:17:36 crc kubenswrapper[4727]: I1121 20:17:36.511171 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8h5fhr"] Nov 21 20:17:36 crc kubenswrapper[4727]: W1121 20:17:36.524993 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4944f157_e2ee_453c_bac8_aee27615a833.slice/crio-dfb2b7138cb297f197bb7dad4abd8fb12f62f62f28fef943a1e2d09f9fa74302 WatchSource:0}: Error finding container dfb2b7138cb297f197bb7dad4abd8fb12f62f62f28fef943a1e2d09f9fa74302: Status 404 returned error can't find the container with id dfb2b7138cb297f197bb7dad4abd8fb12f62f62f28fef943a1e2d09f9fa74302 Nov 21 20:17:37 crc kubenswrapper[4727]: I1121 20:17:37.157528 4727 generic.go:334] "Generic (PLEG): container finished" podID="d9e94391-8dbd-4d4b-b330-dd6fc13f91c4" containerID="6df700a2c92185d1928e14f003c29b9cefe1b7b391b50efe6ada2232dc3eda9b" exitCode=0 Nov 21 20:17:37 crc kubenswrapper[4727]: I1121 20:17:37.157626 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fqpfxz" event={"ID":"d9e94391-8dbd-4d4b-b330-dd6fc13f91c4","Type":"ContainerDied","Data":"6df700a2c92185d1928e14f003c29b9cefe1b7b391b50efe6ada2232dc3eda9b"} Nov 21 20:17:37 crc kubenswrapper[4727]: I1121 20:17:37.157689 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fqpfxz" 
event={"ID":"d9e94391-8dbd-4d4b-b330-dd6fc13f91c4","Type":"ContainerStarted","Data":"fd5931e59ae0e1a11d190fecd0e67ce61a24fdd9408a9db1dc1978aa90d63f22"} Nov 21 20:17:37 crc kubenswrapper[4727]: I1121 20:17:37.159705 4727 generic.go:334] "Generic (PLEG): container finished" podID="4944f157-e2ee-453c-bac8-aee27615a833" containerID="84630ede17fc80430bd5ae4beb32cb7ac70e6f8ec1c703c54fc055c3a9abe0fa" exitCode=0 Nov 21 20:17:37 crc kubenswrapper[4727]: I1121 20:17:37.159734 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8h5fhr" event={"ID":"4944f157-e2ee-453c-bac8-aee27615a833","Type":"ContainerDied","Data":"84630ede17fc80430bd5ae4beb32cb7ac70e6f8ec1c703c54fc055c3a9abe0fa"} Nov 21 20:17:37 crc kubenswrapper[4727]: I1121 20:17:37.159761 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8h5fhr" event={"ID":"4944f157-e2ee-453c-bac8-aee27615a833","Type":"ContainerStarted","Data":"dfb2b7138cb297f197bb7dad4abd8fb12f62f62f28fef943a1e2d09f9fa74302"} Nov 21 20:17:39 crc kubenswrapper[4727]: I1121 20:17:39.174540 4727 generic.go:334] "Generic (PLEG): container finished" podID="d9e94391-8dbd-4d4b-b330-dd6fc13f91c4" containerID="5765e0682cb9d1d8f144a960c93d725d9d9001fa1edd40ec6b283f9c841b42cf" exitCode=0 Nov 21 20:17:39 crc kubenswrapper[4727]: I1121 20:17:39.174605 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fqpfxz" event={"ID":"d9e94391-8dbd-4d4b-b330-dd6fc13f91c4","Type":"ContainerDied","Data":"5765e0682cb9d1d8f144a960c93d725d9d9001fa1edd40ec6b283f9c841b42cf"} Nov 21 20:17:39 crc kubenswrapper[4727]: I1121 20:17:39.178234 4727 generic.go:334] "Generic (PLEG): container finished" podID="4944f157-e2ee-453c-bac8-aee27615a833" containerID="96b2101924ff691bae01f03e35ec71b1c0c953a7e563f4bf6df0a694f105169d" exitCode=0 
Nov 21 20:17:39 crc kubenswrapper[4727]: I1121 20:17:39.178293 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8h5fhr" event={"ID":"4944f157-e2ee-453c-bac8-aee27615a833","Type":"ContainerDied","Data":"96b2101924ff691bae01f03e35ec71b1c0c953a7e563f4bf6df0a694f105169d"} Nov 21 20:17:41 crc kubenswrapper[4727]: I1121 20:17:41.204047 4727 generic.go:334] "Generic (PLEG): container finished" podID="d9e94391-8dbd-4d4b-b330-dd6fc13f91c4" containerID="db7837170bfa5560fc74b567610078d10e3e4c78c421e6bec28eda7be33ba4bc" exitCode=0 Nov 21 20:17:41 crc kubenswrapper[4727]: I1121 20:17:41.204169 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fqpfxz" event={"ID":"d9e94391-8dbd-4d4b-b330-dd6fc13f91c4","Type":"ContainerDied","Data":"db7837170bfa5560fc74b567610078d10e3e4c78c421e6bec28eda7be33ba4bc"} Nov 21 20:17:41 crc kubenswrapper[4727]: I1121 20:17:41.206688 4727 generic.go:334] "Generic (PLEG): container finished" podID="4944f157-e2ee-453c-bac8-aee27615a833" containerID="32b9b2655512ea63f2fac29527530c74e800a01e215259df6e8ee8a3d7eb8dd0" exitCode=0 Nov 21 20:17:41 crc kubenswrapper[4727]: I1121 20:17:41.206729 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8h5fhr" event={"ID":"4944f157-e2ee-453c-bac8-aee27615a833","Type":"ContainerDied","Data":"32b9b2655512ea63f2fac29527530c74e800a01e215259df6e8ee8a3d7eb8dd0"} Nov 21 20:17:42 crc kubenswrapper[4727]: I1121 20:17:42.475186 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8h5fhr" Nov 21 20:17:42 crc kubenswrapper[4727]: I1121 20:17:42.482912 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fqpfxz" Nov 21 20:17:42 crc kubenswrapper[4727]: I1121 20:17:42.592584 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4944f157-e2ee-453c-bac8-aee27615a833-bundle\") pod \"4944f157-e2ee-453c-bac8-aee27615a833\" (UID: \"4944f157-e2ee-453c-bac8-aee27615a833\") " Nov 21 20:17:42 crc kubenswrapper[4727]: I1121 20:17:42.592624 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frxbw\" (UniqueName: \"kubernetes.io/projected/4944f157-e2ee-453c-bac8-aee27615a833-kube-api-access-frxbw\") pod \"4944f157-e2ee-453c-bac8-aee27615a833\" (UID: \"4944f157-e2ee-453c-bac8-aee27615a833\") " Nov 21 20:17:42 crc kubenswrapper[4727]: I1121 20:17:42.592740 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d9e94391-8dbd-4d4b-b330-dd6fc13f91c4-util\") pod \"d9e94391-8dbd-4d4b-b330-dd6fc13f91c4\" (UID: \"d9e94391-8dbd-4d4b-b330-dd6fc13f91c4\") " Nov 21 20:17:42 crc kubenswrapper[4727]: I1121 20:17:42.592799 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zmpg\" (UniqueName: \"kubernetes.io/projected/d9e94391-8dbd-4d4b-b330-dd6fc13f91c4-kube-api-access-9zmpg\") pod \"d9e94391-8dbd-4d4b-b330-dd6fc13f91c4\" (UID: \"d9e94391-8dbd-4d4b-b330-dd6fc13f91c4\") " Nov 21 20:17:42 crc kubenswrapper[4727]: I1121 20:17:42.592822 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d9e94391-8dbd-4d4b-b330-dd6fc13f91c4-bundle\") pod \"d9e94391-8dbd-4d4b-b330-dd6fc13f91c4\" (UID: \"d9e94391-8dbd-4d4b-b330-dd6fc13f91c4\") " Nov 21 20:17:42 crc kubenswrapper[4727]: I1121 20:17:42.592858 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4944f157-e2ee-453c-bac8-aee27615a833-util\") pod \"4944f157-e2ee-453c-bac8-aee27615a833\" (UID: \"4944f157-e2ee-453c-bac8-aee27615a833\") " Nov 21 20:17:42 crc kubenswrapper[4727]: I1121 20:17:42.594225 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4944f157-e2ee-453c-bac8-aee27615a833-bundle" (OuterVolumeSpecName: "bundle") pod "4944f157-e2ee-453c-bac8-aee27615a833" (UID: "4944f157-e2ee-453c-bac8-aee27615a833"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:17:42 crc kubenswrapper[4727]: I1121 20:17:42.594524 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9e94391-8dbd-4d4b-b330-dd6fc13f91c4-bundle" (OuterVolumeSpecName: "bundle") pod "d9e94391-8dbd-4d4b-b330-dd6fc13f91c4" (UID: "d9e94391-8dbd-4d4b-b330-dd6fc13f91c4"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:17:42 crc kubenswrapper[4727]: I1121 20:17:42.599021 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9e94391-8dbd-4d4b-b330-dd6fc13f91c4-kube-api-access-9zmpg" (OuterVolumeSpecName: "kube-api-access-9zmpg") pod "d9e94391-8dbd-4d4b-b330-dd6fc13f91c4" (UID: "d9e94391-8dbd-4d4b-b330-dd6fc13f91c4"). InnerVolumeSpecName "kube-api-access-9zmpg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:17:42 crc kubenswrapper[4727]: I1121 20:17:42.599067 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4944f157-e2ee-453c-bac8-aee27615a833-kube-api-access-frxbw" (OuterVolumeSpecName: "kube-api-access-frxbw") pod "4944f157-e2ee-453c-bac8-aee27615a833" (UID: "4944f157-e2ee-453c-bac8-aee27615a833"). InnerVolumeSpecName "kube-api-access-frxbw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:17:42 crc kubenswrapper[4727]: I1121 20:17:42.695149 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zmpg\" (UniqueName: \"kubernetes.io/projected/d9e94391-8dbd-4d4b-b330-dd6fc13f91c4-kube-api-access-9zmpg\") on node \"crc\" DevicePath \"\"" Nov 21 20:17:42 crc kubenswrapper[4727]: I1121 20:17:42.695185 4727 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d9e94391-8dbd-4d4b-b330-dd6fc13f91c4-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:17:42 crc kubenswrapper[4727]: I1121 20:17:42.695196 4727 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4944f157-e2ee-453c-bac8-aee27615a833-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:17:42 crc kubenswrapper[4727]: I1121 20:17:42.695206 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frxbw\" (UniqueName: \"kubernetes.io/projected/4944f157-e2ee-453c-bac8-aee27615a833-kube-api-access-frxbw\") on node \"crc\" DevicePath \"\"" Nov 21 20:17:42 crc kubenswrapper[4727]: I1121 20:17:42.711816 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9e94391-8dbd-4d4b-b330-dd6fc13f91c4-util" (OuterVolumeSpecName: "util") pod "d9e94391-8dbd-4d4b-b330-dd6fc13f91c4" (UID: "d9e94391-8dbd-4d4b-b330-dd6fc13f91c4"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:17:42 crc kubenswrapper[4727]: I1121 20:17:42.776594 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4944f157-e2ee-453c-bac8-aee27615a833-util" (OuterVolumeSpecName: "util") pod "4944f157-e2ee-453c-bac8-aee27615a833" (UID: "4944f157-e2ee-453c-bac8-aee27615a833"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:17:42 crc kubenswrapper[4727]: I1121 20:17:42.796830 4727 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d9e94391-8dbd-4d4b-b330-dd6fc13f91c4-util\") on node \"crc\" DevicePath \"\"" Nov 21 20:17:42 crc kubenswrapper[4727]: I1121 20:17:42.796869 4727 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4944f157-e2ee-453c-bac8-aee27615a833-util\") on node \"crc\" DevicePath \"\"" Nov 21 20:17:43 crc kubenswrapper[4727]: I1121 20:17:43.220982 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fqpfxz" event={"ID":"d9e94391-8dbd-4d4b-b330-dd6fc13f91c4","Type":"ContainerDied","Data":"fd5931e59ae0e1a11d190fecd0e67ce61a24fdd9408a9db1dc1978aa90d63f22"} Nov 21 20:17:43 crc kubenswrapper[4727]: I1121 20:17:43.221073 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd5931e59ae0e1a11d190fecd0e67ce61a24fdd9408a9db1dc1978aa90d63f22" Nov 21 20:17:43 crc kubenswrapper[4727]: I1121 20:17:43.221001 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fqpfxz" Nov 21 20:17:43 crc kubenswrapper[4727]: I1121 20:17:43.223453 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8h5fhr" event={"ID":"4944f157-e2ee-453c-bac8-aee27615a833","Type":"ContainerDied","Data":"dfb2b7138cb297f197bb7dad4abd8fb12f62f62f28fef943a1e2d09f9fa74302"} Nov 21 20:17:43 crc kubenswrapper[4727]: I1121 20:17:43.223496 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dfb2b7138cb297f197bb7dad4abd8fb12f62f62f28fef943a1e2d09f9fa74302" Nov 21 20:17:43 crc kubenswrapper[4727]: I1121 20:17:43.223505 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8h5fhr" Nov 21 20:17:51 crc kubenswrapper[4727]: I1121 20:17:51.320012 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-74d98576bd-k5q2r"] Nov 21 20:17:51 crc kubenswrapper[4727]: E1121 20:17:51.320498 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4944f157-e2ee-453c-bac8-aee27615a833" containerName="pull" Nov 21 20:17:51 crc kubenswrapper[4727]: I1121 20:17:51.320511 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="4944f157-e2ee-453c-bac8-aee27615a833" containerName="pull" Nov 21 20:17:51 crc kubenswrapper[4727]: E1121 20:17:51.320523 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9e94391-8dbd-4d4b-b330-dd6fc13f91c4" containerName="pull" Nov 21 20:17:51 crc kubenswrapper[4727]: I1121 20:17:51.320530 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9e94391-8dbd-4d4b-b330-dd6fc13f91c4" containerName="pull" Nov 21 20:17:51 crc kubenswrapper[4727]: E1121 20:17:51.320544 4727 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="4944f157-e2ee-453c-bac8-aee27615a833" containerName="extract" Nov 21 20:17:51 crc kubenswrapper[4727]: I1121 20:17:51.320551 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="4944f157-e2ee-453c-bac8-aee27615a833" containerName="extract" Nov 21 20:17:51 crc kubenswrapper[4727]: E1121 20:17:51.320564 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9e94391-8dbd-4d4b-b330-dd6fc13f91c4" containerName="extract" Nov 21 20:17:51 crc kubenswrapper[4727]: I1121 20:17:51.320572 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9e94391-8dbd-4d4b-b330-dd6fc13f91c4" containerName="extract" Nov 21 20:17:51 crc kubenswrapper[4727]: E1121 20:17:51.320589 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9e94391-8dbd-4d4b-b330-dd6fc13f91c4" containerName="util" Nov 21 20:17:51 crc kubenswrapper[4727]: I1121 20:17:51.320595 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9e94391-8dbd-4d4b-b330-dd6fc13f91c4" containerName="util" Nov 21 20:17:51 crc kubenswrapper[4727]: E1121 20:17:51.320605 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4944f157-e2ee-453c-bac8-aee27615a833" containerName="util" Nov 21 20:17:51 crc kubenswrapper[4727]: I1121 20:17:51.320611 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="4944f157-e2ee-453c-bac8-aee27615a833" containerName="util" Nov 21 20:17:51 crc kubenswrapper[4727]: I1121 20:17:51.320717 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9e94391-8dbd-4d4b-b330-dd6fc13f91c4" containerName="extract" Nov 21 20:17:51 crc kubenswrapper[4727]: I1121 20:17:51.320731 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="4944f157-e2ee-453c-bac8-aee27615a833" containerName="extract" Nov 21 20:17:51 crc kubenswrapper[4727]: I1121 20:17:51.321432 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-74d98576bd-k5q2r" Nov 21 20:17:51 crc kubenswrapper[4727]: I1121 20:17:51.323377 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config" Nov 21 20:17:51 crc kubenswrapper[4727]: I1121 20:17:51.323711 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert" Nov 21 20:17:51 crc kubenswrapper[4727]: I1121 20:17:51.324008 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics" Nov 21 20:17:51 crc kubenswrapper[4727]: I1121 20:17:51.324080 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-g6j5g" Nov 21 20:17:51 crc kubenswrapper[4727]: I1121 20:17:51.325742 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt" Nov 21 20:17:51 crc kubenswrapper[4727]: I1121 20:17:51.326929 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt" Nov 21 20:17:51 crc kubenswrapper[4727]: I1121 20:17:51.335077 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-74d98576bd-k5q2r"] Nov 21 20:17:51 crc kubenswrapper[4727]: I1121 20:17:51.420297 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/16b45601-b011-407e-bbc5-3f92b770b3a2-manager-config\") pod \"loki-operator-controller-manager-74d98576bd-k5q2r\" (UID: \"16b45601-b011-407e-bbc5-3f92b770b3a2\") " pod="openshift-operators-redhat/loki-operator-controller-manager-74d98576bd-k5q2r" Nov 21 20:17:51 crc kubenswrapper[4727]: I1121 
20:17:51.420355 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/16b45601-b011-407e-bbc5-3f92b770b3a2-apiservice-cert\") pod \"loki-operator-controller-manager-74d98576bd-k5q2r\" (UID: \"16b45601-b011-407e-bbc5-3f92b770b3a2\") " pod="openshift-operators-redhat/loki-operator-controller-manager-74d98576bd-k5q2r" Nov 21 20:17:51 crc kubenswrapper[4727]: I1121 20:17:51.420386 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/16b45601-b011-407e-bbc5-3f92b770b3a2-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-74d98576bd-k5q2r\" (UID: \"16b45601-b011-407e-bbc5-3f92b770b3a2\") " pod="openshift-operators-redhat/loki-operator-controller-manager-74d98576bd-k5q2r" Nov 21 20:17:51 crc kubenswrapper[4727]: I1121 20:17:51.420506 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmsdl\" (UniqueName: \"kubernetes.io/projected/16b45601-b011-407e-bbc5-3f92b770b3a2-kube-api-access-nmsdl\") pod \"loki-operator-controller-manager-74d98576bd-k5q2r\" (UID: \"16b45601-b011-407e-bbc5-3f92b770b3a2\") " pod="openshift-operators-redhat/loki-operator-controller-manager-74d98576bd-k5q2r" Nov 21 20:17:51 crc kubenswrapper[4727]: I1121 20:17:51.420684 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/16b45601-b011-407e-bbc5-3f92b770b3a2-webhook-cert\") pod \"loki-operator-controller-manager-74d98576bd-k5q2r\" (UID: \"16b45601-b011-407e-bbc5-3f92b770b3a2\") " pod="openshift-operators-redhat/loki-operator-controller-manager-74d98576bd-k5q2r" Nov 21 20:17:51 crc kubenswrapper[4727]: I1121 20:17:51.521710 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/16b45601-b011-407e-bbc5-3f92b770b3a2-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-74d98576bd-k5q2r\" (UID: \"16b45601-b011-407e-bbc5-3f92b770b3a2\") " pod="openshift-operators-redhat/loki-operator-controller-manager-74d98576bd-k5q2r" Nov 21 20:17:51 crc kubenswrapper[4727]: I1121 20:17:51.521772 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmsdl\" (UniqueName: \"kubernetes.io/projected/16b45601-b011-407e-bbc5-3f92b770b3a2-kube-api-access-nmsdl\") pod \"loki-operator-controller-manager-74d98576bd-k5q2r\" (UID: \"16b45601-b011-407e-bbc5-3f92b770b3a2\") " pod="openshift-operators-redhat/loki-operator-controller-manager-74d98576bd-k5q2r" Nov 21 20:17:51 crc kubenswrapper[4727]: I1121 20:17:51.521835 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/16b45601-b011-407e-bbc5-3f92b770b3a2-webhook-cert\") pod \"loki-operator-controller-manager-74d98576bd-k5q2r\" (UID: \"16b45601-b011-407e-bbc5-3f92b770b3a2\") " pod="openshift-operators-redhat/loki-operator-controller-manager-74d98576bd-k5q2r" Nov 21 20:17:51 crc kubenswrapper[4727]: I1121 20:17:51.521874 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/16b45601-b011-407e-bbc5-3f92b770b3a2-manager-config\") pod \"loki-operator-controller-manager-74d98576bd-k5q2r\" (UID: \"16b45601-b011-407e-bbc5-3f92b770b3a2\") " pod="openshift-operators-redhat/loki-operator-controller-manager-74d98576bd-k5q2r" Nov 21 20:17:51 crc kubenswrapper[4727]: I1121 20:17:51.521892 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/16b45601-b011-407e-bbc5-3f92b770b3a2-apiservice-cert\") pod \"loki-operator-controller-manager-74d98576bd-k5q2r\" (UID: 
\"16b45601-b011-407e-bbc5-3f92b770b3a2\") " pod="openshift-operators-redhat/loki-operator-controller-manager-74d98576bd-k5q2r" Nov 21 20:17:51 crc kubenswrapper[4727]: I1121 20:17:51.523160 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/16b45601-b011-407e-bbc5-3f92b770b3a2-manager-config\") pod \"loki-operator-controller-manager-74d98576bd-k5q2r\" (UID: \"16b45601-b011-407e-bbc5-3f92b770b3a2\") " pod="openshift-operators-redhat/loki-operator-controller-manager-74d98576bd-k5q2r" Nov 21 20:17:51 crc kubenswrapper[4727]: I1121 20:17:51.529694 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/16b45601-b011-407e-bbc5-3f92b770b3a2-webhook-cert\") pod \"loki-operator-controller-manager-74d98576bd-k5q2r\" (UID: \"16b45601-b011-407e-bbc5-3f92b770b3a2\") " pod="openshift-operators-redhat/loki-operator-controller-manager-74d98576bd-k5q2r" Nov 21 20:17:51 crc kubenswrapper[4727]: I1121 20:17:51.530672 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/16b45601-b011-407e-bbc5-3f92b770b3a2-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-74d98576bd-k5q2r\" (UID: \"16b45601-b011-407e-bbc5-3f92b770b3a2\") " pod="openshift-operators-redhat/loki-operator-controller-manager-74d98576bd-k5q2r" Nov 21 20:17:51 crc kubenswrapper[4727]: I1121 20:17:51.532454 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/16b45601-b011-407e-bbc5-3f92b770b3a2-apiservice-cert\") pod \"loki-operator-controller-manager-74d98576bd-k5q2r\" (UID: \"16b45601-b011-407e-bbc5-3f92b770b3a2\") " pod="openshift-operators-redhat/loki-operator-controller-manager-74d98576bd-k5q2r" Nov 21 20:17:51 crc kubenswrapper[4727]: I1121 20:17:51.544649 4727 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-nmsdl\" (UniqueName: \"kubernetes.io/projected/16b45601-b011-407e-bbc5-3f92b770b3a2-kube-api-access-nmsdl\") pod \"loki-operator-controller-manager-74d98576bd-k5q2r\" (UID: \"16b45601-b011-407e-bbc5-3f92b770b3a2\") " pod="openshift-operators-redhat/loki-operator-controller-manager-74d98576bd-k5q2r" Nov 21 20:17:51 crc kubenswrapper[4727]: I1121 20:17:51.639026 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-74d98576bd-k5q2r" Nov 21 20:17:51 crc kubenswrapper[4727]: I1121 20:17:51.871999 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-74d98576bd-k5q2r"] Nov 21 20:17:52 crc kubenswrapper[4727]: I1121 20:17:52.300329 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-74d98576bd-k5q2r" event={"ID":"16b45601-b011-407e-bbc5-3f92b770b3a2","Type":"ContainerStarted","Data":"959804b4c372f0c8ef78308e99d645ffa20fac90b6684dfb17a8cf63cfdb3681"} Nov 21 20:17:56 crc kubenswrapper[4727]: I1121 20:17:56.334246 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-74d98576bd-k5q2r" event={"ID":"16b45601-b011-407e-bbc5-3f92b770b3a2","Type":"ContainerStarted","Data":"2d57d94dc4ff49ab227b0b92555042cbc43eacb5c4ede4d34db4a9388632fa98"} Nov 21 20:17:56 crc kubenswrapper[4727]: I1121 20:17:56.758350 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/cluster-logging-operator-ff9846bd-nlpsm"] Nov 21 20:17:56 crc kubenswrapper[4727]: I1121 20:17:56.759196 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/cluster-logging-operator-ff9846bd-nlpsm" Nov 21 20:17:56 crc kubenswrapper[4727]: I1121 20:17:56.761144 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"openshift-service-ca.crt" Nov 21 20:17:56 crc kubenswrapper[4727]: I1121 20:17:56.761253 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"cluster-logging-operator-dockercfg-xrgj8" Nov 21 20:17:56 crc kubenswrapper[4727]: I1121 20:17:56.761408 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"kube-root-ca.crt" Nov 21 20:17:56 crc kubenswrapper[4727]: I1121 20:17:56.771754 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-ff9846bd-nlpsm"] Nov 21 20:17:56 crc kubenswrapper[4727]: I1121 20:17:56.797972 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7bh2\" (UniqueName: \"kubernetes.io/projected/f920c0a3-f99c-4def-bc43-b4734872bba2-kube-api-access-t7bh2\") pod \"cluster-logging-operator-ff9846bd-nlpsm\" (UID: \"f920c0a3-f99c-4def-bc43-b4734872bba2\") " pod="openshift-logging/cluster-logging-operator-ff9846bd-nlpsm" Nov 21 20:17:56 crc kubenswrapper[4727]: I1121 20:17:56.899572 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7bh2\" (UniqueName: \"kubernetes.io/projected/f920c0a3-f99c-4def-bc43-b4734872bba2-kube-api-access-t7bh2\") pod \"cluster-logging-operator-ff9846bd-nlpsm\" (UID: \"f920c0a3-f99c-4def-bc43-b4734872bba2\") " pod="openshift-logging/cluster-logging-operator-ff9846bd-nlpsm" Nov 21 20:17:56 crc kubenswrapper[4727]: I1121 20:17:56.918561 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7bh2\" (UniqueName: \"kubernetes.io/projected/f920c0a3-f99c-4def-bc43-b4734872bba2-kube-api-access-t7bh2\") pod 
\"cluster-logging-operator-ff9846bd-nlpsm\" (UID: \"f920c0a3-f99c-4def-bc43-b4734872bba2\") " pod="openshift-logging/cluster-logging-operator-ff9846bd-nlpsm" Nov 21 20:17:57 crc kubenswrapper[4727]: I1121 20:17:57.078680 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/cluster-logging-operator-ff9846bd-nlpsm" Nov 21 20:17:57 crc kubenswrapper[4727]: I1121 20:17:57.516082 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-ff9846bd-nlpsm"] Nov 21 20:17:58 crc kubenswrapper[4727]: I1121 20:17:58.352387 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-ff9846bd-nlpsm" event={"ID":"f920c0a3-f99c-4def-bc43-b4734872bba2","Type":"ContainerStarted","Data":"49db0c9f8c4722f88467ae626756ba97bff5ae582a4c605077f45ced2ee0d781"} Nov 21 20:18:05 crc kubenswrapper[4727]: I1121 20:18:05.411776 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-ff9846bd-nlpsm" event={"ID":"f920c0a3-f99c-4def-bc43-b4734872bba2","Type":"ContainerStarted","Data":"8beb9600c963a6175d4d3b99c0ebe7745701ea293a70d1d2aad51378f2595232"} Nov 21 20:18:05 crc kubenswrapper[4727]: I1121 20:18:05.415527 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-74d98576bd-k5q2r" event={"ID":"16b45601-b011-407e-bbc5-3f92b770b3a2","Type":"ContainerStarted","Data":"bb372c7a64d9da736448b6a803443507f193ba75016fb2c384efbbb56756050c"} Nov 21 20:18:05 crc kubenswrapper[4727]: I1121 20:18:05.416055 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-74d98576bd-k5q2r" Nov 21 20:18:05 crc kubenswrapper[4727]: I1121 20:18:05.418194 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-74d98576bd-k5q2r" 
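The `pod_startup_latency_tracker` records in this log report two durations per pod. Cross-checking the logged values suggests the bookkeeping is: `podStartE2EDuration` = `watchObservedRunningTime` − `podCreationTimestamp`, and `podStartSLOduration` = that E2E time minus the image-pull window (`lastFinishedPulling` − `firstStartedPulling`). The sketch below is an illustration of that arithmetic, not the kubelet's actual code; the timestamps are copied from the `cluster-logging-operator-ff9846bd-nlpsm` record in this log, and the `parse` helper is a hypothetical convenience for Go's `time.String()` format (nanoseconds truncated to microseconds for Python's `datetime`).

```python
from datetime import datetime

# Timestamps copied from the pod_startup_latency_tracker record for
# openshift-logging/cluster-logging-operator-ff9846bd-nlpsm in this log.
FMT = "%Y-%m-%d %H:%M:%S.%f %z"

def parse(ts: str) -> datetime:
    # "2025-11-21 20:17:57.540695187 +0000 UTC": drop the trailing "UTC"
    # suffix and truncate nanoseconds to the 6 digits datetime supports.
    date, clock, offset, _zone = ts.split()
    sec, frac = clock.split(".")
    return datetime.strptime(f"{date} {sec}.{frac[:6]} {offset}", FMT)

created   = parse("2025-11-21 20:17:56.000000000 +0000 UTC")  # podCreationTimestamp
pull_from = parse("2025-11-21 20:17:57.540695187 +0000 UTC")  # firstStartedPulling
pull_to   = parse("2025-11-21 20:18:04.757757245 +0000 UTC")  # lastFinishedPulling
observed  = parse("2025-11-21 20:18:05.434896363 +0000 UTC")  # watchObservedRunningTime

# End-to-end startup time, matching the logged podStartE2EDuration=9.434896363s.
e2e = (observed - created).total_seconds()

# SLO duration excludes image-pull time, matching podStartSLOduration=2.217834315s.
slo = e2e - (pull_to - pull_from).total_seconds()
```

The same relation holds for the `minio-dev/minio` record later in the log (7.518344837 s E2E, 3.435221448 s of pulling, 4.083123389 s SLO), which is why excluding pull time is a reasonable reading of what the tracker subtracts.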
Nov 21 20:18:05 crc kubenswrapper[4727]: I1121 20:18:05.434918 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/cluster-logging-operator-ff9846bd-nlpsm" podStartSLOduration=2.217834315 podStartE2EDuration="9.434896363s" podCreationTimestamp="2025-11-21 20:17:56 +0000 UTC" firstStartedPulling="2025-11-21 20:17:57.540695187 +0000 UTC m=+682.726880241" lastFinishedPulling="2025-11-21 20:18:04.757757245 +0000 UTC m=+689.943942289" observedRunningTime="2025-11-21 20:18:05.42814704 +0000 UTC m=+690.614332094" watchObservedRunningTime="2025-11-21 20:18:05.434896363 +0000 UTC m=+690.621081407" Nov 21 20:18:05 crc kubenswrapper[4727]: I1121 20:18:05.471672 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-74d98576bd-k5q2r" podStartSLOduration=1.602062884 podStartE2EDuration="14.471651497s" podCreationTimestamp="2025-11-21 20:17:51 +0000 UTC" firstStartedPulling="2025-11-21 20:17:51.894403063 +0000 UTC m=+677.080588107" lastFinishedPulling="2025-11-21 20:18:04.763991676 +0000 UTC m=+689.950176720" observedRunningTime="2025-11-21 20:18:05.467324693 +0000 UTC m=+690.653509737" watchObservedRunningTime="2025-11-21 20:18:05.471651497 +0000 UTC m=+690.657836541" Nov 21 20:18:12 crc kubenswrapper[4727]: I1121 20:18:12.055351 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"] Nov 21 20:18:12 crc kubenswrapper[4727]: I1121 20:18:12.056778 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="minio-dev/minio" Nov 21 20:18:12 crc kubenswrapper[4727]: I1121 20:18:12.059275 4727 reflector.go:368] Caches populated for *v1.Secret from object-"minio-dev"/"default-dockercfg-glzmr" Nov 21 20:18:12 crc kubenswrapper[4727]: I1121 20:18:12.059622 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt" Nov 21 20:18:12 crc kubenswrapper[4727]: I1121 20:18:12.059797 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt" Nov 21 20:18:12 crc kubenswrapper[4727]: I1121 20:18:12.071236 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Nov 21 20:18:12 crc kubenswrapper[4727]: I1121 20:18:12.150717 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhmwk\" (UniqueName: \"kubernetes.io/projected/ce587f8f-4ea1-4f9b-9ba0-ed5cd9907da1-kube-api-access-qhmwk\") pod \"minio\" (UID: \"ce587f8f-4ea1-4f9b-9ba0-ed5cd9907da1\") " pod="minio-dev/minio" Nov 21 20:18:12 crc kubenswrapper[4727]: I1121 20:18:12.151247 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f98570c7-ad6f-4f1d-a3ce-4fdac9c6cbea\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f98570c7-ad6f-4f1d-a3ce-4fdac9c6cbea\") pod \"minio\" (UID: \"ce587f8f-4ea1-4f9b-9ba0-ed5cd9907da1\") " pod="minio-dev/minio" Nov 21 20:18:12 crc kubenswrapper[4727]: I1121 20:18:12.253082 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f98570c7-ad6f-4f1d-a3ce-4fdac9c6cbea\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f98570c7-ad6f-4f1d-a3ce-4fdac9c6cbea\") pod \"minio\" (UID: \"ce587f8f-4ea1-4f9b-9ba0-ed5cd9907da1\") " pod="minio-dev/minio" Nov 21 20:18:12 crc kubenswrapper[4727]: I1121 20:18:12.253230 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-qhmwk\" (UniqueName: \"kubernetes.io/projected/ce587f8f-4ea1-4f9b-9ba0-ed5cd9907da1-kube-api-access-qhmwk\") pod \"minio\" (UID: \"ce587f8f-4ea1-4f9b-9ba0-ed5cd9907da1\") " pod="minio-dev/minio" Nov 21 20:18:12 crc kubenswrapper[4727]: I1121 20:18:12.256805 4727 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 21 20:18:12 crc kubenswrapper[4727]: I1121 20:18:12.256862 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f98570c7-ad6f-4f1d-a3ce-4fdac9c6cbea\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f98570c7-ad6f-4f1d-a3ce-4fdac9c6cbea\") pod \"minio\" (UID: \"ce587f8f-4ea1-4f9b-9ba0-ed5cd9907da1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/43fffc4176c8b17f09bb294bb5e2816ac92d935d4847747a610cec345bbd08c6/globalmount\"" pod="minio-dev/minio" Nov 21 20:18:12 crc kubenswrapper[4727]: I1121 20:18:12.275622 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhmwk\" (UniqueName: \"kubernetes.io/projected/ce587f8f-4ea1-4f9b-9ba0-ed5cd9907da1-kube-api-access-qhmwk\") pod \"minio\" (UID: \"ce587f8f-4ea1-4f9b-9ba0-ed5cd9907da1\") " pod="minio-dev/minio" Nov 21 20:18:12 crc kubenswrapper[4727]: I1121 20:18:12.285187 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f98570c7-ad6f-4f1d-a3ce-4fdac9c6cbea\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f98570c7-ad6f-4f1d-a3ce-4fdac9c6cbea\") pod \"minio\" (UID: \"ce587f8f-4ea1-4f9b-9ba0-ed5cd9907da1\") " pod="minio-dev/minio" Nov 21 20:18:12 crc kubenswrapper[4727]: I1121 20:18:12.382115 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="minio-dev/minio" Nov 21 20:18:12 crc kubenswrapper[4727]: I1121 20:18:12.602917 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Nov 21 20:18:12 crc kubenswrapper[4727]: W1121 20:18:12.609997 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce587f8f_4ea1_4f9b_9ba0_ed5cd9907da1.slice/crio-eb115a973b4166630a5a68fb8cacd92185537db6cc33c329c2f050d2a96748a0 WatchSource:0}: Error finding container eb115a973b4166630a5a68fb8cacd92185537db6cc33c329c2f050d2a96748a0: Status 404 returned error can't find the container with id eb115a973b4166630a5a68fb8cacd92185537db6cc33c329c2f050d2a96748a0 Nov 21 20:18:13 crc kubenswrapper[4727]: I1121 20:18:13.471027 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"ce587f8f-4ea1-4f9b-9ba0-ed5cd9907da1","Type":"ContainerStarted","Data":"eb115a973b4166630a5a68fb8cacd92185537db6cc33c329c2f050d2a96748a0"} Nov 21 20:18:16 crc kubenswrapper[4727]: I1121 20:18:16.490845 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"ce587f8f-4ea1-4f9b-9ba0-ed5cd9907da1","Type":"ContainerStarted","Data":"b7bbd7528a39c92c392e104afb162aa8f55f83309fe764c942cdfac6f824b2ab"} Nov 21 20:18:16 crc kubenswrapper[4727]: I1121 20:18:16.518390 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=4.083123389 podStartE2EDuration="7.518344837s" podCreationTimestamp="2025-11-21 20:18:09 +0000 UTC" firstStartedPulling="2025-11-21 20:18:12.612457781 +0000 UTC m=+697.798642825" lastFinishedPulling="2025-11-21 20:18:16.047679229 +0000 UTC m=+701.233864273" observedRunningTime="2025-11-21 20:18:16.508400988 +0000 UTC m=+701.694586042" watchObservedRunningTime="2025-11-21 20:18:16.518344837 +0000 UTC m=+701.704529921" Nov 21 20:18:20 crc kubenswrapper[4727]: I1121 20:18:20.505205 4727 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-distributor-76cc67bf56-slmrz"] Nov 21 20:18:20 crc kubenswrapper[4727]: I1121 20:18:20.506444 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-distributor-76cc67bf56-slmrz" Nov 21 20:18:20 crc kubenswrapper[4727]: I1121 20:18:20.508281 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-http" Nov 21 20:18:20 crc kubenswrapper[4727]: I1121 20:18:20.509528 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-config" Nov 21 20:18:20 crc kubenswrapper[4727]: I1121 20:18:20.510035 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-dockercfg-sdw9q" Nov 21 20:18:20 crc kubenswrapper[4727]: I1121 20:18:20.510144 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-ca-bundle" Nov 21 20:18:20 crc kubenswrapper[4727]: I1121 20:18:20.512426 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-grpc" Nov 21 20:18:20 crc kubenswrapper[4727]: I1121 20:18:20.539678 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-76cc67bf56-slmrz"] Nov 21 20:18:20 crc kubenswrapper[4727]: I1121 20:18:20.614177 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ca7d843-5fcf-4fcb-b111-9c657f58b54f-logging-loki-ca-bundle\") pod \"logging-loki-distributor-76cc67bf56-slmrz\" (UID: \"0ca7d843-5fcf-4fcb-b111-9c657f58b54f\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-slmrz" Nov 21 20:18:20 crc kubenswrapper[4727]: I1121 20:18:20.614290 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-tpjts\" (UniqueName: \"kubernetes.io/projected/0ca7d843-5fcf-4fcb-b111-9c657f58b54f-kube-api-access-tpjts\") pod \"logging-loki-distributor-76cc67bf56-slmrz\" (UID: \"0ca7d843-5fcf-4fcb-b111-9c657f58b54f\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-slmrz" Nov 21 20:18:20 crc kubenswrapper[4727]: I1121 20:18:20.614443 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/0ca7d843-5fcf-4fcb-b111-9c657f58b54f-logging-loki-distributor-http\") pod \"logging-loki-distributor-76cc67bf56-slmrz\" (UID: \"0ca7d843-5fcf-4fcb-b111-9c657f58b54f\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-slmrz" Nov 21 20:18:20 crc kubenswrapper[4727]: I1121 20:18:20.614527 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/0ca7d843-5fcf-4fcb-b111-9c657f58b54f-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-76cc67bf56-slmrz\" (UID: \"0ca7d843-5fcf-4fcb-b111-9c657f58b54f\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-slmrz" Nov 21 20:18:20 crc kubenswrapper[4727]: I1121 20:18:20.614598 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ca7d843-5fcf-4fcb-b111-9c657f58b54f-config\") pod \"logging-loki-distributor-76cc67bf56-slmrz\" (UID: \"0ca7d843-5fcf-4fcb-b111-9c657f58b54f\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-slmrz" Nov 21 20:18:20 crc kubenswrapper[4727]: I1121 20:18:20.686926 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-querier-5895d59bb8-n85sz"] Nov 21 20:18:20 crc kubenswrapper[4727]: I1121 20:18:20.687891 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-querier-5895d59bb8-n85sz" Nov 21 20:18:20 crc kubenswrapper[4727]: I1121 20:18:20.691488 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-s3" Nov 21 20:18:20 crc kubenswrapper[4727]: I1121 20:18:20.691713 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-grpc" Nov 21 20:18:20 crc kubenswrapper[4727]: I1121 20:18:20.693454 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-http" Nov 21 20:18:20 crc kubenswrapper[4727]: I1121 20:18:20.700355 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-5895d59bb8-n85sz"] Nov 21 20:18:20 crc kubenswrapper[4727]: I1121 20:18:20.715773 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ca7d843-5fcf-4fcb-b111-9c657f58b54f-logging-loki-ca-bundle\") pod \"logging-loki-distributor-76cc67bf56-slmrz\" (UID: \"0ca7d843-5fcf-4fcb-b111-9c657f58b54f\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-slmrz" Nov 21 20:18:20 crc kubenswrapper[4727]: I1121 20:18:20.715831 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpjts\" (UniqueName: \"kubernetes.io/projected/0ca7d843-5fcf-4fcb-b111-9c657f58b54f-kube-api-access-tpjts\") pod \"logging-loki-distributor-76cc67bf56-slmrz\" (UID: \"0ca7d843-5fcf-4fcb-b111-9c657f58b54f\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-slmrz" Nov 21 20:18:20 crc kubenswrapper[4727]: I1121 20:18:20.716374 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/0ca7d843-5fcf-4fcb-b111-9c657f58b54f-logging-loki-distributor-http\") pod 
\"logging-loki-distributor-76cc67bf56-slmrz\" (UID: \"0ca7d843-5fcf-4fcb-b111-9c657f58b54f\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-slmrz" Nov 21 20:18:20 crc kubenswrapper[4727]: I1121 20:18:20.716785 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ca7d843-5fcf-4fcb-b111-9c657f58b54f-logging-loki-ca-bundle\") pod \"logging-loki-distributor-76cc67bf56-slmrz\" (UID: \"0ca7d843-5fcf-4fcb-b111-9c657f58b54f\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-slmrz" Nov 21 20:18:20 crc kubenswrapper[4727]: I1121 20:18:20.717224 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/0ca7d843-5fcf-4fcb-b111-9c657f58b54f-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-76cc67bf56-slmrz\" (UID: \"0ca7d843-5fcf-4fcb-b111-9c657f58b54f\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-slmrz" Nov 21 20:18:20 crc kubenswrapper[4727]: I1121 20:18:20.717332 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ca7d843-5fcf-4fcb-b111-9c657f58b54f-config\") pod \"logging-loki-distributor-76cc67bf56-slmrz\" (UID: \"0ca7d843-5fcf-4fcb-b111-9c657f58b54f\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-slmrz" Nov 21 20:18:20 crc kubenswrapper[4727]: I1121 20:18:20.718222 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ca7d843-5fcf-4fcb-b111-9c657f58b54f-config\") pod \"logging-loki-distributor-76cc67bf56-slmrz\" (UID: \"0ca7d843-5fcf-4fcb-b111-9c657f58b54f\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-slmrz" Nov 21 20:18:20 crc kubenswrapper[4727]: I1121 20:18:20.744829 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/0ca7d843-5fcf-4fcb-b111-9c657f58b54f-logging-loki-distributor-http\") pod \"logging-loki-distributor-76cc67bf56-slmrz\" (UID: \"0ca7d843-5fcf-4fcb-b111-9c657f58b54f\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-slmrz" Nov 21 20:18:20 crc kubenswrapper[4727]: I1121 20:18:20.749109 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/0ca7d843-5fcf-4fcb-b111-9c657f58b54f-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-76cc67bf56-slmrz\" (UID: \"0ca7d843-5fcf-4fcb-b111-9c657f58b54f\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-slmrz" Nov 21 20:18:20 crc kubenswrapper[4727]: I1121 20:18:20.750393 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpjts\" (UniqueName: \"kubernetes.io/projected/0ca7d843-5fcf-4fcb-b111-9c657f58b54f-kube-api-access-tpjts\") pod \"logging-loki-distributor-76cc67bf56-slmrz\" (UID: \"0ca7d843-5fcf-4fcb-b111-9c657f58b54f\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-slmrz" Nov 21 20:18:20 crc kubenswrapper[4727]: I1121 20:18:20.787436 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-query-frontend-84558f7c9f-spcl5"] Nov 21 20:18:20 crc kubenswrapper[4727]: I1121 20:18:20.788295 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-spcl5" Nov 21 20:18:20 crc kubenswrapper[4727]: I1121 20:18:20.792311 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-http" Nov 21 20:18:20 crc kubenswrapper[4727]: I1121 20:18:20.792546 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-grpc" Nov 21 20:18:20 crc kubenswrapper[4727]: I1121 20:18:20.808618 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-84558f7c9f-spcl5"] Nov 21 20:18:20 crc kubenswrapper[4727]: I1121 20:18:20.820504 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2bfd5755-ad8a-47da-86b7-020881abeeec-logging-loki-ca-bundle\") pod \"logging-loki-querier-5895d59bb8-n85sz\" (UID: \"2bfd5755-ad8a-47da-86b7-020881abeeec\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-n85sz" Nov 21 20:18:20 crc kubenswrapper[4727]: I1121 20:18:20.820588 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/2bfd5755-ad8a-47da-86b7-020881abeeec-logging-loki-querier-grpc\") pod \"logging-loki-querier-5895d59bb8-n85sz\" (UID: \"2bfd5755-ad8a-47da-86b7-020881abeeec\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-n85sz" Nov 21 20:18:20 crc kubenswrapper[4727]: I1121 20:18:20.820632 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9h5z9\" (UniqueName: \"kubernetes.io/projected/2bfd5755-ad8a-47da-86b7-020881abeeec-kube-api-access-9h5z9\") pod \"logging-loki-querier-5895d59bb8-n85sz\" (UID: \"2bfd5755-ad8a-47da-86b7-020881abeeec\") " 
pod="openshift-logging/logging-loki-querier-5895d59bb8-n85sz" Nov 21 20:18:20 crc kubenswrapper[4727]: I1121 20:18:20.820668 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2bfd5755-ad8a-47da-86b7-020881abeeec-config\") pod \"logging-loki-querier-5895d59bb8-n85sz\" (UID: \"2bfd5755-ad8a-47da-86b7-020881abeeec\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-n85sz" Nov 21 20:18:20 crc kubenswrapper[4727]: I1121 20:18:20.820687 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/2bfd5755-ad8a-47da-86b7-020881abeeec-logging-loki-s3\") pod \"logging-loki-querier-5895d59bb8-n85sz\" (UID: \"2bfd5755-ad8a-47da-86b7-020881abeeec\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-n85sz" Nov 21 20:18:20 crc kubenswrapper[4727]: I1121 20:18:20.820712 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/2bfd5755-ad8a-47da-86b7-020881abeeec-logging-loki-querier-http\") pod \"logging-loki-querier-5895d59bb8-n85sz\" (UID: \"2bfd5755-ad8a-47da-86b7-020881abeeec\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-n85sz" Nov 21 20:18:20 crc kubenswrapper[4727]: I1121 20:18:20.850702 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-distributor-76cc67bf56-slmrz" Nov 21 20:18:20 crc kubenswrapper[4727]: I1121 20:18:20.911099 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-59cfccf4c6-mzg5j"] Nov 21 20:18:20 crc kubenswrapper[4727]: I1121 20:18:20.916310 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-59cfccf4c6-mzg5j" Nov 21 20:18:20 crc kubenswrapper[4727]: I1121 20:18:20.930158 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2bfd5755-ad8a-47da-86b7-020881abeeec-config\") pod \"logging-loki-querier-5895d59bb8-n85sz\" (UID: \"2bfd5755-ad8a-47da-86b7-020881abeeec\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-n85sz" Nov 21 20:18:20 crc kubenswrapper[4727]: I1121 20:18:20.931196 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-http" Nov 21 20:18:20 crc kubenswrapper[4727]: I1121 20:18:20.931397 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-client-http" Nov 21 20:18:20 crc kubenswrapper[4727]: I1121 20:18:20.931577 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway" Nov 21 20:18:20 crc kubenswrapper[4727]: I1121 20:18:20.931670 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway-ca-bundle" Nov 21 20:18:20 crc kubenswrapper[4727]: I1121 20:18:20.931768 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway" Nov 21 20:18:20 crc kubenswrapper[4727]: I1121 20:18:20.933128 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/2bfd5755-ad8a-47da-86b7-020881abeeec-logging-loki-s3\") pod \"logging-loki-querier-5895d59bb8-n85sz\" (UID: \"2bfd5755-ad8a-47da-86b7-020881abeeec\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-n85sz" Nov 21 20:18:20 crc kubenswrapper[4727]: I1121 20:18:20.933237 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-http\" (UniqueName: 
\"kubernetes.io/secret/2bfd5755-ad8a-47da-86b7-020881abeeec-logging-loki-querier-http\") pod \"logging-loki-querier-5895d59bb8-n85sz\" (UID: \"2bfd5755-ad8a-47da-86b7-020881abeeec\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-n85sz" Nov 21 20:18:20 crc kubenswrapper[4727]: I1121 20:18:20.933296 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2bfd5755-ad8a-47da-86b7-020881abeeec-config\") pod \"logging-loki-querier-5895d59bb8-n85sz\" (UID: \"2bfd5755-ad8a-47da-86b7-020881abeeec\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-n85sz" Nov 21 20:18:20 crc kubenswrapper[4727]: I1121 20:18:20.933396 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2bfd5755-ad8a-47da-86b7-020881abeeec-logging-loki-ca-bundle\") pod \"logging-loki-querier-5895d59bb8-n85sz\" (UID: \"2bfd5755-ad8a-47da-86b7-020881abeeec\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-n85sz" Nov 21 20:18:20 crc kubenswrapper[4727]: I1121 20:18:20.933468 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/a36d2c90-8f52-48b2-a6ea-b774e0e7d0a7-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-84558f7c9f-spcl5\" (UID: \"a36d2c90-8f52-48b2-a6ea-b774e0e7d0a7\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-spcl5" Nov 21 20:18:20 crc kubenswrapper[4727]: I1121 20:18:20.933518 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/a36d2c90-8f52-48b2-a6ea-b774e0e7d0a7-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-84558f7c9f-spcl5\" (UID: \"a36d2c90-8f52-48b2-a6ea-b774e0e7d0a7\") " 
pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-spcl5" Nov 21 20:18:20 crc kubenswrapper[4727]: I1121 20:18:20.933548 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/2bfd5755-ad8a-47da-86b7-020881abeeec-logging-loki-querier-grpc\") pod \"logging-loki-querier-5895d59bb8-n85sz\" (UID: \"2bfd5755-ad8a-47da-86b7-020881abeeec\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-n85sz" Nov 21 20:18:20 crc kubenswrapper[4727]: I1121 20:18:20.933593 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a36d2c90-8f52-48b2-a6ea-b774e0e7d0a7-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-84558f7c9f-spcl5\" (UID: \"a36d2c90-8f52-48b2-a6ea-b774e0e7d0a7\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-spcl5" Nov 21 20:18:20 crc kubenswrapper[4727]: I1121 20:18:20.933626 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkctk\" (UniqueName: \"kubernetes.io/projected/a36d2c90-8f52-48b2-a6ea-b774e0e7d0a7-kube-api-access-kkctk\") pod \"logging-loki-query-frontend-84558f7c9f-spcl5\" (UID: \"a36d2c90-8f52-48b2-a6ea-b774e0e7d0a7\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-spcl5" Nov 21 20:18:20 crc kubenswrapper[4727]: I1121 20:18:20.933657 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9h5z9\" (UniqueName: \"kubernetes.io/projected/2bfd5755-ad8a-47da-86b7-020881abeeec-kube-api-access-9h5z9\") pod \"logging-loki-querier-5895d59bb8-n85sz\" (UID: \"2bfd5755-ad8a-47da-86b7-020881abeeec\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-n85sz" Nov 21 20:18:20 crc kubenswrapper[4727]: I1121 20:18:20.933684 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a36d2c90-8f52-48b2-a6ea-b774e0e7d0a7-config\") pod \"logging-loki-query-frontend-84558f7c9f-spcl5\" (UID: \"a36d2c90-8f52-48b2-a6ea-b774e0e7d0a7\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-spcl5" Nov 21 20:18:20 crc kubenswrapper[4727]: I1121 20:18:20.943908 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/2bfd5755-ad8a-47da-86b7-020881abeeec-logging-loki-s3\") pod \"logging-loki-querier-5895d59bb8-n85sz\" (UID: \"2bfd5755-ad8a-47da-86b7-020881abeeec\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-n85sz" Nov 21 20:18:20 crc kubenswrapper[4727]: I1121 20:18:20.944199 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2bfd5755-ad8a-47da-86b7-020881abeeec-logging-loki-ca-bundle\") pod \"logging-loki-querier-5895d59bb8-n85sz\" (UID: \"2bfd5755-ad8a-47da-86b7-020881abeeec\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-n85sz" Nov 21 20:18:20 crc kubenswrapper[4727]: I1121 20:18:20.954040 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-59cfccf4c6-j2284"] Nov 21 20:18:20 crc kubenswrapper[4727]: I1121 20:18:20.955567 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-59cfccf4c6-j2284" Nov 21 20:18:20 crc kubenswrapper[4727]: I1121 20:18:20.958433 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-dockercfg-c94st" Nov 21 20:18:20 crc kubenswrapper[4727]: I1121 20:18:20.958908 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/2bfd5755-ad8a-47da-86b7-020881abeeec-logging-loki-querier-grpc\") pod \"logging-loki-querier-5895d59bb8-n85sz\" (UID: \"2bfd5755-ad8a-47da-86b7-020881abeeec\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-n85sz" Nov 21 20:18:20 crc kubenswrapper[4727]: I1121 20:18:20.960564 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/2bfd5755-ad8a-47da-86b7-020881abeeec-logging-loki-querier-http\") pod \"logging-loki-querier-5895d59bb8-n85sz\" (UID: \"2bfd5755-ad8a-47da-86b7-020881abeeec\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-n85sz" Nov 21 20:18:20 crc kubenswrapper[4727]: I1121 20:18:20.967747 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9h5z9\" (UniqueName: \"kubernetes.io/projected/2bfd5755-ad8a-47da-86b7-020881abeeec-kube-api-access-9h5z9\") pod \"logging-loki-querier-5895d59bb8-n85sz\" (UID: \"2bfd5755-ad8a-47da-86b7-020881abeeec\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-n85sz" Nov 21 20:18:20 crc kubenswrapper[4727]: I1121 20:18:20.969139 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-59cfccf4c6-mzg5j"] Nov 21 20:18:20 crc kubenswrapper[4727]: I1121 20:18:20.997001 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-59cfccf4c6-j2284"] Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.008536 4727 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-5895d59bb8-n85sz" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.038229 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/0b5a9db0-f734-4201-87a8-60f0bcbb14ec-lokistack-gateway\") pod \"logging-loki-gateway-59cfccf4c6-mzg5j\" (UID: \"0b5a9db0-f734-4201-87a8-60f0bcbb14ec\") " pod="openshift-logging/logging-loki-gateway-59cfccf4c6-mzg5j" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.038298 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/0b5a9db0-f734-4201-87a8-60f0bcbb14ec-rbac\") pod \"logging-loki-gateway-59cfccf4c6-mzg5j\" (UID: \"0b5a9db0-f734-4201-87a8-60f0bcbb14ec\") " pod="openshift-logging/logging-loki-gateway-59cfccf4c6-mzg5j" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.038326 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ml7g\" (UniqueName: \"kubernetes.io/projected/0b5a9db0-f734-4201-87a8-60f0bcbb14ec-kube-api-access-8ml7g\") pod \"logging-loki-gateway-59cfccf4c6-mzg5j\" (UID: \"0b5a9db0-f734-4201-87a8-60f0bcbb14ec\") " pod="openshift-logging/logging-loki-gateway-59cfccf4c6-mzg5j" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.038355 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b5a9db0-f734-4201-87a8-60f0bcbb14ec-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-59cfccf4c6-mzg5j\" (UID: \"0b5a9db0-f734-4201-87a8-60f0bcbb14ec\") " pod="openshift-logging/logging-loki-gateway-59cfccf4c6-mzg5j" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.038529 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/a36d2c90-8f52-48b2-a6ea-b774e0e7d0a7-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-84558f7c9f-spcl5\" (UID: \"a36d2c90-8f52-48b2-a6ea-b774e0e7d0a7\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-spcl5" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.038570 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/a36d2c90-8f52-48b2-a6ea-b774e0e7d0a7-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-84558f7c9f-spcl5\" (UID: \"a36d2c90-8f52-48b2-a6ea-b774e0e7d0a7\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-spcl5" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.038588 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b5a9db0-f734-4201-87a8-60f0bcbb14ec-logging-loki-ca-bundle\") pod \"logging-loki-gateway-59cfccf4c6-mzg5j\" (UID: \"0b5a9db0-f734-4201-87a8-60f0bcbb14ec\") " pod="openshift-logging/logging-loki-gateway-59cfccf4c6-mzg5j" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.038617 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/0b5a9db0-f734-4201-87a8-60f0bcbb14ec-tls-secret\") pod \"logging-loki-gateway-59cfccf4c6-mzg5j\" (UID: \"0b5a9db0-f734-4201-87a8-60f0bcbb14ec\") " pod="openshift-logging/logging-loki-gateway-59cfccf4c6-mzg5j" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.038640 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a36d2c90-8f52-48b2-a6ea-b774e0e7d0a7-logging-loki-ca-bundle\") pod 
\"logging-loki-query-frontend-84558f7c9f-spcl5\" (UID: \"a36d2c90-8f52-48b2-a6ea-b774e0e7d0a7\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-spcl5" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.038664 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkctk\" (UniqueName: \"kubernetes.io/projected/a36d2c90-8f52-48b2-a6ea-b774e0e7d0a7-kube-api-access-kkctk\") pod \"logging-loki-query-frontend-84558f7c9f-spcl5\" (UID: \"a36d2c90-8f52-48b2-a6ea-b774e0e7d0a7\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-spcl5" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.038688 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/0b5a9db0-f734-4201-87a8-60f0bcbb14ec-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-59cfccf4c6-mzg5j\" (UID: \"0b5a9db0-f734-4201-87a8-60f0bcbb14ec\") " pod="openshift-logging/logging-loki-gateway-59cfccf4c6-mzg5j" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.038713 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a36d2c90-8f52-48b2-a6ea-b774e0e7d0a7-config\") pod \"logging-loki-query-frontend-84558f7c9f-spcl5\" (UID: \"a36d2c90-8f52-48b2-a6ea-b774e0e7d0a7\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-spcl5" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.038742 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/0b5a9db0-f734-4201-87a8-60f0bcbb14ec-tenants\") pod \"logging-loki-gateway-59cfccf4c6-mzg5j\" (UID: \"0b5a9db0-f734-4201-87a8-60f0bcbb14ec\") " pod="openshift-logging/logging-loki-gateway-59cfccf4c6-mzg5j" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.040882 4727 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a36d2c90-8f52-48b2-a6ea-b774e0e7d0a7-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-84558f7c9f-spcl5\" (UID: \"a36d2c90-8f52-48b2-a6ea-b774e0e7d0a7\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-spcl5" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.044632 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/a36d2c90-8f52-48b2-a6ea-b774e0e7d0a7-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-84558f7c9f-spcl5\" (UID: \"a36d2c90-8f52-48b2-a6ea-b774e0e7d0a7\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-spcl5" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.056249 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a36d2c90-8f52-48b2-a6ea-b774e0e7d0a7-config\") pod \"logging-loki-query-frontend-84558f7c9f-spcl5\" (UID: \"a36d2c90-8f52-48b2-a6ea-b774e0e7d0a7\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-spcl5" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.064659 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkctk\" (UniqueName: \"kubernetes.io/projected/a36d2c90-8f52-48b2-a6ea-b774e0e7d0a7-kube-api-access-kkctk\") pod \"logging-loki-query-frontend-84558f7c9f-spcl5\" (UID: \"a36d2c90-8f52-48b2-a6ea-b774e0e7d0a7\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-spcl5" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.070645 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/a36d2c90-8f52-48b2-a6ea-b774e0e7d0a7-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-84558f7c9f-spcl5\" 
(UID: \"a36d2c90-8f52-48b2-a6ea-b774e0e7d0a7\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-spcl5" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.115429 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-spcl5" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.150541 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/0b5a9db0-f734-4201-87a8-60f0bcbb14ec-tls-secret\") pod \"logging-loki-gateway-59cfccf4c6-mzg5j\" (UID: \"0b5a9db0-f734-4201-87a8-60f0bcbb14ec\") " pod="openshift-logging/logging-loki-gateway-59cfccf4c6-mzg5j" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.151052 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/441ae22e-7af1-4013-90ef-880b7ba0ce0e-tls-secret\") pod \"logging-loki-gateway-59cfccf4c6-j2284\" (UID: \"441ae22e-7af1-4013-90ef-880b7ba0ce0e\") " pod="openshift-logging/logging-loki-gateway-59cfccf4c6-j2284" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.151120 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/0b5a9db0-f734-4201-87a8-60f0bcbb14ec-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-59cfccf4c6-mzg5j\" (UID: \"0b5a9db0-f734-4201-87a8-60f0bcbb14ec\") " pod="openshift-logging/logging-loki-gateway-59cfccf4c6-mzg5j" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.151145 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/441ae22e-7af1-4013-90ef-880b7ba0ce0e-lokistack-gateway\") pod \"logging-loki-gateway-59cfccf4c6-j2284\" (UID: \"441ae22e-7af1-4013-90ef-880b7ba0ce0e\") " 
pod="openshift-logging/logging-loki-gateway-59cfccf4c6-j2284" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.151165 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/441ae22e-7af1-4013-90ef-880b7ba0ce0e-tenants\") pod \"logging-loki-gateway-59cfccf4c6-j2284\" (UID: \"441ae22e-7af1-4013-90ef-880b7ba0ce0e\") " pod="openshift-logging/logging-loki-gateway-59cfccf4c6-j2284" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.151181 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/441ae22e-7af1-4013-90ef-880b7ba0ce0e-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-59cfccf4c6-j2284\" (UID: \"441ae22e-7af1-4013-90ef-880b7ba0ce0e\") " pod="openshift-logging/logging-loki-gateway-59cfccf4c6-j2284" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.151217 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/0b5a9db0-f734-4201-87a8-60f0bcbb14ec-tenants\") pod \"logging-loki-gateway-59cfccf4c6-mzg5j\" (UID: \"0b5a9db0-f734-4201-87a8-60f0bcbb14ec\") " pod="openshift-logging/logging-loki-gateway-59cfccf4c6-mzg5j" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.151235 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/441ae22e-7af1-4013-90ef-880b7ba0ce0e-logging-loki-ca-bundle\") pod \"logging-loki-gateway-59cfccf4c6-j2284\" (UID: \"441ae22e-7af1-4013-90ef-880b7ba0ce0e\") " pod="openshift-logging/logging-loki-gateway-59cfccf4c6-j2284" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.151266 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: 
\"kubernetes.io/configmap/0b5a9db0-f734-4201-87a8-60f0bcbb14ec-lokistack-gateway\") pod \"logging-loki-gateway-59cfccf4c6-mzg5j\" (UID: \"0b5a9db0-f734-4201-87a8-60f0bcbb14ec\") " pod="openshift-logging/logging-loki-gateway-59cfccf4c6-mzg5j" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.151297 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/0b5a9db0-f734-4201-87a8-60f0bcbb14ec-rbac\") pod \"logging-loki-gateway-59cfccf4c6-mzg5j\" (UID: \"0b5a9db0-f734-4201-87a8-60f0bcbb14ec\") " pod="openshift-logging/logging-loki-gateway-59cfccf4c6-mzg5j" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.151323 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ml7g\" (UniqueName: \"kubernetes.io/projected/0b5a9db0-f734-4201-87a8-60f0bcbb14ec-kube-api-access-8ml7g\") pod \"logging-loki-gateway-59cfccf4c6-mzg5j\" (UID: \"0b5a9db0-f734-4201-87a8-60f0bcbb14ec\") " pod="openshift-logging/logging-loki-gateway-59cfccf4c6-mzg5j" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.151347 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b5a9db0-f734-4201-87a8-60f0bcbb14ec-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-59cfccf4c6-mzg5j\" (UID: \"0b5a9db0-f734-4201-87a8-60f0bcbb14ec\") " pod="openshift-logging/logging-loki-gateway-59cfccf4c6-mzg5j" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.151370 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4s4b\" (UniqueName: \"kubernetes.io/projected/441ae22e-7af1-4013-90ef-880b7ba0ce0e-kube-api-access-q4s4b\") pod \"logging-loki-gateway-59cfccf4c6-j2284\" (UID: \"441ae22e-7af1-4013-90ef-880b7ba0ce0e\") " pod="openshift-logging/logging-loki-gateway-59cfccf4c6-j2284" Nov 21 20:18:21 crc 
kubenswrapper[4727]: I1121 20:18:21.151392 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/441ae22e-7af1-4013-90ef-880b7ba0ce0e-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-59cfccf4c6-j2284\" (UID: \"441ae22e-7af1-4013-90ef-880b7ba0ce0e\") " pod="openshift-logging/logging-loki-gateway-59cfccf4c6-j2284" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.151410 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/441ae22e-7af1-4013-90ef-880b7ba0ce0e-rbac\") pod \"logging-loki-gateway-59cfccf4c6-j2284\" (UID: \"441ae22e-7af1-4013-90ef-880b7ba0ce0e\") " pod="openshift-logging/logging-loki-gateway-59cfccf4c6-j2284" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.151438 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b5a9db0-f734-4201-87a8-60f0bcbb14ec-logging-loki-ca-bundle\") pod \"logging-loki-gateway-59cfccf4c6-mzg5j\" (UID: \"0b5a9db0-f734-4201-87a8-60f0bcbb14ec\") " pod="openshift-logging/logging-loki-gateway-59cfccf4c6-mzg5j" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.152331 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b5a9db0-f734-4201-87a8-60f0bcbb14ec-logging-loki-ca-bundle\") pod \"logging-loki-gateway-59cfccf4c6-mzg5j\" (UID: \"0b5a9db0-f734-4201-87a8-60f0bcbb14ec\") " pod="openshift-logging/logging-loki-gateway-59cfccf4c6-mzg5j" Nov 21 20:18:21 crc kubenswrapper[4727]: E1121 20:18:21.150826 4727 secret.go:188] Couldn't get secret openshift-logging/logging-loki-gateway-http: secret "logging-loki-gateway-http" not found Nov 21 20:18:21 crc kubenswrapper[4727]: E1121 20:18:21.152414 4727 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5a9db0-f734-4201-87a8-60f0bcbb14ec-tls-secret podName:0b5a9db0-f734-4201-87a8-60f0bcbb14ec nodeName:}" failed. No retries permitted until 2025-11-21 20:18:21.652395028 +0000 UTC m=+706.838580072 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/0b5a9db0-f734-4201-87a8-60f0bcbb14ec-tls-secret") pod "logging-loki-gateway-59cfccf4c6-mzg5j" (UID: "0b5a9db0-f734-4201-87a8-60f0bcbb14ec") : secret "logging-loki-gateway-http" not found Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.154829 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/0b5a9db0-f734-4201-87a8-60f0bcbb14ec-rbac\") pod \"logging-loki-gateway-59cfccf4c6-mzg5j\" (UID: \"0b5a9db0-f734-4201-87a8-60f0bcbb14ec\") " pod="openshift-logging/logging-loki-gateway-59cfccf4c6-mzg5j" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.154903 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b5a9db0-f734-4201-87a8-60f0bcbb14ec-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-59cfccf4c6-mzg5j\" (UID: \"0b5a9db0-f734-4201-87a8-60f0bcbb14ec\") " pod="openshift-logging/logging-loki-gateway-59cfccf4c6-mzg5j" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.156464 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/0b5a9db0-f734-4201-87a8-60f0bcbb14ec-lokistack-gateway\") pod \"logging-loki-gateway-59cfccf4c6-mzg5j\" (UID: \"0b5a9db0-f734-4201-87a8-60f0bcbb14ec\") " pod="openshift-logging/logging-loki-gateway-59cfccf4c6-mzg5j" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.159392 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" 
(UniqueName: \"kubernetes.io/secret/0b5a9db0-f734-4201-87a8-60f0bcbb14ec-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-59cfccf4c6-mzg5j\" (UID: \"0b5a9db0-f734-4201-87a8-60f0bcbb14ec\") " pod="openshift-logging/logging-loki-gateway-59cfccf4c6-mzg5j" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.159516 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/0b5a9db0-f734-4201-87a8-60f0bcbb14ec-tenants\") pod \"logging-loki-gateway-59cfccf4c6-mzg5j\" (UID: \"0b5a9db0-f734-4201-87a8-60f0bcbb14ec\") " pod="openshift-logging/logging-loki-gateway-59cfccf4c6-mzg5j" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.174064 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ml7g\" (UniqueName: \"kubernetes.io/projected/0b5a9db0-f734-4201-87a8-60f0bcbb14ec-kube-api-access-8ml7g\") pod \"logging-loki-gateway-59cfccf4c6-mzg5j\" (UID: \"0b5a9db0-f734-4201-87a8-60f0bcbb14ec\") " pod="openshift-logging/logging-loki-gateway-59cfccf4c6-mzg5j" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.253357 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/441ae22e-7af1-4013-90ef-880b7ba0ce0e-tls-secret\") pod \"logging-loki-gateway-59cfccf4c6-j2284\" (UID: \"441ae22e-7af1-4013-90ef-880b7ba0ce0e\") " pod="openshift-logging/logging-loki-gateway-59cfccf4c6-j2284" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.253411 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/441ae22e-7af1-4013-90ef-880b7ba0ce0e-lokistack-gateway\") pod \"logging-loki-gateway-59cfccf4c6-j2284\" (UID: \"441ae22e-7af1-4013-90ef-880b7ba0ce0e\") " pod="openshift-logging/logging-loki-gateway-59cfccf4c6-j2284" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.253429 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/441ae22e-7af1-4013-90ef-880b7ba0ce0e-tenants\") pod \"logging-loki-gateway-59cfccf4c6-j2284\" (UID: \"441ae22e-7af1-4013-90ef-880b7ba0ce0e\") " pod="openshift-logging/logging-loki-gateway-59cfccf4c6-j2284" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.253447 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/441ae22e-7af1-4013-90ef-880b7ba0ce0e-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-59cfccf4c6-j2284\" (UID: \"441ae22e-7af1-4013-90ef-880b7ba0ce0e\") " pod="openshift-logging/logging-loki-gateway-59cfccf4c6-j2284" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.253480 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/441ae22e-7af1-4013-90ef-880b7ba0ce0e-logging-loki-ca-bundle\") pod \"logging-loki-gateway-59cfccf4c6-j2284\" (UID: \"441ae22e-7af1-4013-90ef-880b7ba0ce0e\") " pod="openshift-logging/logging-loki-gateway-59cfccf4c6-j2284" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.253550 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4s4b\" (UniqueName: \"kubernetes.io/projected/441ae22e-7af1-4013-90ef-880b7ba0ce0e-kube-api-access-q4s4b\") pod \"logging-loki-gateway-59cfccf4c6-j2284\" (UID: \"441ae22e-7af1-4013-90ef-880b7ba0ce0e\") " pod="openshift-logging/logging-loki-gateway-59cfccf4c6-j2284" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.253570 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/441ae22e-7af1-4013-90ef-880b7ba0ce0e-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-59cfccf4c6-j2284\" (UID: \"441ae22e-7af1-4013-90ef-880b7ba0ce0e\") " 
pod="openshift-logging/logging-loki-gateway-59cfccf4c6-j2284" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.253591 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/441ae22e-7af1-4013-90ef-880b7ba0ce0e-rbac\") pod \"logging-loki-gateway-59cfccf4c6-j2284\" (UID: \"441ae22e-7af1-4013-90ef-880b7ba0ce0e\") " pod="openshift-logging/logging-loki-gateway-59cfccf4c6-j2284" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.254540 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/441ae22e-7af1-4013-90ef-880b7ba0ce0e-rbac\") pod \"logging-loki-gateway-59cfccf4c6-j2284\" (UID: \"441ae22e-7af1-4013-90ef-880b7ba0ce0e\") " pod="openshift-logging/logging-loki-gateway-59cfccf4c6-j2284" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.255338 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/441ae22e-7af1-4013-90ef-880b7ba0ce0e-lokistack-gateway\") pod \"logging-loki-gateway-59cfccf4c6-j2284\" (UID: \"441ae22e-7af1-4013-90ef-880b7ba0ce0e\") " pod="openshift-logging/logging-loki-gateway-59cfccf4c6-j2284" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.255641 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/441ae22e-7af1-4013-90ef-880b7ba0ce0e-logging-loki-ca-bundle\") pod \"logging-loki-gateway-59cfccf4c6-j2284\" (UID: \"441ae22e-7af1-4013-90ef-880b7ba0ce0e\") " pod="openshift-logging/logging-loki-gateway-59cfccf4c6-j2284" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.256201 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/441ae22e-7af1-4013-90ef-880b7ba0ce0e-logging-loki-gateway-ca-bundle\") pod 
\"logging-loki-gateway-59cfccf4c6-j2284\" (UID: \"441ae22e-7af1-4013-90ef-880b7ba0ce0e\") " pod="openshift-logging/logging-loki-gateway-59cfccf4c6-j2284" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.266847 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/441ae22e-7af1-4013-90ef-880b7ba0ce0e-tls-secret\") pod \"logging-loki-gateway-59cfccf4c6-j2284\" (UID: \"441ae22e-7af1-4013-90ef-880b7ba0ce0e\") " pod="openshift-logging/logging-loki-gateway-59cfccf4c6-j2284" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.274232 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/441ae22e-7af1-4013-90ef-880b7ba0ce0e-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-59cfccf4c6-j2284\" (UID: \"441ae22e-7af1-4013-90ef-880b7ba0ce0e\") " pod="openshift-logging/logging-loki-gateway-59cfccf4c6-j2284" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.275786 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/441ae22e-7af1-4013-90ef-880b7ba0ce0e-tenants\") pod \"logging-loki-gateway-59cfccf4c6-j2284\" (UID: \"441ae22e-7af1-4013-90ef-880b7ba0ce0e\") " pod="openshift-logging/logging-loki-gateway-59cfccf4c6-j2284" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.283057 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4s4b\" (UniqueName: \"kubernetes.io/projected/441ae22e-7af1-4013-90ef-880b7ba0ce0e-kube-api-access-q4s4b\") pod \"logging-loki-gateway-59cfccf4c6-j2284\" (UID: \"441ae22e-7af1-4013-90ef-880b7ba0ce0e\") " pod="openshift-logging/logging-loki-gateway-59cfccf4c6-j2284" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.334099 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-59cfccf4c6-j2284" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.397395 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-76cc67bf56-slmrz"] Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.490444 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-5895d59bb8-n85sz"] Nov 21 20:18:21 crc kubenswrapper[4727]: W1121 20:18:21.517393 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2bfd5755_ad8a_47da_86b7_020881abeeec.slice/crio-21204cf3f95a8bfac069c9c3c91e2d556246b319983ff374a0966df6252b5264 WatchSource:0}: Error finding container 21204cf3f95a8bfac069c9c3c91e2d556246b319983ff374a0966df6252b5264: Status 404 returned error can't find the container with id 21204cf3f95a8bfac069c9c3c91e2d556246b319983ff374a0966df6252b5264 Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.528848 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-76cc67bf56-slmrz" event={"ID":"0ca7d843-5fcf-4fcb-b111-9c657f58b54f","Type":"ContainerStarted","Data":"71f388102f2f0979345bc187ad0cdac1fa1f379055bfbbabf511c749221aaffc"} Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.607727 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-84558f7c9f-spcl5"] Nov 21 20:18:21 crc kubenswrapper[4727]: W1121 20:18:21.619407 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda36d2c90_8f52_48b2_a6ea_b774e0e7d0a7.slice/crio-b6673386cb42f857b6de81fe68f7346e824533f41790cf4e90da5924f55e05de WatchSource:0}: Error finding container b6673386cb42f857b6de81fe68f7346e824533f41790cf4e90da5924f55e05de: Status 404 returned error can't find the container with id 
b6673386cb42f857b6de81fe68f7346e824533f41790cf4e90da5924f55e05de Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.657329 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.658226 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.661937 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/0b5a9db0-f734-4201-87a8-60f0bcbb14ec-tls-secret\") pod \"logging-loki-gateway-59cfccf4c6-mzg5j\" (UID: \"0b5a9db0-f734-4201-87a8-60f0bcbb14ec\") " pod="openshift-logging/logging-loki-gateway-59cfccf4c6-mzg5j" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.667547 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/0b5a9db0-f734-4201-87a8-60f0bcbb14ec-tls-secret\") pod \"logging-loki-gateway-59cfccf4c6-mzg5j\" (UID: \"0b5a9db0-f734-4201-87a8-60f0bcbb14ec\") " pod="openshift-logging/logging-loki-gateway-59cfccf4c6-mzg5j" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.676348 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-http" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.676840 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-grpc" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.692858 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.754852 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.755946 4727 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.759322 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-grpc" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.759502 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-http" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.765688 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc313c47-e815-4c94-b46b-51876e49ec0a-config\") pod \"logging-loki-ingester-0\" (UID: \"fc313c47-e815-4c94-b46b-51876e49ec0a\") " pod="openshift-logging/logging-loki-ingester-0" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.765751 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7v659\" (UniqueName: \"kubernetes.io/projected/fc313c47-e815-4c94-b46b-51876e49ec0a-kube-api-access-7v659\") pod \"logging-loki-ingester-0\" (UID: \"fc313c47-e815-4c94-b46b-51876e49ec0a\") " pod="openshift-logging/logging-loki-ingester-0" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.765794 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/fc313c47-e815-4c94-b46b-51876e49ec0a-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"fc313c47-e815-4c94-b46b-51876e49ec0a\") " pod="openshift-logging/logging-loki-ingester-0" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.765874 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/fc313c47-e815-4c94-b46b-51876e49ec0a-logging-loki-ingester-http\") pod 
\"logging-loki-ingester-0\" (UID: \"fc313c47-e815-4c94-b46b-51876e49ec0a\") " pod="openshift-logging/logging-loki-ingester-0" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.765913 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/fc313c47-e815-4c94-b46b-51876e49ec0a-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"fc313c47-e815-4c94-b46b-51876e49ec0a\") " pod="openshift-logging/logging-loki-ingester-0" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.765981 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-13113ef5-2f3e-42e4-a60b-b9d8857a3547\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-13113ef5-2f3e-42e4-a60b-b9d8857a3547\") pod \"logging-loki-ingester-0\" (UID: \"fc313c47-e815-4c94-b46b-51876e49ec0a\") " pod="openshift-logging/logging-loki-ingester-0" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.766011 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-143b74f7-dbb2-478b-ac64-a93eff038a4d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-143b74f7-dbb2-478b-ac64-a93eff038a4d\") pod \"logging-loki-ingester-0\" (UID: \"fc313c47-e815-4c94-b46b-51876e49ec0a\") " pod="openshift-logging/logging-loki-ingester-0" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.766082 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc313c47-e815-4c94-b46b-51876e49ec0a-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"fc313c47-e815-4c94-b46b-51876e49ec0a\") " pod="openshift-logging/logging-loki-ingester-0" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.768363 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-logging/logging-loki-compactor-0"] Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.857122 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.858167 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.864592 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-http" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.872951 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.874249 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/fc313c47-e815-4c94-b46b-51876e49ec0a-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"fc313c47-e815-4c94-b46b-51876e49ec0a\") " pod="openshift-logging/logging-loki-ingester-0" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.874304 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/fc313c47-e815-4c94-b46b-51876e49ec0a-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"fc313c47-e815-4c94-b46b-51876e49ec0a\") " pod="openshift-logging/logging-loki-ingester-0" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.874524 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/fc313c47-e815-4c94-b46b-51876e49ec0a-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"fc313c47-e815-4c94-b46b-51876e49ec0a\") " pod="openshift-logging/logging-loki-ingester-0" Nov 21 20:18:21 crc 
kubenswrapper[4727]: I1121 20:18:21.874557 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5e392ee-a6ac-435f-92dc-87a6e27bf293-config\") pod \"logging-loki-compactor-0\" (UID: \"f5e392ee-a6ac-435f-92dc-87a6e27bf293\") " pod="openshift-logging/logging-loki-compactor-0" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.874578 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/f5e392ee-a6ac-435f-92dc-87a6e27bf293-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"f5e392ee-a6ac-435f-92dc-87a6e27bf293\") " pod="openshift-logging/logging-loki-compactor-0" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.874596 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/f5e392ee-a6ac-435f-92dc-87a6e27bf293-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"f5e392ee-a6ac-435f-92dc-87a6e27bf293\") " pod="openshift-logging/logging-loki-compactor-0" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.874617 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-13113ef5-2f3e-42e4-a60b-b9d8857a3547\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-13113ef5-2f3e-42e4-a60b-b9d8857a3547\") pod \"logging-loki-ingester-0\" (UID: \"fc313c47-e815-4c94-b46b-51876e49ec0a\") " pod="openshift-logging/logging-loki-ingester-0" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.874643 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-143b74f7-dbb2-478b-ac64-a93eff038a4d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-143b74f7-dbb2-478b-ac64-a93eff038a4d\") pod \"logging-loki-ingester-0\" (UID: 
\"fc313c47-e815-4c94-b46b-51876e49ec0a\") " pod="openshift-logging/logging-loki-ingester-0" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.874711 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-baa172bf-3cbe-49e2-9389-e5b1031b52e9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-baa172bf-3cbe-49e2-9389-e5b1031b52e9\") pod \"logging-loki-compactor-0\" (UID: \"f5e392ee-a6ac-435f-92dc-87a6e27bf293\") " pod="openshift-logging/logging-loki-compactor-0" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.874734 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc313c47-e815-4c94-b46b-51876e49ec0a-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"fc313c47-e815-4c94-b46b-51876e49ec0a\") " pod="openshift-logging/logging-loki-ingester-0" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.874755 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5e392ee-a6ac-435f-92dc-87a6e27bf293-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"f5e392ee-a6ac-435f-92dc-87a6e27bf293\") " pod="openshift-logging/logging-loki-compactor-0" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.874779 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc313c47-e815-4c94-b46b-51876e49ec0a-config\") pod \"logging-loki-ingester-0\" (UID: \"fc313c47-e815-4c94-b46b-51876e49ec0a\") " pod="openshift-logging/logging-loki-ingester-0" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.874801 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxqcm\" (UniqueName: 
\"kubernetes.io/projected/f5e392ee-a6ac-435f-92dc-87a6e27bf293-kube-api-access-dxqcm\") pod \"logging-loki-compactor-0\" (UID: \"f5e392ee-a6ac-435f-92dc-87a6e27bf293\") " pod="openshift-logging/logging-loki-compactor-0" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.875849 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7v659\" (UniqueName: \"kubernetes.io/projected/fc313c47-e815-4c94-b46b-51876e49ec0a-kube-api-access-7v659\") pod \"logging-loki-ingester-0\" (UID: \"fc313c47-e815-4c94-b46b-51876e49ec0a\") " pod="openshift-logging/logging-loki-ingester-0" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.875882 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/f5e392ee-a6ac-435f-92dc-87a6e27bf293-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"f5e392ee-a6ac-435f-92dc-87a6e27bf293\") " pod="openshift-logging/logging-loki-compactor-0" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.874602 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-grpc" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.876986 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc313c47-e815-4c94-b46b-51876e49ec0a-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"fc313c47-e815-4c94-b46b-51876e49ec0a\") " pod="openshift-logging/logging-loki-ingester-0" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.877609 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc313c47-e815-4c94-b46b-51876e49ec0a-config\") pod \"logging-loki-ingester-0\" (UID: \"fc313c47-e815-4c94-b46b-51876e49ec0a\") " pod="openshift-logging/logging-loki-ingester-0" Nov 21 20:18:21 crc 
kubenswrapper[4727]: I1121 20:18:21.881051 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-59cfccf4c6-j2284"] Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.896213 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/fc313c47-e815-4c94-b46b-51876e49ec0a-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"fc313c47-e815-4c94-b46b-51876e49ec0a\") " pod="openshift-logging/logging-loki-ingester-0" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.896862 4727 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.896889 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-13113ef5-2f3e-42e4-a60b-b9d8857a3547\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-13113ef5-2f3e-42e4-a60b-b9d8857a3547\") pod \"logging-loki-ingester-0\" (UID: \"fc313c47-e815-4c94-b46b-51876e49ec0a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/80e85a731b1c48bef4c2cffe329b52399b3a4bd07d9e9f6e9d5a2b2aa6321256/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.896862 4727 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
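Buried in the I-level mount chatter above is the one actionable record: the E-level `secret.go:188]` failure for the missing `logging-loki-gateway-http` secret. A quick way to surface such records from a journal dump like this is to filter on the klog severity prefix. The sketch below is illustrative only (the `kubelet_errors` helper and the regex are assumptions, not part of any kubelet tooling); it matches the klog header shape `E<MMDD> <HH:MM:SS.micros> <pid> <file:line>]` seen in these logs:

```python
import re

# klog Error-severity records look like:
#   "E1121 20:18:21.150826 4727 secret.go:188] Couldn't get secret ..."
# Capture the timestamp, the source file:line, and the free-form message.
ERROR_RE = re.compile(r'\bE(\d{4} \d{2}:\d{2}:\d{2}\.\d+) \d+ (\S+)\] (.*)')

def kubelet_errors(journal_text: str):
    """Yield (timestamp, source_location, message) for each E-level record."""
    for line in journal_text.splitlines():
        m = ERROR_RE.search(line)
        if m:
            yield m.groups()

# A sample record copied from the log above.
sample = ('Nov 21 20:18:21 crc kubenswrapper[4727]: E1121 20:18:21.150826 4727 '
          'secret.go:188] Couldn\'t get secret '
          'openshift-logging/logging-loki-gateway-http: '
          'secret "logging-loki-gateway-http" not found')
errs = list(kubelet_errors(sample))
```

Running this over the full journal would isolate the `secret.go:188` and `nestedpendingoperations.go:348` failures (including the 500ms `durationBeforeRetry` backoff record) from the surrounding successful MountVolume.SetUp noise.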
Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.896988 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-143b74f7-dbb2-478b-ac64-a93eff038a4d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-143b74f7-dbb2-478b-ac64-a93eff038a4d\") pod \"logging-loki-ingester-0\" (UID: \"fc313c47-e815-4c94-b46b-51876e49ec0a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1314c61609cd42b8e18b24d42a4ebf20b6fb413bc3d1b458ba64be332da0351e/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.898697 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/fc313c47-e815-4c94-b46b-51876e49ec0a-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"fc313c47-e815-4c94-b46b-51876e49ec0a\") " pod="openshift-logging/logging-loki-ingester-0" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.900517 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/fc313c47-e815-4c94-b46b-51876e49ec0a-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"fc313c47-e815-4c94-b46b-51876e49ec0a\") " pod="openshift-logging/logging-loki-ingester-0" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.904078 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-59cfccf4c6-mzg5j" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.913341 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7v659\" (UniqueName: \"kubernetes.io/projected/fc313c47-e815-4c94-b46b-51876e49ec0a-kube-api-access-7v659\") pod \"logging-loki-ingester-0\" (UID: \"fc313c47-e815-4c94-b46b-51876e49ec0a\") " pod="openshift-logging/logging-loki-ingester-0" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.964551 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-143b74f7-dbb2-478b-ac64-a93eff038a4d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-143b74f7-dbb2-478b-ac64-a93eff038a4d\") pod \"logging-loki-ingester-0\" (UID: \"fc313c47-e815-4c94-b46b-51876e49ec0a\") " pod="openshift-logging/logging-loki-ingester-0" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.976324 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-13113ef5-2f3e-42e4-a60b-b9d8857a3547\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-13113ef5-2f3e-42e4-a60b-b9d8857a3547\") pod \"logging-loki-ingester-0\" (UID: \"fc313c47-e815-4c94-b46b-51876e49ec0a\") " pod="openshift-logging/logging-loki-ingester-0" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.978192 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a3e02785-c739-4efb-bf62-aff6f72cd1eb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a3e02785-c739-4efb-bf62-aff6f72cd1eb\") pod \"logging-loki-index-gateway-0\" (UID: \"315b3e23-db88-454e-a80c-66f53fbe1c5b\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.978229 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: 
\"kubernetes.io/secret/315b3e23-db88-454e-a80c-66f53fbe1c5b-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"315b3e23-db88-454e-a80c-66f53fbe1c5b\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.978261 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxqcm\" (UniqueName: \"kubernetes.io/projected/f5e392ee-a6ac-435f-92dc-87a6e27bf293-kube-api-access-dxqcm\") pod \"logging-loki-compactor-0\" (UID: \"f5e392ee-a6ac-435f-92dc-87a6e27bf293\") " pod="openshift-logging/logging-loki-compactor-0" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.978287 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/f5e392ee-a6ac-435f-92dc-87a6e27bf293-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"f5e392ee-a6ac-435f-92dc-87a6e27bf293\") " pod="openshift-logging/logging-loki-compactor-0" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.978316 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/315b3e23-db88-454e-a80c-66f53fbe1c5b-config\") pod \"logging-loki-index-gateway-0\" (UID: \"315b3e23-db88-454e-a80c-66f53fbe1c5b\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.978544 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/f5e392ee-a6ac-435f-92dc-87a6e27bf293-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"f5e392ee-a6ac-435f-92dc-87a6e27bf293\") " pod="openshift-logging/logging-loki-compactor-0" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.978598 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/315b3e23-db88-454e-a80c-66f53fbe1c5b-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"315b3e23-db88-454e-a80c-66f53fbe1c5b\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.978635 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/315b3e23-db88-454e-a80c-66f53fbe1c5b-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"315b3e23-db88-454e-a80c-66f53fbe1c5b\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.978669 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-baa172bf-3cbe-49e2-9389-e5b1031b52e9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-baa172bf-3cbe-49e2-9389-e5b1031b52e9\") pod \"logging-loki-compactor-0\" (UID: \"f5e392ee-a6ac-435f-92dc-87a6e27bf293\") " pod="openshift-logging/logging-loki-compactor-0" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.978699 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5e392ee-a6ac-435f-92dc-87a6e27bf293-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"f5e392ee-a6ac-435f-92dc-87a6e27bf293\") " pod="openshift-logging/logging-loki-compactor-0" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.978753 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5e392ee-a6ac-435f-92dc-87a6e27bf293-config\") pod \"logging-loki-compactor-0\" (UID: \"f5e392ee-a6ac-435f-92dc-87a6e27bf293\") " pod="openshift-logging/logging-loki-compactor-0" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.978781 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/f5e392ee-a6ac-435f-92dc-87a6e27bf293-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"f5e392ee-a6ac-435f-92dc-87a6e27bf293\") " pod="openshift-logging/logging-loki-compactor-0" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.978810 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqclc\" (UniqueName: \"kubernetes.io/projected/315b3e23-db88-454e-a80c-66f53fbe1c5b-kube-api-access-vqclc\") pod \"logging-loki-index-gateway-0\" (UID: \"315b3e23-db88-454e-a80c-66f53fbe1c5b\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.978839 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/315b3e23-db88-454e-a80c-66f53fbe1c5b-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"315b3e23-db88-454e-a80c-66f53fbe1c5b\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.981023 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5e392ee-a6ac-435f-92dc-87a6e27bf293-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"f5e392ee-a6ac-435f-92dc-87a6e27bf293\") " pod="openshift-logging/logging-loki-compactor-0" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.984706 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5e392ee-a6ac-435f-92dc-87a6e27bf293-config\") pod \"logging-loki-compactor-0\" (UID: \"f5e392ee-a6ac-435f-92dc-87a6e27bf293\") " pod="openshift-logging/logging-loki-compactor-0" Nov 21 20:18:21 crc kubenswrapper[4727]: 
I1121 20:18:21.985466 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/f5e392ee-a6ac-435f-92dc-87a6e27bf293-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"f5e392ee-a6ac-435f-92dc-87a6e27bf293\") " pod="openshift-logging/logging-loki-compactor-0" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.989792 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/f5e392ee-a6ac-435f-92dc-87a6e27bf293-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"f5e392ee-a6ac-435f-92dc-87a6e27bf293\") " pod="openshift-logging/logging-loki-compactor-0" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.998634 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/f5e392ee-a6ac-435f-92dc-87a6e27bf293-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"f5e392ee-a6ac-435f-92dc-87a6e27bf293\") " pod="openshift-logging/logging-loki-compactor-0" Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.998851 4727 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 21 20:18:21 crc kubenswrapper[4727]: I1121 20:18:21.998921 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-baa172bf-3cbe-49e2-9389-e5b1031b52e9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-baa172bf-3cbe-49e2-9389-e5b1031b52e9\") pod \"logging-loki-compactor-0\" (UID: \"f5e392ee-a6ac-435f-92dc-87a6e27bf293\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/82c5d824e3a847db75fb21f66b2c89088d88689a7196ac08a4c0eda2e8d52b47/globalmount\"" pod="openshift-logging/logging-loki-compactor-0" Nov 21 20:18:22 crc kubenswrapper[4727]: I1121 20:18:22.012007 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxqcm\" (UniqueName: \"kubernetes.io/projected/f5e392ee-a6ac-435f-92dc-87a6e27bf293-kube-api-access-dxqcm\") pod \"logging-loki-compactor-0\" (UID: \"f5e392ee-a6ac-435f-92dc-87a6e27bf293\") " pod="openshift-logging/logging-loki-compactor-0" Nov 21 20:18:22 crc kubenswrapper[4727]: I1121 20:18:22.042497 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-baa172bf-3cbe-49e2-9389-e5b1031b52e9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-baa172bf-3cbe-49e2-9389-e5b1031b52e9\") pod \"logging-loki-compactor-0\" (UID: \"f5e392ee-a6ac-435f-92dc-87a6e27bf293\") " pod="openshift-logging/logging-loki-compactor-0" Nov 21 20:18:22 crc kubenswrapper[4727]: I1121 20:18:22.080424 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/315b3e23-db88-454e-a80c-66f53fbe1c5b-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"315b3e23-db88-454e-a80c-66f53fbe1c5b\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 20:18:22 crc kubenswrapper[4727]: I1121 20:18:22.080490 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-a3e02785-c739-4efb-bf62-aff6f72cd1eb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a3e02785-c739-4efb-bf62-aff6f72cd1eb\") pod \"logging-loki-index-gateway-0\" (UID: \"315b3e23-db88-454e-a80c-66f53fbe1c5b\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 20:18:22 crc kubenswrapper[4727]: I1121 20:18:22.080529 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/315b3e23-db88-454e-a80c-66f53fbe1c5b-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"315b3e23-db88-454e-a80c-66f53fbe1c5b\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 20:18:22 crc kubenswrapper[4727]: I1121 20:18:22.080568 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/315b3e23-db88-454e-a80c-66f53fbe1c5b-config\") pod \"logging-loki-index-gateway-0\" (UID: \"315b3e23-db88-454e-a80c-66f53fbe1c5b\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 20:18:22 crc kubenswrapper[4727]: I1121 20:18:22.080645 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/315b3e23-db88-454e-a80c-66f53fbe1c5b-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"315b3e23-db88-454e-a80c-66f53fbe1c5b\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 20:18:22 crc kubenswrapper[4727]: I1121 20:18:22.080689 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/315b3e23-db88-454e-a80c-66f53fbe1c5b-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"315b3e23-db88-454e-a80c-66f53fbe1c5b\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 20:18:22 crc kubenswrapper[4727]: I1121 20:18:22.080767 4727 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqclc\" (UniqueName: \"kubernetes.io/projected/315b3e23-db88-454e-a80c-66f53fbe1c5b-kube-api-access-vqclc\") pod \"logging-loki-index-gateway-0\" (UID: \"315b3e23-db88-454e-a80c-66f53fbe1c5b\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 20:18:22 crc kubenswrapper[4727]: I1121 20:18:22.085493 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/315b3e23-db88-454e-a80c-66f53fbe1c5b-config\") pod \"logging-loki-index-gateway-0\" (UID: \"315b3e23-db88-454e-a80c-66f53fbe1c5b\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 20:18:22 crc kubenswrapper[4727]: I1121 20:18:22.086922 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/315b3e23-db88-454e-a80c-66f53fbe1c5b-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"315b3e23-db88-454e-a80c-66f53fbe1c5b\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 20:18:22 crc kubenswrapper[4727]: I1121 20:18:22.086942 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/315b3e23-db88-454e-a80c-66f53fbe1c5b-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"315b3e23-db88-454e-a80c-66f53fbe1c5b\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 20:18:22 crc kubenswrapper[4727]: I1121 20:18:22.089187 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/315b3e23-db88-454e-a80c-66f53fbe1c5b-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"315b3e23-db88-454e-a80c-66f53fbe1c5b\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 20:18:22 crc kubenswrapper[4727]: I1121 
20:18:22.097816 4727 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 21 20:18:22 crc kubenswrapper[4727]: I1121 20:18:22.099350 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a3e02785-c739-4efb-bf62-aff6f72cd1eb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a3e02785-c739-4efb-bf62-aff6f72cd1eb\") pod \"logging-loki-index-gateway-0\" (UID: \"315b3e23-db88-454e-a80c-66f53fbe1c5b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a9f2932f8fd16a6aff008e543d253740823750faee73ad71622f2ba42b61ae7b/globalmount\"" pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 20:18:22 crc kubenswrapper[4727]: I1121 20:18:22.100087 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/315b3e23-db88-454e-a80c-66f53fbe1c5b-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"315b3e23-db88-454e-a80c-66f53fbe1c5b\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 20:18:22 crc kubenswrapper[4727]: I1121 20:18:22.104290 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqclc\" (UniqueName: \"kubernetes.io/projected/315b3e23-db88-454e-a80c-66f53fbe1c5b-kube-api-access-vqclc\") pod \"logging-loki-index-gateway-0\" (UID: \"315b3e23-db88-454e-a80c-66f53fbe1c5b\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 20:18:22 crc kubenswrapper[4727]: I1121 20:18:22.113241 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Nov 21 20:18:22 crc kubenswrapper[4727]: I1121 20:18:22.126476 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a3e02785-c739-4efb-bf62-aff6f72cd1eb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a3e02785-c739-4efb-bf62-aff6f72cd1eb\") pod \"logging-loki-index-gateway-0\" (UID: \"315b3e23-db88-454e-a80c-66f53fbe1c5b\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 20:18:22 crc kubenswrapper[4727]: I1121 20:18:22.185357 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-59cfccf4c6-mzg5j"] Nov 21 20:18:22 crc kubenswrapper[4727]: I1121 20:18:22.190759 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 20:18:22 crc kubenswrapper[4727]: I1121 20:18:22.277497 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Nov 21 20:18:22 crc kubenswrapper[4727]: I1121 20:18:22.384231 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Nov 21 20:18:22 crc kubenswrapper[4727]: W1121 20:18:22.393410 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5e392ee_a6ac_435f_92dc_87a6e27bf293.slice/crio-affacb9e29db8fe52a02921f844f8d448410db8d34cf47ac773e752692a353b5 WatchSource:0}: Error finding container affacb9e29db8fe52a02921f844f8d448410db8d34cf47ac773e752692a353b5: Status 404 returned error can't find the container with id affacb9e29db8fe52a02921f844f8d448410db8d34cf47ac773e752692a353b5 Nov 21 20:18:22 crc kubenswrapper[4727]: I1121 20:18:22.535505 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Nov 21 20:18:22 crc kubenswrapper[4727]: I1121 
20:18:22.537810 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-59cfccf4c6-j2284" event={"ID":"441ae22e-7af1-4013-90ef-880b7ba0ce0e","Type":"ContainerStarted","Data":"50af99ff64acc2400eaaf5b3a2c9843994d3f174f4a19db74cf2eb07600bc673"} Nov 21 20:18:22 crc kubenswrapper[4727]: I1121 20:18:22.540246 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-5895d59bb8-n85sz" event={"ID":"2bfd5755-ad8a-47da-86b7-020881abeeec","Type":"ContainerStarted","Data":"21204cf3f95a8bfac069c9c3c91e2d556246b319983ff374a0966df6252b5264"} Nov 21 20:18:22 crc kubenswrapper[4727]: I1121 20:18:22.541376 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-59cfccf4c6-mzg5j" event={"ID":"0b5a9db0-f734-4201-87a8-60f0bcbb14ec","Type":"ContainerStarted","Data":"ae492fe7782d449bb8148ce4c6602d9d0ef565ae78cb84dcbde368c3eb4a7e85"} Nov 21 20:18:22 crc kubenswrapper[4727]: I1121 20:18:22.542187 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-spcl5" event={"ID":"a36d2c90-8f52-48b2-a6ea-b774e0e7d0a7","Type":"ContainerStarted","Data":"b6673386cb42f857b6de81fe68f7346e824533f41790cf4e90da5924f55e05de"} Nov 21 20:18:22 crc kubenswrapper[4727]: I1121 20:18:22.543566 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"f5e392ee-a6ac-435f-92dc-87a6e27bf293","Type":"ContainerStarted","Data":"affacb9e29db8fe52a02921f844f8d448410db8d34cf47ac773e752692a353b5"} Nov 21 20:18:22 crc kubenswrapper[4727]: W1121 20:18:22.547899 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc313c47_e815_4c94_b46b_51876e49ec0a.slice/crio-0af69aa722c91d25e6d5d5be86a9b6ee3e3ed1f7671ad2a13d1de8ee83dac3ec WatchSource:0}: Error finding container 
0af69aa722c91d25e6d5d5be86a9b6ee3e3ed1f7671ad2a13d1de8ee83dac3ec: Status 404 returned error can't find the container with id 0af69aa722c91d25e6d5d5be86a9b6ee3e3ed1f7671ad2a13d1de8ee83dac3ec Nov 21 20:18:22 crc kubenswrapper[4727]: I1121 20:18:22.736459 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Nov 21 20:18:22 crc kubenswrapper[4727]: W1121 20:18:22.745710 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod315b3e23_db88_454e_a80c_66f53fbe1c5b.slice/crio-bdde7fa7e42386aceceacc6bac1df3698d16347ff76da0f89f4026736f8a0d4a WatchSource:0}: Error finding container bdde7fa7e42386aceceacc6bac1df3698d16347ff76da0f89f4026736f8a0d4a: Status 404 returned error can't find the container with id bdde7fa7e42386aceceacc6bac1df3698d16347ff76da0f89f4026736f8a0d4a Nov 21 20:18:23 crc kubenswrapper[4727]: I1121 20:18:23.550087 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"fc313c47-e815-4c94-b46b-51876e49ec0a","Type":"ContainerStarted","Data":"0af69aa722c91d25e6d5d5be86a9b6ee3e3ed1f7671ad2a13d1de8ee83dac3ec"} Nov 21 20:18:23 crc kubenswrapper[4727]: I1121 20:18:23.551082 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"315b3e23-db88-454e-a80c-66f53fbe1c5b","Type":"ContainerStarted","Data":"bdde7fa7e42386aceceacc6bac1df3698d16347ff76da0f89f4026736f8a0d4a"} Nov 21 20:18:28 crc kubenswrapper[4727]: I1121 20:18:28.592671 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"f5e392ee-a6ac-435f-92dc-87a6e27bf293","Type":"ContainerStarted","Data":"af12759b2bec15d1d10264707a82e2cdee19cbbd0086b9c8839a7df73fe11df5"} Nov 21 20:18:28 crc kubenswrapper[4727]: I1121 20:18:28.593179 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-logging/logging-loki-compactor-0" Nov 21 20:18:28 crc kubenswrapper[4727]: I1121 20:18:28.594704 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-59cfccf4c6-j2284" event={"ID":"441ae22e-7af1-4013-90ef-880b7ba0ce0e","Type":"ContainerStarted","Data":"6e2427b8d387ec7d952ba8111c3ff23b6a21e9d17853ece170ddcd1e58d06247"} Nov 21 20:18:28 crc kubenswrapper[4727]: I1121 20:18:28.596107 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-5895d59bb8-n85sz" event={"ID":"2bfd5755-ad8a-47da-86b7-020881abeeec","Type":"ContainerStarted","Data":"ace11b6a389feeeac5b31eb36d7800eb9252e819c5331a7519a70832dbd04283"} Nov 21 20:18:28 crc kubenswrapper[4727]: I1121 20:18:28.596234 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-5895d59bb8-n85sz" Nov 21 20:18:28 crc kubenswrapper[4727]: I1121 20:18:28.597505 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-59cfccf4c6-mzg5j" event={"ID":"0b5a9db0-f734-4201-87a8-60f0bcbb14ec","Type":"ContainerStarted","Data":"b9181d47b273f47a6e3db7a25b185c3e741ce95c01f963d62ddd503e339d1666"} Nov 21 20:18:28 crc kubenswrapper[4727]: I1121 20:18:28.598880 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"315b3e23-db88-454e-a80c-66f53fbe1c5b","Type":"ContainerStarted","Data":"571608a2adb2673b393ac75e87a22546fba47cc1e649c33b626644cf2a02fc2f"} Nov 21 20:18:28 crc kubenswrapper[4727]: I1121 20:18:28.598981 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 20:18:28 crc kubenswrapper[4727]: I1121 20:18:28.600281 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" 
event={"ID":"fc313c47-e815-4c94-b46b-51876e49ec0a","Type":"ContainerStarted","Data":"3b986e29c74f8c5fa6548eb81bb12cf096c1e508e4196d5cdb1dcdb6518604a6"} Nov 21 20:18:28 crc kubenswrapper[4727]: I1121 20:18:28.600401 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-ingester-0" Nov 21 20:18:28 crc kubenswrapper[4727]: I1121 20:18:28.602064 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-spcl5" event={"ID":"a36d2c90-8f52-48b2-a6ea-b774e0e7d0a7","Type":"ContainerStarted","Data":"694e35e2376e6bf06b07a709f7164b3751c97126adfbdc2b4e2b8dcf768377e1"} Nov 21 20:18:28 crc kubenswrapper[4727]: I1121 20:18:28.602664 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-spcl5" Nov 21 20:18:28 crc kubenswrapper[4727]: I1121 20:18:28.604161 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-76cc67bf56-slmrz" event={"ID":"0ca7d843-5fcf-4fcb-b111-9c657f58b54f","Type":"ContainerStarted","Data":"4e72f32db3892c4fb765064fe065910b3f7cf090228e976f3f24936b0828ef93"} Nov 21 20:18:28 crc kubenswrapper[4727]: I1121 20:18:28.604312 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-76cc67bf56-slmrz" Nov 21 20:18:28 crc kubenswrapper[4727]: I1121 20:18:28.618466 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-compactor-0" podStartSLOduration=3.250294181 podStartE2EDuration="8.61843706s" podCreationTimestamp="2025-11-21 20:18:20 +0000 UTC" firstStartedPulling="2025-11-21 20:18:22.396360938 +0000 UTC m=+707.582545982" lastFinishedPulling="2025-11-21 20:18:27.764503817 +0000 UTC m=+712.950688861" observedRunningTime="2025-11-21 20:18:28.614637608 +0000 UTC m=+713.800822672" watchObservedRunningTime="2025-11-21 
20:18:28.61843706 +0000 UTC m=+713.804622124" Nov 21 20:18:28 crc kubenswrapper[4727]: I1121 20:18:28.639304 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-querier-5895d59bb8-n85sz" podStartSLOduration=2.424564306 podStartE2EDuration="8.639282831s" podCreationTimestamp="2025-11-21 20:18:20 +0000 UTC" firstStartedPulling="2025-11-21 20:18:21.531034581 +0000 UTC m=+706.717219625" lastFinishedPulling="2025-11-21 20:18:27.745753116 +0000 UTC m=+712.931938150" observedRunningTime="2025-11-21 20:18:28.636866493 +0000 UTC m=+713.823051547" watchObservedRunningTime="2025-11-21 20:18:28.639282831 +0000 UTC m=+713.825467875" Nov 21 20:18:28 crc kubenswrapper[4727]: I1121 20:18:28.659358 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-spcl5" podStartSLOduration=2.535081597 podStartE2EDuration="8.659333744s" podCreationTimestamp="2025-11-21 20:18:20 +0000 UTC" firstStartedPulling="2025-11-21 20:18:21.622290428 +0000 UTC m=+706.808475472" lastFinishedPulling="2025-11-21 20:18:27.746542575 +0000 UTC m=+712.932727619" observedRunningTime="2025-11-21 20:18:28.656341822 +0000 UTC m=+713.842526876" watchObservedRunningTime="2025-11-21 20:18:28.659333744 +0000 UTC m=+713.845518788" Nov 21 20:18:28 crc kubenswrapper[4727]: I1121 20:18:28.673553 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-distributor-76cc67bf56-slmrz" podStartSLOduration=2.42258134 podStartE2EDuration="8.673526585s" podCreationTimestamp="2025-11-21 20:18:20 +0000 UTC" firstStartedPulling="2025-11-21 20:18:21.414572959 +0000 UTC m=+706.600758003" lastFinishedPulling="2025-11-21 20:18:27.665518204 +0000 UTC m=+712.851703248" observedRunningTime="2025-11-21 20:18:28.672574142 +0000 UTC m=+713.858759206" watchObservedRunningTime="2025-11-21 20:18:28.673526585 +0000 UTC m=+713.859711629" Nov 21 20:18:28 crc 
kubenswrapper[4727]: I1121 20:18:28.696182 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-ingester-0" podStartSLOduration=3.508029113 podStartE2EDuration="8.696147399s" podCreationTimestamp="2025-11-21 20:18:20 +0000 UTC" firstStartedPulling="2025-11-21 20:18:22.550020856 +0000 UTC m=+707.736205900" lastFinishedPulling="2025-11-21 20:18:27.738139142 +0000 UTC m=+712.924324186" observedRunningTime="2025-11-21 20:18:28.690944455 +0000 UTC m=+713.877129499" watchObservedRunningTime="2025-11-21 20:18:28.696147399 +0000 UTC m=+713.882332443" Nov 21 20:18:28 crc kubenswrapper[4727]: I1121 20:18:28.718745 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-index-gateway-0" podStartSLOduration=3.728188962 podStartE2EDuration="8.718727473s" podCreationTimestamp="2025-11-21 20:18:20 +0000 UTC" firstStartedPulling="2025-11-21 20:18:22.747779386 +0000 UTC m=+707.933964430" lastFinishedPulling="2025-11-21 20:18:27.738317837 +0000 UTC m=+712.924502941" observedRunningTime="2025-11-21 20:18:28.716205283 +0000 UTC m=+713.902390337" watchObservedRunningTime="2025-11-21 20:18:28.718727473 +0000 UTC m=+713.904912517" Nov 21 20:18:30 crc kubenswrapper[4727]: I1121 20:18:30.625448 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-59cfccf4c6-mzg5j" event={"ID":"0b5a9db0-f734-4201-87a8-60f0bcbb14ec","Type":"ContainerStarted","Data":"0c45caa8d61ca3ce805d5f68ab22c1df1deeb4e170280ad407783122f7c7653b"} Nov 21 20:18:30 crc kubenswrapper[4727]: I1121 20:18:30.625875 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-59cfccf4c6-mzg5j" Nov 21 20:18:30 crc kubenswrapper[4727]: I1121 20:18:30.625902 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-59cfccf4c6-mzg5j" Nov 21 20:18:30 crc kubenswrapper[4727]: 
I1121 20:18:30.628211 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-59cfccf4c6-j2284" event={"ID":"441ae22e-7af1-4013-90ef-880b7ba0ce0e","Type":"ContainerStarted","Data":"0fd302875016a69654539233bd8dc5fe06d51a743c1dcac2b01e6c0c95a10e8e"}
Nov 21 20:18:30 crc kubenswrapper[4727]: I1121 20:18:30.628420 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-59cfccf4c6-j2284"
Nov 21 20:18:30 crc kubenswrapper[4727]: I1121 20:18:30.628464 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-59cfccf4c6-j2284"
Nov 21 20:18:30 crc kubenswrapper[4727]: I1121 20:18:30.638354 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-59cfccf4c6-mzg5j"
Nov 21 20:18:30 crc kubenswrapper[4727]: I1121 20:18:30.638978 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-59cfccf4c6-j2284"
Nov 21 20:18:30 crc kubenswrapper[4727]: I1121 20:18:30.641107 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-59cfccf4c6-mzg5j"
Nov 21 20:18:30 crc kubenswrapper[4727]: I1121 20:18:30.644701 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-59cfccf4c6-j2284"
Nov 21 20:18:30 crc kubenswrapper[4727]: I1121 20:18:30.649306 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-59cfccf4c6-mzg5j" podStartSLOduration=2.507536925 podStartE2EDuration="10.649288619s" podCreationTimestamp="2025-11-21 20:18:20 +0000 UTC" firstStartedPulling="2025-11-21 20:18:22.219342417 +0000 UTC m=+707.405527461" lastFinishedPulling="2025-11-21 20:18:30.361094111 +0000 UTC m=+715.547279155" observedRunningTime="2025-11-21 20:18:30.647125234 +0000 UTC m=+715.833310278" watchObservedRunningTime="2025-11-21 20:18:30.649288619 +0000 UTC m=+715.835473663"
Nov 21 20:18:30 crc kubenswrapper[4727]: I1121 20:18:30.672281 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-59cfccf4c6-j2284" podStartSLOduration=2.205081228 podStartE2EDuration="10.672262185s" podCreationTimestamp="2025-11-21 20:18:20 +0000 UTC" firstStartedPulling="2025-11-21 20:18:21.898425103 +0000 UTC m=+707.084610137" lastFinishedPulling="2025-11-21 20:18:30.36560605 +0000 UTC m=+715.551791094" observedRunningTime="2025-11-21 20:18:30.668456096 +0000 UTC m=+715.854641130" watchObservedRunningTime="2025-11-21 20:18:30.672262185 +0000 UTC m=+715.858447229"
Nov 21 20:18:43 crc kubenswrapper[4727]: I1121 20:18:43.335433 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 21 20:18:43 crc kubenswrapper[4727]: I1121 20:18:43.336139 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 21 20:18:50 crc kubenswrapper[4727]: I1121 20:18:50.857854 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-76cc67bf56-slmrz"
Nov 21 20:18:51 crc kubenswrapper[4727]: I1121 20:18:51.014852 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-5895d59bb8-n85sz"
Nov 21 20:18:51 crc kubenswrapper[4727]: I1121 20:18:51.120624 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-spcl5"
Nov 21 20:18:52 crc kubenswrapper[4727]: I1121 20:18:52.119849 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-compactor-0"
Nov 21 20:18:52 crc kubenswrapper[4727]: I1121 20:18:52.200850 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-index-gateway-0"
Nov 21 20:18:52 crc kubenswrapper[4727]: I1121 20:18:52.283986 4727 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens
Nov 21 20:18:52 crc kubenswrapper[4727]: I1121 20:18:52.284038 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="fc313c47-e815-4c94-b46b-51876e49ec0a" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Nov 21 20:18:58 crc kubenswrapper[4727]: I1121 20:18:58.267766 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hx72f"]
Nov 21 20:18:58 crc kubenswrapper[4727]: I1121 20:18:58.269031 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-hx72f" podUID="72783796-a3ab-4a34-9b9e-b4df16dd1cc2" containerName="controller-manager" containerID="cri-o://87b961106a660bd98ed69515eafa0c93052fb25f946f28079974ecbedcef2436" gracePeriod=30
Nov 21 20:18:58 crc kubenswrapper[4727]: I1121 20:18:58.358307 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qjc79"]
Nov 21 20:18:58 crc kubenswrapper[4727]: I1121 20:18:58.358752 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qjc79" podUID="4e951776-970a-49b9-8f34-b2fd129bbc39" containerName="route-controller-manager" containerID="cri-o://dcdd8c7ee08676ed20b3b1982a1d642f3cb6c3ac5ef039957ee3934bf53b58b2" gracePeriod=30
Nov 21 20:18:58 crc kubenswrapper[4727]: I1121 20:18:58.753434 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-hx72f"
Nov 21 20:18:58 crc kubenswrapper[4727]: I1121 20:18:58.760717 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qjc79"
Nov 21 20:18:58 crc kubenswrapper[4727]: I1121 20:18:58.843332 4727 generic.go:334] "Generic (PLEG): container finished" podID="4e951776-970a-49b9-8f34-b2fd129bbc39" containerID="dcdd8c7ee08676ed20b3b1982a1d642f3cb6c3ac5ef039957ee3934bf53b58b2" exitCode=0
Nov 21 20:18:58 crc kubenswrapper[4727]: I1121 20:18:58.843394 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qjc79" event={"ID":"4e951776-970a-49b9-8f34-b2fd129bbc39","Type":"ContainerDied","Data":"dcdd8c7ee08676ed20b3b1982a1d642f3cb6c3ac5ef039957ee3934bf53b58b2"}
Nov 21 20:18:58 crc kubenswrapper[4727]: I1121 20:18:58.843418 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qjc79" event={"ID":"4e951776-970a-49b9-8f34-b2fd129bbc39","Type":"ContainerDied","Data":"6c9deace7801855e85fe8430bec9029584a354fa9598cedc5f4ca91e4102cc1e"}
Nov 21 20:18:58 crc kubenswrapper[4727]: I1121 20:18:58.843420 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qjc79"
Nov 21 20:18:58 crc kubenswrapper[4727]: I1121 20:18:58.843433 4727 scope.go:117] "RemoveContainer" containerID="dcdd8c7ee08676ed20b3b1982a1d642f3cb6c3ac5ef039957ee3934bf53b58b2"
Nov 21 20:18:58 crc kubenswrapper[4727]: I1121 20:18:58.846380 4727 generic.go:334] "Generic (PLEG): container finished" podID="72783796-a3ab-4a34-9b9e-b4df16dd1cc2" containerID="87b961106a660bd98ed69515eafa0c93052fb25f946f28079974ecbedcef2436" exitCode=0
Nov 21 20:18:58 crc kubenswrapper[4727]: I1121 20:18:58.846420 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-hx72f" event={"ID":"72783796-a3ab-4a34-9b9e-b4df16dd1cc2","Type":"ContainerDied","Data":"87b961106a660bd98ed69515eafa0c93052fb25f946f28079974ecbedcef2436"}
Nov 21 20:18:58 crc kubenswrapper[4727]: I1121 20:18:58.846443 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-hx72f" event={"ID":"72783796-a3ab-4a34-9b9e-b4df16dd1cc2","Type":"ContainerDied","Data":"1f2862ce91ec1d546f928bd33317761615fbde4838712663b60a8eaaaf76db42"}
Nov 21 20:18:58 crc kubenswrapper[4727]: I1121 20:18:58.846480 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-hx72f"
Nov 21 20:18:58 crc kubenswrapper[4727]: I1121 20:18:58.870951 4727 scope.go:117] "RemoveContainer" containerID="dcdd8c7ee08676ed20b3b1982a1d642f3cb6c3ac5ef039957ee3934bf53b58b2"
Nov 21 20:18:58 crc kubenswrapper[4727]: E1121 20:18:58.871643 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dcdd8c7ee08676ed20b3b1982a1d642f3cb6c3ac5ef039957ee3934bf53b58b2\": container with ID starting with dcdd8c7ee08676ed20b3b1982a1d642f3cb6c3ac5ef039957ee3934bf53b58b2 not found: ID does not exist" containerID="dcdd8c7ee08676ed20b3b1982a1d642f3cb6c3ac5ef039957ee3934bf53b58b2"
Nov 21 20:18:58 crc kubenswrapper[4727]: I1121 20:18:58.871689 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dcdd8c7ee08676ed20b3b1982a1d642f3cb6c3ac5ef039957ee3934bf53b58b2"} err="failed to get container status \"dcdd8c7ee08676ed20b3b1982a1d642f3cb6c3ac5ef039957ee3934bf53b58b2\": rpc error: code = NotFound desc = could not find container \"dcdd8c7ee08676ed20b3b1982a1d642f3cb6c3ac5ef039957ee3934bf53b58b2\": container with ID starting with dcdd8c7ee08676ed20b3b1982a1d642f3cb6c3ac5ef039957ee3934bf53b58b2 not found: ID does not exist"
Nov 21 20:18:58 crc kubenswrapper[4727]: I1121 20:18:58.871716 4727 scope.go:117] "RemoveContainer" containerID="87b961106a660bd98ed69515eafa0c93052fb25f946f28079974ecbedcef2436"
Nov 21 20:18:58 crc kubenswrapper[4727]: I1121 20:18:58.890130 4727 scope.go:117] "RemoveContainer" containerID="87b961106a660bd98ed69515eafa0c93052fb25f946f28079974ecbedcef2436"
Nov 21 20:18:58 crc kubenswrapper[4727]: E1121 20:18:58.890594 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87b961106a660bd98ed69515eafa0c93052fb25f946f28079974ecbedcef2436\": container with ID starting with 87b961106a660bd98ed69515eafa0c93052fb25f946f28079974ecbedcef2436 not found: ID does not exist" containerID="87b961106a660bd98ed69515eafa0c93052fb25f946f28079974ecbedcef2436"
Nov 21 20:18:58 crc kubenswrapper[4727]: I1121 20:18:58.890672 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87b961106a660bd98ed69515eafa0c93052fb25f946f28079974ecbedcef2436"} err="failed to get container status \"87b961106a660bd98ed69515eafa0c93052fb25f946f28079974ecbedcef2436\": rpc error: code = NotFound desc = could not find container \"87b961106a660bd98ed69515eafa0c93052fb25f946f28079974ecbedcef2436\": container with ID starting with 87b961106a660bd98ed69515eafa0c93052fb25f946f28079974ecbedcef2436 not found: ID does not exist"
Nov 21 20:18:58 crc kubenswrapper[4727]: I1121 20:18:58.918254 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e951776-970a-49b9-8f34-b2fd129bbc39-config\") pod \"4e951776-970a-49b9-8f34-b2fd129bbc39\" (UID: \"4e951776-970a-49b9-8f34-b2fd129bbc39\") "
Nov 21 20:18:58 crc kubenswrapper[4727]: I1121 20:18:58.918335 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72783796-a3ab-4a34-9b9e-b4df16dd1cc2-serving-cert\") pod \"72783796-a3ab-4a34-9b9e-b4df16dd1cc2\" (UID: \"72783796-a3ab-4a34-9b9e-b4df16dd1cc2\") "
Nov 21 20:18:58 crc kubenswrapper[4727]: I1121 20:18:58.918385 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e951776-970a-49b9-8f34-b2fd129bbc39-serving-cert\") pod \"4e951776-970a-49b9-8f34-b2fd129bbc39\" (UID: \"4e951776-970a-49b9-8f34-b2fd129bbc39\") "
Nov 21 20:18:58 crc kubenswrapper[4727]: I1121 20:18:58.918431 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72783796-a3ab-4a34-9b9e-b4df16dd1cc2-config\") pod \"72783796-a3ab-4a34-9b9e-b4df16dd1cc2\" (UID: \"72783796-a3ab-4a34-9b9e-b4df16dd1cc2\") "
Nov 21 20:18:58 crc kubenswrapper[4727]: I1121 20:18:58.918456 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/72783796-a3ab-4a34-9b9e-b4df16dd1cc2-proxy-ca-bundles\") pod \"72783796-a3ab-4a34-9b9e-b4df16dd1cc2\" (UID: \"72783796-a3ab-4a34-9b9e-b4df16dd1cc2\") "
Nov 21 20:18:58 crc kubenswrapper[4727]: I1121 20:18:58.918556 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/72783796-a3ab-4a34-9b9e-b4df16dd1cc2-client-ca\") pod \"72783796-a3ab-4a34-9b9e-b4df16dd1cc2\" (UID: \"72783796-a3ab-4a34-9b9e-b4df16dd1cc2\") "
Nov 21 20:18:58 crc kubenswrapper[4727]: I1121 20:18:58.918603 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hz24f\" (UniqueName: \"kubernetes.io/projected/72783796-a3ab-4a34-9b9e-b4df16dd1cc2-kube-api-access-hz24f\") pod \"72783796-a3ab-4a34-9b9e-b4df16dd1cc2\" (UID: \"72783796-a3ab-4a34-9b9e-b4df16dd1cc2\") "
Nov 21 20:18:58 crc kubenswrapper[4727]: I1121 20:18:58.918632 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4e951776-970a-49b9-8f34-b2fd129bbc39-client-ca\") pod \"4e951776-970a-49b9-8f34-b2fd129bbc39\" (UID: \"4e951776-970a-49b9-8f34-b2fd129bbc39\") "
Nov 21 20:18:58 crc kubenswrapper[4727]: I1121 20:18:58.918671 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bs7fm\" (UniqueName: \"kubernetes.io/projected/4e951776-970a-49b9-8f34-b2fd129bbc39-kube-api-access-bs7fm\") pod \"4e951776-970a-49b9-8f34-b2fd129bbc39\" (UID: \"4e951776-970a-49b9-8f34-b2fd129bbc39\") "
Nov 21 20:18:58 crc kubenswrapper[4727]: I1121 20:18:58.919355 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e951776-970a-49b9-8f34-b2fd129bbc39-config" (OuterVolumeSpecName: "config") pod "4e951776-970a-49b9-8f34-b2fd129bbc39" (UID: "4e951776-970a-49b9-8f34-b2fd129bbc39"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 21 20:18:58 crc kubenswrapper[4727]: I1121 20:18:58.920172 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e951776-970a-49b9-8f34-b2fd129bbc39-client-ca" (OuterVolumeSpecName: "client-ca") pod "4e951776-970a-49b9-8f34-b2fd129bbc39" (UID: "4e951776-970a-49b9-8f34-b2fd129bbc39"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 21 20:18:58 crc kubenswrapper[4727]: I1121 20:18:58.920187 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72783796-a3ab-4a34-9b9e-b4df16dd1cc2-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "72783796-a3ab-4a34-9b9e-b4df16dd1cc2" (UID: "72783796-a3ab-4a34-9b9e-b4df16dd1cc2"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 21 20:18:58 crc kubenswrapper[4727]: I1121 20:18:58.920246 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72783796-a3ab-4a34-9b9e-b4df16dd1cc2-config" (OuterVolumeSpecName: "config") pod "72783796-a3ab-4a34-9b9e-b4df16dd1cc2" (UID: "72783796-a3ab-4a34-9b9e-b4df16dd1cc2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 21 20:18:58 crc kubenswrapper[4727]: I1121 20:18:58.920302 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72783796-a3ab-4a34-9b9e-b4df16dd1cc2-client-ca" (OuterVolumeSpecName: "client-ca") pod "72783796-a3ab-4a34-9b9e-b4df16dd1cc2" (UID: "72783796-a3ab-4a34-9b9e-b4df16dd1cc2"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 21 20:18:58 crc kubenswrapper[4727]: I1121 20:18:58.927639 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72783796-a3ab-4a34-9b9e-b4df16dd1cc2-kube-api-access-hz24f" (OuterVolumeSpecName: "kube-api-access-hz24f") pod "72783796-a3ab-4a34-9b9e-b4df16dd1cc2" (UID: "72783796-a3ab-4a34-9b9e-b4df16dd1cc2"). InnerVolumeSpecName "kube-api-access-hz24f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 20:18:58 crc kubenswrapper[4727]: I1121 20:18:58.928407 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e951776-970a-49b9-8f34-b2fd129bbc39-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4e951776-970a-49b9-8f34-b2fd129bbc39" (UID: "4e951776-970a-49b9-8f34-b2fd129bbc39"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 20:18:58 crc kubenswrapper[4727]: I1121 20:18:58.928471 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e951776-970a-49b9-8f34-b2fd129bbc39-kube-api-access-bs7fm" (OuterVolumeSpecName: "kube-api-access-bs7fm") pod "4e951776-970a-49b9-8f34-b2fd129bbc39" (UID: "4e951776-970a-49b9-8f34-b2fd129bbc39"). InnerVolumeSpecName "kube-api-access-bs7fm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 20:18:58 crc kubenswrapper[4727]: I1121 20:18:58.928923 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72783796-a3ab-4a34-9b9e-b4df16dd1cc2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "72783796-a3ab-4a34-9b9e-b4df16dd1cc2" (UID: "72783796-a3ab-4a34-9b9e-b4df16dd1cc2"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 20:18:59 crc kubenswrapper[4727]: I1121 20:18:59.021111 4727 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4e951776-970a-49b9-8f34-b2fd129bbc39-client-ca\") on node \"crc\" DevicePath \"\""
Nov 21 20:18:59 crc kubenswrapper[4727]: I1121 20:18:59.021166 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bs7fm\" (UniqueName: \"kubernetes.io/projected/4e951776-970a-49b9-8f34-b2fd129bbc39-kube-api-access-bs7fm\") on node \"crc\" DevicePath \"\""
Nov 21 20:18:59 crc kubenswrapper[4727]: I1121 20:18:59.021178 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e951776-970a-49b9-8f34-b2fd129bbc39-config\") on node \"crc\" DevicePath \"\""
Nov 21 20:18:59 crc kubenswrapper[4727]: I1121 20:18:59.021196 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72783796-a3ab-4a34-9b9e-b4df16dd1cc2-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 21 20:18:59 crc kubenswrapper[4727]: I1121 20:18:59.021207 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e951776-970a-49b9-8f34-b2fd129bbc39-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 21 20:18:59 crc kubenswrapper[4727]: I1121 20:18:59.021216 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72783796-a3ab-4a34-9b9e-b4df16dd1cc2-config\") on node \"crc\" DevicePath \"\""
Nov 21 20:18:59 crc kubenswrapper[4727]: I1121 20:18:59.021225 4727 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/72783796-a3ab-4a34-9b9e-b4df16dd1cc2-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Nov 21 20:18:59 crc kubenswrapper[4727]: I1121 20:18:59.021233 4727 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/72783796-a3ab-4a34-9b9e-b4df16dd1cc2-client-ca\") on node \"crc\" DevicePath \"\""
Nov 21 20:18:59 crc kubenswrapper[4727]: I1121 20:18:59.021242 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hz24f\" (UniqueName: \"kubernetes.io/projected/72783796-a3ab-4a34-9b9e-b4df16dd1cc2-kube-api-access-hz24f\") on node \"crc\" DevicePath \"\""
Nov 21 20:18:59 crc kubenswrapper[4727]: I1121 20:18:59.179548 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qjc79"]
Nov 21 20:18:59 crc kubenswrapper[4727]: I1121 20:18:59.186056 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qjc79"]
Nov 21 20:18:59 crc kubenswrapper[4727]: I1121 20:18:59.193775 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hx72f"]
Nov 21 20:18:59 crc kubenswrapper[4727]: I1121 20:18:59.200209 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hx72f"]
Nov 21 20:18:59 crc kubenswrapper[4727]: I1121 20:18:59.510837 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e951776-970a-49b9-8f34-b2fd129bbc39" path="/var/lib/kubelet/pods/4e951776-970a-49b9-8f34-b2fd129bbc39/volumes"
Nov 21 20:18:59 crc kubenswrapper[4727]: I1121 20:18:59.511429 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72783796-a3ab-4a34-9b9e-b4df16dd1cc2" path="/var/lib/kubelet/pods/72783796-a3ab-4a34-9b9e-b4df16dd1cc2/volumes"
Nov 21 20:19:00 crc kubenswrapper[4727]: I1121 20:19:00.079538 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bc7f48584-tc4g2"]
Nov 21 20:19:00 crc kubenswrapper[4727]: E1121 20:19:00.080323 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e951776-970a-49b9-8f34-b2fd129bbc39" containerName="route-controller-manager"
Nov 21 20:19:00 crc kubenswrapper[4727]: I1121 20:19:00.080349 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e951776-970a-49b9-8f34-b2fd129bbc39" containerName="route-controller-manager"
Nov 21 20:19:00 crc kubenswrapper[4727]: E1121 20:19:00.080361 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72783796-a3ab-4a34-9b9e-b4df16dd1cc2" containerName="controller-manager"
Nov 21 20:19:00 crc kubenswrapper[4727]: I1121 20:19:00.080369 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="72783796-a3ab-4a34-9b9e-b4df16dd1cc2" containerName="controller-manager"
Nov 21 20:19:00 crc kubenswrapper[4727]: I1121 20:19:00.080505 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="72783796-a3ab-4a34-9b9e-b4df16dd1cc2" containerName="controller-manager"
Nov 21 20:19:00 crc kubenswrapper[4727]: I1121 20:19:00.080524 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e951776-970a-49b9-8f34-b2fd129bbc39" containerName="route-controller-manager"
Nov 21 20:19:00 crc kubenswrapper[4727]: I1121 20:19:00.081341 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-bc7f48584-tc4g2"
Nov 21 20:19:00 crc kubenswrapper[4727]: I1121 20:19:00.083163 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Nov 21 20:19:00 crc kubenswrapper[4727]: I1121 20:19:00.083373 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Nov 21 20:19:00 crc kubenswrapper[4727]: I1121 20:19:00.083366 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Nov 21 20:19:00 crc kubenswrapper[4727]: I1121 20:19:00.083468 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Nov 21 20:19:00 crc kubenswrapper[4727]: I1121 20:19:00.083895 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Nov 21 20:19:00 crc kubenswrapper[4727]: I1121 20:19:00.085036 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Nov 21 20:19:00 crc kubenswrapper[4727]: I1121 20:19:00.086422 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-98699bfc6-djlb8"]
Nov 21 20:19:00 crc kubenswrapper[4727]: I1121 20:19:00.087369 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-98699bfc6-djlb8"
Nov 21 20:19:00 crc kubenswrapper[4727]: I1121 20:19:00.089923 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Nov 21 20:19:00 crc kubenswrapper[4727]: I1121 20:19:00.090356 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Nov 21 20:19:00 crc kubenswrapper[4727]: I1121 20:19:00.090924 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Nov 21 20:19:00 crc kubenswrapper[4727]: I1121 20:19:00.091813 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Nov 21 20:19:00 crc kubenswrapper[4727]: I1121 20:19:00.092021 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Nov 21 20:19:00 crc kubenswrapper[4727]: I1121 20:19:00.093354 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Nov 21 20:19:00 crc kubenswrapper[4727]: I1121 20:19:00.098927 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Nov 21 20:19:00 crc kubenswrapper[4727]: I1121 20:19:00.104496 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-98699bfc6-djlb8"]
Nov 21 20:19:00 crc kubenswrapper[4727]: I1121 20:19:00.110615 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bc7f48584-tc4g2"]
Nov 21 20:19:00 crc kubenswrapper[4727]: I1121 20:19:00.201008 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bc7f48584-tc4g2"]
Nov 21 20:19:00 crc kubenswrapper[4727]: E1121 20:19:00.201546 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config kube-api-access-wcjlw serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-bc7f48584-tc4g2" podUID="8fbc6c93-6b25-4499-8647-111b1e6598b3"
Nov 21 20:19:00 crc kubenswrapper[4727]: I1121 20:19:00.237242 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8fbc6c93-6b25-4499-8647-111b1e6598b3-client-ca\") pod \"route-controller-manager-bc7f48584-tc4g2\" (UID: \"8fbc6c93-6b25-4499-8647-111b1e6598b3\") " pod="openshift-route-controller-manager/route-controller-manager-bc7f48584-tc4g2"
Nov 21 20:19:00 crc kubenswrapper[4727]: I1121 20:19:00.237321 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd3b0904-8d49-41a7-af84-df42793bcf17-config\") pod \"controller-manager-98699bfc6-djlb8\" (UID: \"dd3b0904-8d49-41a7-af84-df42793bcf17\") " pod="openshift-controller-manager/controller-manager-98699bfc6-djlb8"
Nov 21 20:19:00 crc kubenswrapper[4727]: I1121 20:19:00.237416 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dd3b0904-8d49-41a7-af84-df42793bcf17-client-ca\") pod \"controller-manager-98699bfc6-djlb8\" (UID: \"dd3b0904-8d49-41a7-af84-df42793bcf17\") " pod="openshift-controller-manager/controller-manager-98699bfc6-djlb8"
Nov 21 20:19:00 crc kubenswrapper[4727]: I1121 20:19:00.237583 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8fbc6c93-6b25-4499-8647-111b1e6598b3-serving-cert\") pod \"route-controller-manager-bc7f48584-tc4g2\" (UID: \"8fbc6c93-6b25-4499-8647-111b1e6598b3\") " pod="openshift-route-controller-manager/route-controller-manager-bc7f48584-tc4g2"
Nov 21 20:19:00 crc kubenswrapper[4727]: I1121 20:19:00.237735 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcjlw\" (UniqueName: \"kubernetes.io/projected/8fbc6c93-6b25-4499-8647-111b1e6598b3-kube-api-access-wcjlw\") pod \"route-controller-manager-bc7f48584-tc4g2\" (UID: \"8fbc6c93-6b25-4499-8647-111b1e6598b3\") " pod="openshift-route-controller-manager/route-controller-manager-bc7f48584-tc4g2"
Nov 21 20:19:00 crc kubenswrapper[4727]: I1121 20:19:00.237799 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fbc6c93-6b25-4499-8647-111b1e6598b3-config\") pod \"route-controller-manager-bc7f48584-tc4g2\" (UID: \"8fbc6c93-6b25-4499-8647-111b1e6598b3\") " pod="openshift-route-controller-manager/route-controller-manager-bc7f48584-tc4g2"
Nov 21 20:19:00 crc kubenswrapper[4727]: I1121 20:19:00.237854 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd3b0904-8d49-41a7-af84-df42793bcf17-serving-cert\") pod \"controller-manager-98699bfc6-djlb8\" (UID: \"dd3b0904-8d49-41a7-af84-df42793bcf17\") " pod="openshift-controller-manager/controller-manager-98699bfc6-djlb8"
Nov 21 20:19:00 crc kubenswrapper[4727]: I1121 20:19:00.238061 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v42vk\" (UniqueName: \"kubernetes.io/projected/dd3b0904-8d49-41a7-af84-df42793bcf17-kube-api-access-v42vk\") pod \"controller-manager-98699bfc6-djlb8\" (UID: \"dd3b0904-8d49-41a7-af84-df42793bcf17\") " pod="openshift-controller-manager/controller-manager-98699bfc6-djlb8"
Nov 21 20:19:00 crc kubenswrapper[4727]: I1121 20:19:00.238189 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd3b0904-8d49-41a7-af84-df42793bcf17-proxy-ca-bundles\") pod \"controller-manager-98699bfc6-djlb8\" (UID: \"dd3b0904-8d49-41a7-af84-df42793bcf17\") " pod="openshift-controller-manager/controller-manager-98699bfc6-djlb8"
Nov 21 20:19:00 crc kubenswrapper[4727]: I1121 20:19:00.340298 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd3b0904-8d49-41a7-af84-df42793bcf17-config\") pod \"controller-manager-98699bfc6-djlb8\" (UID: \"dd3b0904-8d49-41a7-af84-df42793bcf17\") " pod="openshift-controller-manager/controller-manager-98699bfc6-djlb8"
Nov 21 20:19:00 crc kubenswrapper[4727]: I1121 20:19:00.340353 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dd3b0904-8d49-41a7-af84-df42793bcf17-client-ca\") pod \"controller-manager-98699bfc6-djlb8\" (UID: \"dd3b0904-8d49-41a7-af84-df42793bcf17\") " pod="openshift-controller-manager/controller-manager-98699bfc6-djlb8"
Nov 21 20:19:00 crc kubenswrapper[4727]: I1121 20:19:00.340370 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8fbc6c93-6b25-4499-8647-111b1e6598b3-serving-cert\") pod \"route-controller-manager-bc7f48584-tc4g2\" (UID: \"8fbc6c93-6b25-4499-8647-111b1e6598b3\") " pod="openshift-route-controller-manager/route-controller-manager-bc7f48584-tc4g2"
Nov 21 20:19:00 crc kubenswrapper[4727]: I1121 20:19:00.340410 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcjlw\" (UniqueName: \"kubernetes.io/projected/8fbc6c93-6b25-4499-8647-111b1e6598b3-kube-api-access-wcjlw\") pod \"route-controller-manager-bc7f48584-tc4g2\" (UID: \"8fbc6c93-6b25-4499-8647-111b1e6598b3\") " pod="openshift-route-controller-manager/route-controller-manager-bc7f48584-tc4g2"
Nov 21 20:19:00 crc kubenswrapper[4727]: I1121 20:19:00.340439 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fbc6c93-6b25-4499-8647-111b1e6598b3-config\") pod \"route-controller-manager-bc7f48584-tc4g2\" (UID: \"8fbc6c93-6b25-4499-8647-111b1e6598b3\") " pod="openshift-route-controller-manager/route-controller-manager-bc7f48584-tc4g2"
Nov 21 20:19:00 crc kubenswrapper[4727]: I1121 20:19:00.340862 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd3b0904-8d49-41a7-af84-df42793bcf17-serving-cert\") pod \"controller-manager-98699bfc6-djlb8\" (UID: \"dd3b0904-8d49-41a7-af84-df42793bcf17\") " pod="openshift-controller-manager/controller-manager-98699bfc6-djlb8"
Nov 21 20:19:00 crc kubenswrapper[4727]: I1121 20:19:00.340942 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v42vk\" (UniqueName: \"kubernetes.io/projected/dd3b0904-8d49-41a7-af84-df42793bcf17-kube-api-access-v42vk\") pod \"controller-manager-98699bfc6-djlb8\" (UID: \"dd3b0904-8d49-41a7-af84-df42793bcf17\") " pod="openshift-controller-manager/controller-manager-98699bfc6-djlb8"
Nov 21 20:19:00 crc kubenswrapper[4727]: I1121 20:19:00.341249 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd3b0904-8d49-41a7-af84-df42793bcf17-proxy-ca-bundles\") pod \"controller-manager-98699bfc6-djlb8\" (UID: \"dd3b0904-8d49-41a7-af84-df42793bcf17\") " pod="openshift-controller-manager/controller-manager-98699bfc6-djlb8"
Nov 21 20:19:00 crc kubenswrapper[4727]: I1121 20:19:00.342286 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd3b0904-8d49-41a7-af84-df42793bcf17-config\") pod \"controller-manager-98699bfc6-djlb8\" (UID: \"dd3b0904-8d49-41a7-af84-df42793bcf17\") " pod="openshift-controller-manager/controller-manager-98699bfc6-djlb8"
Nov 21 20:19:00 crc kubenswrapper[4727]: I1121 20:19:00.341490 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dd3b0904-8d49-41a7-af84-df42793bcf17-client-ca\") pod \"controller-manager-98699bfc6-djlb8\" (UID: \"dd3b0904-8d49-41a7-af84-df42793bcf17\") " pod="openshift-controller-manager/controller-manager-98699bfc6-djlb8"
Nov 21 20:19:00 crc kubenswrapper[4727]: I1121 20:19:00.341899 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fbc6c93-6b25-4499-8647-111b1e6598b3-config\") pod \"route-controller-manager-bc7f48584-tc4g2\" (UID: \"8fbc6c93-6b25-4499-8647-111b1e6598b3\") " pod="openshift-route-controller-manager/route-controller-manager-bc7f48584-tc4g2"
Nov 21 20:19:00 crc kubenswrapper[4727]: I1121 20:19:00.342228 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd3b0904-8d49-41a7-af84-df42793bcf17-proxy-ca-bundles\") pod \"controller-manager-98699bfc6-djlb8\" (UID: \"dd3b0904-8d49-41a7-af84-df42793bcf17\") " pod="openshift-controller-manager/controller-manager-98699bfc6-djlb8"
Nov 21 20:19:00 crc kubenswrapper[4727]: I1121 20:19:00.342376 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8fbc6c93-6b25-4499-8647-111b1e6598b3-client-ca\") pod \"route-controller-manager-bc7f48584-tc4g2\" (UID: \"8fbc6c93-6b25-4499-8647-111b1e6598b3\") " pod="openshift-route-controller-manager/route-controller-manager-bc7f48584-tc4g2"
Nov 21 20:19:00 crc kubenswrapper[4727]: I1121 20:19:00.343062 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8fbc6c93-6b25-4499-8647-111b1e6598b3-client-ca\") pod \"route-controller-manager-bc7f48584-tc4g2\" (UID: \"8fbc6c93-6b25-4499-8647-111b1e6598b3\") " pod="openshift-route-controller-manager/route-controller-manager-bc7f48584-tc4g2"
Nov 21 20:19:00 crc kubenswrapper[4727]: I1121 20:19:00.344795 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8fbc6c93-6b25-4499-8647-111b1e6598b3-serving-cert\") pod \"route-controller-manager-bc7f48584-tc4g2\" (UID: \"8fbc6c93-6b25-4499-8647-111b1e6598b3\") " pod="openshift-route-controller-manager/route-controller-manager-bc7f48584-tc4g2"
Nov 21 20:19:00 crc kubenswrapper[4727]: I1121 20:19:00.347361 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd3b0904-8d49-41a7-af84-df42793bcf17-serving-cert\") pod \"controller-manager-98699bfc6-djlb8\" (UID: \"dd3b0904-8d49-41a7-af84-df42793bcf17\") " pod="openshift-controller-manager/controller-manager-98699bfc6-djlb8"
Nov 21 20:19:00 crc kubenswrapper[4727]: I1121 20:19:00.357827 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcjlw\" (UniqueName: \"kubernetes.io/projected/8fbc6c93-6b25-4499-8647-111b1e6598b3-kube-api-access-wcjlw\") pod \"route-controller-manager-bc7f48584-tc4g2\" (UID: \"8fbc6c93-6b25-4499-8647-111b1e6598b3\") " pod="openshift-route-controller-manager/route-controller-manager-bc7f48584-tc4g2"
Nov 21 20:19:00 crc kubenswrapper[4727]: I1121 20:19:00.359523 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v42vk\" (UniqueName: \"kubernetes.io/projected/dd3b0904-8d49-41a7-af84-df42793bcf17-kube-api-access-v42vk\") pod \"controller-manager-98699bfc6-djlb8\" (UID: \"dd3b0904-8d49-41a7-af84-df42793bcf17\") " pod="openshift-controller-manager/controller-manager-98699bfc6-djlb8"
Nov 21 20:19:00 crc kubenswrapper[4727]: I1121 20:19:00.412394 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-98699bfc6-djlb8"
Nov 21 20:19:00 crc kubenswrapper[4727]: I1121 20:19:00.640527 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-98699bfc6-djlb8"]
Nov 21 20:19:00 crc kubenswrapper[4727]: I1121 20:19:00.861622 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-98699bfc6-djlb8" event={"ID":"dd3b0904-8d49-41a7-af84-df42793bcf17","Type":"ContainerStarted","Data":"1675cc8586674bfe9872b32982f0acb0c47b8ddc7e5a554757b1533f8176d3b5"}
Nov 21 20:19:00 crc kubenswrapper[4727]: I1121 20:19:00.861672 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-98699bfc6-djlb8" event={"ID":"dd3b0904-8d49-41a7-af84-df42793bcf17","Type":"ContainerStarted","Data":"4131d05ae1de62fcc03655b7af2c41dbce08b474a9561411fc73b93efa1ba0ed"}
Nov 21 20:19:00 crc kubenswrapper[4727]: I1121 20:19:00.861639 4727 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-bc7f48584-tc4g2" Nov 21 20:19:00 crc kubenswrapper[4727]: I1121 20:19:00.861990 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-98699bfc6-djlb8" Nov 21 20:19:00 crc kubenswrapper[4727]: I1121 20:19:00.864614 4727 patch_prober.go:28] interesting pod/controller-manager-98699bfc6-djlb8 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.80:8443/healthz\": dial tcp 10.217.0.80:8443: connect: connection refused" start-of-body= Nov 21 20:19:00 crc kubenswrapper[4727]: I1121 20:19:00.864671 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-98699bfc6-djlb8" podUID="dd3b0904-8d49-41a7-af84-df42793bcf17" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.80:8443/healthz\": dial tcp 10.217.0.80:8443: connect: connection refused" Nov 21 20:19:00 crc kubenswrapper[4727]: I1121 20:19:00.871873 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-bc7f48584-tc4g2" Nov 21 20:19:00 crc kubenswrapper[4727]: I1121 20:19:00.949710 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcjlw\" (UniqueName: \"kubernetes.io/projected/8fbc6c93-6b25-4499-8647-111b1e6598b3-kube-api-access-wcjlw\") pod \"8fbc6c93-6b25-4499-8647-111b1e6598b3\" (UID: \"8fbc6c93-6b25-4499-8647-111b1e6598b3\") " Nov 21 20:19:00 crc kubenswrapper[4727]: I1121 20:19:00.949851 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fbc6c93-6b25-4499-8647-111b1e6598b3-config\") pod \"8fbc6c93-6b25-4499-8647-111b1e6598b3\" (UID: \"8fbc6c93-6b25-4499-8647-111b1e6598b3\") " Nov 21 20:19:00 crc kubenswrapper[4727]: I1121 20:19:00.950017 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8fbc6c93-6b25-4499-8647-111b1e6598b3-client-ca\") pod \"8fbc6c93-6b25-4499-8647-111b1e6598b3\" (UID: \"8fbc6c93-6b25-4499-8647-111b1e6598b3\") " Nov 21 20:19:00 crc kubenswrapper[4727]: I1121 20:19:00.950067 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8fbc6c93-6b25-4499-8647-111b1e6598b3-serving-cert\") pod \"8fbc6c93-6b25-4499-8647-111b1e6598b3\" (UID: \"8fbc6c93-6b25-4499-8647-111b1e6598b3\") " Nov 21 20:19:00 crc kubenswrapper[4727]: I1121 20:19:00.950662 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fbc6c93-6b25-4499-8647-111b1e6598b3-config" (OuterVolumeSpecName: "config") pod "8fbc6c93-6b25-4499-8647-111b1e6598b3" (UID: "8fbc6c93-6b25-4499-8647-111b1e6598b3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:19:00 crc kubenswrapper[4727]: I1121 20:19:00.951013 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fbc6c93-6b25-4499-8647-111b1e6598b3-client-ca" (OuterVolumeSpecName: "client-ca") pod "8fbc6c93-6b25-4499-8647-111b1e6598b3" (UID: "8fbc6c93-6b25-4499-8647-111b1e6598b3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:19:00 crc kubenswrapper[4727]: I1121 20:19:00.968354 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fbc6c93-6b25-4499-8647-111b1e6598b3-kube-api-access-wcjlw" (OuterVolumeSpecName: "kube-api-access-wcjlw") pod "8fbc6c93-6b25-4499-8647-111b1e6598b3" (UID: "8fbc6c93-6b25-4499-8647-111b1e6598b3"). InnerVolumeSpecName "kube-api-access-wcjlw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:19:00 crc kubenswrapper[4727]: I1121 20:19:00.968520 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fbc6c93-6b25-4499-8647-111b1e6598b3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8fbc6c93-6b25-4499-8647-111b1e6598b3" (UID: "8fbc6c93-6b25-4499-8647-111b1e6598b3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:19:01 crc kubenswrapper[4727]: I1121 20:19:01.052134 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fbc6c93-6b25-4499-8647-111b1e6598b3-config\") on node \"crc\" DevicePath \"\"" Nov 21 20:19:01 crc kubenswrapper[4727]: I1121 20:19:01.052173 4727 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8fbc6c93-6b25-4499-8647-111b1e6598b3-client-ca\") on node \"crc\" DevicePath \"\"" Nov 21 20:19:01 crc kubenswrapper[4727]: I1121 20:19:01.052190 4727 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8fbc6c93-6b25-4499-8647-111b1e6598b3-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 20:19:01 crc kubenswrapper[4727]: I1121 20:19:01.052203 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wcjlw\" (UniqueName: \"kubernetes.io/projected/8fbc6c93-6b25-4499-8647-111b1e6598b3-kube-api-access-wcjlw\") on node \"crc\" DevicePath \"\"" Nov 21 20:19:01 crc kubenswrapper[4727]: I1121 20:19:01.868810 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-bc7f48584-tc4g2" Nov 21 20:19:01 crc kubenswrapper[4727]: I1121 20:19:01.873671 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-98699bfc6-djlb8" Nov 21 20:19:01 crc kubenswrapper[4727]: I1121 20:19:01.890318 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-98699bfc6-djlb8" podStartSLOduration=3.890300102 podStartE2EDuration="3.890300102s" podCreationTimestamp="2025-11-21 20:18:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:19:00.880427684 +0000 UTC m=+746.066612748" watchObservedRunningTime="2025-11-21 20:19:01.890300102 +0000 UTC m=+747.076485146" Nov 21 20:19:01 crc kubenswrapper[4727]: I1121 20:19:01.899794 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bc7f48584-tc4g2"] Nov 21 20:19:01 crc kubenswrapper[4727]: I1121 20:19:01.911175 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-546d4bbcf7-qzsgk"] Nov 21 20:19:01 crc kubenswrapper[4727]: I1121 20:19:01.913077 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-546d4bbcf7-qzsgk" Nov 21 20:19:01 crc kubenswrapper[4727]: I1121 20:19:01.918175 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 21 20:19:01 crc kubenswrapper[4727]: I1121 20:19:01.918476 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 21 20:19:01 crc kubenswrapper[4727]: I1121 20:19:01.918667 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 21 20:19:01 crc kubenswrapper[4727]: I1121 20:19:01.918793 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 21 20:19:01 crc kubenswrapper[4727]: I1121 20:19:01.918899 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 21 20:19:01 crc kubenswrapper[4727]: I1121 20:19:01.919068 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 21 20:19:01 crc kubenswrapper[4727]: I1121 20:19:01.922552 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bc7f48584-tc4g2"] Nov 21 20:19:01 crc kubenswrapper[4727]: I1121 20:19:01.940518 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-546d4bbcf7-qzsgk"] Nov 21 20:19:02 crc kubenswrapper[4727]: I1121 20:19:02.068974 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6glk\" (UniqueName: \"kubernetes.io/projected/cdcb46a9-744f-4fa7-ae0d-5fb71f25de2e-kube-api-access-b6glk\") pod \"route-controller-manager-546d4bbcf7-qzsgk\" (UID: 
\"cdcb46a9-744f-4fa7-ae0d-5fb71f25de2e\") " pod="openshift-route-controller-manager/route-controller-manager-546d4bbcf7-qzsgk" Nov 21 20:19:02 crc kubenswrapper[4727]: I1121 20:19:02.069032 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cdcb46a9-744f-4fa7-ae0d-5fb71f25de2e-serving-cert\") pod \"route-controller-manager-546d4bbcf7-qzsgk\" (UID: \"cdcb46a9-744f-4fa7-ae0d-5fb71f25de2e\") " pod="openshift-route-controller-manager/route-controller-manager-546d4bbcf7-qzsgk" Nov 21 20:19:02 crc kubenswrapper[4727]: I1121 20:19:02.069089 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdcb46a9-744f-4fa7-ae0d-5fb71f25de2e-config\") pod \"route-controller-manager-546d4bbcf7-qzsgk\" (UID: \"cdcb46a9-744f-4fa7-ae0d-5fb71f25de2e\") " pod="openshift-route-controller-manager/route-controller-manager-546d4bbcf7-qzsgk" Nov 21 20:19:02 crc kubenswrapper[4727]: I1121 20:19:02.069115 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cdcb46a9-744f-4fa7-ae0d-5fb71f25de2e-client-ca\") pod \"route-controller-manager-546d4bbcf7-qzsgk\" (UID: \"cdcb46a9-744f-4fa7-ae0d-5fb71f25de2e\") " pod="openshift-route-controller-manager/route-controller-manager-546d4bbcf7-qzsgk" Nov 21 20:19:02 crc kubenswrapper[4727]: I1121 20:19:02.171266 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdcb46a9-744f-4fa7-ae0d-5fb71f25de2e-config\") pod \"route-controller-manager-546d4bbcf7-qzsgk\" (UID: \"cdcb46a9-744f-4fa7-ae0d-5fb71f25de2e\") " pod="openshift-route-controller-manager/route-controller-manager-546d4bbcf7-qzsgk" Nov 21 20:19:02 crc kubenswrapper[4727]: I1121 20:19:02.171397 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cdcb46a9-744f-4fa7-ae0d-5fb71f25de2e-client-ca\") pod \"route-controller-manager-546d4bbcf7-qzsgk\" (UID: \"cdcb46a9-744f-4fa7-ae0d-5fb71f25de2e\") " pod="openshift-route-controller-manager/route-controller-manager-546d4bbcf7-qzsgk" Nov 21 20:19:02 crc kubenswrapper[4727]: I1121 20:19:02.171592 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6glk\" (UniqueName: \"kubernetes.io/projected/cdcb46a9-744f-4fa7-ae0d-5fb71f25de2e-kube-api-access-b6glk\") pod \"route-controller-manager-546d4bbcf7-qzsgk\" (UID: \"cdcb46a9-744f-4fa7-ae0d-5fb71f25de2e\") " pod="openshift-route-controller-manager/route-controller-manager-546d4bbcf7-qzsgk" Nov 21 20:19:02 crc kubenswrapper[4727]: I1121 20:19:02.171697 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cdcb46a9-744f-4fa7-ae0d-5fb71f25de2e-serving-cert\") pod \"route-controller-manager-546d4bbcf7-qzsgk\" (UID: \"cdcb46a9-744f-4fa7-ae0d-5fb71f25de2e\") " pod="openshift-route-controller-manager/route-controller-manager-546d4bbcf7-qzsgk" Nov 21 20:19:02 crc kubenswrapper[4727]: I1121 20:19:02.172848 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cdcb46a9-744f-4fa7-ae0d-5fb71f25de2e-client-ca\") pod \"route-controller-manager-546d4bbcf7-qzsgk\" (UID: \"cdcb46a9-744f-4fa7-ae0d-5fb71f25de2e\") " pod="openshift-route-controller-manager/route-controller-manager-546d4bbcf7-qzsgk" Nov 21 20:19:02 crc kubenswrapper[4727]: I1121 20:19:02.172881 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdcb46a9-744f-4fa7-ae0d-5fb71f25de2e-config\") pod \"route-controller-manager-546d4bbcf7-qzsgk\" (UID: \"cdcb46a9-744f-4fa7-ae0d-5fb71f25de2e\") " 
pod="openshift-route-controller-manager/route-controller-manager-546d4bbcf7-qzsgk" Nov 21 20:19:02 crc kubenswrapper[4727]: I1121 20:19:02.176718 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cdcb46a9-744f-4fa7-ae0d-5fb71f25de2e-serving-cert\") pod \"route-controller-manager-546d4bbcf7-qzsgk\" (UID: \"cdcb46a9-744f-4fa7-ae0d-5fb71f25de2e\") " pod="openshift-route-controller-manager/route-controller-manager-546d4bbcf7-qzsgk" Nov 21 20:19:02 crc kubenswrapper[4727]: I1121 20:19:02.187683 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6glk\" (UniqueName: \"kubernetes.io/projected/cdcb46a9-744f-4fa7-ae0d-5fb71f25de2e-kube-api-access-b6glk\") pod \"route-controller-manager-546d4bbcf7-qzsgk\" (UID: \"cdcb46a9-744f-4fa7-ae0d-5fb71f25de2e\") " pod="openshift-route-controller-manager/route-controller-manager-546d4bbcf7-qzsgk" Nov 21 20:19:02 crc kubenswrapper[4727]: I1121 20:19:02.252709 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-546d4bbcf7-qzsgk" Nov 21 20:19:02 crc kubenswrapper[4727]: I1121 20:19:02.285355 4727 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Nov 21 20:19:02 crc kubenswrapper[4727]: I1121 20:19:02.285699 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="fc313c47-e815-4c94-b46b-51876e49ec0a" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 21 20:19:02 crc kubenswrapper[4727]: I1121 20:19:02.697417 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-546d4bbcf7-qzsgk"] Nov 21 20:19:02 crc kubenswrapper[4727]: I1121 20:19:02.887753 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-546d4bbcf7-qzsgk" event={"ID":"cdcb46a9-744f-4fa7-ae0d-5fb71f25de2e","Type":"ContainerStarted","Data":"44fea3c56f94cfce7c37fc1876e9bd84647a2039905006e4f1c1fc9ddcd4b3af"} Nov 21 20:19:02 crc kubenswrapper[4727]: I1121 20:19:02.888923 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-546d4bbcf7-qzsgk" event={"ID":"cdcb46a9-744f-4fa7-ae0d-5fb71f25de2e","Type":"ContainerStarted","Data":"99035725c4c9d54cebf2462fb0cc31ffc0a1a5ada9dc82abf2eb3e345c9c7fe2"} Nov 21 20:19:02 crc kubenswrapper[4727]: I1121 20:19:02.889473 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-546d4bbcf7-qzsgk" Nov 21 20:19:02 crc kubenswrapper[4727]: I1121 20:19:02.893095 4727 patch_prober.go:28] interesting 
pod/route-controller-manager-546d4bbcf7-qzsgk container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.81:8443/healthz\": dial tcp 10.217.0.81:8443: connect: connection refused" start-of-body= Nov 21 20:19:02 crc kubenswrapper[4727]: I1121 20:19:02.893167 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-546d4bbcf7-qzsgk" podUID="cdcb46a9-744f-4fa7-ae0d-5fb71f25de2e" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.81:8443/healthz\": dial tcp 10.217.0.81:8443: connect: connection refused" Nov 21 20:19:03 crc kubenswrapper[4727]: I1121 20:19:03.508261 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fbc6c93-6b25-4499-8647-111b1e6598b3" path="/var/lib/kubelet/pods/8fbc6c93-6b25-4499-8647-111b1e6598b3/volumes" Nov 21 20:19:03 crc kubenswrapper[4727]: I1121 20:19:03.893862 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-546d4bbcf7-qzsgk" Nov 21 20:19:03 crc kubenswrapper[4727]: I1121 20:19:03.913988 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-546d4bbcf7-qzsgk" podStartSLOduration=3.913950241 podStartE2EDuration="3.913950241s" podCreationTimestamp="2025-11-21 20:19:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:19:02.906495666 +0000 UTC m=+748.092680710" watchObservedRunningTime="2025-11-21 20:19:03.913950241 +0000 UTC m=+749.100135285" Nov 21 20:19:07 crc kubenswrapper[4727]: I1121 20:19:07.755924 4727 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 21 20:19:12 crc 
kubenswrapper[4727]: I1121 20:19:12.282547 4727 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Nov 21 20:19:12 crc kubenswrapper[4727]: I1121 20:19:12.283106 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="fc313c47-e815-4c94-b46b-51876e49ec0a" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 21 20:19:13 crc kubenswrapper[4727]: I1121 20:19:13.335200 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 20:19:13 crc kubenswrapper[4727]: I1121 20:19:13.335270 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 20:19:22 crc kubenswrapper[4727]: I1121 20:19:22.281887 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-ingester-0" Nov 21 20:19:31 crc kubenswrapper[4727]: I1121 20:19:31.329451 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jbjsv"] Nov 21 20:19:31 crc kubenswrapper[4727]: I1121 20:19:31.332505 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jbjsv" Nov 21 20:19:31 crc kubenswrapper[4727]: I1121 20:19:31.347252 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jbjsv"] Nov 21 20:19:31 crc kubenswrapper[4727]: I1121 20:19:31.366581 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6r8d\" (UniqueName: \"kubernetes.io/projected/6944b8a6-ba5a-4b90-8dbf-22d3d2af9145-kube-api-access-c6r8d\") pod \"redhat-marketplace-jbjsv\" (UID: \"6944b8a6-ba5a-4b90-8dbf-22d3d2af9145\") " pod="openshift-marketplace/redhat-marketplace-jbjsv" Nov 21 20:19:31 crc kubenswrapper[4727]: I1121 20:19:31.366648 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6944b8a6-ba5a-4b90-8dbf-22d3d2af9145-catalog-content\") pod \"redhat-marketplace-jbjsv\" (UID: \"6944b8a6-ba5a-4b90-8dbf-22d3d2af9145\") " pod="openshift-marketplace/redhat-marketplace-jbjsv" Nov 21 20:19:31 crc kubenswrapper[4727]: I1121 20:19:31.366672 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6944b8a6-ba5a-4b90-8dbf-22d3d2af9145-utilities\") pod \"redhat-marketplace-jbjsv\" (UID: \"6944b8a6-ba5a-4b90-8dbf-22d3d2af9145\") " pod="openshift-marketplace/redhat-marketplace-jbjsv" Nov 21 20:19:31 crc kubenswrapper[4727]: I1121 20:19:31.468666 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6r8d\" (UniqueName: \"kubernetes.io/projected/6944b8a6-ba5a-4b90-8dbf-22d3d2af9145-kube-api-access-c6r8d\") pod \"redhat-marketplace-jbjsv\" (UID: \"6944b8a6-ba5a-4b90-8dbf-22d3d2af9145\") " pod="openshift-marketplace/redhat-marketplace-jbjsv" Nov 21 20:19:31 crc kubenswrapper[4727]: I1121 20:19:31.468721 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6944b8a6-ba5a-4b90-8dbf-22d3d2af9145-catalog-content\") pod \"redhat-marketplace-jbjsv\" (UID: \"6944b8a6-ba5a-4b90-8dbf-22d3d2af9145\") " pod="openshift-marketplace/redhat-marketplace-jbjsv" Nov 21 20:19:31 crc kubenswrapper[4727]: I1121 20:19:31.468738 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6944b8a6-ba5a-4b90-8dbf-22d3d2af9145-utilities\") pod \"redhat-marketplace-jbjsv\" (UID: \"6944b8a6-ba5a-4b90-8dbf-22d3d2af9145\") " pod="openshift-marketplace/redhat-marketplace-jbjsv" Nov 21 20:19:31 crc kubenswrapper[4727]: I1121 20:19:31.469240 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6944b8a6-ba5a-4b90-8dbf-22d3d2af9145-catalog-content\") pod \"redhat-marketplace-jbjsv\" (UID: \"6944b8a6-ba5a-4b90-8dbf-22d3d2af9145\") " pod="openshift-marketplace/redhat-marketplace-jbjsv" Nov 21 20:19:31 crc kubenswrapper[4727]: I1121 20:19:31.469263 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6944b8a6-ba5a-4b90-8dbf-22d3d2af9145-utilities\") pod \"redhat-marketplace-jbjsv\" (UID: \"6944b8a6-ba5a-4b90-8dbf-22d3d2af9145\") " pod="openshift-marketplace/redhat-marketplace-jbjsv" Nov 21 20:19:31 crc kubenswrapper[4727]: I1121 20:19:31.486291 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6r8d\" (UniqueName: \"kubernetes.io/projected/6944b8a6-ba5a-4b90-8dbf-22d3d2af9145-kube-api-access-c6r8d\") pod \"redhat-marketplace-jbjsv\" (UID: \"6944b8a6-ba5a-4b90-8dbf-22d3d2af9145\") " pod="openshift-marketplace/redhat-marketplace-jbjsv" Nov 21 20:19:31 crc kubenswrapper[4727]: I1121 20:19:31.654504 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jbjsv" Nov 21 20:19:32 crc kubenswrapper[4727]: I1121 20:19:32.074432 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jbjsv"] Nov 21 20:19:32 crc kubenswrapper[4727]: I1121 20:19:32.105659 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jbjsv" event={"ID":"6944b8a6-ba5a-4b90-8dbf-22d3d2af9145","Type":"ContainerStarted","Data":"b7aa10e3dbf8b7628a63b3ec0bef64c0ff001e28f3de991f0764808d46ba9d62"} Nov 21 20:19:33 crc kubenswrapper[4727]: I1121 20:19:33.116804 4727 generic.go:334] "Generic (PLEG): container finished" podID="6944b8a6-ba5a-4b90-8dbf-22d3d2af9145" containerID="a524e67e8a02554cadf91381437125b46854304b504f0b3de51957d49e19fbea" exitCode=0 Nov 21 20:19:33 crc kubenswrapper[4727]: I1121 20:19:33.116935 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jbjsv" event={"ID":"6944b8a6-ba5a-4b90-8dbf-22d3d2af9145","Type":"ContainerDied","Data":"a524e67e8a02554cadf91381437125b46854304b504f0b3de51957d49e19fbea"} Nov 21 20:19:34 crc kubenswrapper[4727]: I1121 20:19:34.126198 4727 generic.go:334] "Generic (PLEG): container finished" podID="6944b8a6-ba5a-4b90-8dbf-22d3d2af9145" containerID="fcbc74e251befd565a40ec3b81411e40affcc00c24183d6a6f2ea7c75eb39f31" exitCode=0 Nov 21 20:19:34 crc kubenswrapper[4727]: I1121 20:19:34.126291 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jbjsv" event={"ID":"6944b8a6-ba5a-4b90-8dbf-22d3d2af9145","Type":"ContainerDied","Data":"fcbc74e251befd565a40ec3b81411e40affcc00c24183d6a6f2ea7c75eb39f31"} Nov 21 20:19:34 crc kubenswrapper[4727]: I1121 20:19:34.302040 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dq4zh"] Nov 21 20:19:34 crc kubenswrapper[4727]: I1121 20:19:34.306761 4727 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dq4zh" Nov 21 20:19:34 crc kubenswrapper[4727]: I1121 20:19:34.308380 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dq4zh"] Nov 21 20:19:34 crc kubenswrapper[4727]: I1121 20:19:34.423875 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnbt9\" (UniqueName: \"kubernetes.io/projected/4c229597-6559-46fb-a8b5-f12a175663c0-kube-api-access-qnbt9\") pod \"redhat-operators-dq4zh\" (UID: \"4c229597-6559-46fb-a8b5-f12a175663c0\") " pod="openshift-marketplace/redhat-operators-dq4zh" Nov 21 20:19:34 crc kubenswrapper[4727]: I1121 20:19:34.423968 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c229597-6559-46fb-a8b5-f12a175663c0-catalog-content\") pod \"redhat-operators-dq4zh\" (UID: \"4c229597-6559-46fb-a8b5-f12a175663c0\") " pod="openshift-marketplace/redhat-operators-dq4zh" Nov 21 20:19:34 crc kubenswrapper[4727]: I1121 20:19:34.424199 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c229597-6559-46fb-a8b5-f12a175663c0-utilities\") pod \"redhat-operators-dq4zh\" (UID: \"4c229597-6559-46fb-a8b5-f12a175663c0\") " pod="openshift-marketplace/redhat-operators-dq4zh" Nov 21 20:19:34 crc kubenswrapper[4727]: I1121 20:19:34.526289 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnbt9\" (UniqueName: \"kubernetes.io/projected/4c229597-6559-46fb-a8b5-f12a175663c0-kube-api-access-qnbt9\") pod \"redhat-operators-dq4zh\" (UID: \"4c229597-6559-46fb-a8b5-f12a175663c0\") " pod="openshift-marketplace/redhat-operators-dq4zh" Nov 21 20:19:34 crc kubenswrapper[4727]: I1121 20:19:34.526350 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c229597-6559-46fb-a8b5-f12a175663c0-catalog-content\") pod \"redhat-operators-dq4zh\" (UID: \"4c229597-6559-46fb-a8b5-f12a175663c0\") " pod="openshift-marketplace/redhat-operators-dq4zh" Nov 21 20:19:34 crc kubenswrapper[4727]: I1121 20:19:34.526424 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c229597-6559-46fb-a8b5-f12a175663c0-utilities\") pod \"redhat-operators-dq4zh\" (UID: \"4c229597-6559-46fb-a8b5-f12a175663c0\") " pod="openshift-marketplace/redhat-operators-dq4zh" Nov 21 20:19:34 crc kubenswrapper[4727]: I1121 20:19:34.526889 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c229597-6559-46fb-a8b5-f12a175663c0-catalog-content\") pod \"redhat-operators-dq4zh\" (UID: \"4c229597-6559-46fb-a8b5-f12a175663c0\") " pod="openshift-marketplace/redhat-operators-dq4zh" Nov 21 20:19:34 crc kubenswrapper[4727]: I1121 20:19:34.526949 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c229597-6559-46fb-a8b5-f12a175663c0-utilities\") pod \"redhat-operators-dq4zh\" (UID: \"4c229597-6559-46fb-a8b5-f12a175663c0\") " pod="openshift-marketplace/redhat-operators-dq4zh" Nov 21 20:19:34 crc kubenswrapper[4727]: I1121 20:19:34.546806 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnbt9\" (UniqueName: \"kubernetes.io/projected/4c229597-6559-46fb-a8b5-f12a175663c0-kube-api-access-qnbt9\") pod \"redhat-operators-dq4zh\" (UID: \"4c229597-6559-46fb-a8b5-f12a175663c0\") " pod="openshift-marketplace/redhat-operators-dq4zh" Nov 21 20:19:34 crc kubenswrapper[4727]: I1121 20:19:34.631487 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dq4zh" Nov 21 20:19:35 crc kubenswrapper[4727]: W1121 20:19:35.060557 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c229597_6559_46fb_a8b5_f12a175663c0.slice/crio-ac6cf80511cb8fedf0fd77e0d4a71afee22fd5bbc581eb90ad42083138853212 WatchSource:0}: Error finding container ac6cf80511cb8fedf0fd77e0d4a71afee22fd5bbc581eb90ad42083138853212: Status 404 returned error can't find the container with id ac6cf80511cb8fedf0fd77e0d4a71afee22fd5bbc581eb90ad42083138853212 Nov 21 20:19:35 crc kubenswrapper[4727]: I1121 20:19:35.060831 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dq4zh"] Nov 21 20:19:35 crc kubenswrapper[4727]: I1121 20:19:35.135563 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dq4zh" event={"ID":"4c229597-6559-46fb-a8b5-f12a175663c0","Type":"ContainerStarted","Data":"ac6cf80511cb8fedf0fd77e0d4a71afee22fd5bbc581eb90ad42083138853212"} Nov 21 20:19:35 crc kubenswrapper[4727]: I1121 20:19:35.139215 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jbjsv" event={"ID":"6944b8a6-ba5a-4b90-8dbf-22d3d2af9145","Type":"ContainerStarted","Data":"9cf94a6f708db9763b45eb8cd015a4a3e1e504aaecffc122de84c9b787c7fe11"} Nov 21 20:19:35 crc kubenswrapper[4727]: I1121 20:19:35.160049 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jbjsv" podStartSLOduration=2.549289422 podStartE2EDuration="4.160031217s" podCreationTimestamp="2025-11-21 20:19:31 +0000 UTC" firstStartedPulling="2025-11-21 20:19:33.119941812 +0000 UTC m=+778.306126856" lastFinishedPulling="2025-11-21 20:19:34.730683607 +0000 UTC m=+779.916868651" observedRunningTime="2025-11-21 20:19:35.156079765 +0000 UTC m=+780.342264819" 
watchObservedRunningTime="2025-11-21 20:19:35.160031217 +0000 UTC m=+780.346216261" Nov 21 20:19:36 crc kubenswrapper[4727]: I1121 20:19:36.146704 4727 generic.go:334] "Generic (PLEG): container finished" podID="4c229597-6559-46fb-a8b5-f12a175663c0" containerID="a4a3c1203c88d2d71722f6158ed420241956ddf466caafabd19ada0bc6f10151" exitCode=0 Nov 21 20:19:36 crc kubenswrapper[4727]: I1121 20:19:36.146808 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dq4zh" event={"ID":"4c229597-6559-46fb-a8b5-f12a175663c0","Type":"ContainerDied","Data":"a4a3c1203c88d2d71722f6158ed420241956ddf466caafabd19ada0bc6f10151"} Nov 21 20:19:37 crc kubenswrapper[4727]: I1121 20:19:37.154683 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dq4zh" event={"ID":"4c229597-6559-46fb-a8b5-f12a175663c0","Type":"ContainerStarted","Data":"a9931111a388280c68634159c57c4a0a46c730585ef02095c037870748799f2d"} Nov 21 20:19:38 crc kubenswrapper[4727]: I1121 20:19:38.163284 4727 generic.go:334] "Generic (PLEG): container finished" podID="4c229597-6559-46fb-a8b5-f12a175663c0" containerID="a9931111a388280c68634159c57c4a0a46c730585ef02095c037870748799f2d" exitCode=0 Nov 21 20:19:38 crc kubenswrapper[4727]: I1121 20:19:38.163485 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dq4zh" event={"ID":"4c229597-6559-46fb-a8b5-f12a175663c0","Type":"ContainerDied","Data":"a9931111a388280c68634159c57c4a0a46c730585ef02095c037870748799f2d"} Nov 21 20:19:39 crc kubenswrapper[4727]: I1121 20:19:39.172422 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dq4zh" event={"ID":"4c229597-6559-46fb-a8b5-f12a175663c0","Type":"ContainerStarted","Data":"8f928ca74ba6813ebad04c315e0fd5fc381e0b2ca117fa5626f7ac510c1d3ecf"} Nov 21 20:19:39 crc kubenswrapper[4727]: I1121 20:19:39.197675 4727 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-marketplace/redhat-operators-dq4zh" podStartSLOduration=2.792593985 podStartE2EDuration="5.197657794s" podCreationTimestamp="2025-11-21 20:19:34 +0000 UTC" firstStartedPulling="2025-11-21 20:19:36.148602286 +0000 UTC m=+781.334787330" lastFinishedPulling="2025-11-21 20:19:38.553666095 +0000 UTC m=+783.739851139" observedRunningTime="2025-11-21 20:19:39.190896696 +0000 UTC m=+784.377081730" watchObservedRunningTime="2025-11-21 20:19:39.197657794 +0000 UTC m=+784.383842838" Nov 21 20:19:41 crc kubenswrapper[4727]: I1121 20:19:41.192483 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-zft5v"] Nov 21 20:19:41 crc kubenswrapper[4727]: I1121 20:19:41.193739 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-zft5v" Nov 21 20:19:41 crc kubenswrapper[4727]: I1121 20:19:41.197068 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Nov 21 20:19:41 crc kubenswrapper[4727]: I1121 20:19:41.198609 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Nov 21 20:19:41 crc kubenswrapper[4727]: I1121 20:19:41.198807 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Nov 21 20:19:41 crc kubenswrapper[4727]: I1121 20:19:41.198927 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-bj8zs" Nov 21 20:19:41 crc kubenswrapper[4727]: I1121 20:19:41.199089 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Nov 21 20:19:41 crc kubenswrapper[4727]: I1121 20:19:41.208663 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Nov 21 20:19:41 crc kubenswrapper[4727]: I1121 20:19:41.212796 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-logging/collector-zft5v"] Nov 21 20:19:41 crc kubenswrapper[4727]: I1121 20:19:41.222110 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bfa898e9-6040-4f41-9405-5e8070ffb9af-tmp\") pod \"collector-zft5v\" (UID: \"bfa898e9-6040-4f41-9405-5e8070ffb9af\") " pod="openshift-logging/collector-zft5v" Nov 21 20:19:41 crc kubenswrapper[4727]: I1121 20:19:41.222169 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/bfa898e9-6040-4f41-9405-5e8070ffb9af-datadir\") pod \"collector-zft5v\" (UID: \"bfa898e9-6040-4f41-9405-5e8070ffb9af\") " pod="openshift-logging/collector-zft5v" Nov 21 20:19:41 crc kubenswrapper[4727]: I1121 20:19:41.222195 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/bfa898e9-6040-4f41-9405-5e8070ffb9af-sa-token\") pod \"collector-zft5v\" (UID: \"bfa898e9-6040-4f41-9405-5e8070ffb9af\") " pod="openshift-logging/collector-zft5v" Nov 21 20:19:41 crc kubenswrapper[4727]: I1121 20:19:41.222262 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9f2k\" (UniqueName: \"kubernetes.io/projected/bfa898e9-6040-4f41-9405-5e8070ffb9af-kube-api-access-h9f2k\") pod \"collector-zft5v\" (UID: \"bfa898e9-6040-4f41-9405-5e8070ffb9af\") " pod="openshift-logging/collector-zft5v" Nov 21 20:19:41 crc kubenswrapper[4727]: I1121 20:19:41.222280 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/bfa898e9-6040-4f41-9405-5e8070ffb9af-collector-token\") pod \"collector-zft5v\" (UID: \"bfa898e9-6040-4f41-9405-5e8070ffb9af\") " pod="openshift-logging/collector-zft5v" Nov 21 20:19:41 crc kubenswrapper[4727]: 
I1121 20:19:41.222331 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/bfa898e9-6040-4f41-9405-5e8070ffb9af-collector-syslog-receiver\") pod \"collector-zft5v\" (UID: \"bfa898e9-6040-4f41-9405-5e8070ffb9af\") " pod="openshift-logging/collector-zft5v" Nov 21 20:19:41 crc kubenswrapper[4727]: I1121 20:19:41.222376 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bfa898e9-6040-4f41-9405-5e8070ffb9af-trusted-ca\") pod \"collector-zft5v\" (UID: \"bfa898e9-6040-4f41-9405-5e8070ffb9af\") " pod="openshift-logging/collector-zft5v" Nov 21 20:19:41 crc kubenswrapper[4727]: I1121 20:19:41.222404 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfa898e9-6040-4f41-9405-5e8070ffb9af-config\") pod \"collector-zft5v\" (UID: \"bfa898e9-6040-4f41-9405-5e8070ffb9af\") " pod="openshift-logging/collector-zft5v" Nov 21 20:19:41 crc kubenswrapper[4727]: I1121 20:19:41.222433 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/bfa898e9-6040-4f41-9405-5e8070ffb9af-entrypoint\") pod \"collector-zft5v\" (UID: \"bfa898e9-6040-4f41-9405-5e8070ffb9af\") " pod="openshift-logging/collector-zft5v" Nov 21 20:19:41 crc kubenswrapper[4727]: I1121 20:19:41.222514 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/bfa898e9-6040-4f41-9405-5e8070ffb9af-config-openshift-service-cacrt\") pod \"collector-zft5v\" (UID: \"bfa898e9-6040-4f41-9405-5e8070ffb9af\") " pod="openshift-logging/collector-zft5v" Nov 21 20:19:41 crc kubenswrapper[4727]: I1121 20:19:41.222539 4727 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/bfa898e9-6040-4f41-9405-5e8070ffb9af-metrics\") pod \"collector-zft5v\" (UID: \"bfa898e9-6040-4f41-9405-5e8070ffb9af\") " pod="openshift-logging/collector-zft5v" Nov 21 20:19:41 crc kubenswrapper[4727]: I1121 20:19:41.271712 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-zft5v"] Nov 21 20:19:41 crc kubenswrapper[4727]: E1121 20:19:41.272378 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[collector-syslog-receiver collector-token config config-openshift-service-cacrt datadir entrypoint kube-api-access-h9f2k metrics sa-token tmp trusted-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-logging/collector-zft5v" podUID="bfa898e9-6040-4f41-9405-5e8070ffb9af" Nov 21 20:19:41 crc kubenswrapper[4727]: I1121 20:19:41.324074 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9f2k\" (UniqueName: \"kubernetes.io/projected/bfa898e9-6040-4f41-9405-5e8070ffb9af-kube-api-access-h9f2k\") pod \"collector-zft5v\" (UID: \"bfa898e9-6040-4f41-9405-5e8070ffb9af\") " pod="openshift-logging/collector-zft5v" Nov 21 20:19:41 crc kubenswrapper[4727]: I1121 20:19:41.324122 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/bfa898e9-6040-4f41-9405-5e8070ffb9af-collector-token\") pod \"collector-zft5v\" (UID: \"bfa898e9-6040-4f41-9405-5e8070ffb9af\") " pod="openshift-logging/collector-zft5v" Nov 21 20:19:41 crc kubenswrapper[4727]: I1121 20:19:41.324148 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/bfa898e9-6040-4f41-9405-5e8070ffb9af-collector-syslog-receiver\") pod \"collector-zft5v\" (UID: 
\"bfa898e9-6040-4f41-9405-5e8070ffb9af\") " pod="openshift-logging/collector-zft5v" Nov 21 20:19:41 crc kubenswrapper[4727]: I1121 20:19:41.324177 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bfa898e9-6040-4f41-9405-5e8070ffb9af-trusted-ca\") pod \"collector-zft5v\" (UID: \"bfa898e9-6040-4f41-9405-5e8070ffb9af\") " pod="openshift-logging/collector-zft5v" Nov 21 20:19:41 crc kubenswrapper[4727]: I1121 20:19:41.324198 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfa898e9-6040-4f41-9405-5e8070ffb9af-config\") pod \"collector-zft5v\" (UID: \"bfa898e9-6040-4f41-9405-5e8070ffb9af\") " pod="openshift-logging/collector-zft5v" Nov 21 20:19:41 crc kubenswrapper[4727]: I1121 20:19:41.324216 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/bfa898e9-6040-4f41-9405-5e8070ffb9af-entrypoint\") pod \"collector-zft5v\" (UID: \"bfa898e9-6040-4f41-9405-5e8070ffb9af\") " pod="openshift-logging/collector-zft5v" Nov 21 20:19:41 crc kubenswrapper[4727]: I1121 20:19:41.324265 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/bfa898e9-6040-4f41-9405-5e8070ffb9af-config-openshift-service-cacrt\") pod \"collector-zft5v\" (UID: \"bfa898e9-6040-4f41-9405-5e8070ffb9af\") " pod="openshift-logging/collector-zft5v" Nov 21 20:19:41 crc kubenswrapper[4727]: I1121 20:19:41.324286 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/bfa898e9-6040-4f41-9405-5e8070ffb9af-metrics\") pod \"collector-zft5v\" (UID: \"bfa898e9-6040-4f41-9405-5e8070ffb9af\") " pod="openshift-logging/collector-zft5v" Nov 21 20:19:41 crc kubenswrapper[4727]: I1121 20:19:41.324305 4727 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bfa898e9-6040-4f41-9405-5e8070ffb9af-tmp\") pod \"collector-zft5v\" (UID: \"bfa898e9-6040-4f41-9405-5e8070ffb9af\") " pod="openshift-logging/collector-zft5v" Nov 21 20:19:41 crc kubenswrapper[4727]: I1121 20:19:41.324324 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/bfa898e9-6040-4f41-9405-5e8070ffb9af-datadir\") pod \"collector-zft5v\" (UID: \"bfa898e9-6040-4f41-9405-5e8070ffb9af\") " pod="openshift-logging/collector-zft5v" Nov 21 20:19:41 crc kubenswrapper[4727]: I1121 20:19:41.324345 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/bfa898e9-6040-4f41-9405-5e8070ffb9af-sa-token\") pod \"collector-zft5v\" (UID: \"bfa898e9-6040-4f41-9405-5e8070ffb9af\") " pod="openshift-logging/collector-zft5v" Nov 21 20:19:41 crc kubenswrapper[4727]: I1121 20:19:41.325351 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/bfa898e9-6040-4f41-9405-5e8070ffb9af-config-openshift-service-cacrt\") pod \"collector-zft5v\" (UID: \"bfa898e9-6040-4f41-9405-5e8070ffb9af\") " pod="openshift-logging/collector-zft5v" Nov 21 20:19:41 crc kubenswrapper[4727]: I1121 20:19:41.325403 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/bfa898e9-6040-4f41-9405-5e8070ffb9af-datadir\") pod \"collector-zft5v\" (UID: \"bfa898e9-6040-4f41-9405-5e8070ffb9af\") " pod="openshift-logging/collector-zft5v" Nov 21 20:19:41 crc kubenswrapper[4727]: I1121 20:19:41.325411 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/bfa898e9-6040-4f41-9405-5e8070ffb9af-entrypoint\") pod 
\"collector-zft5v\" (UID: \"bfa898e9-6040-4f41-9405-5e8070ffb9af\") " pod="openshift-logging/collector-zft5v" Nov 21 20:19:41 crc kubenswrapper[4727]: I1121 20:19:41.326056 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bfa898e9-6040-4f41-9405-5e8070ffb9af-trusted-ca\") pod \"collector-zft5v\" (UID: \"bfa898e9-6040-4f41-9405-5e8070ffb9af\") " pod="openshift-logging/collector-zft5v" Nov 21 20:19:41 crc kubenswrapper[4727]: I1121 20:19:41.326086 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfa898e9-6040-4f41-9405-5e8070ffb9af-config\") pod \"collector-zft5v\" (UID: \"bfa898e9-6040-4f41-9405-5e8070ffb9af\") " pod="openshift-logging/collector-zft5v" Nov 21 20:19:41 crc kubenswrapper[4727]: E1121 20:19:41.326208 4727 secret.go:188] Couldn't get secret openshift-logging/collector-metrics: secret "collector-metrics" not found Nov 21 20:19:41 crc kubenswrapper[4727]: E1121 20:19:41.326255 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bfa898e9-6040-4f41-9405-5e8070ffb9af-metrics podName:bfa898e9-6040-4f41-9405-5e8070ffb9af nodeName:}" failed. No retries permitted until 2025-11-21 20:19:41.826240426 +0000 UTC m=+787.012425470 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics" (UniqueName: "kubernetes.io/secret/bfa898e9-6040-4f41-9405-5e8070ffb9af-metrics") pod "collector-zft5v" (UID: "bfa898e9-6040-4f41-9405-5e8070ffb9af") : secret "collector-metrics" not found Nov 21 20:19:41 crc kubenswrapper[4727]: I1121 20:19:41.330431 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bfa898e9-6040-4f41-9405-5e8070ffb9af-tmp\") pod \"collector-zft5v\" (UID: \"bfa898e9-6040-4f41-9405-5e8070ffb9af\") " pod="openshift-logging/collector-zft5v" Nov 21 20:19:41 crc kubenswrapper[4727]: I1121 20:19:41.333764 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/bfa898e9-6040-4f41-9405-5e8070ffb9af-collector-syslog-receiver\") pod \"collector-zft5v\" (UID: \"bfa898e9-6040-4f41-9405-5e8070ffb9af\") " pod="openshift-logging/collector-zft5v" Nov 21 20:19:41 crc kubenswrapper[4727]: I1121 20:19:41.335591 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/bfa898e9-6040-4f41-9405-5e8070ffb9af-collector-token\") pod \"collector-zft5v\" (UID: \"bfa898e9-6040-4f41-9405-5e8070ffb9af\") " pod="openshift-logging/collector-zft5v" Nov 21 20:19:41 crc kubenswrapper[4727]: I1121 20:19:41.347420 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/bfa898e9-6040-4f41-9405-5e8070ffb9af-sa-token\") pod \"collector-zft5v\" (UID: \"bfa898e9-6040-4f41-9405-5e8070ffb9af\") " pod="openshift-logging/collector-zft5v" Nov 21 20:19:41 crc kubenswrapper[4727]: I1121 20:19:41.349675 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9f2k\" (UniqueName: \"kubernetes.io/projected/bfa898e9-6040-4f41-9405-5e8070ffb9af-kube-api-access-h9f2k\") pod \"collector-zft5v\" (UID: 
\"bfa898e9-6040-4f41-9405-5e8070ffb9af\") " pod="openshift-logging/collector-zft5v" Nov 21 20:19:41 crc kubenswrapper[4727]: I1121 20:19:41.655704 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jbjsv" Nov 21 20:19:41 crc kubenswrapper[4727]: I1121 20:19:41.655762 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jbjsv" Nov 21 20:19:41 crc kubenswrapper[4727]: I1121 20:19:41.696205 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jbjsv" Nov 21 20:19:41 crc kubenswrapper[4727]: I1121 20:19:41.837247 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/bfa898e9-6040-4f41-9405-5e8070ffb9af-metrics\") pod \"collector-zft5v\" (UID: \"bfa898e9-6040-4f41-9405-5e8070ffb9af\") " pod="openshift-logging/collector-zft5v" Nov 21 20:19:41 crc kubenswrapper[4727]: I1121 20:19:41.840642 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/bfa898e9-6040-4f41-9405-5e8070ffb9af-metrics\") pod \"collector-zft5v\" (UID: \"bfa898e9-6040-4f41-9405-5e8070ffb9af\") " pod="openshift-logging/collector-zft5v" Nov 21 20:19:42 crc kubenswrapper[4727]: I1121 20:19:42.191816 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-zft5v" Nov 21 20:19:42 crc kubenswrapper[4727]: I1121 20:19:42.204298 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-zft5v" Nov 21 20:19:42 crc kubenswrapper[4727]: I1121 20:19:42.232352 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jbjsv" Nov 21 20:19:42 crc kubenswrapper[4727]: I1121 20:19:42.347291 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/bfa898e9-6040-4f41-9405-5e8070ffb9af-sa-token\") pod \"bfa898e9-6040-4f41-9405-5e8070ffb9af\" (UID: \"bfa898e9-6040-4f41-9405-5e8070ffb9af\") " Nov 21 20:19:42 crc kubenswrapper[4727]: I1121 20:19:42.347370 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/bfa898e9-6040-4f41-9405-5e8070ffb9af-metrics\") pod \"bfa898e9-6040-4f41-9405-5e8070ffb9af\" (UID: \"bfa898e9-6040-4f41-9405-5e8070ffb9af\") " Nov 21 20:19:42 crc kubenswrapper[4727]: I1121 20:19:42.347403 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfa898e9-6040-4f41-9405-5e8070ffb9af-config\") pod \"bfa898e9-6040-4f41-9405-5e8070ffb9af\" (UID: \"bfa898e9-6040-4f41-9405-5e8070ffb9af\") " Nov 21 20:19:42 crc kubenswrapper[4727]: I1121 20:19:42.347442 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bfa898e9-6040-4f41-9405-5e8070ffb9af-trusted-ca\") pod \"bfa898e9-6040-4f41-9405-5e8070ffb9af\" (UID: \"bfa898e9-6040-4f41-9405-5e8070ffb9af\") " Nov 21 20:19:42 crc kubenswrapper[4727]: I1121 20:19:42.347528 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/bfa898e9-6040-4f41-9405-5e8070ffb9af-config-openshift-service-cacrt\") pod \"bfa898e9-6040-4f41-9405-5e8070ffb9af\" (UID: 
\"bfa898e9-6040-4f41-9405-5e8070ffb9af\") " Nov 21 20:19:42 crc kubenswrapper[4727]: I1121 20:19:42.347613 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/bfa898e9-6040-4f41-9405-5e8070ffb9af-collector-syslog-receiver\") pod \"bfa898e9-6040-4f41-9405-5e8070ffb9af\" (UID: \"bfa898e9-6040-4f41-9405-5e8070ffb9af\") " Nov 21 20:19:42 crc kubenswrapper[4727]: I1121 20:19:42.347644 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/bfa898e9-6040-4f41-9405-5e8070ffb9af-collector-token\") pod \"bfa898e9-6040-4f41-9405-5e8070ffb9af\" (UID: \"bfa898e9-6040-4f41-9405-5e8070ffb9af\") " Nov 21 20:19:42 crc kubenswrapper[4727]: I1121 20:19:42.347698 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9f2k\" (UniqueName: \"kubernetes.io/projected/bfa898e9-6040-4f41-9405-5e8070ffb9af-kube-api-access-h9f2k\") pod \"bfa898e9-6040-4f41-9405-5e8070ffb9af\" (UID: \"bfa898e9-6040-4f41-9405-5e8070ffb9af\") " Nov 21 20:19:42 crc kubenswrapper[4727]: I1121 20:19:42.347730 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/bfa898e9-6040-4f41-9405-5e8070ffb9af-entrypoint\") pod \"bfa898e9-6040-4f41-9405-5e8070ffb9af\" (UID: \"bfa898e9-6040-4f41-9405-5e8070ffb9af\") " Nov 21 20:19:42 crc kubenswrapper[4727]: I1121 20:19:42.347755 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/bfa898e9-6040-4f41-9405-5e8070ffb9af-datadir\") pod \"bfa898e9-6040-4f41-9405-5e8070ffb9af\" (UID: \"bfa898e9-6040-4f41-9405-5e8070ffb9af\") " Nov 21 20:19:42 crc kubenswrapper[4727]: I1121 20:19:42.347813 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/bfa898e9-6040-4f41-9405-5e8070ffb9af-tmp\") pod \"bfa898e9-6040-4f41-9405-5e8070ffb9af\" (UID: \"bfa898e9-6040-4f41-9405-5e8070ffb9af\") " Nov 21 20:19:42 crc kubenswrapper[4727]: I1121 20:19:42.348055 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfa898e9-6040-4f41-9405-5e8070ffb9af-config-openshift-service-cacrt" (OuterVolumeSpecName: "config-openshift-service-cacrt") pod "bfa898e9-6040-4f41-9405-5e8070ffb9af" (UID: "bfa898e9-6040-4f41-9405-5e8070ffb9af"). InnerVolumeSpecName "config-openshift-service-cacrt". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:19:42 crc kubenswrapper[4727]: I1121 20:19:42.348054 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfa898e9-6040-4f41-9405-5e8070ffb9af-datadir" (OuterVolumeSpecName: "datadir") pod "bfa898e9-6040-4f41-9405-5e8070ffb9af" (UID: "bfa898e9-6040-4f41-9405-5e8070ffb9af"). InnerVolumeSpecName "datadir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 20:19:42 crc kubenswrapper[4727]: I1121 20:19:42.348430 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfa898e9-6040-4f41-9405-5e8070ffb9af-entrypoint" (OuterVolumeSpecName: "entrypoint") pod "bfa898e9-6040-4f41-9405-5e8070ffb9af" (UID: "bfa898e9-6040-4f41-9405-5e8070ffb9af"). InnerVolumeSpecName "entrypoint". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:19:42 crc kubenswrapper[4727]: I1121 20:19:42.348502 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfa898e9-6040-4f41-9405-5e8070ffb9af-config" (OuterVolumeSpecName: "config") pod "bfa898e9-6040-4f41-9405-5e8070ffb9af" (UID: "bfa898e9-6040-4f41-9405-5e8070ffb9af"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:19:42 crc kubenswrapper[4727]: I1121 20:19:42.348594 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfa898e9-6040-4f41-9405-5e8070ffb9af-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bfa898e9-6040-4f41-9405-5e8070ffb9af" (UID: "bfa898e9-6040-4f41-9405-5e8070ffb9af"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:19:42 crc kubenswrapper[4727]: I1121 20:19:42.348673 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfa898e9-6040-4f41-9405-5e8070ffb9af-config\") on node \"crc\" DevicePath \"\"" Nov 21 20:19:42 crc kubenswrapper[4727]: I1121 20:19:42.348701 4727 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bfa898e9-6040-4f41-9405-5e8070ffb9af-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 21 20:19:42 crc kubenswrapper[4727]: I1121 20:19:42.348715 4727 reconciler_common.go:293] "Volume detached for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/bfa898e9-6040-4f41-9405-5e8070ffb9af-config-openshift-service-cacrt\") on node \"crc\" DevicePath \"\"" Nov 21 20:19:42 crc kubenswrapper[4727]: I1121 20:19:42.348726 4727 reconciler_common.go:293] "Volume detached for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/bfa898e9-6040-4f41-9405-5e8070ffb9af-entrypoint\") on node \"crc\" DevicePath \"\"" Nov 21 20:19:42 crc kubenswrapper[4727]: I1121 20:19:42.348735 4727 reconciler_common.go:293] "Volume detached for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/bfa898e9-6040-4f41-9405-5e8070ffb9af-datadir\") on node \"crc\" DevicePath \"\"" Nov 21 20:19:42 crc kubenswrapper[4727]: I1121 20:19:42.350771 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/bfa898e9-6040-4f41-9405-5e8070ffb9af-collector-syslog-receiver" (OuterVolumeSpecName: "collector-syslog-receiver") pod "bfa898e9-6040-4f41-9405-5e8070ffb9af" (UID: "bfa898e9-6040-4f41-9405-5e8070ffb9af"). InnerVolumeSpecName "collector-syslog-receiver". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:19:42 crc kubenswrapper[4727]: I1121 20:19:42.350793 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfa898e9-6040-4f41-9405-5e8070ffb9af-sa-token" (OuterVolumeSpecName: "sa-token") pod "bfa898e9-6040-4f41-9405-5e8070ffb9af" (UID: "bfa898e9-6040-4f41-9405-5e8070ffb9af"). InnerVolumeSpecName "sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:19:42 crc kubenswrapper[4727]: I1121 20:19:42.351134 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfa898e9-6040-4f41-9405-5e8070ffb9af-collector-token" (OuterVolumeSpecName: "collector-token") pod "bfa898e9-6040-4f41-9405-5e8070ffb9af" (UID: "bfa898e9-6040-4f41-9405-5e8070ffb9af"). InnerVolumeSpecName "collector-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:19:42 crc kubenswrapper[4727]: I1121 20:19:42.351636 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfa898e9-6040-4f41-9405-5e8070ffb9af-metrics" (OuterVolumeSpecName: "metrics") pod "bfa898e9-6040-4f41-9405-5e8070ffb9af" (UID: "bfa898e9-6040-4f41-9405-5e8070ffb9af"). InnerVolumeSpecName "metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:19:42 crc kubenswrapper[4727]: I1121 20:19:42.351656 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfa898e9-6040-4f41-9405-5e8070ffb9af-tmp" (OuterVolumeSpecName: "tmp") pod "bfa898e9-6040-4f41-9405-5e8070ffb9af" (UID: "bfa898e9-6040-4f41-9405-5e8070ffb9af"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:19:42 crc kubenswrapper[4727]: I1121 20:19:42.351881 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfa898e9-6040-4f41-9405-5e8070ffb9af-kube-api-access-h9f2k" (OuterVolumeSpecName: "kube-api-access-h9f2k") pod "bfa898e9-6040-4f41-9405-5e8070ffb9af" (UID: "bfa898e9-6040-4f41-9405-5e8070ffb9af"). InnerVolumeSpecName "kube-api-access-h9f2k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:19:42 crc kubenswrapper[4727]: I1121 20:19:42.449820 4727 reconciler_common.go:293] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bfa898e9-6040-4f41-9405-5e8070ffb9af-tmp\") on node \"crc\" DevicePath \"\"" Nov 21 20:19:42 crc kubenswrapper[4727]: I1121 20:19:42.449859 4727 reconciler_common.go:293] "Volume detached for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/bfa898e9-6040-4f41-9405-5e8070ffb9af-sa-token\") on node \"crc\" DevicePath \"\"" Nov 21 20:19:42 crc kubenswrapper[4727]: I1121 20:19:42.449870 4727 reconciler_common.go:293] "Volume detached for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/bfa898e9-6040-4f41-9405-5e8070ffb9af-metrics\") on node \"crc\" DevicePath \"\"" Nov 21 20:19:42 crc kubenswrapper[4727]: I1121 20:19:42.449880 4727 reconciler_common.go:293] "Volume detached for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/bfa898e9-6040-4f41-9405-5e8070ffb9af-collector-syslog-receiver\") on node \"crc\" DevicePath \"\"" Nov 21 20:19:42 crc kubenswrapper[4727]: I1121 20:19:42.449890 4727 reconciler_common.go:293] "Volume detached for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/bfa898e9-6040-4f41-9405-5e8070ffb9af-collector-token\") on node \"crc\" DevicePath \"\"" Nov 21 20:19:42 crc kubenswrapper[4727]: I1121 20:19:42.449899 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9f2k\" (UniqueName: 
\"kubernetes.io/projected/bfa898e9-6040-4f41-9405-5e8070ffb9af-kube-api-access-h9f2k\") on node \"crc\" DevicePath \"\"" Nov 21 20:19:42 crc kubenswrapper[4727]: I1121 20:19:42.895424 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jbjsv"] Nov 21 20:19:43 crc kubenswrapper[4727]: I1121 20:19:43.198182 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-zft5v" Nov 21 20:19:43 crc kubenswrapper[4727]: I1121 20:19:43.251806 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-zft5v"] Nov 21 20:19:43 crc kubenswrapper[4727]: I1121 20:19:43.251877 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-logging/collector-zft5v"] Nov 21 20:19:43 crc kubenswrapper[4727]: I1121 20:19:43.284545 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-f4pkc"] Nov 21 20:19:43 crc kubenswrapper[4727]: I1121 20:19:43.285840 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-f4pkc" Nov 21 20:19:43 crc kubenswrapper[4727]: I1121 20:19:43.287590 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Nov 21 20:19:43 crc kubenswrapper[4727]: I1121 20:19:43.288079 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Nov 21 20:19:43 crc kubenswrapper[4727]: I1121 20:19:43.289888 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Nov 21 20:19:43 crc kubenswrapper[4727]: I1121 20:19:43.290251 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Nov 21 20:19:43 crc kubenswrapper[4727]: I1121 20:19:43.292692 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-bj8zs" Nov 21 20:19:43 crc kubenswrapper[4727]: I1121 20:19:43.295628 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-f4pkc"] Nov 21 20:19:43 crc kubenswrapper[4727]: I1121 20:19:43.304557 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Nov 21 20:19:43 crc kubenswrapper[4727]: I1121 20:19:43.335212 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 20:19:43 crc kubenswrapper[4727]: I1121 20:19:43.335276 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Nov 21 20:19:43 crc kubenswrapper[4727]: I1121 20:19:43.335319 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" Nov 21 20:19:43 crc kubenswrapper[4727]: I1121 20:19:43.335933 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8107c47f566f1faaec586576a9a03e2bdc957a4e69dfa96e7c5b9dd43a6f4ab5"} pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 20:19:43 crc kubenswrapper[4727]: I1121 20:19:43.336012 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" containerID="cri-o://8107c47f566f1faaec586576a9a03e2bdc957a4e69dfa96e7c5b9dd43a6f4ab5" gracePeriod=600 Nov 21 20:19:43 crc kubenswrapper[4727]: I1121 20:19:43.377727 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/99320c86-cce9-4dee-988e-adea1021bdbf-collector-syslog-receiver\") pod \"collector-f4pkc\" (UID: \"99320c86-cce9-4dee-988e-adea1021bdbf\") " pod="openshift-logging/collector-f4pkc" Nov 21 20:19:43 crc kubenswrapper[4727]: I1121 20:19:43.378054 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99320c86-cce9-4dee-988e-adea1021bdbf-config\") pod \"collector-f4pkc\" (UID: \"99320c86-cce9-4dee-988e-adea1021bdbf\") " pod="openshift-logging/collector-f4pkc" Nov 21 20:19:43 crc kubenswrapper[4727]: I1121 20:19:43.378084 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"collector-token\" (UniqueName: \"kubernetes.io/secret/99320c86-cce9-4dee-988e-adea1021bdbf-collector-token\") pod \"collector-f4pkc\" (UID: \"99320c86-cce9-4dee-988e-adea1021bdbf\") " pod="openshift-logging/collector-f4pkc" Nov 21 20:19:43 crc kubenswrapper[4727]: I1121 20:19:43.378140 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/99320c86-cce9-4dee-988e-adea1021bdbf-config-openshift-service-cacrt\") pod \"collector-f4pkc\" (UID: \"99320c86-cce9-4dee-988e-adea1021bdbf\") " pod="openshift-logging/collector-f4pkc" Nov 21 20:19:43 crc kubenswrapper[4727]: I1121 20:19:43.378159 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/99320c86-cce9-4dee-988e-adea1021bdbf-sa-token\") pod \"collector-f4pkc\" (UID: \"99320c86-cce9-4dee-988e-adea1021bdbf\") " pod="openshift-logging/collector-f4pkc" Nov 21 20:19:43 crc kubenswrapper[4727]: I1121 20:19:43.378205 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qgj2\" (UniqueName: \"kubernetes.io/projected/99320c86-cce9-4dee-988e-adea1021bdbf-kube-api-access-9qgj2\") pod \"collector-f4pkc\" (UID: \"99320c86-cce9-4dee-988e-adea1021bdbf\") " pod="openshift-logging/collector-f4pkc" Nov 21 20:19:43 crc kubenswrapper[4727]: I1121 20:19:43.378239 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/99320c86-cce9-4dee-988e-adea1021bdbf-entrypoint\") pod \"collector-f4pkc\" (UID: \"99320c86-cce9-4dee-988e-adea1021bdbf\") " pod="openshift-logging/collector-f4pkc" Nov 21 20:19:43 crc kubenswrapper[4727]: I1121 20:19:43.378287 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/99320c86-cce9-4dee-988e-adea1021bdbf-trusted-ca\") pod \"collector-f4pkc\" (UID: \"99320c86-cce9-4dee-988e-adea1021bdbf\") " pod="openshift-logging/collector-f4pkc" Nov 21 20:19:43 crc kubenswrapper[4727]: I1121 20:19:43.378381 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/99320c86-cce9-4dee-988e-adea1021bdbf-metrics\") pod \"collector-f4pkc\" (UID: \"99320c86-cce9-4dee-988e-adea1021bdbf\") " pod="openshift-logging/collector-f4pkc" Nov 21 20:19:43 crc kubenswrapper[4727]: I1121 20:19:43.378407 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/99320c86-cce9-4dee-988e-adea1021bdbf-datadir\") pod \"collector-f4pkc\" (UID: \"99320c86-cce9-4dee-988e-adea1021bdbf\") " pod="openshift-logging/collector-f4pkc" Nov 21 20:19:43 crc kubenswrapper[4727]: I1121 20:19:43.378444 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/99320c86-cce9-4dee-988e-adea1021bdbf-tmp\") pod \"collector-f4pkc\" (UID: \"99320c86-cce9-4dee-988e-adea1021bdbf\") " pod="openshift-logging/collector-f4pkc" Nov 21 20:19:43 crc kubenswrapper[4727]: I1121 20:19:43.479211 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99320c86-cce9-4dee-988e-adea1021bdbf-config\") pod \"collector-f4pkc\" (UID: \"99320c86-cce9-4dee-988e-adea1021bdbf\") " pod="openshift-logging/collector-f4pkc" Nov 21 20:19:43 crc kubenswrapper[4727]: I1121 20:19:43.479266 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/99320c86-cce9-4dee-988e-adea1021bdbf-collector-token\") pod \"collector-f4pkc\" (UID: 
\"99320c86-cce9-4dee-988e-adea1021bdbf\") " pod="openshift-logging/collector-f4pkc" Nov 21 20:19:43 crc kubenswrapper[4727]: I1121 20:19:43.479287 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/99320c86-cce9-4dee-988e-adea1021bdbf-config-openshift-service-cacrt\") pod \"collector-f4pkc\" (UID: \"99320c86-cce9-4dee-988e-adea1021bdbf\") " pod="openshift-logging/collector-f4pkc" Nov 21 20:19:43 crc kubenswrapper[4727]: I1121 20:19:43.479305 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/99320c86-cce9-4dee-988e-adea1021bdbf-sa-token\") pod \"collector-f4pkc\" (UID: \"99320c86-cce9-4dee-988e-adea1021bdbf\") " pod="openshift-logging/collector-f4pkc" Nov 21 20:19:43 crc kubenswrapper[4727]: I1121 20:19:43.479325 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qgj2\" (UniqueName: \"kubernetes.io/projected/99320c86-cce9-4dee-988e-adea1021bdbf-kube-api-access-9qgj2\") pod \"collector-f4pkc\" (UID: \"99320c86-cce9-4dee-988e-adea1021bdbf\") " pod="openshift-logging/collector-f4pkc" Nov 21 20:19:43 crc kubenswrapper[4727]: I1121 20:19:43.479355 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/99320c86-cce9-4dee-988e-adea1021bdbf-entrypoint\") pod \"collector-f4pkc\" (UID: \"99320c86-cce9-4dee-988e-adea1021bdbf\") " pod="openshift-logging/collector-f4pkc" Nov 21 20:19:43 crc kubenswrapper[4727]: I1121 20:19:43.479378 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/99320c86-cce9-4dee-988e-adea1021bdbf-trusted-ca\") pod \"collector-f4pkc\" (UID: \"99320c86-cce9-4dee-988e-adea1021bdbf\") " pod="openshift-logging/collector-f4pkc" Nov 21 20:19:43 crc kubenswrapper[4727]: 
I1121 20:19:43.479409 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/99320c86-cce9-4dee-988e-adea1021bdbf-metrics\") pod \"collector-f4pkc\" (UID: \"99320c86-cce9-4dee-988e-adea1021bdbf\") " pod="openshift-logging/collector-f4pkc" Nov 21 20:19:43 crc kubenswrapper[4727]: I1121 20:19:43.479425 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/99320c86-cce9-4dee-988e-adea1021bdbf-datadir\") pod \"collector-f4pkc\" (UID: \"99320c86-cce9-4dee-988e-adea1021bdbf\") " pod="openshift-logging/collector-f4pkc" Nov 21 20:19:43 crc kubenswrapper[4727]: I1121 20:19:43.479461 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/99320c86-cce9-4dee-988e-adea1021bdbf-tmp\") pod \"collector-f4pkc\" (UID: \"99320c86-cce9-4dee-988e-adea1021bdbf\") " pod="openshift-logging/collector-f4pkc" Nov 21 20:19:43 crc kubenswrapper[4727]: I1121 20:19:43.479504 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/99320c86-cce9-4dee-988e-adea1021bdbf-collector-syslog-receiver\") pod \"collector-f4pkc\" (UID: \"99320c86-cce9-4dee-988e-adea1021bdbf\") " pod="openshift-logging/collector-f4pkc" Nov 21 20:19:43 crc kubenswrapper[4727]: I1121 20:19:43.479602 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/99320c86-cce9-4dee-988e-adea1021bdbf-datadir\") pod \"collector-f4pkc\" (UID: \"99320c86-cce9-4dee-988e-adea1021bdbf\") " pod="openshift-logging/collector-f4pkc" Nov 21 20:19:43 crc kubenswrapper[4727]: I1121 20:19:43.480135 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: 
\"kubernetes.io/configmap/99320c86-cce9-4dee-988e-adea1021bdbf-config-openshift-service-cacrt\") pod \"collector-f4pkc\" (UID: \"99320c86-cce9-4dee-988e-adea1021bdbf\") " pod="openshift-logging/collector-f4pkc" Nov 21 20:19:43 crc kubenswrapper[4727]: I1121 20:19:43.480286 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/99320c86-cce9-4dee-988e-adea1021bdbf-trusted-ca\") pod \"collector-f4pkc\" (UID: \"99320c86-cce9-4dee-988e-adea1021bdbf\") " pod="openshift-logging/collector-f4pkc" Nov 21 20:19:43 crc kubenswrapper[4727]: I1121 20:19:43.480859 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/99320c86-cce9-4dee-988e-adea1021bdbf-entrypoint\") pod \"collector-f4pkc\" (UID: \"99320c86-cce9-4dee-988e-adea1021bdbf\") " pod="openshift-logging/collector-f4pkc" Nov 21 20:19:43 crc kubenswrapper[4727]: I1121 20:19:43.481156 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99320c86-cce9-4dee-988e-adea1021bdbf-config\") pod \"collector-f4pkc\" (UID: \"99320c86-cce9-4dee-988e-adea1021bdbf\") " pod="openshift-logging/collector-f4pkc" Nov 21 20:19:43 crc kubenswrapper[4727]: I1121 20:19:43.484204 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/99320c86-cce9-4dee-988e-adea1021bdbf-collector-token\") pod \"collector-f4pkc\" (UID: \"99320c86-cce9-4dee-988e-adea1021bdbf\") " pod="openshift-logging/collector-f4pkc" Nov 21 20:19:43 crc kubenswrapper[4727]: I1121 20:19:43.484622 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/99320c86-cce9-4dee-988e-adea1021bdbf-collector-syslog-receiver\") pod \"collector-f4pkc\" (UID: \"99320c86-cce9-4dee-988e-adea1021bdbf\") " 
pod="openshift-logging/collector-f4pkc" Nov 21 20:19:43 crc kubenswrapper[4727]: I1121 20:19:43.486074 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/99320c86-cce9-4dee-988e-adea1021bdbf-metrics\") pod \"collector-f4pkc\" (UID: \"99320c86-cce9-4dee-988e-adea1021bdbf\") " pod="openshift-logging/collector-f4pkc" Nov 21 20:19:43 crc kubenswrapper[4727]: I1121 20:19:43.490278 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/99320c86-cce9-4dee-988e-adea1021bdbf-tmp\") pod \"collector-f4pkc\" (UID: \"99320c86-cce9-4dee-988e-adea1021bdbf\") " pod="openshift-logging/collector-f4pkc" Nov 21 20:19:43 crc kubenswrapper[4727]: I1121 20:19:43.497392 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qgj2\" (UniqueName: \"kubernetes.io/projected/99320c86-cce9-4dee-988e-adea1021bdbf-kube-api-access-9qgj2\") pod \"collector-f4pkc\" (UID: \"99320c86-cce9-4dee-988e-adea1021bdbf\") " pod="openshift-logging/collector-f4pkc" Nov 21 20:19:43 crc kubenswrapper[4727]: I1121 20:19:43.506688 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/99320c86-cce9-4dee-988e-adea1021bdbf-sa-token\") pod \"collector-f4pkc\" (UID: \"99320c86-cce9-4dee-988e-adea1021bdbf\") " pod="openshift-logging/collector-f4pkc" Nov 21 20:19:43 crc kubenswrapper[4727]: I1121 20:19:43.507290 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfa898e9-6040-4f41-9405-5e8070ffb9af" path="/var/lib/kubelet/pods/bfa898e9-6040-4f41-9405-5e8070ffb9af/volumes" Nov 21 20:19:43 crc kubenswrapper[4727]: I1121 20:19:43.602978 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-f4pkc" Nov 21 20:19:43 crc kubenswrapper[4727]: I1121 20:19:43.846013 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-f4pkc"] Nov 21 20:19:44 crc kubenswrapper[4727]: I1121 20:19:44.207897 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-f4pkc" event={"ID":"99320c86-cce9-4dee-988e-adea1021bdbf","Type":"ContainerStarted","Data":"96f97c534612ba0a8ef10c7b33f1efd70683ca88ddf3f63d3527b0dd2b236066"} Nov 21 20:19:44 crc kubenswrapper[4727]: I1121 20:19:44.208108 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jbjsv" podUID="6944b8a6-ba5a-4b90-8dbf-22d3d2af9145" containerName="registry-server" containerID="cri-o://9cf94a6f708db9763b45eb8cd015a4a3e1e504aaecffc122de84c9b787c7fe11" gracePeriod=2 Nov 21 20:19:44 crc kubenswrapper[4727]: I1121 20:19:44.631867 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dq4zh" Nov 21 20:19:44 crc kubenswrapper[4727]: I1121 20:19:44.632117 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dq4zh" Nov 21 20:19:44 crc kubenswrapper[4727]: I1121 20:19:44.676222 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dq4zh" Nov 21 20:19:45 crc kubenswrapper[4727]: I1121 20:19:45.218647 4727 generic.go:334] "Generic (PLEG): container finished" podID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerID="8107c47f566f1faaec586576a9a03e2bdc957a4e69dfa96e7c5b9dd43a6f4ab5" exitCode=0 Nov 21 20:19:45 crc kubenswrapper[4727]: I1121 20:19:45.218746 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" 
event={"ID":"b58aef8f-f223-47d8-a2e6-4a80aeeeec42","Type":"ContainerDied","Data":"8107c47f566f1faaec586576a9a03e2bdc957a4e69dfa96e7c5b9dd43a6f4ab5"} Nov 21 20:19:45 crc kubenswrapper[4727]: I1121 20:19:45.218813 4727 scope.go:117] "RemoveContainer" containerID="31ac78ad9674edf30089e0f6eccedebce8b14f0ff1682633a29c88cf87b54891" Nov 21 20:19:45 crc kubenswrapper[4727]: I1121 20:19:45.257427 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dq4zh" Nov 21 20:19:45 crc kubenswrapper[4727]: I1121 20:19:45.827209 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jbjsv" Nov 21 20:19:45 crc kubenswrapper[4727]: I1121 20:19:45.928650 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6944b8a6-ba5a-4b90-8dbf-22d3d2af9145-utilities\") pod \"6944b8a6-ba5a-4b90-8dbf-22d3d2af9145\" (UID: \"6944b8a6-ba5a-4b90-8dbf-22d3d2af9145\") " Nov 21 20:19:45 crc kubenswrapper[4727]: I1121 20:19:45.928801 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6944b8a6-ba5a-4b90-8dbf-22d3d2af9145-catalog-content\") pod \"6944b8a6-ba5a-4b90-8dbf-22d3d2af9145\" (UID: \"6944b8a6-ba5a-4b90-8dbf-22d3d2af9145\") " Nov 21 20:19:45 crc kubenswrapper[4727]: I1121 20:19:45.928831 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6r8d\" (UniqueName: \"kubernetes.io/projected/6944b8a6-ba5a-4b90-8dbf-22d3d2af9145-kube-api-access-c6r8d\") pod \"6944b8a6-ba5a-4b90-8dbf-22d3d2af9145\" (UID: \"6944b8a6-ba5a-4b90-8dbf-22d3d2af9145\") " Nov 21 20:19:45 crc kubenswrapper[4727]: I1121 20:19:45.929843 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6944b8a6-ba5a-4b90-8dbf-22d3d2af9145-utilities" 
(OuterVolumeSpecName: "utilities") pod "6944b8a6-ba5a-4b90-8dbf-22d3d2af9145" (UID: "6944b8a6-ba5a-4b90-8dbf-22d3d2af9145"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:19:45 crc kubenswrapper[4727]: I1121 20:19:45.936004 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6944b8a6-ba5a-4b90-8dbf-22d3d2af9145-kube-api-access-c6r8d" (OuterVolumeSpecName: "kube-api-access-c6r8d") pod "6944b8a6-ba5a-4b90-8dbf-22d3d2af9145" (UID: "6944b8a6-ba5a-4b90-8dbf-22d3d2af9145"). InnerVolumeSpecName "kube-api-access-c6r8d". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:19:45 crc kubenswrapper[4727]: I1121 20:19:45.951481 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6944b8a6-ba5a-4b90-8dbf-22d3d2af9145-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6944b8a6-ba5a-4b90-8dbf-22d3d2af9145" (UID: "6944b8a6-ba5a-4b90-8dbf-22d3d2af9145"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:19:46 crc kubenswrapper[4727]: I1121 20:19:46.030206 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6944b8a6-ba5a-4b90-8dbf-22d3d2af9145-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 20:19:46 crc kubenswrapper[4727]: I1121 20:19:46.030254 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6944b8a6-ba5a-4b90-8dbf-22d3d2af9145-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 20:19:46 crc kubenswrapper[4727]: I1121 20:19:46.030273 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6r8d\" (UniqueName: \"kubernetes.io/projected/6944b8a6-ba5a-4b90-8dbf-22d3d2af9145-kube-api-access-c6r8d\") on node \"crc\" DevicePath \"\"" Nov 21 20:19:46 crc kubenswrapper[4727]: I1121 20:19:46.228081 4727 generic.go:334] "Generic (PLEG): container finished" podID="6944b8a6-ba5a-4b90-8dbf-22d3d2af9145" containerID="9cf94a6f708db9763b45eb8cd015a4a3e1e504aaecffc122de84c9b787c7fe11" exitCode=0 Nov 21 20:19:46 crc kubenswrapper[4727]: I1121 20:19:46.228132 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jbjsv" Nov 21 20:19:46 crc kubenswrapper[4727]: I1121 20:19:46.228160 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jbjsv" event={"ID":"6944b8a6-ba5a-4b90-8dbf-22d3d2af9145","Type":"ContainerDied","Data":"9cf94a6f708db9763b45eb8cd015a4a3e1e504aaecffc122de84c9b787c7fe11"} Nov 21 20:19:46 crc kubenswrapper[4727]: I1121 20:19:46.228215 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jbjsv" event={"ID":"6944b8a6-ba5a-4b90-8dbf-22d3d2af9145","Type":"ContainerDied","Data":"b7aa10e3dbf8b7628a63b3ec0bef64c0ff001e28f3de991f0764808d46ba9d62"} Nov 21 20:19:46 crc kubenswrapper[4727]: I1121 20:19:46.228241 4727 scope.go:117] "RemoveContainer" containerID="9cf94a6f708db9763b45eb8cd015a4a3e1e504aaecffc122de84c9b787c7fe11" Nov 21 20:19:46 crc kubenswrapper[4727]: I1121 20:19:46.230159 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" event={"ID":"b58aef8f-f223-47d8-a2e6-4a80aeeeec42","Type":"ContainerStarted","Data":"99a25828b83906fc3ad93f0b2554a2773e360925428b620333e0ead4ac93025d"} Nov 21 20:19:46 crc kubenswrapper[4727]: I1121 20:19:46.253007 4727 scope.go:117] "RemoveContainer" containerID="fcbc74e251befd565a40ec3b81411e40affcc00c24183d6a6f2ea7c75eb39f31" Nov 21 20:19:46 crc kubenswrapper[4727]: I1121 20:19:46.274503 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jbjsv"] Nov 21 20:19:46 crc kubenswrapper[4727]: I1121 20:19:46.279892 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jbjsv"] Nov 21 20:19:46 crc kubenswrapper[4727]: I1121 20:19:46.282774 4727 scope.go:117] "RemoveContainer" containerID="a524e67e8a02554cadf91381437125b46854304b504f0b3de51957d49e19fbea" Nov 21 20:19:46 crc kubenswrapper[4727]: I1121 20:19:46.298896 
4727 scope.go:117] "RemoveContainer" containerID="9cf94a6f708db9763b45eb8cd015a4a3e1e504aaecffc122de84c9b787c7fe11" Nov 21 20:19:46 crc kubenswrapper[4727]: E1121 20:19:46.299328 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cf94a6f708db9763b45eb8cd015a4a3e1e504aaecffc122de84c9b787c7fe11\": container with ID starting with 9cf94a6f708db9763b45eb8cd015a4a3e1e504aaecffc122de84c9b787c7fe11 not found: ID does not exist" containerID="9cf94a6f708db9763b45eb8cd015a4a3e1e504aaecffc122de84c9b787c7fe11" Nov 21 20:19:46 crc kubenswrapper[4727]: I1121 20:19:46.299367 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cf94a6f708db9763b45eb8cd015a4a3e1e504aaecffc122de84c9b787c7fe11"} err="failed to get container status \"9cf94a6f708db9763b45eb8cd015a4a3e1e504aaecffc122de84c9b787c7fe11\": rpc error: code = NotFound desc = could not find container \"9cf94a6f708db9763b45eb8cd015a4a3e1e504aaecffc122de84c9b787c7fe11\": container with ID starting with 9cf94a6f708db9763b45eb8cd015a4a3e1e504aaecffc122de84c9b787c7fe11 not found: ID does not exist" Nov 21 20:19:46 crc kubenswrapper[4727]: I1121 20:19:46.299392 4727 scope.go:117] "RemoveContainer" containerID="fcbc74e251befd565a40ec3b81411e40affcc00c24183d6a6f2ea7c75eb39f31" Nov 21 20:19:46 crc kubenswrapper[4727]: E1121 20:19:46.299922 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fcbc74e251befd565a40ec3b81411e40affcc00c24183d6a6f2ea7c75eb39f31\": container with ID starting with fcbc74e251befd565a40ec3b81411e40affcc00c24183d6a6f2ea7c75eb39f31 not found: ID does not exist" containerID="fcbc74e251befd565a40ec3b81411e40affcc00c24183d6a6f2ea7c75eb39f31" Nov 21 20:19:46 crc kubenswrapper[4727]: I1121 20:19:46.299952 4727 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"fcbc74e251befd565a40ec3b81411e40affcc00c24183d6a6f2ea7c75eb39f31"} err="failed to get container status \"fcbc74e251befd565a40ec3b81411e40affcc00c24183d6a6f2ea7c75eb39f31\": rpc error: code = NotFound desc = could not find container \"fcbc74e251befd565a40ec3b81411e40affcc00c24183d6a6f2ea7c75eb39f31\": container with ID starting with fcbc74e251befd565a40ec3b81411e40affcc00c24183d6a6f2ea7c75eb39f31 not found: ID does not exist" Nov 21 20:19:46 crc kubenswrapper[4727]: I1121 20:19:46.299987 4727 scope.go:117] "RemoveContainer" containerID="a524e67e8a02554cadf91381437125b46854304b504f0b3de51957d49e19fbea" Nov 21 20:19:46 crc kubenswrapper[4727]: E1121 20:19:46.300328 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a524e67e8a02554cadf91381437125b46854304b504f0b3de51957d49e19fbea\": container with ID starting with a524e67e8a02554cadf91381437125b46854304b504f0b3de51957d49e19fbea not found: ID does not exist" containerID="a524e67e8a02554cadf91381437125b46854304b504f0b3de51957d49e19fbea" Nov 21 20:19:46 crc kubenswrapper[4727]: I1121 20:19:46.300378 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a524e67e8a02554cadf91381437125b46854304b504f0b3de51957d49e19fbea"} err="failed to get container status \"a524e67e8a02554cadf91381437125b46854304b504f0b3de51957d49e19fbea\": rpc error: code = NotFound desc = could not find container \"a524e67e8a02554cadf91381437125b46854304b504f0b3de51957d49e19fbea\": container with ID starting with a524e67e8a02554cadf91381437125b46854304b504f0b3de51957d49e19fbea not found: ID does not exist" Nov 21 20:19:46 crc kubenswrapper[4727]: I1121 20:19:46.696560 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dq4zh"] Nov 21 20:19:47 crc kubenswrapper[4727]: I1121 20:19:47.238551 4727 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-marketplace/redhat-operators-dq4zh" podUID="4c229597-6559-46fb-a8b5-f12a175663c0" containerName="registry-server" containerID="cri-o://8f928ca74ba6813ebad04c315e0fd5fc381e0b2ca117fa5626f7ac510c1d3ecf" gracePeriod=2 Nov 21 20:19:47 crc kubenswrapper[4727]: I1121 20:19:47.507083 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6944b8a6-ba5a-4b90-8dbf-22d3d2af9145" path="/var/lib/kubelet/pods/6944b8a6-ba5a-4b90-8dbf-22d3d2af9145/volumes" Nov 21 20:19:47 crc kubenswrapper[4727]: I1121 20:19:47.679008 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dq4zh" Nov 21 20:19:47 crc kubenswrapper[4727]: I1121 20:19:47.856085 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c229597-6559-46fb-a8b5-f12a175663c0-utilities\") pod \"4c229597-6559-46fb-a8b5-f12a175663c0\" (UID: \"4c229597-6559-46fb-a8b5-f12a175663c0\") " Nov 21 20:19:47 crc kubenswrapper[4727]: I1121 20:19:47.856295 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnbt9\" (UniqueName: \"kubernetes.io/projected/4c229597-6559-46fb-a8b5-f12a175663c0-kube-api-access-qnbt9\") pod \"4c229597-6559-46fb-a8b5-f12a175663c0\" (UID: \"4c229597-6559-46fb-a8b5-f12a175663c0\") " Nov 21 20:19:47 crc kubenswrapper[4727]: I1121 20:19:47.856319 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c229597-6559-46fb-a8b5-f12a175663c0-catalog-content\") pod \"4c229597-6559-46fb-a8b5-f12a175663c0\" (UID: \"4c229597-6559-46fb-a8b5-f12a175663c0\") " Nov 21 20:19:47 crc kubenswrapper[4727]: I1121 20:19:47.857821 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c229597-6559-46fb-a8b5-f12a175663c0-utilities" (OuterVolumeSpecName: "utilities") 
pod "4c229597-6559-46fb-a8b5-f12a175663c0" (UID: "4c229597-6559-46fb-a8b5-f12a175663c0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:19:47 crc kubenswrapper[4727]: I1121 20:19:47.862114 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c229597-6559-46fb-a8b5-f12a175663c0-kube-api-access-qnbt9" (OuterVolumeSpecName: "kube-api-access-qnbt9") pod "4c229597-6559-46fb-a8b5-f12a175663c0" (UID: "4c229597-6559-46fb-a8b5-f12a175663c0"). InnerVolumeSpecName "kube-api-access-qnbt9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:19:47 crc kubenswrapper[4727]: I1121 20:19:47.935815 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c229597-6559-46fb-a8b5-f12a175663c0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4c229597-6559-46fb-a8b5-f12a175663c0" (UID: "4c229597-6559-46fb-a8b5-f12a175663c0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:19:47 crc kubenswrapper[4727]: I1121 20:19:47.957807 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c229597-6559-46fb-a8b5-f12a175663c0-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 20:19:47 crc kubenswrapper[4727]: I1121 20:19:47.957850 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qnbt9\" (UniqueName: \"kubernetes.io/projected/4c229597-6559-46fb-a8b5-f12a175663c0-kube-api-access-qnbt9\") on node \"crc\" DevicePath \"\"" Nov 21 20:19:47 crc kubenswrapper[4727]: I1121 20:19:47.957865 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c229597-6559-46fb-a8b5-f12a175663c0-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 20:19:48 crc kubenswrapper[4727]: I1121 20:19:48.246816 4727 generic.go:334] "Generic (PLEG): container finished" podID="4c229597-6559-46fb-a8b5-f12a175663c0" containerID="8f928ca74ba6813ebad04c315e0fd5fc381e0b2ca117fa5626f7ac510c1d3ecf" exitCode=0 Nov 21 20:19:48 crc kubenswrapper[4727]: I1121 20:19:48.246874 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dq4zh" event={"ID":"4c229597-6559-46fb-a8b5-f12a175663c0","Type":"ContainerDied","Data":"8f928ca74ba6813ebad04c315e0fd5fc381e0b2ca117fa5626f7ac510c1d3ecf"} Nov 21 20:19:48 crc kubenswrapper[4727]: I1121 20:19:48.246921 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dq4zh" event={"ID":"4c229597-6559-46fb-a8b5-f12a175663c0","Type":"ContainerDied","Data":"ac6cf80511cb8fedf0fd77e0d4a71afee22fd5bbc581eb90ad42083138853212"} Nov 21 20:19:48 crc kubenswrapper[4727]: I1121 20:19:48.246946 4727 scope.go:117] "RemoveContainer" containerID="8f928ca74ba6813ebad04c315e0fd5fc381e0b2ca117fa5626f7ac510c1d3ecf" Nov 21 20:19:48 crc kubenswrapper[4727]: I1121 20:19:48.247598 
4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dq4zh" Nov 21 20:19:48 crc kubenswrapper[4727]: I1121 20:19:48.277991 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dq4zh"] Nov 21 20:19:48 crc kubenswrapper[4727]: I1121 20:19:48.281888 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dq4zh"] Nov 21 20:19:49 crc kubenswrapper[4727]: I1121 20:19:49.507326 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c229597-6559-46fb-a8b5-f12a175663c0" path="/var/lib/kubelet/pods/4c229597-6559-46fb-a8b5-f12a175663c0/volumes" Nov 21 20:19:51 crc kubenswrapper[4727]: I1121 20:19:51.062071 4727 scope.go:117] "RemoveContainer" containerID="a9931111a388280c68634159c57c4a0a46c730585ef02095c037870748799f2d" Nov 21 20:19:51 crc kubenswrapper[4727]: I1121 20:19:51.082293 4727 scope.go:117] "RemoveContainer" containerID="a4a3c1203c88d2d71722f6158ed420241956ddf466caafabd19ada0bc6f10151" Nov 21 20:19:51 crc kubenswrapper[4727]: I1121 20:19:51.125211 4727 scope.go:117] "RemoveContainer" containerID="8f928ca74ba6813ebad04c315e0fd5fc381e0b2ca117fa5626f7ac510c1d3ecf" Nov 21 20:19:51 crc kubenswrapper[4727]: E1121 20:19:51.125604 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f928ca74ba6813ebad04c315e0fd5fc381e0b2ca117fa5626f7ac510c1d3ecf\": container with ID starting with 8f928ca74ba6813ebad04c315e0fd5fc381e0b2ca117fa5626f7ac510c1d3ecf not found: ID does not exist" containerID="8f928ca74ba6813ebad04c315e0fd5fc381e0b2ca117fa5626f7ac510c1d3ecf" Nov 21 20:19:51 crc kubenswrapper[4727]: I1121 20:19:51.125644 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f928ca74ba6813ebad04c315e0fd5fc381e0b2ca117fa5626f7ac510c1d3ecf"} err="failed to get container status 
\"8f928ca74ba6813ebad04c315e0fd5fc381e0b2ca117fa5626f7ac510c1d3ecf\": rpc error: code = NotFound desc = could not find container \"8f928ca74ba6813ebad04c315e0fd5fc381e0b2ca117fa5626f7ac510c1d3ecf\": container with ID starting with 8f928ca74ba6813ebad04c315e0fd5fc381e0b2ca117fa5626f7ac510c1d3ecf not found: ID does not exist" Nov 21 20:19:51 crc kubenswrapper[4727]: I1121 20:19:51.125674 4727 scope.go:117] "RemoveContainer" containerID="a9931111a388280c68634159c57c4a0a46c730585ef02095c037870748799f2d" Nov 21 20:19:51 crc kubenswrapper[4727]: E1121 20:19:51.125990 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9931111a388280c68634159c57c4a0a46c730585ef02095c037870748799f2d\": container with ID starting with a9931111a388280c68634159c57c4a0a46c730585ef02095c037870748799f2d not found: ID does not exist" containerID="a9931111a388280c68634159c57c4a0a46c730585ef02095c037870748799f2d" Nov 21 20:19:51 crc kubenswrapper[4727]: I1121 20:19:51.126023 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9931111a388280c68634159c57c4a0a46c730585ef02095c037870748799f2d"} err="failed to get container status \"a9931111a388280c68634159c57c4a0a46c730585ef02095c037870748799f2d\": rpc error: code = NotFound desc = could not find container \"a9931111a388280c68634159c57c4a0a46c730585ef02095c037870748799f2d\": container with ID starting with a9931111a388280c68634159c57c4a0a46c730585ef02095c037870748799f2d not found: ID does not exist" Nov 21 20:19:51 crc kubenswrapper[4727]: I1121 20:19:51.126049 4727 scope.go:117] "RemoveContainer" containerID="a4a3c1203c88d2d71722f6158ed420241956ddf466caafabd19ada0bc6f10151" Nov 21 20:19:51 crc kubenswrapper[4727]: E1121 20:19:51.126289 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"a4a3c1203c88d2d71722f6158ed420241956ddf466caafabd19ada0bc6f10151\": container with ID starting with a4a3c1203c88d2d71722f6158ed420241956ddf466caafabd19ada0bc6f10151 not found: ID does not exist" containerID="a4a3c1203c88d2d71722f6158ed420241956ddf466caafabd19ada0bc6f10151" Nov 21 20:19:51 crc kubenswrapper[4727]: I1121 20:19:51.126316 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4a3c1203c88d2d71722f6158ed420241956ddf466caafabd19ada0bc6f10151"} err="failed to get container status \"a4a3c1203c88d2d71722f6158ed420241956ddf466caafabd19ada0bc6f10151\": rpc error: code = NotFound desc = could not find container \"a4a3c1203c88d2d71722f6158ed420241956ddf466caafabd19ada0bc6f10151\": container with ID starting with a4a3c1203c88d2d71722f6158ed420241956ddf466caafabd19ada0bc6f10151 not found: ID does not exist" Nov 21 20:19:52 crc kubenswrapper[4727]: I1121 20:19:52.278647 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-f4pkc" event={"ID":"99320c86-cce9-4dee-988e-adea1021bdbf","Type":"ContainerStarted","Data":"1f65ec637c631be25c2825dc269ee8da898e59eaffc4e0b1f7948a25a78f35bb"} Nov 21 20:19:52 crc kubenswrapper[4727]: I1121 20:19:52.300209 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/collector-f4pkc" podStartSLOduration=2.003582124 podStartE2EDuration="9.300190216s" podCreationTimestamp="2025-11-21 20:19:43 +0000 UTC" firstStartedPulling="2025-11-21 20:19:43.855083983 +0000 UTC m=+789.041269027" lastFinishedPulling="2025-11-21 20:19:51.151692075 +0000 UTC m=+796.337877119" observedRunningTime="2025-11-21 20:19:52.296078173 +0000 UTC m=+797.482263227" watchObservedRunningTime="2025-11-21 20:19:52.300190216 +0000 UTC m=+797.486375260" Nov 21 20:19:59 crc kubenswrapper[4727]: I1121 20:19:59.007776 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-x7tj5"] Nov 21 20:19:59 crc 
kubenswrapper[4727]: E1121 20:19:59.008385 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c229597-6559-46fb-a8b5-f12a175663c0" containerName="extract-content" Nov 21 20:19:59 crc kubenswrapper[4727]: I1121 20:19:59.008402 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c229597-6559-46fb-a8b5-f12a175663c0" containerName="extract-content" Nov 21 20:19:59 crc kubenswrapper[4727]: E1121 20:19:59.008418 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6944b8a6-ba5a-4b90-8dbf-22d3d2af9145" containerName="registry-server" Nov 21 20:19:59 crc kubenswrapper[4727]: I1121 20:19:59.008427 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="6944b8a6-ba5a-4b90-8dbf-22d3d2af9145" containerName="registry-server" Nov 21 20:19:59 crc kubenswrapper[4727]: E1121 20:19:59.008442 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c229597-6559-46fb-a8b5-f12a175663c0" containerName="registry-server" Nov 21 20:19:59 crc kubenswrapper[4727]: I1121 20:19:59.008452 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c229597-6559-46fb-a8b5-f12a175663c0" containerName="registry-server" Nov 21 20:19:59 crc kubenswrapper[4727]: E1121 20:19:59.008462 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6944b8a6-ba5a-4b90-8dbf-22d3d2af9145" containerName="extract-content" Nov 21 20:19:59 crc kubenswrapper[4727]: I1121 20:19:59.008470 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="6944b8a6-ba5a-4b90-8dbf-22d3d2af9145" containerName="extract-content" Nov 21 20:19:59 crc kubenswrapper[4727]: E1121 20:19:59.008484 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c229597-6559-46fb-a8b5-f12a175663c0" containerName="extract-utilities" Nov 21 20:19:59 crc kubenswrapper[4727]: I1121 20:19:59.008491 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c229597-6559-46fb-a8b5-f12a175663c0" containerName="extract-utilities" Nov 21 20:19:59 crc 
kubenswrapper[4727]: E1121 20:19:59.008525 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6944b8a6-ba5a-4b90-8dbf-22d3d2af9145" containerName="extract-utilities" Nov 21 20:19:59 crc kubenswrapper[4727]: I1121 20:19:59.008533 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="6944b8a6-ba5a-4b90-8dbf-22d3d2af9145" containerName="extract-utilities" Nov 21 20:19:59 crc kubenswrapper[4727]: I1121 20:19:59.008731 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="6944b8a6-ba5a-4b90-8dbf-22d3d2af9145" containerName="registry-server" Nov 21 20:19:59 crc kubenswrapper[4727]: I1121 20:19:59.008748 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c229597-6559-46fb-a8b5-f12a175663c0" containerName="registry-server" Nov 21 20:19:59 crc kubenswrapper[4727]: I1121 20:19:59.010093 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x7tj5" Nov 21 20:19:59 crc kubenswrapper[4727]: I1121 20:19:59.070414 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x7tj5"] Nov 21 20:19:59 crc kubenswrapper[4727]: I1121 20:19:59.151152 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69qvn\" (UniqueName: \"kubernetes.io/projected/a6cbfb65-0fdc-4857-afa9-fcb6be703703-kube-api-access-69qvn\") pod \"community-operators-x7tj5\" (UID: \"a6cbfb65-0fdc-4857-afa9-fcb6be703703\") " pod="openshift-marketplace/community-operators-x7tj5" Nov 21 20:19:59 crc kubenswrapper[4727]: I1121 20:19:59.151264 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6cbfb65-0fdc-4857-afa9-fcb6be703703-utilities\") pod \"community-operators-x7tj5\" (UID: \"a6cbfb65-0fdc-4857-afa9-fcb6be703703\") " pod="openshift-marketplace/community-operators-x7tj5" Nov 21 
20:19:59 crc kubenswrapper[4727]: I1121 20:19:59.151299 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6cbfb65-0fdc-4857-afa9-fcb6be703703-catalog-content\") pod \"community-operators-x7tj5\" (UID: \"a6cbfb65-0fdc-4857-afa9-fcb6be703703\") " pod="openshift-marketplace/community-operators-x7tj5" Nov 21 20:19:59 crc kubenswrapper[4727]: I1121 20:19:59.253298 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6cbfb65-0fdc-4857-afa9-fcb6be703703-utilities\") pod \"community-operators-x7tj5\" (UID: \"a6cbfb65-0fdc-4857-afa9-fcb6be703703\") " pod="openshift-marketplace/community-operators-x7tj5" Nov 21 20:19:59 crc kubenswrapper[4727]: I1121 20:19:59.253382 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6cbfb65-0fdc-4857-afa9-fcb6be703703-catalog-content\") pod \"community-operators-x7tj5\" (UID: \"a6cbfb65-0fdc-4857-afa9-fcb6be703703\") " pod="openshift-marketplace/community-operators-x7tj5" Nov 21 20:19:59 crc kubenswrapper[4727]: I1121 20:19:59.253456 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69qvn\" (UniqueName: \"kubernetes.io/projected/a6cbfb65-0fdc-4857-afa9-fcb6be703703-kube-api-access-69qvn\") pod \"community-operators-x7tj5\" (UID: \"a6cbfb65-0fdc-4857-afa9-fcb6be703703\") " pod="openshift-marketplace/community-operators-x7tj5" Nov 21 20:19:59 crc kubenswrapper[4727]: I1121 20:19:59.253925 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6cbfb65-0fdc-4857-afa9-fcb6be703703-utilities\") pod \"community-operators-x7tj5\" (UID: \"a6cbfb65-0fdc-4857-afa9-fcb6be703703\") " pod="openshift-marketplace/community-operators-x7tj5" Nov 21 20:19:59 crc 
kubenswrapper[4727]: I1121 20:19:59.253994 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6cbfb65-0fdc-4857-afa9-fcb6be703703-catalog-content\") pod \"community-operators-x7tj5\" (UID: \"a6cbfb65-0fdc-4857-afa9-fcb6be703703\") " pod="openshift-marketplace/community-operators-x7tj5" Nov 21 20:19:59 crc kubenswrapper[4727]: I1121 20:19:59.272031 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69qvn\" (UniqueName: \"kubernetes.io/projected/a6cbfb65-0fdc-4857-afa9-fcb6be703703-kube-api-access-69qvn\") pod \"community-operators-x7tj5\" (UID: \"a6cbfb65-0fdc-4857-afa9-fcb6be703703\") " pod="openshift-marketplace/community-operators-x7tj5" Nov 21 20:19:59 crc kubenswrapper[4727]: I1121 20:19:59.386222 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x7tj5" Nov 21 20:19:59 crc kubenswrapper[4727]: I1121 20:19:59.834415 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x7tj5"] Nov 21 20:20:00 crc kubenswrapper[4727]: I1121 20:20:00.338436 4727 generic.go:334] "Generic (PLEG): container finished" podID="a6cbfb65-0fdc-4857-afa9-fcb6be703703" containerID="6b08e74c02c4cecb3ffbe38fd11246634b10d8fdbf97245b1098484e35bffbdb" exitCode=0 Nov 21 20:20:00 crc kubenswrapper[4727]: I1121 20:20:00.338502 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x7tj5" event={"ID":"a6cbfb65-0fdc-4857-afa9-fcb6be703703","Type":"ContainerDied","Data":"6b08e74c02c4cecb3ffbe38fd11246634b10d8fdbf97245b1098484e35bffbdb"} Nov 21 20:20:00 crc kubenswrapper[4727]: I1121 20:20:00.338735 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x7tj5" 
event={"ID":"a6cbfb65-0fdc-4857-afa9-fcb6be703703","Type":"ContainerStarted","Data":"19af40c5d7415d8a3627cbc258ddea5c054f838b40e11b6677e917b133e44b32"} Nov 21 20:20:01 crc kubenswrapper[4727]: I1121 20:20:01.349122 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x7tj5" event={"ID":"a6cbfb65-0fdc-4857-afa9-fcb6be703703","Type":"ContainerStarted","Data":"4be5b616dda7e8cc1636055be2ad23f3659b87ddcd2b67fa75df51504311edf2"} Nov 21 20:20:02 crc kubenswrapper[4727]: I1121 20:20:02.357363 4727 generic.go:334] "Generic (PLEG): container finished" podID="a6cbfb65-0fdc-4857-afa9-fcb6be703703" containerID="4be5b616dda7e8cc1636055be2ad23f3659b87ddcd2b67fa75df51504311edf2" exitCode=0 Nov 21 20:20:02 crc kubenswrapper[4727]: I1121 20:20:02.357403 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x7tj5" event={"ID":"a6cbfb65-0fdc-4857-afa9-fcb6be703703","Type":"ContainerDied","Data":"4be5b616dda7e8cc1636055be2ad23f3659b87ddcd2b67fa75df51504311edf2"} Nov 21 20:20:03 crc kubenswrapper[4727]: I1121 20:20:03.371087 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x7tj5" event={"ID":"a6cbfb65-0fdc-4857-afa9-fcb6be703703","Type":"ContainerStarted","Data":"3205a6d0e09e7a550cf8356c381e8d5b3df6acf81e80180ac26a171308536824"} Nov 21 20:20:03 crc kubenswrapper[4727]: I1121 20:20:03.397076 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-x7tj5" podStartSLOduration=2.883694768 podStartE2EDuration="5.397055488s" podCreationTimestamp="2025-11-21 20:19:58 +0000 UTC" firstStartedPulling="2025-11-21 20:20:00.341578906 +0000 UTC m=+805.527763950" lastFinishedPulling="2025-11-21 20:20:02.854939626 +0000 UTC m=+808.041124670" observedRunningTime="2025-11-21 20:20:03.391008097 +0000 UTC m=+808.577193141" watchObservedRunningTime="2025-11-21 20:20:03.397055488 +0000 UTC 
m=+808.583240532" Nov 21 20:20:09 crc kubenswrapper[4727]: I1121 20:20:09.386878 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-x7tj5" Nov 21 20:20:09 crc kubenswrapper[4727]: I1121 20:20:09.387655 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-x7tj5" Nov 21 20:20:09 crc kubenswrapper[4727]: I1121 20:20:09.434879 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-x7tj5" Nov 21 20:20:09 crc kubenswrapper[4727]: I1121 20:20:09.489904 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-x7tj5" Nov 21 20:20:09 crc kubenswrapper[4727]: I1121 20:20:09.672700 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x7tj5"] Nov 21 20:20:11 crc kubenswrapper[4727]: I1121 20:20:11.420466 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-x7tj5" podUID="a6cbfb65-0fdc-4857-afa9-fcb6be703703" containerName="registry-server" containerID="cri-o://3205a6d0e09e7a550cf8356c381e8d5b3df6acf81e80180ac26a171308536824" gracePeriod=2 Nov 21 20:20:11 crc kubenswrapper[4727]: I1121 20:20:11.857437 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x7tj5" Nov 21 20:20:11 crc kubenswrapper[4727]: I1121 20:20:11.878487 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6cbfb65-0fdc-4857-afa9-fcb6be703703-catalog-content\") pod \"a6cbfb65-0fdc-4857-afa9-fcb6be703703\" (UID: \"a6cbfb65-0fdc-4857-afa9-fcb6be703703\") " Nov 21 20:20:11 crc kubenswrapper[4727]: I1121 20:20:11.878593 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69qvn\" (UniqueName: \"kubernetes.io/projected/a6cbfb65-0fdc-4857-afa9-fcb6be703703-kube-api-access-69qvn\") pod \"a6cbfb65-0fdc-4857-afa9-fcb6be703703\" (UID: \"a6cbfb65-0fdc-4857-afa9-fcb6be703703\") " Nov 21 20:20:11 crc kubenswrapper[4727]: I1121 20:20:11.878624 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6cbfb65-0fdc-4857-afa9-fcb6be703703-utilities\") pod \"a6cbfb65-0fdc-4857-afa9-fcb6be703703\" (UID: \"a6cbfb65-0fdc-4857-afa9-fcb6be703703\") " Nov 21 20:20:11 crc kubenswrapper[4727]: I1121 20:20:11.879904 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6cbfb65-0fdc-4857-afa9-fcb6be703703-utilities" (OuterVolumeSpecName: "utilities") pod "a6cbfb65-0fdc-4857-afa9-fcb6be703703" (UID: "a6cbfb65-0fdc-4857-afa9-fcb6be703703"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:20:11 crc kubenswrapper[4727]: I1121 20:20:11.895709 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6cbfb65-0fdc-4857-afa9-fcb6be703703-kube-api-access-69qvn" (OuterVolumeSpecName: "kube-api-access-69qvn") pod "a6cbfb65-0fdc-4857-afa9-fcb6be703703" (UID: "a6cbfb65-0fdc-4857-afa9-fcb6be703703"). InnerVolumeSpecName "kube-api-access-69qvn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:20:11 crc kubenswrapper[4727]: I1121 20:20:11.979737 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69qvn\" (UniqueName: \"kubernetes.io/projected/a6cbfb65-0fdc-4857-afa9-fcb6be703703-kube-api-access-69qvn\") on node \"crc\" DevicePath \"\"" Nov 21 20:20:11 crc kubenswrapper[4727]: I1121 20:20:11.980194 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6cbfb65-0fdc-4857-afa9-fcb6be703703-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 20:20:12 crc kubenswrapper[4727]: I1121 20:20:12.428389 4727 generic.go:334] "Generic (PLEG): container finished" podID="a6cbfb65-0fdc-4857-afa9-fcb6be703703" containerID="3205a6d0e09e7a550cf8356c381e8d5b3df6acf81e80180ac26a171308536824" exitCode=0 Nov 21 20:20:12 crc kubenswrapper[4727]: I1121 20:20:12.428436 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x7tj5" event={"ID":"a6cbfb65-0fdc-4857-afa9-fcb6be703703","Type":"ContainerDied","Data":"3205a6d0e09e7a550cf8356c381e8d5b3df6acf81e80180ac26a171308536824"} Nov 21 20:20:12 crc kubenswrapper[4727]: I1121 20:20:12.428466 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x7tj5" event={"ID":"a6cbfb65-0fdc-4857-afa9-fcb6be703703","Type":"ContainerDied","Data":"19af40c5d7415d8a3627cbc258ddea5c054f838b40e11b6677e917b133e44b32"} Nov 21 20:20:12 crc kubenswrapper[4727]: I1121 20:20:12.428484 4727 scope.go:117] "RemoveContainer" containerID="3205a6d0e09e7a550cf8356c381e8d5b3df6acf81e80180ac26a171308536824" Nov 21 20:20:12 crc kubenswrapper[4727]: I1121 20:20:12.428481 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x7tj5" Nov 21 20:20:12 crc kubenswrapper[4727]: I1121 20:20:12.443909 4727 scope.go:117] "RemoveContainer" containerID="4be5b616dda7e8cc1636055be2ad23f3659b87ddcd2b67fa75df51504311edf2" Nov 21 20:20:12 crc kubenswrapper[4727]: I1121 20:20:12.464725 4727 scope.go:117] "RemoveContainer" containerID="6b08e74c02c4cecb3ffbe38fd11246634b10d8fdbf97245b1098484e35bffbdb" Nov 21 20:20:12 crc kubenswrapper[4727]: I1121 20:20:12.484398 4727 scope.go:117] "RemoveContainer" containerID="3205a6d0e09e7a550cf8356c381e8d5b3df6acf81e80180ac26a171308536824" Nov 21 20:20:12 crc kubenswrapper[4727]: E1121 20:20:12.484904 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3205a6d0e09e7a550cf8356c381e8d5b3df6acf81e80180ac26a171308536824\": container with ID starting with 3205a6d0e09e7a550cf8356c381e8d5b3df6acf81e80180ac26a171308536824 not found: ID does not exist" containerID="3205a6d0e09e7a550cf8356c381e8d5b3df6acf81e80180ac26a171308536824" Nov 21 20:20:12 crc kubenswrapper[4727]: I1121 20:20:12.484947 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3205a6d0e09e7a550cf8356c381e8d5b3df6acf81e80180ac26a171308536824"} err="failed to get container status \"3205a6d0e09e7a550cf8356c381e8d5b3df6acf81e80180ac26a171308536824\": rpc error: code = NotFound desc = could not find container \"3205a6d0e09e7a550cf8356c381e8d5b3df6acf81e80180ac26a171308536824\": container with ID starting with 3205a6d0e09e7a550cf8356c381e8d5b3df6acf81e80180ac26a171308536824 not found: ID does not exist" Nov 21 20:20:12 crc kubenswrapper[4727]: I1121 20:20:12.484996 4727 scope.go:117] "RemoveContainer" containerID="4be5b616dda7e8cc1636055be2ad23f3659b87ddcd2b67fa75df51504311edf2" Nov 21 20:20:12 crc kubenswrapper[4727]: E1121 20:20:12.485323 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code 
= NotFound desc = could not find container \"4be5b616dda7e8cc1636055be2ad23f3659b87ddcd2b67fa75df51504311edf2\": container with ID starting with 4be5b616dda7e8cc1636055be2ad23f3659b87ddcd2b67fa75df51504311edf2 not found: ID does not exist" containerID="4be5b616dda7e8cc1636055be2ad23f3659b87ddcd2b67fa75df51504311edf2" Nov 21 20:20:12 crc kubenswrapper[4727]: I1121 20:20:12.485410 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4be5b616dda7e8cc1636055be2ad23f3659b87ddcd2b67fa75df51504311edf2"} err="failed to get container status \"4be5b616dda7e8cc1636055be2ad23f3659b87ddcd2b67fa75df51504311edf2\": rpc error: code = NotFound desc = could not find container \"4be5b616dda7e8cc1636055be2ad23f3659b87ddcd2b67fa75df51504311edf2\": container with ID starting with 4be5b616dda7e8cc1636055be2ad23f3659b87ddcd2b67fa75df51504311edf2 not found: ID does not exist" Nov 21 20:20:12 crc kubenswrapper[4727]: I1121 20:20:12.485447 4727 scope.go:117] "RemoveContainer" containerID="6b08e74c02c4cecb3ffbe38fd11246634b10d8fdbf97245b1098484e35bffbdb" Nov 21 20:20:12 crc kubenswrapper[4727]: E1121 20:20:12.485933 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b08e74c02c4cecb3ffbe38fd11246634b10d8fdbf97245b1098484e35bffbdb\": container with ID starting with 6b08e74c02c4cecb3ffbe38fd11246634b10d8fdbf97245b1098484e35bffbdb not found: ID does not exist" containerID="6b08e74c02c4cecb3ffbe38fd11246634b10d8fdbf97245b1098484e35bffbdb" Nov 21 20:20:12 crc kubenswrapper[4727]: I1121 20:20:12.485991 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b08e74c02c4cecb3ffbe38fd11246634b10d8fdbf97245b1098484e35bffbdb"} err="failed to get container status \"6b08e74c02c4cecb3ffbe38fd11246634b10d8fdbf97245b1098484e35bffbdb\": rpc error: code = NotFound desc = could not find container 
\"6b08e74c02c4cecb3ffbe38fd11246634b10d8fdbf97245b1098484e35bffbdb\": container with ID starting with 6b08e74c02c4cecb3ffbe38fd11246634b10d8fdbf97245b1098484e35bffbdb not found: ID does not exist" Nov 21 20:20:12 crc kubenswrapper[4727]: I1121 20:20:12.611539 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6cbfb65-0fdc-4857-afa9-fcb6be703703-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a6cbfb65-0fdc-4857-afa9-fcb6be703703" (UID: "a6cbfb65-0fdc-4857-afa9-fcb6be703703"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:20:12 crc kubenswrapper[4727]: I1121 20:20:12.688210 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6cbfb65-0fdc-4857-afa9-fcb6be703703-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 20:20:12 crc kubenswrapper[4727]: I1121 20:20:12.763381 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x7tj5"] Nov 21 20:20:12 crc kubenswrapper[4727]: I1121 20:20:12.768708 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-x7tj5"] Nov 21 20:20:13 crc kubenswrapper[4727]: I1121 20:20:13.514938 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6cbfb65-0fdc-4857-afa9-fcb6be703703" path="/var/lib/kubelet/pods/a6cbfb65-0fdc-4857-afa9-fcb6be703703/volumes" Nov 21 20:20:18 crc kubenswrapper[4727]: I1121 20:20:18.833903 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edfp6c"] Nov 21 20:20:18 crc kubenswrapper[4727]: E1121 20:20:18.834716 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6cbfb65-0fdc-4857-afa9-fcb6be703703" containerName="extract-content" Nov 21 20:20:18 crc kubenswrapper[4727]: I1121 20:20:18.834730 4727 
state_mem.go:107] "Deleted CPUSet assignment" podUID="a6cbfb65-0fdc-4857-afa9-fcb6be703703" containerName="extract-content" Nov 21 20:20:18 crc kubenswrapper[4727]: E1121 20:20:18.834753 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6cbfb65-0fdc-4857-afa9-fcb6be703703" containerName="extract-utilities" Nov 21 20:20:18 crc kubenswrapper[4727]: I1121 20:20:18.834760 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6cbfb65-0fdc-4857-afa9-fcb6be703703" containerName="extract-utilities" Nov 21 20:20:18 crc kubenswrapper[4727]: E1121 20:20:18.834772 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6cbfb65-0fdc-4857-afa9-fcb6be703703" containerName="registry-server" Nov 21 20:20:18 crc kubenswrapper[4727]: I1121 20:20:18.834781 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6cbfb65-0fdc-4857-afa9-fcb6be703703" containerName="registry-server" Nov 21 20:20:18 crc kubenswrapper[4727]: I1121 20:20:18.834927 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6cbfb65-0fdc-4857-afa9-fcb6be703703" containerName="registry-server" Nov 21 20:20:18 crc kubenswrapper[4727]: I1121 20:20:18.835929 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edfp6c" Nov 21 20:20:18 crc kubenswrapper[4727]: I1121 20:20:18.843706 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 21 20:20:18 crc kubenswrapper[4727]: I1121 20:20:18.850460 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edfp6c"] Nov 21 20:20:18 crc kubenswrapper[4727]: I1121 20:20:18.984221 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgcn2\" (UniqueName: \"kubernetes.io/projected/a5e3b453-04ec-438a-afb8-e162baa32e8d-kube-api-access-fgcn2\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edfp6c\" (UID: \"a5e3b453-04ec-438a-afb8-e162baa32e8d\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edfp6c" Nov 21 20:20:18 crc kubenswrapper[4727]: I1121 20:20:18.984379 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a5e3b453-04ec-438a-afb8-e162baa32e8d-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edfp6c\" (UID: \"a5e3b453-04ec-438a-afb8-e162baa32e8d\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edfp6c" Nov 21 20:20:18 crc kubenswrapper[4727]: I1121 20:20:18.984407 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a5e3b453-04ec-438a-afb8-e162baa32e8d-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edfp6c\" (UID: \"a5e3b453-04ec-438a-afb8-e162baa32e8d\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edfp6c" Nov 21 20:20:19 crc kubenswrapper[4727]: 
I1121 20:20:19.085887 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a5e3b453-04ec-438a-afb8-e162baa32e8d-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edfp6c\" (UID: \"a5e3b453-04ec-438a-afb8-e162baa32e8d\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edfp6c" Nov 21 20:20:19 crc kubenswrapper[4727]: I1121 20:20:19.086318 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a5e3b453-04ec-438a-afb8-e162baa32e8d-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edfp6c\" (UID: \"a5e3b453-04ec-438a-afb8-e162baa32e8d\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edfp6c" Nov 21 20:20:19 crc kubenswrapper[4727]: I1121 20:20:19.086724 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a5e3b453-04ec-438a-afb8-e162baa32e8d-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edfp6c\" (UID: \"a5e3b453-04ec-438a-afb8-e162baa32e8d\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edfp6c" Nov 21 20:20:19 crc kubenswrapper[4727]: I1121 20:20:19.086765 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a5e3b453-04ec-438a-afb8-e162baa32e8d-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edfp6c\" (UID: \"a5e3b453-04ec-438a-afb8-e162baa32e8d\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edfp6c" Nov 21 20:20:19 crc kubenswrapper[4727]: I1121 20:20:19.087087 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgcn2\" (UniqueName: 
\"kubernetes.io/projected/a5e3b453-04ec-438a-afb8-e162baa32e8d-kube-api-access-fgcn2\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edfp6c\" (UID: \"a5e3b453-04ec-438a-afb8-e162baa32e8d\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edfp6c" Nov 21 20:20:19 crc kubenswrapper[4727]: I1121 20:20:19.106344 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgcn2\" (UniqueName: \"kubernetes.io/projected/a5e3b453-04ec-438a-afb8-e162baa32e8d-kube-api-access-fgcn2\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edfp6c\" (UID: \"a5e3b453-04ec-438a-afb8-e162baa32e8d\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edfp6c" Nov 21 20:20:19 crc kubenswrapper[4727]: I1121 20:20:19.153772 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edfp6c" Nov 21 20:20:19 crc kubenswrapper[4727]: I1121 20:20:19.606780 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edfp6c"] Nov 21 20:20:20 crc kubenswrapper[4727]: I1121 20:20:20.487374 4727 generic.go:334] "Generic (PLEG): container finished" podID="a5e3b453-04ec-438a-afb8-e162baa32e8d" containerID="9c8597e3fbbbb0dc982775a4eeb86b95e4f0f74d95182d000c93bbbe0aa1ccfa" exitCode=0 Nov 21 20:20:20 crc kubenswrapper[4727]: I1121 20:20:20.487686 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edfp6c" event={"ID":"a5e3b453-04ec-438a-afb8-e162baa32e8d","Type":"ContainerDied","Data":"9c8597e3fbbbb0dc982775a4eeb86b95e4f0f74d95182d000c93bbbe0aa1ccfa"} Nov 21 20:20:20 crc kubenswrapper[4727]: I1121 20:20:20.487887 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edfp6c" event={"ID":"a5e3b453-04ec-438a-afb8-e162baa32e8d","Type":"ContainerStarted","Data":"8b62e91ff363de78cffa68b52fedb8942dcb3d89d9799d051666073e44564b63"} Nov 21 20:20:22 crc kubenswrapper[4727]: I1121 20:20:22.501526 4727 generic.go:334] "Generic (PLEG): container finished" podID="a5e3b453-04ec-438a-afb8-e162baa32e8d" containerID="48c5faa6d40822835cae6a27df7936a3425c442be3c9099b30105a9d65cc1dd5" exitCode=0 Nov 21 20:20:22 crc kubenswrapper[4727]: I1121 20:20:22.501671 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edfp6c" event={"ID":"a5e3b453-04ec-438a-afb8-e162baa32e8d","Type":"ContainerDied","Data":"48c5faa6d40822835cae6a27df7936a3425c442be3c9099b30105a9d65cc1dd5"} Nov 21 20:20:22 crc kubenswrapper[4727]: I1121 20:20:22.784395 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kjv24"] Nov 21 20:20:22 crc kubenswrapper[4727]: I1121 20:20:22.785832 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kjv24" Nov 21 20:20:22 crc kubenswrapper[4727]: I1121 20:20:22.790204 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kjv24"] Nov 21 20:20:22 crc kubenswrapper[4727]: I1121 20:20:22.847775 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvpkq\" (UniqueName: \"kubernetes.io/projected/3fc90623-8517-4cc1-a4f0-50aad4b2dcab-kube-api-access-nvpkq\") pod \"certified-operators-kjv24\" (UID: \"3fc90623-8517-4cc1-a4f0-50aad4b2dcab\") " pod="openshift-marketplace/certified-operators-kjv24" Nov 21 20:20:22 crc kubenswrapper[4727]: I1121 20:20:22.847867 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3fc90623-8517-4cc1-a4f0-50aad4b2dcab-catalog-content\") pod \"certified-operators-kjv24\" (UID: \"3fc90623-8517-4cc1-a4f0-50aad4b2dcab\") " pod="openshift-marketplace/certified-operators-kjv24" Nov 21 20:20:22 crc kubenswrapper[4727]: I1121 20:20:22.847937 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3fc90623-8517-4cc1-a4f0-50aad4b2dcab-utilities\") pod \"certified-operators-kjv24\" (UID: \"3fc90623-8517-4cc1-a4f0-50aad4b2dcab\") " pod="openshift-marketplace/certified-operators-kjv24" Nov 21 20:20:22 crc kubenswrapper[4727]: I1121 20:20:22.949140 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3fc90623-8517-4cc1-a4f0-50aad4b2dcab-utilities\") pod \"certified-operators-kjv24\" (UID: \"3fc90623-8517-4cc1-a4f0-50aad4b2dcab\") " pod="openshift-marketplace/certified-operators-kjv24" Nov 21 20:20:22 crc kubenswrapper[4727]: I1121 20:20:22.949210 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-nvpkq\" (UniqueName: \"kubernetes.io/projected/3fc90623-8517-4cc1-a4f0-50aad4b2dcab-kube-api-access-nvpkq\") pod \"certified-operators-kjv24\" (UID: \"3fc90623-8517-4cc1-a4f0-50aad4b2dcab\") " pod="openshift-marketplace/certified-operators-kjv24" Nov 21 20:20:22 crc kubenswrapper[4727]: I1121 20:20:22.949283 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3fc90623-8517-4cc1-a4f0-50aad4b2dcab-catalog-content\") pod \"certified-operators-kjv24\" (UID: \"3fc90623-8517-4cc1-a4f0-50aad4b2dcab\") " pod="openshift-marketplace/certified-operators-kjv24" Nov 21 20:20:22 crc kubenswrapper[4727]: I1121 20:20:22.949726 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3fc90623-8517-4cc1-a4f0-50aad4b2dcab-catalog-content\") pod \"certified-operators-kjv24\" (UID: \"3fc90623-8517-4cc1-a4f0-50aad4b2dcab\") " pod="openshift-marketplace/certified-operators-kjv24" Nov 21 20:20:22 crc kubenswrapper[4727]: I1121 20:20:22.950028 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3fc90623-8517-4cc1-a4f0-50aad4b2dcab-utilities\") pod \"certified-operators-kjv24\" (UID: \"3fc90623-8517-4cc1-a4f0-50aad4b2dcab\") " pod="openshift-marketplace/certified-operators-kjv24" Nov 21 20:20:22 crc kubenswrapper[4727]: I1121 20:20:22.977368 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvpkq\" (UniqueName: \"kubernetes.io/projected/3fc90623-8517-4cc1-a4f0-50aad4b2dcab-kube-api-access-nvpkq\") pod \"certified-operators-kjv24\" (UID: \"3fc90623-8517-4cc1-a4f0-50aad4b2dcab\") " pod="openshift-marketplace/certified-operators-kjv24" Nov 21 20:20:23 crc kubenswrapper[4727]: I1121 20:20:23.099623 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kjv24" Nov 21 20:20:23 crc kubenswrapper[4727]: I1121 20:20:23.510846 4727 generic.go:334] "Generic (PLEG): container finished" podID="a5e3b453-04ec-438a-afb8-e162baa32e8d" containerID="d444049b04929b7e0d897d47b7c861956e46ba0bf9e3561a547daa0c5e6ada86" exitCode=0 Nov 21 20:20:23 crc kubenswrapper[4727]: I1121 20:20:23.510966 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edfp6c" event={"ID":"a5e3b453-04ec-438a-afb8-e162baa32e8d","Type":"ContainerDied","Data":"d444049b04929b7e0d897d47b7c861956e46ba0bf9e3561a547daa0c5e6ada86"} Nov 21 20:20:23 crc kubenswrapper[4727]: I1121 20:20:23.573196 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kjv24"] Nov 21 20:20:23 crc kubenswrapper[4727]: W1121 20:20:23.580597 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3fc90623_8517_4cc1_a4f0_50aad4b2dcab.slice/crio-2725f3532003b6110f69309d0aa0ba1c8039d451b935376f8050a3a3ce8fd159 WatchSource:0}: Error finding container 2725f3532003b6110f69309d0aa0ba1c8039d451b935376f8050a3a3ce8fd159: Status 404 returned error can't find the container with id 2725f3532003b6110f69309d0aa0ba1c8039d451b935376f8050a3a3ce8fd159 Nov 21 20:20:24 crc kubenswrapper[4727]: I1121 20:20:24.520274 4727 generic.go:334] "Generic (PLEG): container finished" podID="3fc90623-8517-4cc1-a4f0-50aad4b2dcab" containerID="791a3809c80dee6d9e795865352631b0b3272bcc6089c5da6a31d6cf57ba0b3a" exitCode=0 Nov 21 20:20:24 crc kubenswrapper[4727]: I1121 20:20:24.520364 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kjv24" event={"ID":"3fc90623-8517-4cc1-a4f0-50aad4b2dcab","Type":"ContainerDied","Data":"791a3809c80dee6d9e795865352631b0b3272bcc6089c5da6a31d6cf57ba0b3a"} Nov 21 20:20:24 
crc kubenswrapper[4727]: I1121 20:20:24.520669 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kjv24" event={"ID":"3fc90623-8517-4cc1-a4f0-50aad4b2dcab","Type":"ContainerStarted","Data":"2725f3532003b6110f69309d0aa0ba1c8039d451b935376f8050a3a3ce8fd159"} Nov 21 20:20:24 crc kubenswrapper[4727]: I1121 20:20:24.865698 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edfp6c" Nov 21 20:20:24 crc kubenswrapper[4727]: I1121 20:20:24.882096 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a5e3b453-04ec-438a-afb8-e162baa32e8d-bundle\") pod \"a5e3b453-04ec-438a-afb8-e162baa32e8d\" (UID: \"a5e3b453-04ec-438a-afb8-e162baa32e8d\") " Nov 21 20:20:24 crc kubenswrapper[4727]: I1121 20:20:24.882918 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgcn2\" (UniqueName: \"kubernetes.io/projected/a5e3b453-04ec-438a-afb8-e162baa32e8d-kube-api-access-fgcn2\") pod \"a5e3b453-04ec-438a-afb8-e162baa32e8d\" (UID: \"a5e3b453-04ec-438a-afb8-e162baa32e8d\") " Nov 21 20:20:24 crc kubenswrapper[4727]: I1121 20:20:24.883370 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a5e3b453-04ec-438a-afb8-e162baa32e8d-util\") pod \"a5e3b453-04ec-438a-afb8-e162baa32e8d\" (UID: \"a5e3b453-04ec-438a-afb8-e162baa32e8d\") " Nov 21 20:20:24 crc kubenswrapper[4727]: I1121 20:20:24.883746 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5e3b453-04ec-438a-afb8-e162baa32e8d-bundle" (OuterVolumeSpecName: "bundle") pod "a5e3b453-04ec-438a-afb8-e162baa32e8d" (UID: "a5e3b453-04ec-438a-afb8-e162baa32e8d"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:20:24 crc kubenswrapper[4727]: I1121 20:20:24.892296 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5e3b453-04ec-438a-afb8-e162baa32e8d-kube-api-access-fgcn2" (OuterVolumeSpecName: "kube-api-access-fgcn2") pod "a5e3b453-04ec-438a-afb8-e162baa32e8d" (UID: "a5e3b453-04ec-438a-afb8-e162baa32e8d"). InnerVolumeSpecName "kube-api-access-fgcn2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:20:24 crc kubenswrapper[4727]: I1121 20:20:24.914128 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5e3b453-04ec-438a-afb8-e162baa32e8d-util" (OuterVolumeSpecName: "util") pod "a5e3b453-04ec-438a-afb8-e162baa32e8d" (UID: "a5e3b453-04ec-438a-afb8-e162baa32e8d"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:20:24 crc kubenswrapper[4727]: I1121 20:20:24.986249 4727 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a5e3b453-04ec-438a-afb8-e162baa32e8d-util\") on node \"crc\" DevicePath \"\"" Nov 21 20:20:24 crc kubenswrapper[4727]: I1121 20:20:24.986285 4727 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a5e3b453-04ec-438a-afb8-e162baa32e8d-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:20:24 crc kubenswrapper[4727]: I1121 20:20:24.986295 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgcn2\" (UniqueName: \"kubernetes.io/projected/a5e3b453-04ec-438a-afb8-e162baa32e8d-kube-api-access-fgcn2\") on node \"crc\" DevicePath \"\"" Nov 21 20:20:25 crc kubenswrapper[4727]: I1121 20:20:25.529619 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edfp6c" 
event={"ID":"a5e3b453-04ec-438a-afb8-e162baa32e8d","Type":"ContainerDied","Data":"8b62e91ff363de78cffa68b52fedb8942dcb3d89d9799d051666073e44564b63"} Nov 21 20:20:25 crc kubenswrapper[4727]: I1121 20:20:25.529663 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b62e91ff363de78cffa68b52fedb8942dcb3d89d9799d051666073e44564b63" Nov 21 20:20:25 crc kubenswrapper[4727]: I1121 20:20:25.529713 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edfp6c" Nov 21 20:20:26 crc kubenswrapper[4727]: I1121 20:20:26.537725 4727 generic.go:334] "Generic (PLEG): container finished" podID="3fc90623-8517-4cc1-a4f0-50aad4b2dcab" containerID="914b7022bbe607e1734f5cab5a457984f3309cffb76318acd0d9252bc29d9e50" exitCode=0 Nov 21 20:20:26 crc kubenswrapper[4727]: I1121 20:20:26.537798 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kjv24" event={"ID":"3fc90623-8517-4cc1-a4f0-50aad4b2dcab","Type":"ContainerDied","Data":"914b7022bbe607e1734f5cab5a457984f3309cffb76318acd0d9252bc29d9e50"} Nov 21 20:20:27 crc kubenswrapper[4727]: I1121 20:20:27.547209 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kjv24" event={"ID":"3fc90623-8517-4cc1-a4f0-50aad4b2dcab","Type":"ContainerStarted","Data":"432dcef5f8ef3f4c7821ed53be5c534a5b10f1bfeb403ae138fc7435e23df35c"} Nov 21 20:20:27 crc kubenswrapper[4727]: I1121 20:20:27.584224 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kjv24" podStartSLOduration=3.199668206 podStartE2EDuration="5.584207816s" podCreationTimestamp="2025-11-21 20:20:22 +0000 UTC" firstStartedPulling="2025-11-21 20:20:24.523653137 +0000 UTC m=+829.709838181" lastFinishedPulling="2025-11-21 20:20:26.908192727 +0000 UTC m=+832.094377791" 
observedRunningTime="2025-11-21 20:20:27.582678098 +0000 UTC m=+832.768863162" watchObservedRunningTime="2025-11-21 20:20:27.584207816 +0000 UTC m=+832.770392860" Nov 21 20:20:29 crc kubenswrapper[4727]: I1121 20:20:29.716134 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-4djhq"] Nov 21 20:20:29 crc kubenswrapper[4727]: E1121 20:20:29.716808 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5e3b453-04ec-438a-afb8-e162baa32e8d" containerName="extract" Nov 21 20:20:29 crc kubenswrapper[4727]: I1121 20:20:29.716828 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5e3b453-04ec-438a-afb8-e162baa32e8d" containerName="extract" Nov 21 20:20:29 crc kubenswrapper[4727]: E1121 20:20:29.716863 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5e3b453-04ec-438a-afb8-e162baa32e8d" containerName="pull" Nov 21 20:20:29 crc kubenswrapper[4727]: I1121 20:20:29.716876 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5e3b453-04ec-438a-afb8-e162baa32e8d" containerName="pull" Nov 21 20:20:29 crc kubenswrapper[4727]: E1121 20:20:29.716908 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5e3b453-04ec-438a-afb8-e162baa32e8d" containerName="util" Nov 21 20:20:29 crc kubenswrapper[4727]: I1121 20:20:29.716935 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5e3b453-04ec-438a-afb8-e162baa32e8d" containerName="util" Nov 21 20:20:29 crc kubenswrapper[4727]: I1121 20:20:29.717154 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5e3b453-04ec-438a-afb8-e162baa32e8d" containerName="extract" Nov 21 20:20:29 crc kubenswrapper[4727]: I1121 20:20:29.717941 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-4djhq" Nov 21 20:20:29 crc kubenswrapper[4727]: I1121 20:20:29.721570 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Nov 21 20:20:29 crc kubenswrapper[4727]: I1121 20:20:29.721656 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-bzmw7" Nov 21 20:20:29 crc kubenswrapper[4727]: I1121 20:20:29.725959 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Nov 21 20:20:29 crc kubenswrapper[4727]: I1121 20:20:29.726384 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-4djhq"] Nov 21 20:20:29 crc kubenswrapper[4727]: I1121 20:20:29.853747 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tl5f4\" (UniqueName: \"kubernetes.io/projected/7be0d584-a880-410f-9592-8020cd27eb60-kube-api-access-tl5f4\") pod \"nmstate-operator-557fdffb88-4djhq\" (UID: \"7be0d584-a880-410f-9592-8020cd27eb60\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-4djhq" Nov 21 20:20:29 crc kubenswrapper[4727]: I1121 20:20:29.955106 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tl5f4\" (UniqueName: \"kubernetes.io/projected/7be0d584-a880-410f-9592-8020cd27eb60-kube-api-access-tl5f4\") pod \"nmstate-operator-557fdffb88-4djhq\" (UID: \"7be0d584-a880-410f-9592-8020cd27eb60\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-4djhq" Nov 21 20:20:29 crc kubenswrapper[4727]: I1121 20:20:29.973162 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tl5f4\" (UniqueName: \"kubernetes.io/projected/7be0d584-a880-410f-9592-8020cd27eb60-kube-api-access-tl5f4\") pod \"nmstate-operator-557fdffb88-4djhq\" (UID: 
\"7be0d584-a880-410f-9592-8020cd27eb60\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-4djhq" Nov 21 20:20:30 crc kubenswrapper[4727]: I1121 20:20:30.035841 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-4djhq" Nov 21 20:20:30 crc kubenswrapper[4727]: I1121 20:20:30.563778 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-4djhq"] Nov 21 20:20:31 crc kubenswrapper[4727]: I1121 20:20:31.578368 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-4djhq" event={"ID":"7be0d584-a880-410f-9592-8020cd27eb60","Type":"ContainerStarted","Data":"f43d22ae5b7baf9709f7ea2d2be2b868b121c7f2e7ff1bcd357de40ae6c72afc"} Nov 21 20:20:33 crc kubenswrapper[4727]: I1121 20:20:33.100518 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kjv24" Nov 21 20:20:33 crc kubenswrapper[4727]: I1121 20:20:33.100953 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kjv24" Nov 21 20:20:33 crc kubenswrapper[4727]: I1121 20:20:33.138455 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kjv24" Nov 21 20:20:33 crc kubenswrapper[4727]: I1121 20:20:33.592040 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-4djhq" event={"ID":"7be0d584-a880-410f-9592-8020cd27eb60","Type":"ContainerStarted","Data":"e81672d4d42e3b3085cc905cb0703539cf567a059d7b6af53e440db790e0fdc9"} Nov 21 20:20:33 crc kubenswrapper[4727]: I1121 20:20:33.612444 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-557fdffb88-4djhq" podStartSLOduration=2.589271205 podStartE2EDuration="4.612427481s" podCreationTimestamp="2025-11-21 
20:20:29 +0000 UTC" firstStartedPulling="2025-11-21 20:20:30.568419346 +0000 UTC m=+835.754604390" lastFinishedPulling="2025-11-21 20:20:32.591575632 +0000 UTC m=+837.777760666" observedRunningTime="2025-11-21 20:20:33.608888793 +0000 UTC m=+838.795073857" watchObservedRunningTime="2025-11-21 20:20:33.612427481 +0000 UTC m=+838.798612525" Nov 21 20:20:33 crc kubenswrapper[4727]: I1121 20:20:33.639378 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kjv24" Nov 21 20:20:35 crc kubenswrapper[4727]: I1121 20:20:35.371913 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kjv24"] Nov 21 20:20:35 crc kubenswrapper[4727]: I1121 20:20:35.606927 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kjv24" podUID="3fc90623-8517-4cc1-a4f0-50aad4b2dcab" containerName="registry-server" containerID="cri-o://432dcef5f8ef3f4c7821ed53be5c534a5b10f1bfeb403ae138fc7435e23df35c" gracePeriod=2 Nov 21 20:20:36 crc kubenswrapper[4727]: I1121 20:20:36.152799 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kjv24" Nov 21 20:20:36 crc kubenswrapper[4727]: I1121 20:20:36.259520 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3fc90623-8517-4cc1-a4f0-50aad4b2dcab-catalog-content\") pod \"3fc90623-8517-4cc1-a4f0-50aad4b2dcab\" (UID: \"3fc90623-8517-4cc1-a4f0-50aad4b2dcab\") " Nov 21 20:20:36 crc kubenswrapper[4727]: I1121 20:20:36.259595 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3fc90623-8517-4cc1-a4f0-50aad4b2dcab-utilities\") pod \"3fc90623-8517-4cc1-a4f0-50aad4b2dcab\" (UID: \"3fc90623-8517-4cc1-a4f0-50aad4b2dcab\") " Nov 21 20:20:36 crc kubenswrapper[4727]: I1121 20:20:36.259652 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvpkq\" (UniqueName: \"kubernetes.io/projected/3fc90623-8517-4cc1-a4f0-50aad4b2dcab-kube-api-access-nvpkq\") pod \"3fc90623-8517-4cc1-a4f0-50aad4b2dcab\" (UID: \"3fc90623-8517-4cc1-a4f0-50aad4b2dcab\") " Nov 21 20:20:36 crc kubenswrapper[4727]: I1121 20:20:36.260332 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3fc90623-8517-4cc1-a4f0-50aad4b2dcab-utilities" (OuterVolumeSpecName: "utilities") pod "3fc90623-8517-4cc1-a4f0-50aad4b2dcab" (UID: "3fc90623-8517-4cc1-a4f0-50aad4b2dcab"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:20:36 crc kubenswrapper[4727]: I1121 20:20:36.264667 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fc90623-8517-4cc1-a4f0-50aad4b2dcab-kube-api-access-nvpkq" (OuterVolumeSpecName: "kube-api-access-nvpkq") pod "3fc90623-8517-4cc1-a4f0-50aad4b2dcab" (UID: "3fc90623-8517-4cc1-a4f0-50aad4b2dcab"). InnerVolumeSpecName "kube-api-access-nvpkq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:20:36 crc kubenswrapper[4727]: I1121 20:20:36.304399 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3fc90623-8517-4cc1-a4f0-50aad4b2dcab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3fc90623-8517-4cc1-a4f0-50aad4b2dcab" (UID: "3fc90623-8517-4cc1-a4f0-50aad4b2dcab"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:20:36 crc kubenswrapper[4727]: I1121 20:20:36.361339 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3fc90623-8517-4cc1-a4f0-50aad4b2dcab-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 20:20:36 crc kubenswrapper[4727]: I1121 20:20:36.361376 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3fc90623-8517-4cc1-a4f0-50aad4b2dcab-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 20:20:36 crc kubenswrapper[4727]: I1121 20:20:36.361387 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvpkq\" (UniqueName: \"kubernetes.io/projected/3fc90623-8517-4cc1-a4f0-50aad4b2dcab-kube-api-access-nvpkq\") on node \"crc\" DevicePath \"\"" Nov 21 20:20:36 crc kubenswrapper[4727]: I1121 20:20:36.616738 4727 generic.go:334] "Generic (PLEG): container finished" podID="3fc90623-8517-4cc1-a4f0-50aad4b2dcab" containerID="432dcef5f8ef3f4c7821ed53be5c534a5b10f1bfeb403ae138fc7435e23df35c" exitCode=0 Nov 21 20:20:36 crc kubenswrapper[4727]: I1121 20:20:36.616787 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kjv24" event={"ID":"3fc90623-8517-4cc1-a4f0-50aad4b2dcab","Type":"ContainerDied","Data":"432dcef5f8ef3f4c7821ed53be5c534a5b10f1bfeb403ae138fc7435e23df35c"} Nov 21 20:20:36 crc kubenswrapper[4727]: I1121 20:20:36.616813 4727 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-kjv24" event={"ID":"3fc90623-8517-4cc1-a4f0-50aad4b2dcab","Type":"ContainerDied","Data":"2725f3532003b6110f69309d0aa0ba1c8039d451b935376f8050a3a3ce8fd159"} Nov 21 20:20:36 crc kubenswrapper[4727]: I1121 20:20:36.616836 4727 scope.go:117] "RemoveContainer" containerID="432dcef5f8ef3f4c7821ed53be5c534a5b10f1bfeb403ae138fc7435e23df35c" Nov 21 20:20:36 crc kubenswrapper[4727]: I1121 20:20:36.619431 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kjv24" Nov 21 20:20:36 crc kubenswrapper[4727]: I1121 20:20:36.646371 4727 scope.go:117] "RemoveContainer" containerID="914b7022bbe607e1734f5cab5a457984f3309cffb76318acd0d9252bc29d9e50" Nov 21 20:20:36 crc kubenswrapper[4727]: I1121 20:20:36.659098 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kjv24"] Nov 21 20:20:36 crc kubenswrapper[4727]: I1121 20:20:36.662672 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kjv24"] Nov 21 20:20:36 crc kubenswrapper[4727]: I1121 20:20:36.680210 4727 scope.go:117] "RemoveContainer" containerID="791a3809c80dee6d9e795865352631b0b3272bcc6089c5da6a31d6cf57ba0b3a" Nov 21 20:20:36 crc kubenswrapper[4727]: I1121 20:20:36.697252 4727 scope.go:117] "RemoveContainer" containerID="432dcef5f8ef3f4c7821ed53be5c534a5b10f1bfeb403ae138fc7435e23df35c" Nov 21 20:20:36 crc kubenswrapper[4727]: E1121 20:20:36.697633 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"432dcef5f8ef3f4c7821ed53be5c534a5b10f1bfeb403ae138fc7435e23df35c\": container with ID starting with 432dcef5f8ef3f4c7821ed53be5c534a5b10f1bfeb403ae138fc7435e23df35c not found: ID does not exist" containerID="432dcef5f8ef3f4c7821ed53be5c534a5b10f1bfeb403ae138fc7435e23df35c" Nov 21 20:20:36 crc kubenswrapper[4727]: I1121 
20:20:36.697681 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"432dcef5f8ef3f4c7821ed53be5c534a5b10f1bfeb403ae138fc7435e23df35c"} err="failed to get container status \"432dcef5f8ef3f4c7821ed53be5c534a5b10f1bfeb403ae138fc7435e23df35c\": rpc error: code = NotFound desc = could not find container \"432dcef5f8ef3f4c7821ed53be5c534a5b10f1bfeb403ae138fc7435e23df35c\": container with ID starting with 432dcef5f8ef3f4c7821ed53be5c534a5b10f1bfeb403ae138fc7435e23df35c not found: ID does not exist" Nov 21 20:20:36 crc kubenswrapper[4727]: I1121 20:20:36.697707 4727 scope.go:117] "RemoveContainer" containerID="914b7022bbe607e1734f5cab5a457984f3309cffb76318acd0d9252bc29d9e50" Nov 21 20:20:36 crc kubenswrapper[4727]: E1121 20:20:36.698110 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"914b7022bbe607e1734f5cab5a457984f3309cffb76318acd0d9252bc29d9e50\": container with ID starting with 914b7022bbe607e1734f5cab5a457984f3309cffb76318acd0d9252bc29d9e50 not found: ID does not exist" containerID="914b7022bbe607e1734f5cab5a457984f3309cffb76318acd0d9252bc29d9e50" Nov 21 20:20:36 crc kubenswrapper[4727]: I1121 20:20:36.698153 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"914b7022bbe607e1734f5cab5a457984f3309cffb76318acd0d9252bc29d9e50"} err="failed to get container status \"914b7022bbe607e1734f5cab5a457984f3309cffb76318acd0d9252bc29d9e50\": rpc error: code = NotFound desc = could not find container \"914b7022bbe607e1734f5cab5a457984f3309cffb76318acd0d9252bc29d9e50\": container with ID starting with 914b7022bbe607e1734f5cab5a457984f3309cffb76318acd0d9252bc29d9e50 not found: ID does not exist" Nov 21 20:20:36 crc kubenswrapper[4727]: I1121 20:20:36.698184 4727 scope.go:117] "RemoveContainer" containerID="791a3809c80dee6d9e795865352631b0b3272bcc6089c5da6a31d6cf57ba0b3a" Nov 21 20:20:36 crc 
kubenswrapper[4727]: E1121 20:20:36.698481 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"791a3809c80dee6d9e795865352631b0b3272bcc6089c5da6a31d6cf57ba0b3a\": container with ID starting with 791a3809c80dee6d9e795865352631b0b3272bcc6089c5da6a31d6cf57ba0b3a not found: ID does not exist" containerID="791a3809c80dee6d9e795865352631b0b3272bcc6089c5da6a31d6cf57ba0b3a" Nov 21 20:20:36 crc kubenswrapper[4727]: I1121 20:20:36.698511 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"791a3809c80dee6d9e795865352631b0b3272bcc6089c5da6a31d6cf57ba0b3a"} err="failed to get container status \"791a3809c80dee6d9e795865352631b0b3272bcc6089c5da6a31d6cf57ba0b3a\": rpc error: code = NotFound desc = could not find container \"791a3809c80dee6d9e795865352631b0b3272bcc6089c5da6a31d6cf57ba0b3a\": container with ID starting with 791a3809c80dee6d9e795865352631b0b3272bcc6089c5da6a31d6cf57ba0b3a not found: ID does not exist" Nov 21 20:20:37 crc kubenswrapper[4727]: I1121 20:20:37.507214 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fc90623-8517-4cc1-a4f0-50aad4b2dcab" path="/var/lib/kubelet/pods/3fc90623-8517-4cc1-a4f0-50aad4b2dcab/volumes" Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.001638 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-blwbm"] Nov 21 20:20:40 crc kubenswrapper[4727]: E1121 20:20:40.002438 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fc90623-8517-4cc1-a4f0-50aad4b2dcab" containerName="extract-content" Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.002453 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fc90623-8517-4cc1-a4f0-50aad4b2dcab" containerName="extract-content" Nov 21 20:20:40 crc kubenswrapper[4727]: E1121 20:20:40.002480 4727 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="3fc90623-8517-4cc1-a4f0-50aad4b2dcab" containerName="registry-server" Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.002487 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fc90623-8517-4cc1-a4f0-50aad4b2dcab" containerName="registry-server" Nov 21 20:20:40 crc kubenswrapper[4727]: E1121 20:20:40.002522 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fc90623-8517-4cc1-a4f0-50aad4b2dcab" containerName="extract-utilities" Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.002529 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fc90623-8517-4cc1-a4f0-50aad4b2dcab" containerName="extract-utilities" Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.002756 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fc90623-8517-4cc1-a4f0-50aad4b2dcab" containerName="registry-server" Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.004430 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-blwbm" Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.018563 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-blwbm"] Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.020045 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q98nz\" (UniqueName: \"kubernetes.io/projected/80103952-99f2-44aa-b4e0-17e3329d39b5-kube-api-access-q98nz\") pod \"nmstate-metrics-5dcf9c57c5-blwbm\" (UID: \"80103952-99f2-44aa-b4e0-17e3329d39b5\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-blwbm" Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.031952 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-xbm77"] Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.033552 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-xbm77" Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.034042 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-gkdbq" Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.036351 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.055901 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-xbm77"] Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.059063 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-b5lmb"] Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.069102 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-b5lmb" Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.121639 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf28z\" (UniqueName: \"kubernetes.io/projected/28847e06-d0f5-423c-9b74-73ad939413ec-kube-api-access-wf28z\") pod \"nmstate-webhook-6b89b748d8-xbm77\" (UID: \"28847e06-d0f5-423c-9b74-73ad939413ec\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-xbm77" Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.121717 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/28847e06-d0f5-423c-9b74-73ad939413ec-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-xbm77\" (UID: \"28847e06-d0f5-423c-9b74-73ad939413ec\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-xbm77" Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.121754 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" 
(UniqueName: \"kubernetes.io/host-path/19904852-2792-4d5c-92e0-b304589c1eb8-ovs-socket\") pod \"nmstate-handler-b5lmb\" (UID: \"19904852-2792-4d5c-92e0-b304589c1eb8\") " pod="openshift-nmstate/nmstate-handler-b5lmb" Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.121779 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/19904852-2792-4d5c-92e0-b304589c1eb8-nmstate-lock\") pod \"nmstate-handler-b5lmb\" (UID: \"19904852-2792-4d5c-92e0-b304589c1eb8\") " pod="openshift-nmstate/nmstate-handler-b5lmb" Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.121863 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q98nz\" (UniqueName: \"kubernetes.io/projected/80103952-99f2-44aa-b4e0-17e3329d39b5-kube-api-access-q98nz\") pod \"nmstate-metrics-5dcf9c57c5-blwbm\" (UID: \"80103952-99f2-44aa-b4e0-17e3329d39b5\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-blwbm" Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.121887 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/19904852-2792-4d5c-92e0-b304589c1eb8-dbus-socket\") pod \"nmstate-handler-b5lmb\" (UID: \"19904852-2792-4d5c-92e0-b304589c1eb8\") " pod="openshift-nmstate/nmstate-handler-b5lmb" Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.121900 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkn69\" (UniqueName: \"kubernetes.io/projected/19904852-2792-4d5c-92e0-b304589c1eb8-kube-api-access-jkn69\") pod \"nmstate-handler-b5lmb\" (UID: \"19904852-2792-4d5c-92e0-b304589c1eb8\") " pod="openshift-nmstate/nmstate-handler-b5lmb" Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.133443 4727 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-8x69n"] Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.134343 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-8x69n" Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.141318 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.141550 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.141760 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-77zjs" Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.155176 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-8x69n"] Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.182727 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q98nz\" (UniqueName: \"kubernetes.io/projected/80103952-99f2-44aa-b4e0-17e3329d39b5-kube-api-access-q98nz\") pod \"nmstate-metrics-5dcf9c57c5-blwbm\" (UID: \"80103952-99f2-44aa-b4e0-17e3329d39b5\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-blwbm" Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.222986 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/19904852-2792-4d5c-92e0-b304589c1eb8-dbus-socket\") pod \"nmstate-handler-b5lmb\" (UID: \"19904852-2792-4d5c-92e0-b304589c1eb8\") " pod="openshift-nmstate/nmstate-handler-b5lmb" Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.223033 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkn69\" (UniqueName: 
\"kubernetes.io/projected/19904852-2792-4d5c-92e0-b304589c1eb8-kube-api-access-jkn69\") pod \"nmstate-handler-b5lmb\" (UID: \"19904852-2792-4d5c-92e0-b304589c1eb8\") " pod="openshift-nmstate/nmstate-handler-b5lmb" Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.223069 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/12ebd943-fc8a-44a8-b99d-4629bbf01d9f-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-8x69n\" (UID: \"12ebd943-fc8a-44a8-b99d-4629bbf01d9f\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-8x69n" Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.223095 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wf28z\" (UniqueName: \"kubernetes.io/projected/28847e06-d0f5-423c-9b74-73ad939413ec-kube-api-access-wf28z\") pod \"nmstate-webhook-6b89b748d8-xbm77\" (UID: \"28847e06-d0f5-423c-9b74-73ad939413ec\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-xbm77" Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.223155 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/28847e06-d0f5-423c-9b74-73ad939413ec-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-xbm77\" (UID: \"28847e06-d0f5-423c-9b74-73ad939413ec\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-xbm77" Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.223181 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb9rw\" (UniqueName: \"kubernetes.io/projected/12ebd943-fc8a-44a8-b99d-4629bbf01d9f-kube-api-access-wb9rw\") pod \"nmstate-console-plugin-5874bd7bc5-8x69n\" (UID: \"12ebd943-fc8a-44a8-b99d-4629bbf01d9f\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-8x69n" Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.223204 4727 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/19904852-2792-4d5c-92e0-b304589c1eb8-ovs-socket\") pod \"nmstate-handler-b5lmb\" (UID: \"19904852-2792-4d5c-92e0-b304589c1eb8\") " pod="openshift-nmstate/nmstate-handler-b5lmb" Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.223225 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/12ebd943-fc8a-44a8-b99d-4629bbf01d9f-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-8x69n\" (UID: \"12ebd943-fc8a-44a8-b99d-4629bbf01d9f\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-8x69n" Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.223247 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/19904852-2792-4d5c-92e0-b304589c1eb8-nmstate-lock\") pod \"nmstate-handler-b5lmb\" (UID: \"19904852-2792-4d5c-92e0-b304589c1eb8\") " pod="openshift-nmstate/nmstate-handler-b5lmb" Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.223330 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/19904852-2792-4d5c-92e0-b304589c1eb8-nmstate-lock\") pod \"nmstate-handler-b5lmb\" (UID: \"19904852-2792-4d5c-92e0-b304589c1eb8\") " pod="openshift-nmstate/nmstate-handler-b5lmb" Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.223552 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/19904852-2792-4d5c-92e0-b304589c1eb8-dbus-socket\") pod \"nmstate-handler-b5lmb\" (UID: \"19904852-2792-4d5c-92e0-b304589c1eb8\") " pod="openshift-nmstate/nmstate-handler-b5lmb" Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.224493 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/19904852-2792-4d5c-92e0-b304589c1eb8-ovs-socket\") pod \"nmstate-handler-b5lmb\" (UID: \"19904852-2792-4d5c-92e0-b304589c1eb8\") " pod="openshift-nmstate/nmstate-handler-b5lmb" Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.227037 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/28847e06-d0f5-423c-9b74-73ad939413ec-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-xbm77\" (UID: \"28847e06-d0f5-423c-9b74-73ad939413ec\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-xbm77" Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.238020 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkn69\" (UniqueName: \"kubernetes.io/projected/19904852-2792-4d5c-92e0-b304589c1eb8-kube-api-access-jkn69\") pod \"nmstate-handler-b5lmb\" (UID: \"19904852-2792-4d5c-92e0-b304589c1eb8\") " pod="openshift-nmstate/nmstate-handler-b5lmb" Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.245426 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wf28z\" (UniqueName: \"kubernetes.io/projected/28847e06-d0f5-423c-9b74-73ad939413ec-kube-api-access-wf28z\") pod \"nmstate-webhook-6b89b748d8-xbm77\" (UID: \"28847e06-d0f5-423c-9b74-73ad939413ec\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-xbm77" Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.324042 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wb9rw\" (UniqueName: \"kubernetes.io/projected/12ebd943-fc8a-44a8-b99d-4629bbf01d9f-kube-api-access-wb9rw\") pod \"nmstate-console-plugin-5874bd7bc5-8x69n\" (UID: \"12ebd943-fc8a-44a8-b99d-4629bbf01d9f\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-8x69n" Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.324121 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/12ebd943-fc8a-44a8-b99d-4629bbf01d9f-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-8x69n\" (UID: \"12ebd943-fc8a-44a8-b99d-4629bbf01d9f\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-8x69n" Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.324201 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/12ebd943-fc8a-44a8-b99d-4629bbf01d9f-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-8x69n\" (UID: \"12ebd943-fc8a-44a8-b99d-4629bbf01d9f\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-8x69n" Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.326007 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/12ebd943-fc8a-44a8-b99d-4629bbf01d9f-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-8x69n\" (UID: \"12ebd943-fc8a-44a8-b99d-4629bbf01d9f\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-8x69n" Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.328118 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-777bc7647-cdqwj"] Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.328951 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-777bc7647-cdqwj" Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.340568 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/12ebd943-fc8a-44a8-b99d-4629bbf01d9f-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-8x69n\" (UID: \"12ebd943-fc8a-44a8-b99d-4629bbf01d9f\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-8x69n" Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.373696 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb9rw\" (UniqueName: \"kubernetes.io/projected/12ebd943-fc8a-44a8-b99d-4629bbf01d9f-kube-api-access-wb9rw\") pod \"nmstate-console-plugin-5874bd7bc5-8x69n\" (UID: \"12ebd943-fc8a-44a8-b99d-4629bbf01d9f\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-8x69n" Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.374102 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-blwbm" Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.387443 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-xbm77" Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.398378 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-b5lmb" Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.425791 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4d5b5d37-a53b-4d18-ac63-f53e823dab2c-oauth-serving-cert\") pod \"console-777bc7647-cdqwj\" (UID: \"4d5b5d37-a53b-4d18-ac63-f53e823dab2c\") " pod="openshift-console/console-777bc7647-cdqwj" Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.425847 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4d5b5d37-a53b-4d18-ac63-f53e823dab2c-console-oauth-config\") pod \"console-777bc7647-cdqwj\" (UID: \"4d5b5d37-a53b-4d18-ac63-f53e823dab2c\") " pod="openshift-console/console-777bc7647-cdqwj" Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.425882 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4d5b5d37-a53b-4d18-ac63-f53e823dab2c-service-ca\") pod \"console-777bc7647-cdqwj\" (UID: \"4d5b5d37-a53b-4d18-ac63-f53e823dab2c\") " pod="openshift-console/console-777bc7647-cdqwj" Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.425897 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d5b5d37-a53b-4d18-ac63-f53e823dab2c-trusted-ca-bundle\") pod \"console-777bc7647-cdqwj\" (UID: \"4d5b5d37-a53b-4d18-ac63-f53e823dab2c\") " pod="openshift-console/console-777bc7647-cdqwj" Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.425940 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4d5b5d37-a53b-4d18-ac63-f53e823dab2c-console-config\") pod 
\"console-777bc7647-cdqwj\" (UID: \"4d5b5d37-a53b-4d18-ac63-f53e823dab2c\") " pod="openshift-console/console-777bc7647-cdqwj" Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.425970 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbqwf\" (UniqueName: \"kubernetes.io/projected/4d5b5d37-a53b-4d18-ac63-f53e823dab2c-kube-api-access-hbqwf\") pod \"console-777bc7647-cdqwj\" (UID: \"4d5b5d37-a53b-4d18-ac63-f53e823dab2c\") " pod="openshift-console/console-777bc7647-cdqwj" Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.426011 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4d5b5d37-a53b-4d18-ac63-f53e823dab2c-console-serving-cert\") pod \"console-777bc7647-cdqwj\" (UID: \"4d5b5d37-a53b-4d18-ac63-f53e823dab2c\") " pod="openshift-console/console-777bc7647-cdqwj" Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.436922 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-777bc7647-cdqwj"] Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.461299 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-8x69n" Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.528185 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4d5b5d37-a53b-4d18-ac63-f53e823dab2c-console-oauth-config\") pod \"console-777bc7647-cdqwj\" (UID: \"4d5b5d37-a53b-4d18-ac63-f53e823dab2c\") " pod="openshift-console/console-777bc7647-cdqwj" Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.528263 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4d5b5d37-a53b-4d18-ac63-f53e823dab2c-service-ca\") pod \"console-777bc7647-cdqwj\" (UID: \"4d5b5d37-a53b-4d18-ac63-f53e823dab2c\") " pod="openshift-console/console-777bc7647-cdqwj" Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.528317 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d5b5d37-a53b-4d18-ac63-f53e823dab2c-trusted-ca-bundle\") pod \"console-777bc7647-cdqwj\" (UID: \"4d5b5d37-a53b-4d18-ac63-f53e823dab2c\") " pod="openshift-console/console-777bc7647-cdqwj" Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.528402 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4d5b5d37-a53b-4d18-ac63-f53e823dab2c-console-config\") pod \"console-777bc7647-cdqwj\" (UID: \"4d5b5d37-a53b-4d18-ac63-f53e823dab2c\") " pod="openshift-console/console-777bc7647-cdqwj" Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.528425 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbqwf\" (UniqueName: \"kubernetes.io/projected/4d5b5d37-a53b-4d18-ac63-f53e823dab2c-kube-api-access-hbqwf\") pod \"console-777bc7647-cdqwj\" (UID: \"4d5b5d37-a53b-4d18-ac63-f53e823dab2c\") " 
pod="openshift-console/console-777bc7647-cdqwj" Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.528504 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4d5b5d37-a53b-4d18-ac63-f53e823dab2c-console-serving-cert\") pod \"console-777bc7647-cdqwj\" (UID: \"4d5b5d37-a53b-4d18-ac63-f53e823dab2c\") " pod="openshift-console/console-777bc7647-cdqwj" Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.528551 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4d5b5d37-a53b-4d18-ac63-f53e823dab2c-oauth-serving-cert\") pod \"console-777bc7647-cdqwj\" (UID: \"4d5b5d37-a53b-4d18-ac63-f53e823dab2c\") " pod="openshift-console/console-777bc7647-cdqwj" Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.529861 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4d5b5d37-a53b-4d18-ac63-f53e823dab2c-console-config\") pod \"console-777bc7647-cdqwj\" (UID: \"4d5b5d37-a53b-4d18-ac63-f53e823dab2c\") " pod="openshift-console/console-777bc7647-cdqwj" Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.530462 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d5b5d37-a53b-4d18-ac63-f53e823dab2c-trusted-ca-bundle\") pod \"console-777bc7647-cdqwj\" (UID: \"4d5b5d37-a53b-4d18-ac63-f53e823dab2c\") " pod="openshift-console/console-777bc7647-cdqwj" Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.530771 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4d5b5d37-a53b-4d18-ac63-f53e823dab2c-oauth-serving-cert\") pod \"console-777bc7647-cdqwj\" (UID: \"4d5b5d37-a53b-4d18-ac63-f53e823dab2c\") " pod="openshift-console/console-777bc7647-cdqwj" Nov 21 
20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.530988 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4d5b5d37-a53b-4d18-ac63-f53e823dab2c-service-ca\") pod \"console-777bc7647-cdqwj\" (UID: \"4d5b5d37-a53b-4d18-ac63-f53e823dab2c\") " pod="openshift-console/console-777bc7647-cdqwj" Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.533673 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4d5b5d37-a53b-4d18-ac63-f53e823dab2c-console-oauth-config\") pod \"console-777bc7647-cdqwj\" (UID: \"4d5b5d37-a53b-4d18-ac63-f53e823dab2c\") " pod="openshift-console/console-777bc7647-cdqwj" Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.535430 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4d5b5d37-a53b-4d18-ac63-f53e823dab2c-console-serving-cert\") pod \"console-777bc7647-cdqwj\" (UID: \"4d5b5d37-a53b-4d18-ac63-f53e823dab2c\") " pod="openshift-console/console-777bc7647-cdqwj" Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.553596 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbqwf\" (UniqueName: \"kubernetes.io/projected/4d5b5d37-a53b-4d18-ac63-f53e823dab2c-kube-api-access-hbqwf\") pod \"console-777bc7647-cdqwj\" (UID: \"4d5b5d37-a53b-4d18-ac63-f53e823dab2c\") " pod="openshift-console/console-777bc7647-cdqwj" Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.645011 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-b5lmb" event={"ID":"19904852-2792-4d5c-92e0-b304589c1eb8","Type":"ContainerStarted","Data":"a028b1b979a642ff29777e8962cec1a327aecfdfa4665e9d94a99c6ac9a61f35"} Nov 21 20:20:40 crc kubenswrapper[4727]: I1121 20:20:40.668362 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-777bc7647-cdqwj" Nov 21 20:20:41 crc kubenswrapper[4727]: I1121 20:20:41.013541 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-xbm77"] Nov 21 20:20:41 crc kubenswrapper[4727]: W1121 20:20:41.015181 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28847e06_d0f5_423c_9b74_73ad939413ec.slice/crio-7299fe32125c1500a41a35a63345cbb98dd19d73410b5039c2b31594875ae690 WatchSource:0}: Error finding container 7299fe32125c1500a41a35a63345cbb98dd19d73410b5039c2b31594875ae690: Status 404 returned error can't find the container with id 7299fe32125c1500a41a35a63345cbb98dd19d73410b5039c2b31594875ae690 Nov 21 20:20:41 crc kubenswrapper[4727]: I1121 20:20:41.029976 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-blwbm"] Nov 21 20:20:41 crc kubenswrapper[4727]: W1121 20:20:41.034393 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80103952_99f2_44aa_b4e0_17e3329d39b5.slice/crio-063a8b5c4e81b7679f6e284f5670a5b406acdd670587a971a1bce301fee0aa71 WatchSource:0}: Error finding container 063a8b5c4e81b7679f6e284f5670a5b406acdd670587a971a1bce301fee0aa71: Status 404 returned error can't find the container with id 063a8b5c4e81b7679f6e284f5670a5b406acdd670587a971a1bce301fee0aa71 Nov 21 20:20:41 crc kubenswrapper[4727]: W1121 20:20:41.135270 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d5b5d37_a53b_4d18_ac63_f53e823dab2c.slice/crio-87278ac7aba7dd967de12712f206fc4a7ddaf5955bd22197563739658400a094 WatchSource:0}: Error finding container 87278ac7aba7dd967de12712f206fc4a7ddaf5955bd22197563739658400a094: Status 404 returned error can't find the container with id 
87278ac7aba7dd967de12712f206fc4a7ddaf5955bd22197563739658400a094 Nov 21 20:20:41 crc kubenswrapper[4727]: I1121 20:20:41.135938 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-777bc7647-cdqwj"] Nov 21 20:20:41 crc kubenswrapper[4727]: W1121 20:20:41.138305 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12ebd943_fc8a_44a8_b99d_4629bbf01d9f.slice/crio-770034fa664115da3fb6013096b6349b35923bd685ecc0bdde8f1d6b3bcec507 WatchSource:0}: Error finding container 770034fa664115da3fb6013096b6349b35923bd685ecc0bdde8f1d6b3bcec507: Status 404 returned error can't find the container with id 770034fa664115da3fb6013096b6349b35923bd685ecc0bdde8f1d6b3bcec507 Nov 21 20:20:41 crc kubenswrapper[4727]: I1121 20:20:41.141741 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-8x69n"] Nov 21 20:20:41 crc kubenswrapper[4727]: I1121 20:20:41.654406 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-blwbm" event={"ID":"80103952-99f2-44aa-b4e0-17e3329d39b5","Type":"ContainerStarted","Data":"063a8b5c4e81b7679f6e284f5670a5b406acdd670587a971a1bce301fee0aa71"} Nov 21 20:20:41 crc kubenswrapper[4727]: I1121 20:20:41.656299 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-8x69n" event={"ID":"12ebd943-fc8a-44a8-b99d-4629bbf01d9f","Type":"ContainerStarted","Data":"770034fa664115da3fb6013096b6349b35923bd685ecc0bdde8f1d6b3bcec507"} Nov 21 20:20:41 crc kubenswrapper[4727]: I1121 20:20:41.658455 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-777bc7647-cdqwj" event={"ID":"4d5b5d37-a53b-4d18-ac63-f53e823dab2c","Type":"ContainerStarted","Data":"f9d24eaa330e498f4977caf8a713f76ce84d446343a6b7a1e0b61e7c651d57db"} Nov 21 20:20:41 crc kubenswrapper[4727]: I1121 20:20:41.658508 
4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-777bc7647-cdqwj" event={"ID":"4d5b5d37-a53b-4d18-ac63-f53e823dab2c","Type":"ContainerStarted","Data":"87278ac7aba7dd967de12712f206fc4a7ddaf5955bd22197563739658400a094"} Nov 21 20:20:41 crc kubenswrapper[4727]: I1121 20:20:41.661913 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-xbm77" event={"ID":"28847e06-d0f5-423c-9b74-73ad939413ec","Type":"ContainerStarted","Data":"7299fe32125c1500a41a35a63345cbb98dd19d73410b5039c2b31594875ae690"} Nov 21 20:20:41 crc kubenswrapper[4727]: I1121 20:20:41.682362 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-777bc7647-cdqwj" podStartSLOduration=1.6823439740000001 podStartE2EDuration="1.682343974s" podCreationTimestamp="2025-11-21 20:20:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:20:41.675031451 +0000 UTC m=+846.861216505" watchObservedRunningTime="2025-11-21 20:20:41.682343974 +0000 UTC m=+846.868529018" Nov 21 20:20:44 crc kubenswrapper[4727]: I1121 20:20:44.687851 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-xbm77" event={"ID":"28847e06-d0f5-423c-9b74-73ad939413ec","Type":"ContainerStarted","Data":"751a3074b1b0c7c5a71d1c64db871fb0905e243902041255a5f4acd24e2a218b"} Nov 21 20:20:44 crc kubenswrapper[4727]: I1121 20:20:44.688555 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-xbm77" Nov 21 20:20:44 crc kubenswrapper[4727]: I1121 20:20:44.689517 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-blwbm" 
event={"ID":"80103952-99f2-44aa-b4e0-17e3329d39b5","Type":"ContainerStarted","Data":"07693e8a3638ef144278e96b0b8191b687abc2de18901edad10fccefd24ba2bb"} Nov 21 20:20:44 crc kubenswrapper[4727]: I1121 20:20:44.691047 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-8x69n" event={"ID":"12ebd943-fc8a-44a8-b99d-4629bbf01d9f","Type":"ContainerStarted","Data":"d44223696e1d41aa1b4ff695348f552cb13ef100780d1a488e9922e13fed2c27"} Nov 21 20:20:44 crc kubenswrapper[4727]: I1121 20:20:44.692236 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-b5lmb" event={"ID":"19904852-2792-4d5c-92e0-b304589c1eb8","Type":"ContainerStarted","Data":"2b393fab1327397d16eb37437188f2f5afcf022bf70a4e6b5076cbfef81802e5"} Nov 21 20:20:44 crc kubenswrapper[4727]: I1121 20:20:44.692655 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-b5lmb" Nov 21 20:20:44 crc kubenswrapper[4727]: I1121 20:20:44.704487 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-xbm77" podStartSLOduration=2.985704711 podStartE2EDuration="5.704472924s" podCreationTimestamp="2025-11-21 20:20:39 +0000 UTC" firstStartedPulling="2025-11-21 20:20:41.021532994 +0000 UTC m=+846.207718038" lastFinishedPulling="2025-11-21 20:20:43.740301207 +0000 UTC m=+848.926486251" observedRunningTime="2025-11-21 20:20:44.702170187 +0000 UTC m=+849.888355231" watchObservedRunningTime="2025-11-21 20:20:44.704472924 +0000 UTC m=+849.890657968" Nov 21 20:20:44 crc kubenswrapper[4727]: I1121 20:20:44.731081 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-8x69n" podStartSLOduration=2.15893971 podStartE2EDuration="4.73106151s" podCreationTimestamp="2025-11-21 20:20:40 +0000 UTC" firstStartedPulling="2025-11-21 20:20:41.148289688 +0000 UTC 
m=+846.334474732" lastFinishedPulling="2025-11-21 20:20:43.720411488 +0000 UTC m=+848.906596532" observedRunningTime="2025-11-21 20:20:44.717263744 +0000 UTC m=+849.903448788" watchObservedRunningTime="2025-11-21 20:20:44.73106151 +0000 UTC m=+849.917246574" Nov 21 20:20:44 crc kubenswrapper[4727]: I1121 20:20:44.734201 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-b5lmb" podStartSLOduration=2.559519077 podStartE2EDuration="5.734185558s" podCreationTimestamp="2025-11-21 20:20:39 +0000 UTC" firstStartedPulling="2025-11-21 20:20:40.544625639 +0000 UTC m=+845.730810683" lastFinishedPulling="2025-11-21 20:20:43.71929212 +0000 UTC m=+848.905477164" observedRunningTime="2025-11-21 20:20:44.72902984 +0000 UTC m=+849.915214884" watchObservedRunningTime="2025-11-21 20:20:44.734185558 +0000 UTC m=+849.920370602" Nov 21 20:20:46 crc kubenswrapper[4727]: I1121 20:20:46.715546 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-blwbm" event={"ID":"80103952-99f2-44aa-b4e0-17e3329d39b5","Type":"ContainerStarted","Data":"64c7dd27c5c09dc41edafdd7a1a5b5be4cb3f54bf433800477adc6a9e6bc5c57"} Nov 21 20:20:46 crc kubenswrapper[4727]: I1121 20:20:46.747351 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-blwbm" podStartSLOduration=2.395061928 podStartE2EDuration="7.747331318s" podCreationTimestamp="2025-11-21 20:20:39 +0000 UTC" firstStartedPulling="2025-11-21 20:20:41.036890898 +0000 UTC m=+846.223075942" lastFinishedPulling="2025-11-21 20:20:46.389160288 +0000 UTC m=+851.575345332" observedRunningTime="2025-11-21 20:20:46.739505753 +0000 UTC m=+851.925690807" watchObservedRunningTime="2025-11-21 20:20:46.747331318 +0000 UTC m=+851.933516362" Nov 21 20:20:50 crc kubenswrapper[4727]: I1121 20:20:50.451664 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-nmstate/nmstate-handler-b5lmb" Nov 21 20:20:50 crc kubenswrapper[4727]: I1121 20:20:50.669219 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-777bc7647-cdqwj" Nov 21 20:20:50 crc kubenswrapper[4727]: I1121 20:20:50.669430 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-777bc7647-cdqwj" Nov 21 20:20:50 crc kubenswrapper[4727]: I1121 20:20:50.673495 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-777bc7647-cdqwj" Nov 21 20:20:50 crc kubenswrapper[4727]: I1121 20:20:50.751734 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-777bc7647-cdqwj" Nov 21 20:20:50 crc kubenswrapper[4727]: I1121 20:20:50.827597 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5bfbd5755b-phmrq"] Nov 21 20:21:00 crc kubenswrapper[4727]: I1121 20:21:00.392935 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-xbm77" Nov 21 20:21:15 crc kubenswrapper[4727]: I1121 20:21:15.865334 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-5bfbd5755b-phmrq" podUID="3606aeab-9115-4b44-9cfb-c9d7ac34e711" containerName="console" containerID="cri-o://57086e3860d2faed48290428443e34277dffc02f36fa0b44f1df1de26ee6a582" gracePeriod=15 Nov 21 20:21:16 crc kubenswrapper[4727]: I1121 20:21:16.270194 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5bfbd5755b-phmrq_3606aeab-9115-4b44-9cfb-c9d7ac34e711/console/0.log" Nov 21 20:21:16 crc kubenswrapper[4727]: I1121 20:21:16.270530 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5bfbd5755b-phmrq" Nov 21 20:21:16 crc kubenswrapper[4727]: I1121 20:21:16.383933 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3606aeab-9115-4b44-9cfb-c9d7ac34e711-console-config\") pod \"3606aeab-9115-4b44-9cfb-c9d7ac34e711\" (UID: \"3606aeab-9115-4b44-9cfb-c9d7ac34e711\") " Nov 21 20:21:16 crc kubenswrapper[4727]: I1121 20:21:16.384038 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpjdk\" (UniqueName: \"kubernetes.io/projected/3606aeab-9115-4b44-9cfb-c9d7ac34e711-kube-api-access-bpjdk\") pod \"3606aeab-9115-4b44-9cfb-c9d7ac34e711\" (UID: \"3606aeab-9115-4b44-9cfb-c9d7ac34e711\") " Nov 21 20:21:16 crc kubenswrapper[4727]: I1121 20:21:16.384080 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3606aeab-9115-4b44-9cfb-c9d7ac34e711-trusted-ca-bundle\") pod \"3606aeab-9115-4b44-9cfb-c9d7ac34e711\" (UID: \"3606aeab-9115-4b44-9cfb-c9d7ac34e711\") " Nov 21 20:21:16 crc kubenswrapper[4727]: I1121 20:21:16.384125 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3606aeab-9115-4b44-9cfb-c9d7ac34e711-console-serving-cert\") pod \"3606aeab-9115-4b44-9cfb-c9d7ac34e711\" (UID: \"3606aeab-9115-4b44-9cfb-c9d7ac34e711\") " Nov 21 20:21:16 crc kubenswrapper[4727]: I1121 20:21:16.384229 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3606aeab-9115-4b44-9cfb-c9d7ac34e711-console-oauth-config\") pod \"3606aeab-9115-4b44-9cfb-c9d7ac34e711\" (UID: \"3606aeab-9115-4b44-9cfb-c9d7ac34e711\") " Nov 21 20:21:16 crc kubenswrapper[4727]: I1121 20:21:16.384265 4727 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3606aeab-9115-4b44-9cfb-c9d7ac34e711-service-ca\") pod \"3606aeab-9115-4b44-9cfb-c9d7ac34e711\" (UID: \"3606aeab-9115-4b44-9cfb-c9d7ac34e711\") " Nov 21 20:21:16 crc kubenswrapper[4727]: I1121 20:21:16.384293 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3606aeab-9115-4b44-9cfb-c9d7ac34e711-oauth-serving-cert\") pod \"3606aeab-9115-4b44-9cfb-c9d7ac34e711\" (UID: \"3606aeab-9115-4b44-9cfb-c9d7ac34e711\") " Nov 21 20:21:16 crc kubenswrapper[4727]: I1121 20:21:16.384867 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3606aeab-9115-4b44-9cfb-c9d7ac34e711-console-config" (OuterVolumeSpecName: "console-config") pod "3606aeab-9115-4b44-9cfb-c9d7ac34e711" (UID: "3606aeab-9115-4b44-9cfb-c9d7ac34e711"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:21:16 crc kubenswrapper[4727]: I1121 20:21:16.384909 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3606aeab-9115-4b44-9cfb-c9d7ac34e711-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "3606aeab-9115-4b44-9cfb-c9d7ac34e711" (UID: "3606aeab-9115-4b44-9cfb-c9d7ac34e711"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:21:16 crc kubenswrapper[4727]: I1121 20:21:16.384943 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3606aeab-9115-4b44-9cfb-c9d7ac34e711-service-ca" (OuterVolumeSpecName: "service-ca") pod "3606aeab-9115-4b44-9cfb-c9d7ac34e711" (UID: "3606aeab-9115-4b44-9cfb-c9d7ac34e711"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:21:16 crc kubenswrapper[4727]: I1121 20:21:16.384978 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3606aeab-9115-4b44-9cfb-c9d7ac34e711-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "3606aeab-9115-4b44-9cfb-c9d7ac34e711" (UID: "3606aeab-9115-4b44-9cfb-c9d7ac34e711"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:21:16 crc kubenswrapper[4727]: I1121 20:21:16.390229 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3606aeab-9115-4b44-9cfb-c9d7ac34e711-kube-api-access-bpjdk" (OuterVolumeSpecName: "kube-api-access-bpjdk") pod "3606aeab-9115-4b44-9cfb-c9d7ac34e711" (UID: "3606aeab-9115-4b44-9cfb-c9d7ac34e711"). InnerVolumeSpecName "kube-api-access-bpjdk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:21:16 crc kubenswrapper[4727]: I1121 20:21:16.390329 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3606aeab-9115-4b44-9cfb-c9d7ac34e711-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "3606aeab-9115-4b44-9cfb-c9d7ac34e711" (UID: "3606aeab-9115-4b44-9cfb-c9d7ac34e711"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:21:16 crc kubenswrapper[4727]: I1121 20:21:16.393036 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3606aeab-9115-4b44-9cfb-c9d7ac34e711-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "3606aeab-9115-4b44-9cfb-c9d7ac34e711" (UID: "3606aeab-9115-4b44-9cfb-c9d7ac34e711"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:21:16 crc kubenswrapper[4727]: I1121 20:21:16.486169 4727 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3606aeab-9115-4b44-9cfb-c9d7ac34e711-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:21:16 crc kubenswrapper[4727]: I1121 20:21:16.486215 4727 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3606aeab-9115-4b44-9cfb-c9d7ac34e711-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 20:21:16 crc kubenswrapper[4727]: I1121 20:21:16.486229 4727 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3606aeab-9115-4b44-9cfb-c9d7ac34e711-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 21 20:21:16 crc kubenswrapper[4727]: I1121 20:21:16.486241 4727 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3606aeab-9115-4b44-9cfb-c9d7ac34e711-service-ca\") on node \"crc\" DevicePath \"\"" Nov 21 20:21:16 crc kubenswrapper[4727]: I1121 20:21:16.486253 4727 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3606aeab-9115-4b44-9cfb-c9d7ac34e711-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 20:21:16 crc kubenswrapper[4727]: I1121 20:21:16.486263 4727 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3606aeab-9115-4b44-9cfb-c9d7ac34e711-console-config\") on node \"crc\" DevicePath \"\"" Nov 21 20:21:16 crc kubenswrapper[4727]: I1121 20:21:16.486276 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpjdk\" (UniqueName: \"kubernetes.io/projected/3606aeab-9115-4b44-9cfb-c9d7ac34e711-kube-api-access-bpjdk\") on node \"crc\" DevicePath \"\"" Nov 21 20:21:16 crc 
kubenswrapper[4727]: I1121 20:21:16.952453 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5bfbd5755b-phmrq_3606aeab-9115-4b44-9cfb-c9d7ac34e711/console/0.log" Nov 21 20:21:16 crc kubenswrapper[4727]: I1121 20:21:16.952514 4727 generic.go:334] "Generic (PLEG): container finished" podID="3606aeab-9115-4b44-9cfb-c9d7ac34e711" containerID="57086e3860d2faed48290428443e34277dffc02f36fa0b44f1df1de26ee6a582" exitCode=2 Nov 21 20:21:16 crc kubenswrapper[4727]: I1121 20:21:16.952548 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5bfbd5755b-phmrq" event={"ID":"3606aeab-9115-4b44-9cfb-c9d7ac34e711","Type":"ContainerDied","Data":"57086e3860d2faed48290428443e34277dffc02f36fa0b44f1df1de26ee6a582"} Nov 21 20:21:16 crc kubenswrapper[4727]: I1121 20:21:16.952580 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5bfbd5755b-phmrq" event={"ID":"3606aeab-9115-4b44-9cfb-c9d7ac34e711","Type":"ContainerDied","Data":"88165e32d6a70810d696b5f374d5e27d55950d16fb05f260c85ea43ea9fa086f"} Nov 21 20:21:16 crc kubenswrapper[4727]: I1121 20:21:16.952599 4727 scope.go:117] "RemoveContainer" containerID="57086e3860d2faed48290428443e34277dffc02f36fa0b44f1df1de26ee6a582" Nov 21 20:21:16 crc kubenswrapper[4727]: I1121 20:21:16.952598 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5bfbd5755b-phmrq" Nov 21 20:21:16 crc kubenswrapper[4727]: I1121 20:21:16.984808 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5bfbd5755b-phmrq"] Nov 21 20:21:16 crc kubenswrapper[4727]: I1121 20:21:16.986723 4727 scope.go:117] "RemoveContainer" containerID="57086e3860d2faed48290428443e34277dffc02f36fa0b44f1df1de26ee6a582" Nov 21 20:21:16 crc kubenswrapper[4727]: E1121 20:21:16.987109 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57086e3860d2faed48290428443e34277dffc02f36fa0b44f1df1de26ee6a582\": container with ID starting with 57086e3860d2faed48290428443e34277dffc02f36fa0b44f1df1de26ee6a582 not found: ID does not exist" containerID="57086e3860d2faed48290428443e34277dffc02f36fa0b44f1df1de26ee6a582" Nov 21 20:21:16 crc kubenswrapper[4727]: I1121 20:21:16.987137 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57086e3860d2faed48290428443e34277dffc02f36fa0b44f1df1de26ee6a582"} err="failed to get container status \"57086e3860d2faed48290428443e34277dffc02f36fa0b44f1df1de26ee6a582\": rpc error: code = NotFound desc = could not find container \"57086e3860d2faed48290428443e34277dffc02f36fa0b44f1df1de26ee6a582\": container with ID starting with 57086e3860d2faed48290428443e34277dffc02f36fa0b44f1df1de26ee6a582 not found: ID does not exist" Nov 21 20:21:16 crc kubenswrapper[4727]: I1121 20:21:16.991888 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-5bfbd5755b-phmrq"] Nov 21 20:21:17 crc kubenswrapper[4727]: I1121 20:21:17.508185 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3606aeab-9115-4b44-9cfb-c9d7ac34e711" path="/var/lib/kubelet/pods/3606aeab-9115-4b44-9cfb-c9d7ac34e711/volumes" Nov 21 20:21:17 crc kubenswrapper[4727]: I1121 20:21:17.758397 4727 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6wrhmx"] Nov 21 20:21:17 crc kubenswrapper[4727]: E1121 20:21:17.758735 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3606aeab-9115-4b44-9cfb-c9d7ac34e711" containerName="console" Nov 21 20:21:17 crc kubenswrapper[4727]: I1121 20:21:17.758751 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="3606aeab-9115-4b44-9cfb-c9d7ac34e711" containerName="console" Nov 21 20:21:17 crc kubenswrapper[4727]: I1121 20:21:17.758895 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="3606aeab-9115-4b44-9cfb-c9d7ac34e711" containerName="console" Nov 21 20:21:17 crc kubenswrapper[4727]: I1121 20:21:17.760008 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6wrhmx" Nov 21 20:21:17 crc kubenswrapper[4727]: I1121 20:21:17.762824 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 21 20:21:17 crc kubenswrapper[4727]: I1121 20:21:17.811997 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6wrhmx"] Nov 21 20:21:17 crc kubenswrapper[4727]: I1121 20:21:17.912185 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wntks\" (UniqueName: \"kubernetes.io/projected/1afc1382-b90a-439e-bd7f-3ee0d42a547f-kube-api-access-wntks\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6wrhmx\" (UID: \"1afc1382-b90a-439e-bd7f-3ee0d42a547f\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6wrhmx" Nov 21 20:21:17 crc kubenswrapper[4727]: I1121 20:21:17.912254 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/1afc1382-b90a-439e-bd7f-3ee0d42a547f-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6wrhmx\" (UID: \"1afc1382-b90a-439e-bd7f-3ee0d42a547f\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6wrhmx" Nov 21 20:21:17 crc kubenswrapper[4727]: I1121 20:21:17.912338 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1afc1382-b90a-439e-bd7f-3ee0d42a547f-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6wrhmx\" (UID: \"1afc1382-b90a-439e-bd7f-3ee0d42a547f\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6wrhmx" Nov 21 20:21:18 crc kubenswrapper[4727]: I1121 20:21:18.014230 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wntks\" (UniqueName: \"kubernetes.io/projected/1afc1382-b90a-439e-bd7f-3ee0d42a547f-kube-api-access-wntks\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6wrhmx\" (UID: \"1afc1382-b90a-439e-bd7f-3ee0d42a547f\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6wrhmx" Nov 21 20:21:18 crc kubenswrapper[4727]: I1121 20:21:18.014279 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1afc1382-b90a-439e-bd7f-3ee0d42a547f-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6wrhmx\" (UID: \"1afc1382-b90a-439e-bd7f-3ee0d42a547f\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6wrhmx" Nov 21 20:21:18 crc kubenswrapper[4727]: I1121 20:21:18.014331 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1afc1382-b90a-439e-bd7f-3ee0d42a547f-bundle\") pod 
\"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6wrhmx\" (UID: \"1afc1382-b90a-439e-bd7f-3ee0d42a547f\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6wrhmx" Nov 21 20:21:18 crc kubenswrapper[4727]: I1121 20:21:18.014805 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1afc1382-b90a-439e-bd7f-3ee0d42a547f-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6wrhmx\" (UID: \"1afc1382-b90a-439e-bd7f-3ee0d42a547f\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6wrhmx" Nov 21 20:21:18 crc kubenswrapper[4727]: I1121 20:21:18.014975 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1afc1382-b90a-439e-bd7f-3ee0d42a547f-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6wrhmx\" (UID: \"1afc1382-b90a-439e-bd7f-3ee0d42a547f\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6wrhmx" Nov 21 20:21:18 crc kubenswrapper[4727]: I1121 20:21:18.038004 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wntks\" (UniqueName: \"kubernetes.io/projected/1afc1382-b90a-439e-bd7f-3ee0d42a547f-kube-api-access-wntks\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6wrhmx\" (UID: \"1afc1382-b90a-439e-bd7f-3ee0d42a547f\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6wrhmx" Nov 21 20:21:18 crc kubenswrapper[4727]: I1121 20:21:18.084654 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6wrhmx" Nov 21 20:21:18 crc kubenswrapper[4727]: I1121 20:21:18.288810 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6wrhmx"] Nov 21 20:21:18 crc kubenswrapper[4727]: I1121 20:21:18.968736 4727 generic.go:334] "Generic (PLEG): container finished" podID="1afc1382-b90a-439e-bd7f-3ee0d42a547f" containerID="b86380cbca86df4d7ae21ccb40580062cb270b81c8c11dda9a3f308a6b7b8d86" exitCode=0 Nov 21 20:21:18 crc kubenswrapper[4727]: I1121 20:21:18.968805 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6wrhmx" event={"ID":"1afc1382-b90a-439e-bd7f-3ee0d42a547f","Type":"ContainerDied","Data":"b86380cbca86df4d7ae21ccb40580062cb270b81c8c11dda9a3f308a6b7b8d86"} Nov 21 20:21:18 crc kubenswrapper[4727]: I1121 20:21:18.969113 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6wrhmx" event={"ID":"1afc1382-b90a-439e-bd7f-3ee0d42a547f","Type":"ContainerStarted","Data":"a3503d4d33876bf16adb54e8c04cca676e5224e8429f6a4a6d4a68b1b7e8aa4b"} Nov 21 20:21:18 crc kubenswrapper[4727]: I1121 20:21:18.971598 4727 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 21 20:21:20 crc kubenswrapper[4727]: I1121 20:21:20.987813 4727 generic.go:334] "Generic (PLEG): container finished" podID="1afc1382-b90a-439e-bd7f-3ee0d42a547f" containerID="f6addbfbc7cf771b4e0e14d441e67834c2b3a5d801d80b6fbf8ea152b8d5a8dc" exitCode=0 Nov 21 20:21:20 crc kubenswrapper[4727]: I1121 20:21:20.987935 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6wrhmx" 
event={"ID":"1afc1382-b90a-439e-bd7f-3ee0d42a547f","Type":"ContainerDied","Data":"f6addbfbc7cf771b4e0e14d441e67834c2b3a5d801d80b6fbf8ea152b8d5a8dc"} Nov 21 20:21:22 crc kubenswrapper[4727]: I1121 20:21:22.028185 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6wrhmx" event={"ID":"1afc1382-b90a-439e-bd7f-3ee0d42a547f","Type":"ContainerDied","Data":"3ccfdb8102756600200f918f590cc31d1b5b9f6e6773c1edfe29ab2262c599e4"} Nov 21 20:21:22 crc kubenswrapper[4727]: I1121 20:21:22.028428 4727 generic.go:334] "Generic (PLEG): container finished" podID="1afc1382-b90a-439e-bd7f-3ee0d42a547f" containerID="3ccfdb8102756600200f918f590cc31d1b5b9f6e6773c1edfe29ab2262c599e4" exitCode=0 Nov 21 20:21:23 crc kubenswrapper[4727]: I1121 20:21:23.383556 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6wrhmx" Nov 21 20:21:23 crc kubenswrapper[4727]: I1121 20:21:23.510084 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1afc1382-b90a-439e-bd7f-3ee0d42a547f-util\") pod \"1afc1382-b90a-439e-bd7f-3ee0d42a547f\" (UID: \"1afc1382-b90a-439e-bd7f-3ee0d42a547f\") " Nov 21 20:21:23 crc kubenswrapper[4727]: I1121 20:21:23.510251 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1afc1382-b90a-439e-bd7f-3ee0d42a547f-bundle\") pod \"1afc1382-b90a-439e-bd7f-3ee0d42a547f\" (UID: \"1afc1382-b90a-439e-bd7f-3ee0d42a547f\") " Nov 21 20:21:23 crc kubenswrapper[4727]: I1121 20:21:23.510411 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wntks\" (UniqueName: \"kubernetes.io/projected/1afc1382-b90a-439e-bd7f-3ee0d42a547f-kube-api-access-wntks\") pod \"1afc1382-b90a-439e-bd7f-3ee0d42a547f\" (UID: 
\"1afc1382-b90a-439e-bd7f-3ee0d42a547f\") " Nov 21 20:21:23 crc kubenswrapper[4727]: I1121 20:21:23.512426 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1afc1382-b90a-439e-bd7f-3ee0d42a547f-bundle" (OuterVolumeSpecName: "bundle") pod "1afc1382-b90a-439e-bd7f-3ee0d42a547f" (UID: "1afc1382-b90a-439e-bd7f-3ee0d42a547f"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:21:23 crc kubenswrapper[4727]: I1121 20:21:23.517833 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1afc1382-b90a-439e-bd7f-3ee0d42a547f-kube-api-access-wntks" (OuterVolumeSpecName: "kube-api-access-wntks") pod "1afc1382-b90a-439e-bd7f-3ee0d42a547f" (UID: "1afc1382-b90a-439e-bd7f-3ee0d42a547f"). InnerVolumeSpecName "kube-api-access-wntks". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:21:23 crc kubenswrapper[4727]: I1121 20:21:23.532198 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1afc1382-b90a-439e-bd7f-3ee0d42a547f-util" (OuterVolumeSpecName: "util") pod "1afc1382-b90a-439e-bd7f-3ee0d42a547f" (UID: "1afc1382-b90a-439e-bd7f-3ee0d42a547f"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:21:23 crc kubenswrapper[4727]: I1121 20:21:23.612961 4727 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1afc1382-b90a-439e-bd7f-3ee0d42a547f-util\") on node \"crc\" DevicePath \"\"" Nov 21 20:21:23 crc kubenswrapper[4727]: I1121 20:21:23.613044 4727 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1afc1382-b90a-439e-bd7f-3ee0d42a547f-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:21:23 crc kubenswrapper[4727]: I1121 20:21:23.613060 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wntks\" (UniqueName: \"kubernetes.io/projected/1afc1382-b90a-439e-bd7f-3ee0d42a547f-kube-api-access-wntks\") on node \"crc\" DevicePath \"\"" Nov 21 20:21:24 crc kubenswrapper[4727]: I1121 20:21:24.045468 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6wrhmx" event={"ID":"1afc1382-b90a-439e-bd7f-3ee0d42a547f","Type":"ContainerDied","Data":"a3503d4d33876bf16adb54e8c04cca676e5224e8429f6a4a6d4a68b1b7e8aa4b"} Nov 21 20:21:24 crc kubenswrapper[4727]: I1121 20:21:24.045545 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3503d4d33876bf16adb54e8c04cca676e5224e8429f6a4a6d4a68b1b7e8aa4b" Nov 21 20:21:24 crc kubenswrapper[4727]: I1121 20:21:24.045890 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6wrhmx" Nov 21 20:21:35 crc kubenswrapper[4727]: I1121 20:21:35.851565 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-7d55dd5c97-pxmw2"] Nov 21 20:21:35 crc kubenswrapper[4727]: E1121 20:21:35.852803 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1afc1382-b90a-439e-bd7f-3ee0d42a547f" containerName="extract" Nov 21 20:21:35 crc kubenswrapper[4727]: I1121 20:21:35.852822 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="1afc1382-b90a-439e-bd7f-3ee0d42a547f" containerName="extract" Nov 21 20:21:35 crc kubenswrapper[4727]: E1121 20:21:35.852838 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1afc1382-b90a-439e-bd7f-3ee0d42a547f" containerName="pull" Nov 21 20:21:35 crc kubenswrapper[4727]: I1121 20:21:35.852846 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="1afc1382-b90a-439e-bd7f-3ee0d42a547f" containerName="pull" Nov 21 20:21:35 crc kubenswrapper[4727]: E1121 20:21:35.852863 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1afc1382-b90a-439e-bd7f-3ee0d42a547f" containerName="util" Nov 21 20:21:35 crc kubenswrapper[4727]: I1121 20:21:35.852871 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="1afc1382-b90a-439e-bd7f-3ee0d42a547f" containerName="util" Nov 21 20:21:35 crc kubenswrapper[4727]: I1121 20:21:35.853095 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="1afc1382-b90a-439e-bd7f-3ee0d42a547f" containerName="extract" Nov 21 20:21:35 crc kubenswrapper[4727]: I1121 20:21:35.853822 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7d55dd5c97-pxmw2" Nov 21 20:21:35 crc kubenswrapper[4727]: I1121 20:21:35.855667 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Nov 21 20:21:35 crc kubenswrapper[4727]: I1121 20:21:35.855743 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Nov 21 20:21:35 crc kubenswrapper[4727]: I1121 20:21:35.856040 4727 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-xwn8p" Nov 21 20:21:35 crc kubenswrapper[4727]: I1121 20:21:35.856100 4727 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Nov 21 20:21:35 crc kubenswrapper[4727]: I1121 20:21:35.857252 4727 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Nov 21 20:21:35 crc kubenswrapper[4727]: I1121 20:21:35.882548 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7d55dd5c97-pxmw2"] Nov 21 20:21:35 crc kubenswrapper[4727]: I1121 20:21:35.894429 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmnvc\" (UniqueName: \"kubernetes.io/projected/6764728f-f2fe-4017-99bf-6278910f9fc8-kube-api-access-lmnvc\") pod \"metallb-operator-controller-manager-7d55dd5c97-pxmw2\" (UID: \"6764728f-f2fe-4017-99bf-6278910f9fc8\") " pod="metallb-system/metallb-operator-controller-manager-7d55dd5c97-pxmw2" Nov 21 20:21:35 crc kubenswrapper[4727]: I1121 20:21:35.894701 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6764728f-f2fe-4017-99bf-6278910f9fc8-webhook-cert\") pod 
\"metallb-operator-controller-manager-7d55dd5c97-pxmw2\" (UID: \"6764728f-f2fe-4017-99bf-6278910f9fc8\") " pod="metallb-system/metallb-operator-controller-manager-7d55dd5c97-pxmw2" Nov 21 20:21:35 crc kubenswrapper[4727]: I1121 20:21:35.894760 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6764728f-f2fe-4017-99bf-6278910f9fc8-apiservice-cert\") pod \"metallb-operator-controller-manager-7d55dd5c97-pxmw2\" (UID: \"6764728f-f2fe-4017-99bf-6278910f9fc8\") " pod="metallb-system/metallb-operator-controller-manager-7d55dd5c97-pxmw2" Nov 21 20:21:35 crc kubenswrapper[4727]: I1121 20:21:35.997030 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmnvc\" (UniqueName: \"kubernetes.io/projected/6764728f-f2fe-4017-99bf-6278910f9fc8-kube-api-access-lmnvc\") pod \"metallb-operator-controller-manager-7d55dd5c97-pxmw2\" (UID: \"6764728f-f2fe-4017-99bf-6278910f9fc8\") " pod="metallb-system/metallb-operator-controller-manager-7d55dd5c97-pxmw2" Nov 21 20:21:35 crc kubenswrapper[4727]: I1121 20:21:35.997209 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6764728f-f2fe-4017-99bf-6278910f9fc8-webhook-cert\") pod \"metallb-operator-controller-manager-7d55dd5c97-pxmw2\" (UID: \"6764728f-f2fe-4017-99bf-6278910f9fc8\") " pod="metallb-system/metallb-operator-controller-manager-7d55dd5c97-pxmw2" Nov 21 20:21:35 crc kubenswrapper[4727]: I1121 20:21:35.998248 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6764728f-f2fe-4017-99bf-6278910f9fc8-apiservice-cert\") pod \"metallb-operator-controller-manager-7d55dd5c97-pxmw2\" (UID: \"6764728f-f2fe-4017-99bf-6278910f9fc8\") " pod="metallb-system/metallb-operator-controller-manager-7d55dd5c97-pxmw2" Nov 21 20:21:36 crc 
kubenswrapper[4727]: I1121 20:21:36.004903 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6764728f-f2fe-4017-99bf-6278910f9fc8-apiservice-cert\") pod \"metallb-operator-controller-manager-7d55dd5c97-pxmw2\" (UID: \"6764728f-f2fe-4017-99bf-6278910f9fc8\") " pod="metallb-system/metallb-operator-controller-manager-7d55dd5c97-pxmw2" Nov 21 20:21:36 crc kubenswrapper[4727]: I1121 20:21:36.013610 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6764728f-f2fe-4017-99bf-6278910f9fc8-webhook-cert\") pod \"metallb-operator-controller-manager-7d55dd5c97-pxmw2\" (UID: \"6764728f-f2fe-4017-99bf-6278910f9fc8\") " pod="metallb-system/metallb-operator-controller-manager-7d55dd5c97-pxmw2" Nov 21 20:21:36 crc kubenswrapper[4727]: I1121 20:21:36.017034 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmnvc\" (UniqueName: \"kubernetes.io/projected/6764728f-f2fe-4017-99bf-6278910f9fc8-kube-api-access-lmnvc\") pod \"metallb-operator-controller-manager-7d55dd5c97-pxmw2\" (UID: \"6764728f-f2fe-4017-99bf-6278910f9fc8\") " pod="metallb-system/metallb-operator-controller-manager-7d55dd5c97-pxmw2" Nov 21 20:21:36 crc kubenswrapper[4727]: I1121 20:21:36.090684 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-795fdb5c8f-4ffhh"] Nov 21 20:21:36 crc kubenswrapper[4727]: I1121 20:21:36.091788 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-795fdb5c8f-4ffhh" Nov 21 20:21:36 crc kubenswrapper[4727]: I1121 20:21:36.103344 4727 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Nov 21 20:21:36 crc kubenswrapper[4727]: I1121 20:21:36.103588 4727 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 21 20:21:36 crc kubenswrapper[4727]: I1121 20:21:36.106175 4727 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-k27p6" Nov 21 20:21:36 crc kubenswrapper[4727]: I1121 20:21:36.146077 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-795fdb5c8f-4ffhh"] Nov 21 20:21:36 crc kubenswrapper[4727]: I1121 20:21:36.170530 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7d55dd5c97-pxmw2" Nov 21 20:21:36 crc kubenswrapper[4727]: I1121 20:21:36.207244 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/aee0f3a1-6283-4b18-ad30-00cae510da18-apiservice-cert\") pod \"metallb-operator-webhook-server-795fdb5c8f-4ffhh\" (UID: \"aee0f3a1-6283-4b18-ad30-00cae510da18\") " pod="metallb-system/metallb-operator-webhook-server-795fdb5c8f-4ffhh" Nov 21 20:21:36 crc kubenswrapper[4727]: I1121 20:21:36.207332 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6mwr\" (UniqueName: \"kubernetes.io/projected/aee0f3a1-6283-4b18-ad30-00cae510da18-kube-api-access-b6mwr\") pod \"metallb-operator-webhook-server-795fdb5c8f-4ffhh\" (UID: \"aee0f3a1-6283-4b18-ad30-00cae510da18\") " pod="metallb-system/metallb-operator-webhook-server-795fdb5c8f-4ffhh" Nov 21 20:21:36 crc kubenswrapper[4727]: 
I1121 20:21:36.207391 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/aee0f3a1-6283-4b18-ad30-00cae510da18-webhook-cert\") pod \"metallb-operator-webhook-server-795fdb5c8f-4ffhh\" (UID: \"aee0f3a1-6283-4b18-ad30-00cae510da18\") " pod="metallb-system/metallb-operator-webhook-server-795fdb5c8f-4ffhh" Nov 21 20:21:36 crc kubenswrapper[4727]: I1121 20:21:36.308722 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/aee0f3a1-6283-4b18-ad30-00cae510da18-apiservice-cert\") pod \"metallb-operator-webhook-server-795fdb5c8f-4ffhh\" (UID: \"aee0f3a1-6283-4b18-ad30-00cae510da18\") " pod="metallb-system/metallb-operator-webhook-server-795fdb5c8f-4ffhh" Nov 21 20:21:36 crc kubenswrapper[4727]: I1121 20:21:36.309176 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6mwr\" (UniqueName: \"kubernetes.io/projected/aee0f3a1-6283-4b18-ad30-00cae510da18-kube-api-access-b6mwr\") pod \"metallb-operator-webhook-server-795fdb5c8f-4ffhh\" (UID: \"aee0f3a1-6283-4b18-ad30-00cae510da18\") " pod="metallb-system/metallb-operator-webhook-server-795fdb5c8f-4ffhh" Nov 21 20:21:36 crc kubenswrapper[4727]: I1121 20:21:36.309230 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/aee0f3a1-6283-4b18-ad30-00cae510da18-webhook-cert\") pod \"metallb-operator-webhook-server-795fdb5c8f-4ffhh\" (UID: \"aee0f3a1-6283-4b18-ad30-00cae510da18\") " pod="metallb-system/metallb-operator-webhook-server-795fdb5c8f-4ffhh" Nov 21 20:21:36 crc kubenswrapper[4727]: I1121 20:21:36.316675 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/aee0f3a1-6283-4b18-ad30-00cae510da18-apiservice-cert\") pod 
\"metallb-operator-webhook-server-795fdb5c8f-4ffhh\" (UID: \"aee0f3a1-6283-4b18-ad30-00cae510da18\") " pod="metallb-system/metallb-operator-webhook-server-795fdb5c8f-4ffhh" Nov 21 20:21:36 crc kubenswrapper[4727]: I1121 20:21:36.327399 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/aee0f3a1-6283-4b18-ad30-00cae510da18-webhook-cert\") pod \"metallb-operator-webhook-server-795fdb5c8f-4ffhh\" (UID: \"aee0f3a1-6283-4b18-ad30-00cae510da18\") " pod="metallb-system/metallb-operator-webhook-server-795fdb5c8f-4ffhh" Nov 21 20:21:36 crc kubenswrapper[4727]: I1121 20:21:36.330337 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6mwr\" (UniqueName: \"kubernetes.io/projected/aee0f3a1-6283-4b18-ad30-00cae510da18-kube-api-access-b6mwr\") pod \"metallb-operator-webhook-server-795fdb5c8f-4ffhh\" (UID: \"aee0f3a1-6283-4b18-ad30-00cae510da18\") " pod="metallb-system/metallb-operator-webhook-server-795fdb5c8f-4ffhh" Nov 21 20:21:36 crc kubenswrapper[4727]: I1121 20:21:36.420372 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-795fdb5c8f-4ffhh" Nov 21 20:21:36 crc kubenswrapper[4727]: I1121 20:21:36.663596 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7d55dd5c97-pxmw2"] Nov 21 20:21:37 crc kubenswrapper[4727]: I1121 20:21:37.062999 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-795fdb5c8f-4ffhh"] Nov 21 20:21:37 crc kubenswrapper[4727]: W1121 20:21:37.066720 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaee0f3a1_6283_4b18_ad30_00cae510da18.slice/crio-a1003a75d9114b0797979acb709fd726fe36ddb2decac15e3a430d12d82571c2 WatchSource:0}: Error finding container a1003a75d9114b0797979acb709fd726fe36ddb2decac15e3a430d12d82571c2: Status 404 returned error can't find the container with id a1003a75d9114b0797979acb709fd726fe36ddb2decac15e3a430d12d82571c2 Nov 21 20:21:37 crc kubenswrapper[4727]: I1121 20:21:37.152752 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-795fdb5c8f-4ffhh" event={"ID":"aee0f3a1-6283-4b18-ad30-00cae510da18","Type":"ContainerStarted","Data":"a1003a75d9114b0797979acb709fd726fe36ddb2decac15e3a430d12d82571c2"} Nov 21 20:21:37 crc kubenswrapper[4727]: I1121 20:21:37.154793 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7d55dd5c97-pxmw2" event={"ID":"6764728f-f2fe-4017-99bf-6278910f9fc8","Type":"ContainerStarted","Data":"b6c83a91cd32a76b7052b85c36dc236664a1d368f57f84be3f8b8f563eed0ec4"} Nov 21 20:21:41 crc kubenswrapper[4727]: I1121 20:21:41.201139 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7d55dd5c97-pxmw2" 
event={"ID":"6764728f-f2fe-4017-99bf-6278910f9fc8","Type":"ContainerStarted","Data":"910c133994f8a05f980f859c7171ebb8f28baf18e3f39cfcfdcd4a6ad536e9b8"} Nov 21 20:21:41 crc kubenswrapper[4727]: I1121 20:21:41.201722 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-7d55dd5c97-pxmw2" Nov 21 20:21:41 crc kubenswrapper[4727]: I1121 20:21:41.227553 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-7d55dd5c97-pxmw2" podStartSLOduration=2.562448395 podStartE2EDuration="6.22753729s" podCreationTimestamp="2025-11-21 20:21:35 +0000 UTC" firstStartedPulling="2025-11-21 20:21:36.690654031 +0000 UTC m=+901.876839075" lastFinishedPulling="2025-11-21 20:21:40.355742926 +0000 UTC m=+905.541927970" observedRunningTime="2025-11-21 20:21:41.223363466 +0000 UTC m=+906.409548520" watchObservedRunningTime="2025-11-21 20:21:41.22753729 +0000 UTC m=+906.413722324" Nov 21 20:21:43 crc kubenswrapper[4727]: I1121 20:21:43.229859 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-795fdb5c8f-4ffhh" event={"ID":"aee0f3a1-6283-4b18-ad30-00cae510da18","Type":"ContainerStarted","Data":"54808e786877fdf998ac0455f5b20f300a19ec1dabd95b86eba0bcac224839e1"} Nov 21 20:21:43 crc kubenswrapper[4727]: I1121 20:21:43.230647 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-795fdb5c8f-4ffhh" Nov 21 20:21:43 crc kubenswrapper[4727]: I1121 20:21:43.256797 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-795fdb5c8f-4ffhh" podStartSLOduration=2.042089209 podStartE2EDuration="7.256781404s" podCreationTimestamp="2025-11-21 20:21:36 +0000 UTC" firstStartedPulling="2025-11-21 20:21:37.068986117 +0000 UTC m=+902.255171161" lastFinishedPulling="2025-11-21 
20:21:42.283678312 +0000 UTC m=+907.469863356" observedRunningTime="2025-11-21 20:21:43.254490576 +0000 UTC m=+908.440675620" watchObservedRunningTime="2025-11-21 20:21:43.256781404 +0000 UTC m=+908.442966448" Nov 21 20:21:56 crc kubenswrapper[4727]: I1121 20:21:56.425986 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-795fdb5c8f-4ffhh" Nov 21 20:22:13 crc kubenswrapper[4727]: I1121 20:22:13.335604 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 20:22:13 crc kubenswrapper[4727]: I1121 20:22:13.336425 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 20:22:16 crc kubenswrapper[4727]: I1121 20:22:16.173083 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-7d55dd5c97-pxmw2" Nov 21 20:22:16 crc kubenswrapper[4727]: I1121 20:22:16.853741 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-pv4hs"] Nov 21 20:22:16 crc kubenswrapper[4727]: I1121 20:22:16.856759 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-pv4hs" Nov 21 20:22:16 crc kubenswrapper[4727]: I1121 20:22:16.859384 4727 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Nov 21 20:22:16 crc kubenswrapper[4727]: I1121 20:22:16.859945 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Nov 21 20:22:16 crc kubenswrapper[4727]: I1121 20:22:16.860992 4727 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-qmd6j" Nov 21 20:22:16 crc kubenswrapper[4727]: I1121 20:22:16.878582 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-2nz5j"] Nov 21 20:22:16 crc kubenswrapper[4727]: I1121 20:22:16.879914 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-2nz5j" Nov 21 20:22:16 crc kubenswrapper[4727]: I1121 20:22:16.881948 4727 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Nov 21 20:22:16 crc kubenswrapper[4727]: I1121 20:22:16.890162 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-2nz5j"] Nov 21 20:22:16 crc kubenswrapper[4727]: I1121 20:22:16.963300 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-pf24d"] Nov 21 20:22:16 crc kubenswrapper[4727]: I1121 20:22:16.964677 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-pf24d" Nov 21 20:22:16 crc kubenswrapper[4727]: I1121 20:22:16.967599 4727 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Nov 21 20:22:16 crc kubenswrapper[4727]: I1121 20:22:16.967929 4727 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-zq4pr" Nov 21 20:22:16 crc kubenswrapper[4727]: I1121 20:22:16.967930 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/81eccf53-ca42-4d0f-b967-1da15a5d817d-frr-startup\") pod \"frr-k8s-pv4hs\" (UID: \"81eccf53-ca42-4d0f-b967-1da15a5d817d\") " pod="metallb-system/frr-k8s-pv4hs" Nov 21 20:22:16 crc kubenswrapper[4727]: I1121 20:22:16.968067 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Nov 21 20:22:16 crc kubenswrapper[4727]: I1121 20:22:16.968121 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7h7r\" (UniqueName: \"kubernetes.io/projected/81eccf53-ca42-4d0f-b967-1da15a5d817d-kube-api-access-s7h7r\") pod \"frr-k8s-pv4hs\" (UID: \"81eccf53-ca42-4d0f-b967-1da15a5d817d\") " pod="metallb-system/frr-k8s-pv4hs" Nov 21 20:22:16 crc kubenswrapper[4727]: I1121 20:22:16.968270 4727 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Nov 21 20:22:16 crc kubenswrapper[4727]: I1121 20:22:16.968267 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/81eccf53-ca42-4d0f-b967-1da15a5d817d-metrics-certs\") pod \"frr-k8s-pv4hs\" (UID: \"81eccf53-ca42-4d0f-b967-1da15a5d817d\") " pod="metallb-system/frr-k8s-pv4hs" Nov 21 20:22:16 crc kubenswrapper[4727]: I1121 20:22:16.968783 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/81eccf53-ca42-4d0f-b967-1da15a5d817d-frr-conf\") pod \"frr-k8s-pv4hs\" (UID: \"81eccf53-ca42-4d0f-b967-1da15a5d817d\") " pod="metallb-system/frr-k8s-pv4hs" Nov 21 20:22:16 crc kubenswrapper[4727]: I1121 20:22:16.968882 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/81eccf53-ca42-4d0f-b967-1da15a5d817d-reloader\") pod \"frr-k8s-pv4hs\" (UID: \"81eccf53-ca42-4d0f-b967-1da15a5d817d\") " pod="metallb-system/frr-k8s-pv4hs" Nov 21 20:22:16 crc kubenswrapper[4727]: I1121 20:22:16.968992 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/81eccf53-ca42-4d0f-b967-1da15a5d817d-metrics\") pod \"frr-k8s-pv4hs\" (UID: \"81eccf53-ca42-4d0f-b967-1da15a5d817d\") " pod="metallb-system/frr-k8s-pv4hs" Nov 21 20:22:16 crc kubenswrapper[4727]: I1121 20:22:16.969070 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/81eccf53-ca42-4d0f-b967-1da15a5d817d-frr-sockets\") pod \"frr-k8s-pv4hs\" (UID: \"81eccf53-ca42-4d0f-b967-1da15a5d817d\") " pod="metallb-system/frr-k8s-pv4hs" Nov 21 20:22:16 crc kubenswrapper[4727]: I1121 20:22:16.975622 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6c7b4b5f48-zjjlc"] Nov 21 20:22:16 crc kubenswrapper[4727]: I1121 20:22:16.977109 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-zjjlc" Nov 21 20:22:16 crc kubenswrapper[4727]: I1121 20:22:16.982582 4727 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Nov 21 20:22:16 crc kubenswrapper[4727]: I1121 20:22:16.992891 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-zjjlc"] Nov 21 20:22:17 crc kubenswrapper[4727]: I1121 20:22:17.070209 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/81eccf53-ca42-4d0f-b967-1da15a5d817d-frr-startup\") pod \"frr-k8s-pv4hs\" (UID: \"81eccf53-ca42-4d0f-b967-1da15a5d817d\") " pod="metallb-system/frr-k8s-pv4hs" Nov 21 20:22:17 crc kubenswrapper[4727]: I1121 20:22:17.070543 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7h7r\" (UniqueName: \"kubernetes.io/projected/81eccf53-ca42-4d0f-b967-1da15a5d817d-kube-api-access-s7h7r\") pod \"frr-k8s-pv4hs\" (UID: \"81eccf53-ca42-4d0f-b967-1da15a5d817d\") " pod="metallb-system/frr-k8s-pv4hs" Nov 21 20:22:17 crc kubenswrapper[4727]: I1121 20:22:17.070647 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bbf29db0-6bed-47d7-af92-d4ab37fb4909-metrics-certs\") pod \"speaker-pf24d\" (UID: \"bbf29db0-6bed-47d7-af92-d4ab37fb4909\") " pod="metallb-system/speaker-pf24d" Nov 21 20:22:17 crc kubenswrapper[4727]: I1121 20:22:17.070750 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/bbf29db0-6bed-47d7-af92-d4ab37fb4909-metallb-excludel2\") pod \"speaker-pf24d\" (UID: \"bbf29db0-6bed-47d7-af92-d4ab37fb4909\") " pod="metallb-system/speaker-pf24d" Nov 21 20:22:17 crc kubenswrapper[4727]: I1121 20:22:17.070869 4727 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/bbf29db0-6bed-47d7-af92-d4ab37fb4909-memberlist\") pod \"speaker-pf24d\" (UID: \"bbf29db0-6bed-47d7-af92-d4ab37fb4909\") " pod="metallb-system/speaker-pf24d" Nov 21 20:22:17 crc kubenswrapper[4727]: I1121 20:22:17.071053 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/81eccf53-ca42-4d0f-b967-1da15a5d817d-metrics-certs\") pod \"frr-k8s-pv4hs\" (UID: \"81eccf53-ca42-4d0f-b967-1da15a5d817d\") " pod="metallb-system/frr-k8s-pv4hs" Nov 21 20:22:17 crc kubenswrapper[4727]: I1121 20:22:17.071205 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/330841f1-7983-496c-8fd8-b1f2aa8f286f-cert\") pod \"frr-k8s-webhook-server-6998585d5-2nz5j\" (UID: \"330841f1-7983-496c-8fd8-b1f2aa8f286f\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-2nz5j" Nov 21 20:22:17 crc kubenswrapper[4727]: I1121 20:22:17.071327 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p22l2\" (UniqueName: \"kubernetes.io/projected/330841f1-7983-496c-8fd8-b1f2aa8f286f-kube-api-access-p22l2\") pod \"frr-k8s-webhook-server-6998585d5-2nz5j\" (UID: \"330841f1-7983-496c-8fd8-b1f2aa8f286f\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-2nz5j" Nov 21 20:22:17 crc kubenswrapper[4727]: E1121 20:22:17.071131 4727 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Nov 21 20:22:17 crc kubenswrapper[4727]: E1121 20:22:17.071538 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/81eccf53-ca42-4d0f-b967-1da15a5d817d-metrics-certs podName:81eccf53-ca42-4d0f-b967-1da15a5d817d nodeName:}" failed. 
No retries permitted until 2025-11-21 20:22:17.571517776 +0000 UTC m=+942.757702820 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/81eccf53-ca42-4d0f-b967-1da15a5d817d-metrics-certs") pod "frr-k8s-pv4hs" (UID: "81eccf53-ca42-4d0f-b967-1da15a5d817d") : secret "frr-k8s-certs-secret" not found Nov 21 20:22:17 crc kubenswrapper[4727]: I1121 20:22:17.071458 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/81eccf53-ca42-4d0f-b967-1da15a5d817d-reloader\") pod \"frr-k8s-pv4hs\" (UID: \"81eccf53-ca42-4d0f-b967-1da15a5d817d\") " pod="metallb-system/frr-k8s-pv4hs" Nov 21 20:22:17 crc kubenswrapper[4727]: I1121 20:22:17.071583 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/81eccf53-ca42-4d0f-b967-1da15a5d817d-frr-conf\") pod \"frr-k8s-pv4hs\" (UID: \"81eccf53-ca42-4d0f-b967-1da15a5d817d\") " pod="metallb-system/frr-k8s-pv4hs" Nov 21 20:22:17 crc kubenswrapper[4727]: I1121 20:22:17.071671 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/81eccf53-ca42-4d0f-b967-1da15a5d817d-metrics\") pod \"frr-k8s-pv4hs\" (UID: \"81eccf53-ca42-4d0f-b967-1da15a5d817d\") " pod="metallb-system/frr-k8s-pv4hs" Nov 21 20:22:17 crc kubenswrapper[4727]: I1121 20:22:17.071735 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/81eccf53-ca42-4d0f-b967-1da15a5d817d-frr-sockets\") pod \"frr-k8s-pv4hs\" (UID: \"81eccf53-ca42-4d0f-b967-1da15a5d817d\") " pod="metallb-system/frr-k8s-pv4hs" Nov 21 20:22:17 crc kubenswrapper[4727]: I1121 20:22:17.071806 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz7zq\" (UniqueName: 
\"kubernetes.io/projected/bbf29db0-6bed-47d7-af92-d4ab37fb4909-kube-api-access-zz7zq\") pod \"speaker-pf24d\" (UID: \"bbf29db0-6bed-47d7-af92-d4ab37fb4909\") " pod="metallb-system/speaker-pf24d" Nov 21 20:22:17 crc kubenswrapper[4727]: I1121 20:22:17.071979 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/81eccf53-ca42-4d0f-b967-1da15a5d817d-frr-conf\") pod \"frr-k8s-pv4hs\" (UID: \"81eccf53-ca42-4d0f-b967-1da15a5d817d\") " pod="metallb-system/frr-k8s-pv4hs" Nov 21 20:22:17 crc kubenswrapper[4727]: I1121 20:22:17.071265 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/81eccf53-ca42-4d0f-b967-1da15a5d817d-frr-startup\") pod \"frr-k8s-pv4hs\" (UID: \"81eccf53-ca42-4d0f-b967-1da15a5d817d\") " pod="metallb-system/frr-k8s-pv4hs" Nov 21 20:22:17 crc kubenswrapper[4727]: I1121 20:22:17.072402 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/81eccf53-ca42-4d0f-b967-1da15a5d817d-frr-sockets\") pod \"frr-k8s-pv4hs\" (UID: \"81eccf53-ca42-4d0f-b967-1da15a5d817d\") " pod="metallb-system/frr-k8s-pv4hs" Nov 21 20:22:17 crc kubenswrapper[4727]: I1121 20:22:17.072409 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/81eccf53-ca42-4d0f-b967-1da15a5d817d-reloader\") pod \"frr-k8s-pv4hs\" (UID: \"81eccf53-ca42-4d0f-b967-1da15a5d817d\") " pod="metallb-system/frr-k8s-pv4hs" Nov 21 20:22:17 crc kubenswrapper[4727]: I1121 20:22:17.072533 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/81eccf53-ca42-4d0f-b967-1da15a5d817d-metrics\") pod \"frr-k8s-pv4hs\" (UID: \"81eccf53-ca42-4d0f-b967-1da15a5d817d\") " pod="metallb-system/frr-k8s-pv4hs" Nov 21 20:22:17 crc kubenswrapper[4727]: I1121 20:22:17.100185 4727 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7h7r\" (UniqueName: \"kubernetes.io/projected/81eccf53-ca42-4d0f-b967-1da15a5d817d-kube-api-access-s7h7r\") pod \"frr-k8s-pv4hs\" (UID: \"81eccf53-ca42-4d0f-b967-1da15a5d817d\") " pod="metallb-system/frr-k8s-pv4hs" Nov 21 20:22:17 crc kubenswrapper[4727]: I1121 20:22:17.173009 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bbf29db0-6bed-47d7-af92-d4ab37fb4909-metrics-certs\") pod \"speaker-pf24d\" (UID: \"bbf29db0-6bed-47d7-af92-d4ab37fb4909\") " pod="metallb-system/speaker-pf24d" Nov 21 20:22:17 crc kubenswrapper[4727]: I1121 20:22:17.173316 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/bbf29db0-6bed-47d7-af92-d4ab37fb4909-metallb-excludel2\") pod \"speaker-pf24d\" (UID: \"bbf29db0-6bed-47d7-af92-d4ab37fb4909\") " pod="metallb-system/speaker-pf24d" Nov 21 20:22:17 crc kubenswrapper[4727]: I1121 20:22:17.173343 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/bbf29db0-6bed-47d7-af92-d4ab37fb4909-memberlist\") pod \"speaker-pf24d\" (UID: \"bbf29db0-6bed-47d7-af92-d4ab37fb4909\") " pod="metallb-system/speaker-pf24d" Nov 21 20:22:17 crc kubenswrapper[4727]: E1121 20:22:17.173453 4727 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 21 20:22:17 crc kubenswrapper[4727]: I1121 20:22:17.174175 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/bbf29db0-6bed-47d7-af92-d4ab37fb4909-metallb-excludel2\") pod \"speaker-pf24d\" (UID: \"bbf29db0-6bed-47d7-af92-d4ab37fb4909\") " pod="metallb-system/speaker-pf24d" Nov 21 20:22:17 crc kubenswrapper[4727]: E1121 20:22:17.174222 4727 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bbf29db0-6bed-47d7-af92-d4ab37fb4909-memberlist podName:bbf29db0-6bed-47d7-af92-d4ab37fb4909 nodeName:}" failed. No retries permitted until 2025-11-21 20:22:17.673487379 +0000 UTC m=+942.859672413 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/bbf29db0-6bed-47d7-af92-d4ab37fb4909-memberlist") pod "speaker-pf24d" (UID: "bbf29db0-6bed-47d7-af92-d4ab37fb4909") : secret "metallb-memberlist" not found Nov 21 20:22:17 crc kubenswrapper[4727]: I1121 20:22:17.174269 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/330841f1-7983-496c-8fd8-b1f2aa8f286f-cert\") pod \"frr-k8s-webhook-server-6998585d5-2nz5j\" (UID: \"330841f1-7983-496c-8fd8-b1f2aa8f286f\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-2nz5j" Nov 21 20:22:17 crc kubenswrapper[4727]: I1121 20:22:17.174296 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p22l2\" (UniqueName: \"kubernetes.io/projected/330841f1-7983-496c-8fd8-b1f2aa8f286f-kube-api-access-p22l2\") pod \"frr-k8s-webhook-server-6998585d5-2nz5j\" (UID: \"330841f1-7983-496c-8fd8-b1f2aa8f286f\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-2nz5j" Nov 21 20:22:17 crc kubenswrapper[4727]: I1121 20:22:17.174351 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4fb283b2-30a9-4708-85e5-3e062e8d3ac5-cert\") pod \"controller-6c7b4b5f48-zjjlc\" (UID: \"4fb283b2-30a9-4708-85e5-3e062e8d3ac5\") " pod="metallb-system/controller-6c7b4b5f48-zjjlc" Nov 21 20:22:17 crc kubenswrapper[4727]: I1121 20:22:17.174380 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zz7zq\" (UniqueName: 
\"kubernetes.io/projected/bbf29db0-6bed-47d7-af92-d4ab37fb4909-kube-api-access-zz7zq\") pod \"speaker-pf24d\" (UID: \"bbf29db0-6bed-47d7-af92-d4ab37fb4909\") " pod="metallb-system/speaker-pf24d" Nov 21 20:22:17 crc kubenswrapper[4727]: I1121 20:22:17.174401 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hjj6\" (UniqueName: \"kubernetes.io/projected/4fb283b2-30a9-4708-85e5-3e062e8d3ac5-kube-api-access-8hjj6\") pod \"controller-6c7b4b5f48-zjjlc\" (UID: \"4fb283b2-30a9-4708-85e5-3e062e8d3ac5\") " pod="metallb-system/controller-6c7b4b5f48-zjjlc" Nov 21 20:22:17 crc kubenswrapper[4727]: I1121 20:22:17.174424 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4fb283b2-30a9-4708-85e5-3e062e8d3ac5-metrics-certs\") pod \"controller-6c7b4b5f48-zjjlc\" (UID: \"4fb283b2-30a9-4708-85e5-3e062e8d3ac5\") " pod="metallb-system/controller-6c7b4b5f48-zjjlc" Nov 21 20:22:17 crc kubenswrapper[4727]: E1121 20:22:17.174562 4727 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Nov 21 20:22:17 crc kubenswrapper[4727]: E1121 20:22:17.174598 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/330841f1-7983-496c-8fd8-b1f2aa8f286f-cert podName:330841f1-7983-496c-8fd8-b1f2aa8f286f nodeName:}" failed. No retries permitted until 2025-11-21 20:22:17.674591836 +0000 UTC m=+942.860776880 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/330841f1-7983-496c-8fd8-b1f2aa8f286f-cert") pod "frr-k8s-webhook-server-6998585d5-2nz5j" (UID: "330841f1-7983-496c-8fd8-b1f2aa8f286f") : secret "frr-k8s-webhook-server-cert" not found Nov 21 20:22:17 crc kubenswrapper[4727]: I1121 20:22:17.179733 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bbf29db0-6bed-47d7-af92-d4ab37fb4909-metrics-certs\") pod \"speaker-pf24d\" (UID: \"bbf29db0-6bed-47d7-af92-d4ab37fb4909\") " pod="metallb-system/speaker-pf24d" Nov 21 20:22:17 crc kubenswrapper[4727]: I1121 20:22:17.193917 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p22l2\" (UniqueName: \"kubernetes.io/projected/330841f1-7983-496c-8fd8-b1f2aa8f286f-kube-api-access-p22l2\") pod \"frr-k8s-webhook-server-6998585d5-2nz5j\" (UID: \"330841f1-7983-496c-8fd8-b1f2aa8f286f\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-2nz5j" Nov 21 20:22:17 crc kubenswrapper[4727]: I1121 20:22:17.194468 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zz7zq\" (UniqueName: \"kubernetes.io/projected/bbf29db0-6bed-47d7-af92-d4ab37fb4909-kube-api-access-zz7zq\") pod \"speaker-pf24d\" (UID: \"bbf29db0-6bed-47d7-af92-d4ab37fb4909\") " pod="metallb-system/speaker-pf24d" Nov 21 20:22:17 crc kubenswrapper[4727]: I1121 20:22:17.275738 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4fb283b2-30a9-4708-85e5-3e062e8d3ac5-cert\") pod \"controller-6c7b4b5f48-zjjlc\" (UID: \"4fb283b2-30a9-4708-85e5-3e062e8d3ac5\") " pod="metallb-system/controller-6c7b4b5f48-zjjlc" Nov 21 20:22:17 crc kubenswrapper[4727]: I1121 20:22:17.276292 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hjj6\" (UniqueName: 
\"kubernetes.io/projected/4fb283b2-30a9-4708-85e5-3e062e8d3ac5-kube-api-access-8hjj6\") pod \"controller-6c7b4b5f48-zjjlc\" (UID: \"4fb283b2-30a9-4708-85e5-3e062e8d3ac5\") " pod="metallb-system/controller-6c7b4b5f48-zjjlc" Nov 21 20:22:17 crc kubenswrapper[4727]: I1121 20:22:17.276319 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4fb283b2-30a9-4708-85e5-3e062e8d3ac5-metrics-certs\") pod \"controller-6c7b4b5f48-zjjlc\" (UID: \"4fb283b2-30a9-4708-85e5-3e062e8d3ac5\") " pod="metallb-system/controller-6c7b4b5f48-zjjlc" Nov 21 20:22:17 crc kubenswrapper[4727]: I1121 20:22:17.280338 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4fb283b2-30a9-4708-85e5-3e062e8d3ac5-cert\") pod \"controller-6c7b4b5f48-zjjlc\" (UID: \"4fb283b2-30a9-4708-85e5-3e062e8d3ac5\") " pod="metallb-system/controller-6c7b4b5f48-zjjlc" Nov 21 20:22:17 crc kubenswrapper[4727]: I1121 20:22:17.285554 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4fb283b2-30a9-4708-85e5-3e062e8d3ac5-metrics-certs\") pod \"controller-6c7b4b5f48-zjjlc\" (UID: \"4fb283b2-30a9-4708-85e5-3e062e8d3ac5\") " pod="metallb-system/controller-6c7b4b5f48-zjjlc" Nov 21 20:22:17 crc kubenswrapper[4727]: I1121 20:22:17.305863 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hjj6\" (UniqueName: \"kubernetes.io/projected/4fb283b2-30a9-4708-85e5-3e062e8d3ac5-kube-api-access-8hjj6\") pod \"controller-6c7b4b5f48-zjjlc\" (UID: \"4fb283b2-30a9-4708-85e5-3e062e8d3ac5\") " pod="metallb-system/controller-6c7b4b5f48-zjjlc" Nov 21 20:22:17 crc kubenswrapper[4727]: I1121 20:22:17.580944 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/81eccf53-ca42-4d0f-b967-1da15a5d817d-metrics-certs\") 
pod \"frr-k8s-pv4hs\" (UID: \"81eccf53-ca42-4d0f-b967-1da15a5d817d\") " pod="metallb-system/frr-k8s-pv4hs" Nov 21 20:22:17 crc kubenswrapper[4727]: I1121 20:22:17.584565 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/81eccf53-ca42-4d0f-b967-1da15a5d817d-metrics-certs\") pod \"frr-k8s-pv4hs\" (UID: \"81eccf53-ca42-4d0f-b967-1da15a5d817d\") " pod="metallb-system/frr-k8s-pv4hs" Nov 21 20:22:17 crc kubenswrapper[4727]: I1121 20:22:17.596021 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-zjjlc" Nov 21 20:22:17 crc kubenswrapper[4727]: I1121 20:22:17.683948 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/bbf29db0-6bed-47d7-af92-d4ab37fb4909-memberlist\") pod \"speaker-pf24d\" (UID: \"bbf29db0-6bed-47d7-af92-d4ab37fb4909\") " pod="metallb-system/speaker-pf24d" Nov 21 20:22:17 crc kubenswrapper[4727]: I1121 20:22:17.684289 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/330841f1-7983-496c-8fd8-b1f2aa8f286f-cert\") pod \"frr-k8s-webhook-server-6998585d5-2nz5j\" (UID: \"330841f1-7983-496c-8fd8-b1f2aa8f286f\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-2nz5j" Nov 21 20:22:17 crc kubenswrapper[4727]: E1121 20:22:17.684157 4727 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 21 20:22:17 crc kubenswrapper[4727]: E1121 20:22:17.684426 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bbf29db0-6bed-47d7-af92-d4ab37fb4909-memberlist podName:bbf29db0-6bed-47d7-af92-d4ab37fb4909 nodeName:}" failed. No retries permitted until 2025-11-21 20:22:18.684398501 +0000 UTC m=+943.870583615 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/bbf29db0-6bed-47d7-af92-d4ab37fb4909-memberlist") pod "speaker-pf24d" (UID: "bbf29db0-6bed-47d7-af92-d4ab37fb4909") : secret "metallb-memberlist" not found Nov 21 20:22:17 crc kubenswrapper[4727]: I1121 20:22:17.687712 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/330841f1-7983-496c-8fd8-b1f2aa8f286f-cert\") pod \"frr-k8s-webhook-server-6998585d5-2nz5j\" (UID: \"330841f1-7983-496c-8fd8-b1f2aa8f286f\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-2nz5j" Nov 21 20:22:17 crc kubenswrapper[4727]: I1121 20:22:17.781378 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-pv4hs" Nov 21 20:22:17 crc kubenswrapper[4727]: I1121 20:22:17.804918 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-2nz5j" Nov 21 20:22:18 crc kubenswrapper[4727]: I1121 20:22:18.004240 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-zjjlc"] Nov 21 20:22:18 crc kubenswrapper[4727]: W1121 20:22:18.012641 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fb283b2_30a9_4708_85e5_3e062e8d3ac5.slice/crio-593774b06496ac63cffd5c8bb42136e39037b4e9ebcefa0bfd91ea9f780dd2a9 WatchSource:0}: Error finding container 593774b06496ac63cffd5c8bb42136e39037b4e9ebcefa0bfd91ea9f780dd2a9: Status 404 returned error can't find the container with id 593774b06496ac63cffd5c8bb42136e39037b4e9ebcefa0bfd91ea9f780dd2a9 Nov 21 20:22:18 crc kubenswrapper[4727]: I1121 20:22:18.232940 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-2nz5j"] Nov 21 20:22:18 crc kubenswrapper[4727]: W1121 20:22:18.242131 4727 manager.go:1169] Failed to process watch 
event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod330841f1_7983_496c_8fd8_b1f2aa8f286f.slice/crio-a4bad6e6e66dc17d2bb4d343a68a894069fc787e5f96d4999d7b158869e0b4dd WatchSource:0}: Error finding container a4bad6e6e66dc17d2bb4d343a68a894069fc787e5f96d4999d7b158869e0b4dd: Status 404 returned error can't find the container with id a4bad6e6e66dc17d2bb4d343a68a894069fc787e5f96d4999d7b158869e0b4dd Nov 21 20:22:18 crc kubenswrapper[4727]: I1121 20:22:18.493160 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-zjjlc" event={"ID":"4fb283b2-30a9-4708-85e5-3e062e8d3ac5","Type":"ContainerStarted","Data":"27b5a75b47cafe7360502558bc6e6b482cd1e4939c27618edf01e9048ce6e7a3"} Nov 21 20:22:18 crc kubenswrapper[4727]: I1121 20:22:18.493221 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-zjjlc" event={"ID":"4fb283b2-30a9-4708-85e5-3e062e8d3ac5","Type":"ContainerStarted","Data":"96206c08e019488ca25d021053bb9c60f74227fc9b2d6fbd70c0ddb66403b8d6"} Nov 21 20:22:18 crc kubenswrapper[4727]: I1121 20:22:18.493235 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-zjjlc" event={"ID":"4fb283b2-30a9-4708-85e5-3e062e8d3ac5","Type":"ContainerStarted","Data":"593774b06496ac63cffd5c8bb42136e39037b4e9ebcefa0bfd91ea9f780dd2a9"} Nov 21 20:22:18 crc kubenswrapper[4727]: I1121 20:22:18.493275 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6c7b4b5f48-zjjlc" Nov 21 20:22:18 crc kubenswrapper[4727]: I1121 20:22:18.494182 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pv4hs" event={"ID":"81eccf53-ca42-4d0f-b967-1da15a5d817d","Type":"ContainerStarted","Data":"122327d1e3742e6d4f9ad74cddc0b55c13e4dc4c735a8ed431c34f582a423cf6"} Nov 21 20:22:18 crc kubenswrapper[4727]: I1121 20:22:18.495278 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/frr-k8s-webhook-server-6998585d5-2nz5j" event={"ID":"330841f1-7983-496c-8fd8-b1f2aa8f286f","Type":"ContainerStarted","Data":"a4bad6e6e66dc17d2bb4d343a68a894069fc787e5f96d4999d7b158869e0b4dd"} Nov 21 20:22:18 crc kubenswrapper[4727]: I1121 20:22:18.514355 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6c7b4b5f48-zjjlc" podStartSLOduration=2.514333014 podStartE2EDuration="2.514333014s" podCreationTimestamp="2025-11-21 20:22:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:22:18.512332785 +0000 UTC m=+943.698517829" watchObservedRunningTime="2025-11-21 20:22:18.514333014 +0000 UTC m=+943.700518058" Nov 21 20:22:18 crc kubenswrapper[4727]: I1121 20:22:18.710474 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/bbf29db0-6bed-47d7-af92-d4ab37fb4909-memberlist\") pod \"speaker-pf24d\" (UID: \"bbf29db0-6bed-47d7-af92-d4ab37fb4909\") " pod="metallb-system/speaker-pf24d" Nov 21 20:22:18 crc kubenswrapper[4727]: I1121 20:22:18.720685 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/bbf29db0-6bed-47d7-af92-d4ab37fb4909-memberlist\") pod \"speaker-pf24d\" (UID: \"bbf29db0-6bed-47d7-af92-d4ab37fb4909\") " pod="metallb-system/speaker-pf24d" Nov 21 20:22:18 crc kubenswrapper[4727]: I1121 20:22:18.784917 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-pf24d" Nov 21 20:22:18 crc kubenswrapper[4727]: W1121 20:22:18.805245 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbbf29db0_6bed_47d7_af92_d4ab37fb4909.slice/crio-d6aa9843de9ab3b2ffe406aaaf924398a69c4763b700e1f25ea40b7146b2f713 WatchSource:0}: Error finding container d6aa9843de9ab3b2ffe406aaaf924398a69c4763b700e1f25ea40b7146b2f713: Status 404 returned error can't find the container with id d6aa9843de9ab3b2ffe406aaaf924398a69c4763b700e1f25ea40b7146b2f713 Nov 21 20:22:19 crc kubenswrapper[4727]: I1121 20:22:19.517275 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-pf24d" event={"ID":"bbf29db0-6bed-47d7-af92-d4ab37fb4909","Type":"ContainerStarted","Data":"462bcf1686c8c7085cc4a3c59396c51fc41fbe8c969f7b3799fc51f6cbdd3603"} Nov 21 20:22:19 crc kubenswrapper[4727]: I1121 20:22:19.517702 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-pf24d" event={"ID":"bbf29db0-6bed-47d7-af92-d4ab37fb4909","Type":"ContainerStarted","Data":"580a5c74f4d6baa68575280a2fe0470c48ede15bd2128a4d50c0b497be671503"} Nov 21 20:22:19 crc kubenswrapper[4727]: I1121 20:22:19.517714 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-pf24d" event={"ID":"bbf29db0-6bed-47d7-af92-d4ab37fb4909","Type":"ContainerStarted","Data":"d6aa9843de9ab3b2ffe406aaaf924398a69c4763b700e1f25ea40b7146b2f713"} Nov 21 20:22:19 crc kubenswrapper[4727]: I1121 20:22:19.518072 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-pf24d" Nov 21 20:22:19 crc kubenswrapper[4727]: I1121 20:22:19.556053 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-pf24d" podStartSLOduration=3.555951032 podStartE2EDuration="3.555951032s" podCreationTimestamp="2025-11-21 20:22:16 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:22:19.547072575 +0000 UTC m=+944.733257619" watchObservedRunningTime="2025-11-21 20:22:19.555951032 +0000 UTC m=+944.742136076" Nov 21 20:22:25 crc kubenswrapper[4727]: I1121 20:22:25.961068 4727 generic.go:334] "Generic (PLEG): container finished" podID="81eccf53-ca42-4d0f-b967-1da15a5d817d" containerID="f31631c9a3ce53c9310f88e0bb0f5c271df15c19a515073568f9a9488450a944" exitCode=0 Nov 21 20:22:25 crc kubenswrapper[4727]: I1121 20:22:25.961167 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pv4hs" event={"ID":"81eccf53-ca42-4d0f-b967-1da15a5d817d","Type":"ContainerDied","Data":"f31631c9a3ce53c9310f88e0bb0f5c271df15c19a515073568f9a9488450a944"} Nov 21 20:22:25 crc kubenswrapper[4727]: I1121 20:22:25.965134 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-2nz5j" event={"ID":"330841f1-7983-496c-8fd8-b1f2aa8f286f","Type":"ContainerStarted","Data":"48ad2bd10d79ee2bb2c960f0d41548ecf31b953a70018fbcca7786984821b4e0"} Nov 21 20:22:25 crc kubenswrapper[4727]: I1121 20:22:25.965335 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-6998585d5-2nz5j" Nov 21 20:22:26 crc kubenswrapper[4727]: I1121 20:22:26.031435 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-6998585d5-2nz5j" podStartSLOduration=2.707473156 podStartE2EDuration="10.031417472s" podCreationTimestamp="2025-11-21 20:22:16 +0000 UTC" firstStartedPulling="2025-11-21 20:22:18.24383969 +0000 UTC m=+943.430024734" lastFinishedPulling="2025-11-21 20:22:25.567783996 +0000 UTC m=+950.753969050" observedRunningTime="2025-11-21 20:22:26.028231224 +0000 UTC m=+951.214416268" watchObservedRunningTime="2025-11-21 20:22:26.031417472 +0000 UTC m=+951.217602516" Nov 21 20:22:26 
crc kubenswrapper[4727]: I1121 20:22:26.974744 4727 generic.go:334] "Generic (PLEG): container finished" podID="81eccf53-ca42-4d0f-b967-1da15a5d817d" containerID="5cbae6450b655cad8f1d83f5f1af7e0270e5af76c8e3848eb6b35815937eccf8" exitCode=0 Nov 21 20:22:26 crc kubenswrapper[4727]: I1121 20:22:26.975195 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pv4hs" event={"ID":"81eccf53-ca42-4d0f-b967-1da15a5d817d","Type":"ContainerDied","Data":"5cbae6450b655cad8f1d83f5f1af7e0270e5af76c8e3848eb6b35815937eccf8"} Nov 21 20:22:27 crc kubenswrapper[4727]: I1121 20:22:27.988531 4727 generic.go:334] "Generic (PLEG): container finished" podID="81eccf53-ca42-4d0f-b967-1da15a5d817d" containerID="7d0d9d9b4dd75d45caf24886f5e7d01dfb547afdb1cdae70c9052206a4fe9dd5" exitCode=0 Nov 21 20:22:27 crc kubenswrapper[4727]: I1121 20:22:27.988610 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pv4hs" event={"ID":"81eccf53-ca42-4d0f-b967-1da15a5d817d","Type":"ContainerDied","Data":"7d0d9d9b4dd75d45caf24886f5e7d01dfb547afdb1cdae70c9052206a4fe9dd5"} Nov 21 20:22:28 crc kubenswrapper[4727]: I1121 20:22:28.999341 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pv4hs" event={"ID":"81eccf53-ca42-4d0f-b967-1da15a5d817d","Type":"ContainerStarted","Data":"a2b088469695e8a9036b7dd434cb76f363c9186bd3731f62c01376b44642f369"} Nov 21 20:22:29 crc kubenswrapper[4727]: I1121 20:22:28.999862 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pv4hs" event={"ID":"81eccf53-ca42-4d0f-b967-1da15a5d817d","Type":"ContainerStarted","Data":"45196034d79b94975d2f57b58e097e39f1d6a4f5bb7ade11191bc3599257ea1d"} Nov 21 20:22:30 crc kubenswrapper[4727]: I1121 20:22:30.018645 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pv4hs" 
event={"ID":"81eccf53-ca42-4d0f-b967-1da15a5d817d","Type":"ContainerStarted","Data":"fae2be296a262fb20afa30a118a2630563904f82331e4bfe7a953d2da14abd7a"} Nov 21 20:22:30 crc kubenswrapper[4727]: I1121 20:22:30.019055 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-pv4hs" Nov 21 20:22:30 crc kubenswrapper[4727]: I1121 20:22:30.019072 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pv4hs" event={"ID":"81eccf53-ca42-4d0f-b967-1da15a5d817d","Type":"ContainerStarted","Data":"579f2275e08db69e84d920453e61886dab97330544fc65cb67ae921984d03b0f"} Nov 21 20:22:30 crc kubenswrapper[4727]: I1121 20:22:30.019111 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pv4hs" event={"ID":"81eccf53-ca42-4d0f-b967-1da15a5d817d","Type":"ContainerStarted","Data":"b55abc62ed8d360b8ddbb3b043fa645a62eaffcaaa01052b7034c2f639351643"} Nov 21 20:22:30 crc kubenswrapper[4727]: I1121 20:22:30.019124 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pv4hs" event={"ID":"81eccf53-ca42-4d0f-b967-1da15a5d817d","Type":"ContainerStarted","Data":"033c19067d25dcecec40a74ea4879fe7ff43ef81078bd6450a91fbf1fa32e0fa"} Nov 21 20:22:30 crc kubenswrapper[4727]: I1121 20:22:30.053031 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-pv4hs" podStartSLOduration=6.396894435 podStartE2EDuration="14.053009653s" podCreationTimestamp="2025-11-21 20:22:16 +0000 UTC" firstStartedPulling="2025-11-21 20:22:17.929941995 +0000 UTC m=+943.116127039" lastFinishedPulling="2025-11-21 20:22:25.586057213 +0000 UTC m=+950.772242257" observedRunningTime="2025-11-21 20:22:30.04715176 +0000 UTC m=+955.233336814" watchObservedRunningTime="2025-11-21 20:22:30.053009653 +0000 UTC m=+955.239194697" Nov 21 20:22:32 crc kubenswrapper[4727]: I1121 20:22:32.782256 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="metallb-system/frr-k8s-pv4hs" Nov 21 20:22:32 crc kubenswrapper[4727]: I1121 20:22:32.823879 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-pv4hs" Nov 21 20:22:37 crc kubenswrapper[4727]: I1121 20:22:37.600759 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6c7b4b5f48-zjjlc" Nov 21 20:22:37 crc kubenswrapper[4727]: I1121 20:22:37.815623 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-6998585d5-2nz5j" Nov 21 20:22:38 crc kubenswrapper[4727]: I1121 20:22:38.787999 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-pf24d" Nov 21 20:22:41 crc kubenswrapper[4727]: I1121 20:22:41.622222 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-2b8zx"] Nov 21 20:22:41 crc kubenswrapper[4727]: I1121 20:22:41.630122 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-2b8zx" Nov 21 20:22:41 crc kubenswrapper[4727]: I1121 20:22:41.638210 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-z6cjg" Nov 21 20:22:41 crc kubenswrapper[4727]: I1121 20:22:41.638574 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Nov 21 20:22:41 crc kubenswrapper[4727]: I1121 20:22:41.642455 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-2b8zx"] Nov 21 20:22:41 crc kubenswrapper[4727]: I1121 20:22:41.644126 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Nov 21 20:22:41 crc kubenswrapper[4727]: I1121 20:22:41.731075 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcrwh\" (UniqueName: \"kubernetes.io/projected/c726a782-07ef-4215-8b41-a06b0e59e5a6-kube-api-access-kcrwh\") pod \"openstack-operator-index-2b8zx\" (UID: \"c726a782-07ef-4215-8b41-a06b0e59e5a6\") " pod="openstack-operators/openstack-operator-index-2b8zx" Nov 21 20:22:41 crc kubenswrapper[4727]: I1121 20:22:41.832763 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcrwh\" (UniqueName: \"kubernetes.io/projected/c726a782-07ef-4215-8b41-a06b0e59e5a6-kube-api-access-kcrwh\") pod \"openstack-operator-index-2b8zx\" (UID: \"c726a782-07ef-4215-8b41-a06b0e59e5a6\") " pod="openstack-operators/openstack-operator-index-2b8zx" Nov 21 20:22:41 crc kubenswrapper[4727]: I1121 20:22:41.853638 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcrwh\" (UniqueName: \"kubernetes.io/projected/c726a782-07ef-4215-8b41-a06b0e59e5a6-kube-api-access-kcrwh\") pod \"openstack-operator-index-2b8zx\" (UID: 
\"c726a782-07ef-4215-8b41-a06b0e59e5a6\") " pod="openstack-operators/openstack-operator-index-2b8zx" Nov 21 20:22:41 crc kubenswrapper[4727]: I1121 20:22:41.956303 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-2b8zx" Nov 21 20:22:42 crc kubenswrapper[4727]: I1121 20:22:42.406438 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-2b8zx"] Nov 21 20:22:43 crc kubenswrapper[4727]: I1121 20:22:43.130508 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-2b8zx" event={"ID":"c726a782-07ef-4215-8b41-a06b0e59e5a6","Type":"ContainerStarted","Data":"c992bdfda43c7c61d7127292f26d8aa94498d7824b5dd188e764a7619bcb87f3"} Nov 21 20:22:43 crc kubenswrapper[4727]: I1121 20:22:43.335541 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 20:22:43 crc kubenswrapper[4727]: I1121 20:22:43.335608 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 20:22:44 crc kubenswrapper[4727]: I1121 20:22:44.965622 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-2b8zx"] Nov 21 20:22:45 crc kubenswrapper[4727]: I1121 20:22:45.162534 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-2b8zx" 
event={"ID":"c726a782-07ef-4215-8b41-a06b0e59e5a6","Type":"ContainerStarted","Data":"7a15033476ac52fef7e96def137f184ec14bef2e0a7a2e6b27a5966e7b92fcaa"} Nov 21 20:22:45 crc kubenswrapper[4727]: I1121 20:22:45.187350 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-2b8zx" podStartSLOduration=1.839935453 podStartE2EDuration="4.187332169s" podCreationTimestamp="2025-11-21 20:22:41 +0000 UTC" firstStartedPulling="2025-11-21 20:22:42.414152662 +0000 UTC m=+967.600337706" lastFinishedPulling="2025-11-21 20:22:44.761549378 +0000 UTC m=+969.947734422" observedRunningTime="2025-11-21 20:22:45.18043518 +0000 UTC m=+970.366620224" watchObservedRunningTime="2025-11-21 20:22:45.187332169 +0000 UTC m=+970.373517233" Nov 21 20:22:45 crc kubenswrapper[4727]: I1121 20:22:45.577647 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-fs7lr"] Nov 21 20:22:45 crc kubenswrapper[4727]: I1121 20:22:45.586187 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-fs7lr" Nov 21 20:22:45 crc kubenswrapper[4727]: I1121 20:22:45.603335 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-fs7lr"] Nov 21 20:22:45 crc kubenswrapper[4727]: I1121 20:22:45.696791 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psmvc\" (UniqueName: \"kubernetes.io/projected/f726d023-7d18-4472-88ac-f9d99c7e1279-kube-api-access-psmvc\") pod \"openstack-operator-index-fs7lr\" (UID: \"f726d023-7d18-4472-88ac-f9d99c7e1279\") " pod="openstack-operators/openstack-operator-index-fs7lr" Nov 21 20:22:45 crc kubenswrapper[4727]: I1121 20:22:45.798112 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psmvc\" (UniqueName: \"kubernetes.io/projected/f726d023-7d18-4472-88ac-f9d99c7e1279-kube-api-access-psmvc\") pod \"openstack-operator-index-fs7lr\" (UID: \"f726d023-7d18-4472-88ac-f9d99c7e1279\") " pod="openstack-operators/openstack-operator-index-fs7lr" Nov 21 20:22:45 crc kubenswrapper[4727]: I1121 20:22:45.817722 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psmvc\" (UniqueName: \"kubernetes.io/projected/f726d023-7d18-4472-88ac-f9d99c7e1279-kube-api-access-psmvc\") pod \"openstack-operator-index-fs7lr\" (UID: \"f726d023-7d18-4472-88ac-f9d99c7e1279\") " pod="openstack-operators/openstack-operator-index-fs7lr" Nov 21 20:22:45 crc kubenswrapper[4727]: I1121 20:22:45.912412 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-fs7lr" Nov 21 20:22:46 crc kubenswrapper[4727]: I1121 20:22:46.168314 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-2b8zx" podUID="c726a782-07ef-4215-8b41-a06b0e59e5a6" containerName="registry-server" containerID="cri-o://7a15033476ac52fef7e96def137f184ec14bef2e0a7a2e6b27a5966e7b92fcaa" gracePeriod=2 Nov 21 20:22:46 crc kubenswrapper[4727]: I1121 20:22:46.356986 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-fs7lr"] Nov 21 20:22:46 crc kubenswrapper[4727]: I1121 20:22:46.579256 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-2b8zx" Nov 21 20:22:46 crc kubenswrapper[4727]: I1121 20:22:46.612667 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcrwh\" (UniqueName: \"kubernetes.io/projected/c726a782-07ef-4215-8b41-a06b0e59e5a6-kube-api-access-kcrwh\") pod \"c726a782-07ef-4215-8b41-a06b0e59e5a6\" (UID: \"c726a782-07ef-4215-8b41-a06b0e59e5a6\") " Nov 21 20:22:46 crc kubenswrapper[4727]: I1121 20:22:46.617865 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c726a782-07ef-4215-8b41-a06b0e59e5a6-kube-api-access-kcrwh" (OuterVolumeSpecName: "kube-api-access-kcrwh") pod "c726a782-07ef-4215-8b41-a06b0e59e5a6" (UID: "c726a782-07ef-4215-8b41-a06b0e59e5a6"). InnerVolumeSpecName "kube-api-access-kcrwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:22:46 crc kubenswrapper[4727]: I1121 20:22:46.722493 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kcrwh\" (UniqueName: \"kubernetes.io/projected/c726a782-07ef-4215-8b41-a06b0e59e5a6-kube-api-access-kcrwh\") on node \"crc\" DevicePath \"\"" Nov 21 20:22:47 crc kubenswrapper[4727]: I1121 20:22:47.177474 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-fs7lr" event={"ID":"f726d023-7d18-4472-88ac-f9d99c7e1279","Type":"ContainerStarted","Data":"bd633ae4711ce37ea2f2dacf1787a26ef62590ac7db5e5714607d4cbff2be0dd"} Nov 21 20:22:47 crc kubenswrapper[4727]: I1121 20:22:47.177783 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-fs7lr" event={"ID":"f726d023-7d18-4472-88ac-f9d99c7e1279","Type":"ContainerStarted","Data":"f010c01c9a7acfef8dc13d3a4541dc2dd47ad4796a6fdcc6c8a69c295fcb2baf"} Nov 21 20:22:47 crc kubenswrapper[4727]: I1121 20:22:47.179596 4727 generic.go:334] "Generic (PLEG): container finished" podID="c726a782-07ef-4215-8b41-a06b0e59e5a6" containerID="7a15033476ac52fef7e96def137f184ec14bef2e0a7a2e6b27a5966e7b92fcaa" exitCode=0 Nov 21 20:22:47 crc kubenswrapper[4727]: I1121 20:22:47.179629 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-2b8zx" event={"ID":"c726a782-07ef-4215-8b41-a06b0e59e5a6","Type":"ContainerDied","Data":"7a15033476ac52fef7e96def137f184ec14bef2e0a7a2e6b27a5966e7b92fcaa"} Nov 21 20:22:47 crc kubenswrapper[4727]: I1121 20:22:47.179648 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-2b8zx" event={"ID":"c726a782-07ef-4215-8b41-a06b0e59e5a6","Type":"ContainerDied","Data":"c992bdfda43c7c61d7127292f26d8aa94498d7824b5dd188e764a7619bcb87f3"} Nov 21 20:22:47 crc kubenswrapper[4727]: I1121 20:22:47.179663 4727 scope.go:117] "RemoveContainer" 
containerID="7a15033476ac52fef7e96def137f184ec14bef2e0a7a2e6b27a5966e7b92fcaa" Nov 21 20:22:47 crc kubenswrapper[4727]: I1121 20:22:47.179849 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-2b8zx" Nov 21 20:22:47 crc kubenswrapper[4727]: I1121 20:22:47.191942 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-fs7lr" podStartSLOduration=2.1377644670000002 podStartE2EDuration="2.191919241s" podCreationTimestamp="2025-11-21 20:22:45 +0000 UTC" firstStartedPulling="2025-11-21 20:22:46.372512407 +0000 UTC m=+971.558697451" lastFinishedPulling="2025-11-21 20:22:46.426667181 +0000 UTC m=+971.612852225" observedRunningTime="2025-11-21 20:22:47.191659105 +0000 UTC m=+972.377844159" watchObservedRunningTime="2025-11-21 20:22:47.191919241 +0000 UTC m=+972.378104285" Nov 21 20:22:47 crc kubenswrapper[4727]: I1121 20:22:47.209414 4727 scope.go:117] "RemoveContainer" containerID="7a15033476ac52fef7e96def137f184ec14bef2e0a7a2e6b27a5966e7b92fcaa" Nov 21 20:22:47 crc kubenswrapper[4727]: E1121 20:22:47.209915 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a15033476ac52fef7e96def137f184ec14bef2e0a7a2e6b27a5966e7b92fcaa\": container with ID starting with 7a15033476ac52fef7e96def137f184ec14bef2e0a7a2e6b27a5966e7b92fcaa not found: ID does not exist" containerID="7a15033476ac52fef7e96def137f184ec14bef2e0a7a2e6b27a5966e7b92fcaa" Nov 21 20:22:47 crc kubenswrapper[4727]: I1121 20:22:47.209948 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a15033476ac52fef7e96def137f184ec14bef2e0a7a2e6b27a5966e7b92fcaa"} err="failed to get container status \"7a15033476ac52fef7e96def137f184ec14bef2e0a7a2e6b27a5966e7b92fcaa\": rpc error: code = NotFound desc = could not find container 
\"7a15033476ac52fef7e96def137f184ec14bef2e0a7a2e6b27a5966e7b92fcaa\": container with ID starting with 7a15033476ac52fef7e96def137f184ec14bef2e0a7a2e6b27a5966e7b92fcaa not found: ID does not exist" Nov 21 20:22:47 crc kubenswrapper[4727]: I1121 20:22:47.216865 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-2b8zx"] Nov 21 20:22:47 crc kubenswrapper[4727]: I1121 20:22:47.221952 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-2b8zx"] Nov 21 20:22:47 crc kubenswrapper[4727]: I1121 20:22:47.511678 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c726a782-07ef-4215-8b41-a06b0e59e5a6" path="/var/lib/kubelet/pods/c726a782-07ef-4215-8b41-a06b0e59e5a6/volumes" Nov 21 20:22:47 crc kubenswrapper[4727]: I1121 20:22:47.784490 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-pv4hs" Nov 21 20:22:55 crc kubenswrapper[4727]: I1121 20:22:55.912594 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-fs7lr" Nov 21 20:22:55 crc kubenswrapper[4727]: I1121 20:22:55.913136 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-fs7lr" Nov 21 20:22:55 crc kubenswrapper[4727]: I1121 20:22:55.939473 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-fs7lr" Nov 21 20:22:56 crc kubenswrapper[4727]: I1121 20:22:56.322939 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-fs7lr" Nov 21 20:23:03 crc kubenswrapper[4727]: I1121 20:23:03.037448 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/b39651f518b4f0a06d1361a60d9a88984523a98ab203a6e21ce78843e1296mw"] Nov 21 20:23:03 crc kubenswrapper[4727]: E1121 
20:23:03.038445 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c726a782-07ef-4215-8b41-a06b0e59e5a6" containerName="registry-server" Nov 21 20:23:03 crc kubenswrapper[4727]: I1121 20:23:03.038461 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="c726a782-07ef-4215-8b41-a06b0e59e5a6" containerName="registry-server" Nov 21 20:23:03 crc kubenswrapper[4727]: I1121 20:23:03.038659 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="c726a782-07ef-4215-8b41-a06b0e59e5a6" containerName="registry-server" Nov 21 20:23:03 crc kubenswrapper[4727]: I1121 20:23:03.042694 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b39651f518b4f0a06d1361a60d9a88984523a98ab203a6e21ce78843e1296mw" Nov 21 20:23:03 crc kubenswrapper[4727]: I1121 20:23:03.045257 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-9x6qk" Nov 21 20:23:03 crc kubenswrapper[4727]: I1121 20:23:03.049331 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/b39651f518b4f0a06d1361a60d9a88984523a98ab203a6e21ce78843e1296mw"] Nov 21 20:23:03 crc kubenswrapper[4727]: I1121 20:23:03.122420 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b1736fb1-f2c3-482b-ad32-1c7a3be3fbac-bundle\") pod \"b39651f518b4f0a06d1361a60d9a88984523a98ab203a6e21ce78843e1296mw\" (UID: \"b1736fb1-f2c3-482b-ad32-1c7a3be3fbac\") " pod="openstack-operators/b39651f518b4f0a06d1361a60d9a88984523a98ab203a6e21ce78843e1296mw" Nov 21 20:23:03 crc kubenswrapper[4727]: I1121 20:23:03.122507 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b1736fb1-f2c3-482b-ad32-1c7a3be3fbac-util\") pod \"b39651f518b4f0a06d1361a60d9a88984523a98ab203a6e21ce78843e1296mw\" (UID: 
\"b1736fb1-f2c3-482b-ad32-1c7a3be3fbac\") " pod="openstack-operators/b39651f518b4f0a06d1361a60d9a88984523a98ab203a6e21ce78843e1296mw" Nov 21 20:23:03 crc kubenswrapper[4727]: I1121 20:23:03.122622 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fblg\" (UniqueName: \"kubernetes.io/projected/b1736fb1-f2c3-482b-ad32-1c7a3be3fbac-kube-api-access-8fblg\") pod \"b39651f518b4f0a06d1361a60d9a88984523a98ab203a6e21ce78843e1296mw\" (UID: \"b1736fb1-f2c3-482b-ad32-1c7a3be3fbac\") " pod="openstack-operators/b39651f518b4f0a06d1361a60d9a88984523a98ab203a6e21ce78843e1296mw" Nov 21 20:23:03 crc kubenswrapper[4727]: I1121 20:23:03.224317 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b1736fb1-f2c3-482b-ad32-1c7a3be3fbac-bundle\") pod \"b39651f518b4f0a06d1361a60d9a88984523a98ab203a6e21ce78843e1296mw\" (UID: \"b1736fb1-f2c3-482b-ad32-1c7a3be3fbac\") " pod="openstack-operators/b39651f518b4f0a06d1361a60d9a88984523a98ab203a6e21ce78843e1296mw" Nov 21 20:23:03 crc kubenswrapper[4727]: I1121 20:23:03.224385 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b1736fb1-f2c3-482b-ad32-1c7a3be3fbac-util\") pod \"b39651f518b4f0a06d1361a60d9a88984523a98ab203a6e21ce78843e1296mw\" (UID: \"b1736fb1-f2c3-482b-ad32-1c7a3be3fbac\") " pod="openstack-operators/b39651f518b4f0a06d1361a60d9a88984523a98ab203a6e21ce78843e1296mw" Nov 21 20:23:03 crc kubenswrapper[4727]: I1121 20:23:03.224409 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fblg\" (UniqueName: \"kubernetes.io/projected/b1736fb1-f2c3-482b-ad32-1c7a3be3fbac-kube-api-access-8fblg\") pod \"b39651f518b4f0a06d1361a60d9a88984523a98ab203a6e21ce78843e1296mw\" (UID: \"b1736fb1-f2c3-482b-ad32-1c7a3be3fbac\") " 
pod="openstack-operators/b39651f518b4f0a06d1361a60d9a88984523a98ab203a6e21ce78843e1296mw" Nov 21 20:23:03 crc kubenswrapper[4727]: I1121 20:23:03.224830 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b1736fb1-f2c3-482b-ad32-1c7a3be3fbac-bundle\") pod \"b39651f518b4f0a06d1361a60d9a88984523a98ab203a6e21ce78843e1296mw\" (UID: \"b1736fb1-f2c3-482b-ad32-1c7a3be3fbac\") " pod="openstack-operators/b39651f518b4f0a06d1361a60d9a88984523a98ab203a6e21ce78843e1296mw" Nov 21 20:23:03 crc kubenswrapper[4727]: I1121 20:23:03.224938 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b1736fb1-f2c3-482b-ad32-1c7a3be3fbac-util\") pod \"b39651f518b4f0a06d1361a60d9a88984523a98ab203a6e21ce78843e1296mw\" (UID: \"b1736fb1-f2c3-482b-ad32-1c7a3be3fbac\") " pod="openstack-operators/b39651f518b4f0a06d1361a60d9a88984523a98ab203a6e21ce78843e1296mw" Nov 21 20:23:03 crc kubenswrapper[4727]: I1121 20:23:03.247885 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fblg\" (UniqueName: \"kubernetes.io/projected/b1736fb1-f2c3-482b-ad32-1c7a3be3fbac-kube-api-access-8fblg\") pod \"b39651f518b4f0a06d1361a60d9a88984523a98ab203a6e21ce78843e1296mw\" (UID: \"b1736fb1-f2c3-482b-ad32-1c7a3be3fbac\") " pod="openstack-operators/b39651f518b4f0a06d1361a60d9a88984523a98ab203a6e21ce78843e1296mw" Nov 21 20:23:03 crc kubenswrapper[4727]: I1121 20:23:03.372045 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/b39651f518b4f0a06d1361a60d9a88984523a98ab203a6e21ce78843e1296mw" Nov 21 20:23:03 crc kubenswrapper[4727]: I1121 20:23:03.820978 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/b39651f518b4f0a06d1361a60d9a88984523a98ab203a6e21ce78843e1296mw"] Nov 21 20:23:04 crc kubenswrapper[4727]: I1121 20:23:04.363007 4727 generic.go:334] "Generic (PLEG): container finished" podID="b1736fb1-f2c3-482b-ad32-1c7a3be3fbac" containerID="57e7a2e0491a6929f3d5303478e0ee1b3ca799c0430d38d278f9098d7af196f1" exitCode=0 Nov 21 20:23:04 crc kubenswrapper[4727]: I1121 20:23:04.363075 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b39651f518b4f0a06d1361a60d9a88984523a98ab203a6e21ce78843e1296mw" event={"ID":"b1736fb1-f2c3-482b-ad32-1c7a3be3fbac","Type":"ContainerDied","Data":"57e7a2e0491a6929f3d5303478e0ee1b3ca799c0430d38d278f9098d7af196f1"} Nov 21 20:23:04 crc kubenswrapper[4727]: I1121 20:23:04.364617 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b39651f518b4f0a06d1361a60d9a88984523a98ab203a6e21ce78843e1296mw" event={"ID":"b1736fb1-f2c3-482b-ad32-1c7a3be3fbac","Type":"ContainerStarted","Data":"6a0aad4a84ae7065e15cc90b0bfe0fc499931b6932905559ca255bd9586b054a"} Nov 21 20:23:05 crc kubenswrapper[4727]: I1121 20:23:05.373383 4727 generic.go:334] "Generic (PLEG): container finished" podID="b1736fb1-f2c3-482b-ad32-1c7a3be3fbac" containerID="cd722f34ac8767f5826639a9b32a4516b91d549e7864ed75e6076dd9e2fea275" exitCode=0 Nov 21 20:23:05 crc kubenswrapper[4727]: I1121 20:23:05.373424 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b39651f518b4f0a06d1361a60d9a88984523a98ab203a6e21ce78843e1296mw" event={"ID":"b1736fb1-f2c3-482b-ad32-1c7a3be3fbac","Type":"ContainerDied","Data":"cd722f34ac8767f5826639a9b32a4516b91d549e7864ed75e6076dd9e2fea275"} Nov 21 20:23:06 crc kubenswrapper[4727]: I1121 20:23:06.383177 4727 generic.go:334] 
"Generic (PLEG): container finished" podID="b1736fb1-f2c3-482b-ad32-1c7a3be3fbac" containerID="9580d5e0fece0f15561d15490c205e8c3f255f6043cda53dbb73ffc6a8b71024" exitCode=0 Nov 21 20:23:06 crc kubenswrapper[4727]: I1121 20:23:06.383265 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b39651f518b4f0a06d1361a60d9a88984523a98ab203a6e21ce78843e1296mw" event={"ID":"b1736fb1-f2c3-482b-ad32-1c7a3be3fbac","Type":"ContainerDied","Data":"9580d5e0fece0f15561d15490c205e8c3f255f6043cda53dbb73ffc6a8b71024"} Nov 21 20:23:07 crc kubenswrapper[4727]: I1121 20:23:07.762428 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b39651f518b4f0a06d1361a60d9a88984523a98ab203a6e21ce78843e1296mw" Nov 21 20:23:07 crc kubenswrapper[4727]: I1121 20:23:07.825529 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fblg\" (UniqueName: \"kubernetes.io/projected/b1736fb1-f2c3-482b-ad32-1c7a3be3fbac-kube-api-access-8fblg\") pod \"b1736fb1-f2c3-482b-ad32-1c7a3be3fbac\" (UID: \"b1736fb1-f2c3-482b-ad32-1c7a3be3fbac\") " Nov 21 20:23:07 crc kubenswrapper[4727]: I1121 20:23:07.825596 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b1736fb1-f2c3-482b-ad32-1c7a3be3fbac-util\") pod \"b1736fb1-f2c3-482b-ad32-1c7a3be3fbac\" (UID: \"b1736fb1-f2c3-482b-ad32-1c7a3be3fbac\") " Nov 21 20:23:07 crc kubenswrapper[4727]: I1121 20:23:07.825724 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b1736fb1-f2c3-482b-ad32-1c7a3be3fbac-bundle\") pod \"b1736fb1-f2c3-482b-ad32-1c7a3be3fbac\" (UID: \"b1736fb1-f2c3-482b-ad32-1c7a3be3fbac\") " Nov 21 20:23:07 crc kubenswrapper[4727]: I1121 20:23:07.827205 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/b1736fb1-f2c3-482b-ad32-1c7a3be3fbac-bundle" (OuterVolumeSpecName: "bundle") pod "b1736fb1-f2c3-482b-ad32-1c7a3be3fbac" (UID: "b1736fb1-f2c3-482b-ad32-1c7a3be3fbac"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:23:07 crc kubenswrapper[4727]: I1121 20:23:07.837828 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1736fb1-f2c3-482b-ad32-1c7a3be3fbac-kube-api-access-8fblg" (OuterVolumeSpecName: "kube-api-access-8fblg") pod "b1736fb1-f2c3-482b-ad32-1c7a3be3fbac" (UID: "b1736fb1-f2c3-482b-ad32-1c7a3be3fbac"). InnerVolumeSpecName "kube-api-access-8fblg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:23:07 crc kubenswrapper[4727]: I1121 20:23:07.851743 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1736fb1-f2c3-482b-ad32-1c7a3be3fbac-util" (OuterVolumeSpecName: "util") pod "b1736fb1-f2c3-482b-ad32-1c7a3be3fbac" (UID: "b1736fb1-f2c3-482b-ad32-1c7a3be3fbac"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:23:07 crc kubenswrapper[4727]: I1121 20:23:07.927260 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fblg\" (UniqueName: \"kubernetes.io/projected/b1736fb1-f2c3-482b-ad32-1c7a3be3fbac-kube-api-access-8fblg\") on node \"crc\" DevicePath \"\"" Nov 21 20:23:07 crc kubenswrapper[4727]: I1121 20:23:07.927294 4727 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b1736fb1-f2c3-482b-ad32-1c7a3be3fbac-util\") on node \"crc\" DevicePath \"\"" Nov 21 20:23:07 crc kubenswrapper[4727]: I1121 20:23:07.927304 4727 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b1736fb1-f2c3-482b-ad32-1c7a3be3fbac-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:23:08 crc kubenswrapper[4727]: I1121 20:23:08.404157 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b39651f518b4f0a06d1361a60d9a88984523a98ab203a6e21ce78843e1296mw" event={"ID":"b1736fb1-f2c3-482b-ad32-1c7a3be3fbac","Type":"ContainerDied","Data":"6a0aad4a84ae7065e15cc90b0bfe0fc499931b6932905559ca255bd9586b054a"} Nov 21 20:23:08 crc kubenswrapper[4727]: I1121 20:23:08.404499 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a0aad4a84ae7065e15cc90b0bfe0fc499931b6932905559ca255bd9586b054a" Nov 21 20:23:08 crc kubenswrapper[4727]: I1121 20:23:08.404227 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/b39651f518b4f0a06d1361a60d9a88984523a98ab203a6e21ce78843e1296mw" Nov 21 20:23:10 crc kubenswrapper[4727]: I1121 20:23:10.931132 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-698b9558cf-j7t2w"] Nov 21 20:23:10 crc kubenswrapper[4727]: E1121 20:23:10.931884 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1736fb1-f2c3-482b-ad32-1c7a3be3fbac" containerName="pull" Nov 21 20:23:10 crc kubenswrapper[4727]: I1121 20:23:10.931908 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1736fb1-f2c3-482b-ad32-1c7a3be3fbac" containerName="pull" Nov 21 20:23:10 crc kubenswrapper[4727]: E1121 20:23:10.931940 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1736fb1-f2c3-482b-ad32-1c7a3be3fbac" containerName="util" Nov 21 20:23:10 crc kubenswrapper[4727]: I1121 20:23:10.931954 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1736fb1-f2c3-482b-ad32-1c7a3be3fbac" containerName="util" Nov 21 20:23:10 crc kubenswrapper[4727]: E1121 20:23:10.932048 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1736fb1-f2c3-482b-ad32-1c7a3be3fbac" containerName="extract" Nov 21 20:23:10 crc kubenswrapper[4727]: I1121 20:23:10.932064 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1736fb1-f2c3-482b-ad32-1c7a3be3fbac" containerName="extract" Nov 21 20:23:10 crc kubenswrapper[4727]: I1121 20:23:10.932285 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1736fb1-f2c3-482b-ad32-1c7a3be3fbac" containerName="extract" Nov 21 20:23:10 crc kubenswrapper[4727]: I1121 20:23:10.933661 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-698b9558cf-j7t2w" Nov 21 20:23:10 crc kubenswrapper[4727]: I1121 20:23:10.935884 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-xlzlj" Nov 21 20:23:10 crc kubenswrapper[4727]: I1121 20:23:10.958057 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-698b9558cf-j7t2w"] Nov 21 20:23:10 crc kubenswrapper[4727]: I1121 20:23:10.974680 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckdbb\" (UniqueName: \"kubernetes.io/projected/69fc4f58-1b30-4ab8-911d-3b79b0fc149f-kube-api-access-ckdbb\") pod \"openstack-operator-controller-operator-698b9558cf-j7t2w\" (UID: \"69fc4f58-1b30-4ab8-911d-3b79b0fc149f\") " pod="openstack-operators/openstack-operator-controller-operator-698b9558cf-j7t2w" Nov 21 20:23:11 crc kubenswrapper[4727]: I1121 20:23:11.076115 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckdbb\" (UniqueName: \"kubernetes.io/projected/69fc4f58-1b30-4ab8-911d-3b79b0fc149f-kube-api-access-ckdbb\") pod \"openstack-operator-controller-operator-698b9558cf-j7t2w\" (UID: \"69fc4f58-1b30-4ab8-911d-3b79b0fc149f\") " pod="openstack-operators/openstack-operator-controller-operator-698b9558cf-j7t2w" Nov 21 20:23:11 crc kubenswrapper[4727]: I1121 20:23:11.092491 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckdbb\" (UniqueName: \"kubernetes.io/projected/69fc4f58-1b30-4ab8-911d-3b79b0fc149f-kube-api-access-ckdbb\") pod \"openstack-operator-controller-operator-698b9558cf-j7t2w\" (UID: \"69fc4f58-1b30-4ab8-911d-3b79b0fc149f\") " pod="openstack-operators/openstack-operator-controller-operator-698b9558cf-j7t2w" Nov 21 20:23:11 crc kubenswrapper[4727]: I1121 20:23:11.254600 4727 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-698b9558cf-j7t2w" Nov 21 20:23:11 crc kubenswrapper[4727]: I1121 20:23:11.706905 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-698b9558cf-j7t2w"] Nov 21 20:23:12 crc kubenswrapper[4727]: I1121 20:23:12.439367 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-698b9558cf-j7t2w" event={"ID":"69fc4f58-1b30-4ab8-911d-3b79b0fc149f","Type":"ContainerStarted","Data":"4bcd5c188bdd755f3ba97d235247eb118e6aedbd26901eb72ba176b2fba8bda8"} Nov 21 20:23:13 crc kubenswrapper[4727]: I1121 20:23:13.335385 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 20:23:13 crc kubenswrapper[4727]: I1121 20:23:13.335477 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 20:23:13 crc kubenswrapper[4727]: I1121 20:23:13.335526 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" Nov 21 20:23:13 crc kubenswrapper[4727]: I1121 20:23:13.336165 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"99a25828b83906fc3ad93f0b2554a2773e360925428b620333e0ead4ac93025d"} 
pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 20:23:13 crc kubenswrapper[4727]: I1121 20:23:13.336242 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" containerID="cri-o://99a25828b83906fc3ad93f0b2554a2773e360925428b620333e0ead4ac93025d" gracePeriod=600 Nov 21 20:23:14 crc kubenswrapper[4727]: I1121 20:23:14.470762 4727 generic.go:334] "Generic (PLEG): container finished" podID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerID="99a25828b83906fc3ad93f0b2554a2773e360925428b620333e0ead4ac93025d" exitCode=0 Nov 21 20:23:14 crc kubenswrapper[4727]: I1121 20:23:14.470847 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" event={"ID":"b58aef8f-f223-47d8-a2e6-4a80aeeeec42","Type":"ContainerDied","Data":"99a25828b83906fc3ad93f0b2554a2773e360925428b620333e0ead4ac93025d"} Nov 21 20:23:14 crc kubenswrapper[4727]: I1121 20:23:14.471023 4727 scope.go:117] "RemoveContainer" containerID="8107c47f566f1faaec586576a9a03e2bdc957a4e69dfa96e7c5b9dd43a6f4ab5" Nov 21 20:23:15 crc kubenswrapper[4727]: I1121 20:23:15.479518 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-698b9558cf-j7t2w" event={"ID":"69fc4f58-1b30-4ab8-911d-3b79b0fc149f","Type":"ContainerStarted","Data":"9021f27da12d53d76b8d6b866bb07cc461649ec4c70a3ffaa63ca510a70b63d0"} Nov 21 20:23:15 crc kubenswrapper[4727]: I1121 20:23:15.481266 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" 
event={"ID":"b58aef8f-f223-47d8-a2e6-4a80aeeeec42","Type":"ContainerStarted","Data":"12b90de2a7321048685d69f0637a8522048d88e44715706aab3817c22993e4d5"} Nov 21 20:23:17 crc kubenswrapper[4727]: I1121 20:23:17.508325 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-698b9558cf-j7t2w" Nov 21 20:23:17 crc kubenswrapper[4727]: I1121 20:23:17.508833 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-698b9558cf-j7t2w" event={"ID":"69fc4f58-1b30-4ab8-911d-3b79b0fc149f","Type":"ContainerStarted","Data":"8edbd69d67828f0e2327e18e90980d667c4740dd1e118f40244e5557e909d418"} Nov 21 20:23:17 crc kubenswrapper[4727]: I1121 20:23:17.531909 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-698b9558cf-j7t2w" podStartSLOduration=2.084085562 podStartE2EDuration="7.531892189s" podCreationTimestamp="2025-11-21 20:23:10 +0000 UTC" firstStartedPulling="2025-11-21 20:23:11.718427209 +0000 UTC m=+996.904612253" lastFinishedPulling="2025-11-21 20:23:17.166233836 +0000 UTC m=+1002.352418880" observedRunningTime="2025-11-21 20:23:17.526453703 +0000 UTC m=+1002.712638747" watchObservedRunningTime="2025-11-21 20:23:17.531892189 +0000 UTC m=+1002.718077233" Nov 21 20:23:21 crc kubenswrapper[4727]: I1121 20:23:21.258740 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-698b9558cf-j7t2w" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.280216 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-75fb479bcc-5lhbb"] Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.283013 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-5lhbb" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.289049 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6498cbf48f-bcf5p"] Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.290865 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-bcf5p" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.294873 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-sw44n" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.304210 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-75fb479bcc-5lhbb"] Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.304646 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-k4lpk" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.332193 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-767ccfd65f-gwm4n"] Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.335425 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-gwm4n" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.339507 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-gzl59" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.395577 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-7969689c84-4dtbh"] Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.428522 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-7969689c84-4dtbh" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.435414 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-pqg58" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.436418 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj9ks\" (UniqueName: \"kubernetes.io/projected/3a00eb31-9565-4aad-bcea-bb52bc5bebdc-kube-api-access-vj9ks\") pod \"cinder-operator-controller-manager-6498cbf48f-bcf5p\" (UID: \"3a00eb31-9565-4aad-bcea-bb52bc5bebdc\") " pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-bcf5p" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.436497 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94krx\" (UniqueName: \"kubernetes.io/projected/2413dc73-9b93-4670-ae64-86d116672e3c-kube-api-access-94krx\") pod \"barbican-operator-controller-manager-75fb479bcc-5lhbb\" (UID: \"2413dc73-9b93-4670-ae64-86d116672e3c\") " pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-5lhbb" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.436529 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bb87k\" (UniqueName: \"kubernetes.io/projected/61cc6991-6299-4ec0-b51c-6130428804ac-kube-api-access-bb87k\") pod \"glance-operator-controller-manager-7969689c84-4dtbh\" (UID: \"61cc6991-6299-4ec0-b51c-6130428804ac\") " pod="openstack-operators/glance-operator-controller-manager-7969689c84-4dtbh" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.436566 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7prr\" (UniqueName: \"kubernetes.io/projected/644c8e80-e81c-45f1-b5f8-d561cccc89cb-kube-api-access-d7prr\") pod \"designate-operator-controller-manager-767ccfd65f-gwm4n\" (UID: \"644c8e80-e81c-45f1-b5f8-d561cccc89cb\") " pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-gwm4n" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.440118 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6498cbf48f-bcf5p"] Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.453628 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-767ccfd65f-gwm4n"] Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.460352 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-7969689c84-4dtbh"] Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.477025 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-56f54d6746-9g7x9"] Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.478345 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-56f54d6746-9g7x9" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.491444 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-56f54d6746-9g7x9"] Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.492523 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-8fn79" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.497570 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-598f69df5d-rk7gg"] Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.502815 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-rk7gg" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.512679 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-kmnnw" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.516043 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-6dd8864d7c-w9lxm"] Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.517365 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-w9lxm" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.521276 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.521560 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-qzv52" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.526198 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-598f69df5d-rk7gg"] Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.534138 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-6dd8864d7c-w9lxm"] Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.538074 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94krx\" (UniqueName: \"kubernetes.io/projected/2413dc73-9b93-4670-ae64-86d116672e3c-kube-api-access-94krx\") pod \"barbican-operator-controller-manager-75fb479bcc-5lhbb\" (UID: \"2413dc73-9b93-4670-ae64-86d116672e3c\") " pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-5lhbb" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.538120 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bb87k\" (UniqueName: \"kubernetes.io/projected/61cc6991-6299-4ec0-b51c-6130428804ac-kube-api-access-bb87k\") pod \"glance-operator-controller-manager-7969689c84-4dtbh\" (UID: \"61cc6991-6299-4ec0-b51c-6130428804ac\") " pod="openstack-operators/glance-operator-controller-manager-7969689c84-4dtbh" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.538154 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7prr\" (UniqueName: 
\"kubernetes.io/projected/644c8e80-e81c-45f1-b5f8-d561cccc89cb-kube-api-access-d7prr\") pod \"designate-operator-controller-manager-767ccfd65f-gwm4n\" (UID: \"644c8e80-e81c-45f1-b5f8-d561cccc89cb\") " pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-gwm4n" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.538178 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f6gq\" (UniqueName: \"kubernetes.io/projected/bb18a2fd-7fdc-4bc7-94b4-7da1dd327817-kube-api-access-9f6gq\") pod \"infra-operator-controller-manager-6dd8864d7c-w9lxm\" (UID: \"bb18a2fd-7fdc-4bc7-94b4-7da1dd327817\") " pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-w9lxm" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.538259 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fp8d5\" (UniqueName: \"kubernetes.io/projected/c5f9fd34-f218-4a3e-9456-fcd46dafb4ad-kube-api-access-fp8d5\") pod \"horizon-operator-controller-manager-598f69df5d-rk7gg\" (UID: \"c5f9fd34-f218-4a3e-9456-fcd46dafb4ad\") " pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-rk7gg" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.538277 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bb18a2fd-7fdc-4bc7-94b4-7da1dd327817-cert\") pod \"infra-operator-controller-manager-6dd8864d7c-w9lxm\" (UID: \"bb18a2fd-7fdc-4bc7-94b4-7da1dd327817\") " pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-w9lxm" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.538294 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85t92\" (UniqueName: \"kubernetes.io/projected/c1898484-e606-4222-9cd0-221a437d815c-kube-api-access-85t92\") pod 
\"heat-operator-controller-manager-56f54d6746-9g7x9\" (UID: \"c1898484-e606-4222-9cd0-221a437d815c\") " pod="openstack-operators/heat-operator-controller-manager-56f54d6746-9g7x9" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.538312 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vj9ks\" (UniqueName: \"kubernetes.io/projected/3a00eb31-9565-4aad-bcea-bb52bc5bebdc-kube-api-access-vj9ks\") pod \"cinder-operator-controller-manager-6498cbf48f-bcf5p\" (UID: \"3a00eb31-9565-4aad-bcea-bb52bc5bebdc\") " pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-bcf5p" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.545751 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-99b499f4-xm2x9"] Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.547160 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-99b499f4-xm2x9" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.556560 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-dj8pd" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.557598 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7454b96578-tlj2c"] Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.558897 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7454b96578-tlj2c" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.563625 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-hlfbs" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.565851 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-99b499f4-xm2x9"] Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.578928 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vj9ks\" (UniqueName: \"kubernetes.io/projected/3a00eb31-9565-4aad-bcea-bb52bc5bebdc-kube-api-access-vj9ks\") pod \"cinder-operator-controller-manager-6498cbf48f-bcf5p\" (UID: \"3a00eb31-9565-4aad-bcea-bb52bc5bebdc\") " pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-bcf5p" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.579741 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94krx\" (UniqueName: \"kubernetes.io/projected/2413dc73-9b93-4670-ae64-86d116672e3c-kube-api-access-94krx\") pod \"barbican-operator-controller-manager-75fb479bcc-5lhbb\" (UID: \"2413dc73-9b93-4670-ae64-86d116672e3c\") " pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-5lhbb" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.587347 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7prr\" (UniqueName: \"kubernetes.io/projected/644c8e80-e81c-45f1-b5f8-d561cccc89cb-kube-api-access-d7prr\") pod \"designate-operator-controller-manager-767ccfd65f-gwm4n\" (UID: \"644c8e80-e81c-45f1-b5f8-d561cccc89cb\") " pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-gwm4n" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.591308 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/keystone-operator-controller-manager-7454b96578-tlj2c"] Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.595535 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bb87k\" (UniqueName: \"kubernetes.io/projected/61cc6991-6299-4ec0-b51c-6130428804ac-kube-api-access-bb87k\") pod \"glance-operator-controller-manager-7969689c84-4dtbh\" (UID: \"61cc6991-6299-4ec0-b51c-6130428804ac\") " pod="openstack-operators/glance-operator-controller-manager-7969689c84-4dtbh" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.609057 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-5lhbb" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.620304 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-58f887965d-r4gvw"] Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.621640 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-58f887965d-r4gvw" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.629879 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-f4k2n" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.632898 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-bcf5p" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.641129 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fp8d5\" (UniqueName: \"kubernetes.io/projected/c5f9fd34-f218-4a3e-9456-fcd46dafb4ad-kube-api-access-fp8d5\") pod \"horizon-operator-controller-manager-598f69df5d-rk7gg\" (UID: \"c5f9fd34-f218-4a3e-9456-fcd46dafb4ad\") " pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-rk7gg" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.641173 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bb18a2fd-7fdc-4bc7-94b4-7da1dd327817-cert\") pod \"infra-operator-controller-manager-6dd8864d7c-w9lxm\" (UID: \"bb18a2fd-7fdc-4bc7-94b4-7da1dd327817\") " pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-w9lxm" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.641191 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85t92\" (UniqueName: \"kubernetes.io/projected/c1898484-e606-4222-9cd0-221a437d815c-kube-api-access-85t92\") pod \"heat-operator-controller-manager-56f54d6746-9g7x9\" (UID: \"c1898484-e606-4222-9cd0-221a437d815c\") " pod="openstack-operators/heat-operator-controller-manager-56f54d6746-9g7x9" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.641288 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9f6gq\" (UniqueName: \"kubernetes.io/projected/bb18a2fd-7fdc-4bc7-94b4-7da1dd327817-kube-api-access-9f6gq\") pod \"infra-operator-controller-manager-6dd8864d7c-w9lxm\" (UID: \"bb18a2fd-7fdc-4bc7-94b4-7da1dd327817\") " pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-w9lxm" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.644108 4727 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack-operators/manila-operator-controller-manager-58f887965d-r4gvw"] Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.653273 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bb18a2fd-7fdc-4bc7-94b4-7da1dd327817-cert\") pod \"infra-operator-controller-manager-6dd8864d7c-w9lxm\" (UID: \"bb18a2fd-7fdc-4bc7-94b4-7da1dd327817\") " pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-w9lxm" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.672596 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85t92\" (UniqueName: \"kubernetes.io/projected/c1898484-e606-4222-9cd0-221a437d815c-kube-api-access-85t92\") pod \"heat-operator-controller-manager-56f54d6746-9g7x9\" (UID: \"c1898484-e606-4222-9cd0-221a437d815c\") " pod="openstack-operators/heat-operator-controller-manager-56f54d6746-9g7x9" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.679512 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fp8d5\" (UniqueName: \"kubernetes.io/projected/c5f9fd34-f218-4a3e-9456-fcd46dafb4ad-kube-api-access-fp8d5\") pod \"horizon-operator-controller-manager-598f69df5d-rk7gg\" (UID: \"c5f9fd34-f218-4a3e-9456-fcd46dafb4ad\") " pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-rk7gg" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.682012 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-54b5986bb8-jxb65"] Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.683567 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-jxb65" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.684069 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-gwm4n" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.706186 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-9fgvd" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.707032 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9f6gq\" (UniqueName: \"kubernetes.io/projected/bb18a2fd-7fdc-4bc7-94b4-7da1dd327817-kube-api-access-9f6gq\") pod \"infra-operator-controller-manager-6dd8864d7c-w9lxm\" (UID: \"bb18a2fd-7fdc-4bc7-94b4-7da1dd327817\") " pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-w9lxm" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.717751 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-cfbb9c588-h792g"] Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.725457 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-h792g" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.727685 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-kdzdc" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.738731 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78bd47f458-frkcb"] Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.740188 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-frkcb" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.743481 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj7tq\" (UniqueName: \"kubernetes.io/projected/4ac64528-2d7a-466d-a0f9-5433cc0a482d-kube-api-access-bj7tq\") pod \"manila-operator-controller-manager-58f887965d-r4gvw\" (UID: \"4ac64528-2d7a-466d-a0f9-5433cc0a482d\") " pod="openstack-operators/manila-operator-controller-manager-58f887965d-r4gvw" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.743526 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmkbl\" (UniqueName: \"kubernetes.io/projected/ff8b3ca8-d78f-448b-ace0-37bcc5408daf-kube-api-access-rmkbl\") pod \"ironic-operator-controller-manager-99b499f4-xm2x9\" (UID: \"ff8b3ca8-d78f-448b-ace0-37bcc5408daf\") " pod="openstack-operators/ironic-operator-controller-manager-99b499f4-xm2x9" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.743550 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj69n\" (UniqueName: \"kubernetes.io/projected/70eeb4c1-c8b3-4f4d-b4d6-4e8ddd50f2a8-kube-api-access-nj69n\") pod \"keystone-operator-controller-manager-7454b96578-tlj2c\" (UID: \"70eeb4c1-c8b3-4f4d-b4d6-4e8ddd50f2a8\") " pod="openstack-operators/keystone-operator-controller-manager-7454b96578-tlj2c" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.744351 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-s46vl" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.751979 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-54b5986bb8-jxb65"] Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 
20:23:38.771634 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78bd47f458-frkcb"] Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.779411 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-7969689c84-4dtbh" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.780780 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-cfbb9c588-h792g"] Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.809149 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-649jq"] Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.813683 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-649jq" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.819812 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-pkqgg" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.822262 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-56f54d6746-9g7x9" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.824861 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-649jq"] Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.841984 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-rk7gg" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.845709 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bj7tq\" (UniqueName: \"kubernetes.io/projected/4ac64528-2d7a-466d-a0f9-5433cc0a482d-kube-api-access-bj7tq\") pod \"manila-operator-controller-manager-58f887965d-r4gvw\" (UID: \"4ac64528-2d7a-466d-a0f9-5433cc0a482d\") " pod="openstack-operators/manila-operator-controller-manager-58f887965d-r4gvw" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.845743 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr4g6\" (UniqueName: \"kubernetes.io/projected/77869999-156b-4be3-a845-46914c095836-kube-api-access-lr4g6\") pod \"neutron-operator-controller-manager-78bd47f458-frkcb\" (UID: \"77869999-156b-4be3-a845-46914c095836\") " pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-frkcb" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.845917 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmkbl\" (UniqueName: \"kubernetes.io/projected/ff8b3ca8-d78f-448b-ace0-37bcc5408daf-kube-api-access-rmkbl\") pod \"ironic-operator-controller-manager-99b499f4-xm2x9\" (UID: \"ff8b3ca8-d78f-448b-ace0-37bcc5408daf\") " pod="openstack-operators/ironic-operator-controller-manager-99b499f4-xm2x9" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.847748 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj69n\" (UniqueName: \"kubernetes.io/projected/70eeb4c1-c8b3-4f4d-b4d6-4e8ddd50f2a8-kube-api-access-nj69n\") pod \"keystone-operator-controller-manager-7454b96578-tlj2c\" (UID: \"70eeb4c1-c8b3-4f4d-b4d6-4e8ddd50f2a8\") " pod="openstack-operators/keystone-operator-controller-manager-7454b96578-tlj2c" Nov 21 20:23:38 crc 
kubenswrapper[4727]: I1121 20:23:38.847984 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drdqh\" (UniqueName: \"kubernetes.io/projected/4ad6d8f3-0b4b-48bd-8cd7-c3fd216509bf-kube-api-access-drdqh\") pod \"mariadb-operator-controller-manager-54b5986bb8-jxb65\" (UID: \"4ad6d8f3-0b4b-48bd-8cd7-c3fd216509bf\") " pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-jxb65" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.848129 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b95bz\" (UniqueName: \"kubernetes.io/projected/e89e3523-4eb5-4ea3-997f-132e08f6ef4d-kube-api-access-b95bz\") pod \"nova-operator-controller-manager-cfbb9c588-h792g\" (UID: \"e89e3523-4eb5-4ea3-997f-132e08f6ef4d\") " pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-h792g" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.847317 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-57mrl"] Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.851201 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-57mrl" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.855164 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-zxj2s" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.855211 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.855329 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-57mrl"] Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.872190 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-54fc5f65b7-7j4bn"] Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.873598 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-7j4bn" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.875563 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmkbl\" (UniqueName: \"kubernetes.io/projected/ff8b3ca8-d78f-448b-ace0-37bcc5408daf-kube-api-access-rmkbl\") pod \"ironic-operator-controller-manager-99b499f4-xm2x9\" (UID: \"ff8b3ca8-d78f-448b-ace0-37bcc5408daf\") " pod="openstack-operators/ironic-operator-controller-manager-99b499f4-xm2x9" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.877605 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-tgc6v" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.878409 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-w9lxm" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.884363 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj69n\" (UniqueName: \"kubernetes.io/projected/70eeb4c1-c8b3-4f4d-b4d6-4e8ddd50f2a8-kube-api-access-nj69n\") pod \"keystone-operator-controller-manager-7454b96578-tlj2c\" (UID: \"70eeb4c1-c8b3-4f4d-b4d6-4e8ddd50f2a8\") " pod="openstack-operators/keystone-operator-controller-manager-7454b96578-tlj2c" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.893673 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bj7tq\" (UniqueName: \"kubernetes.io/projected/4ac64528-2d7a-466d-a0f9-5433cc0a482d-kube-api-access-bj7tq\") pod \"manila-operator-controller-manager-58f887965d-r4gvw\" (UID: \"4ac64528-2d7a-466d-a0f9-5433cc0a482d\") " pod="openstack-operators/manila-operator-controller-manager-58f887965d-r4gvw" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.939030 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b797b8dff-dhl6q"] Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.951623 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b95bz\" (UniqueName: \"kubernetes.io/projected/e89e3523-4eb5-4ea3-997f-132e08f6ef4d-kube-api-access-b95bz\") pod \"nova-operator-controller-manager-cfbb9c588-h792g\" (UID: \"e89e3523-4eb5-4ea3-997f-132e08f6ef4d\") " pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-h792g" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.951937 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lwhz\" (UniqueName: \"kubernetes.io/projected/8a5a299a-95be-4bfa-b3f4-c5a8bf71667a-kube-api-access-9lwhz\") pod 
\"octavia-operator-controller-manager-54cfbf4c7d-649jq\" (UID: \"8a5a299a-95be-4bfa-b3f4-c5a8bf71667a\") " pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-649jq" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.952925 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lr4g6\" (UniqueName: \"kubernetes.io/projected/77869999-156b-4be3-a845-46914c095836-kube-api-access-lr4g6\") pod \"neutron-operator-controller-manager-78bd47f458-frkcb\" (UID: \"77869999-156b-4be3-a845-46914c095836\") " pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-frkcb" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.953403 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drdqh\" (UniqueName: \"kubernetes.io/projected/4ad6d8f3-0b4b-48bd-8cd7-c3fd216509bf-kube-api-access-drdqh\") pod \"mariadb-operator-controller-manager-54b5986bb8-jxb65\" (UID: \"4ad6d8f3-0b4b-48bd-8cd7-c3fd216509bf\") " pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-jxb65" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.954758 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-dhl6q" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.956941 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-8drg6" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.991737 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drdqh\" (UniqueName: \"kubernetes.io/projected/4ad6d8f3-0b4b-48bd-8cd7-c3fd216509bf-kube-api-access-drdqh\") pod \"mariadb-operator-controller-manager-54b5986bb8-jxb65\" (UID: \"4ad6d8f3-0b4b-48bd-8cd7-c3fd216509bf\") " pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-jxb65" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.992852 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b95bz\" (UniqueName: \"kubernetes.io/projected/e89e3523-4eb5-4ea3-997f-132e08f6ef4d-kube-api-access-b95bz\") pod \"nova-operator-controller-manager-cfbb9c588-h792g\" (UID: \"e89e3523-4eb5-4ea3-997f-132e08f6ef4d\") " pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-h792g" Nov 21 20:23:38 crc kubenswrapper[4727]: I1121 20:23:38.992983 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lr4g6\" (UniqueName: \"kubernetes.io/projected/77869999-156b-4be3-a845-46914c095836-kube-api-access-lr4g6\") pod \"neutron-operator-controller-manager-78bd47f458-frkcb\" (UID: \"77869999-156b-4be3-a845-46914c095836\") " pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-frkcb" Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.027244 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-54fc5f65b7-7j4bn"] Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.041894 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/placement-operator-controller-manager-5b797b8dff-dhl6q"] Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.054854 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lwhz\" (UniqueName: \"kubernetes.io/projected/8a5a299a-95be-4bfa-b3f4-c5a8bf71667a-kube-api-access-9lwhz\") pod \"octavia-operator-controller-manager-54cfbf4c7d-649jq\" (UID: \"8a5a299a-95be-4bfa-b3f4-c5a8bf71667a\") " pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-649jq" Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.054905 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkctl\" (UniqueName: \"kubernetes.io/projected/ac4c617b-d373-46ed-97e7-a8b1b8dae48b-kube-api-access-lkctl\") pod \"ovn-operator-controller-manager-54fc5f65b7-7j4bn\" (UID: \"ac4c617b-d373-46ed-97e7-a8b1b8dae48b\") " pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-7j4bn" Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.054943 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8sbb\" (UniqueName: \"kubernetes.io/projected/8cbf6c6b-39fa-42bd-a722-0145c725d4cf-kube-api-access-f8sbb\") pod \"openstack-baremetal-operator-controller-manager-8c7444f48-57mrl\" (UID: \"8cbf6c6b-39fa-42bd-a722-0145c725d4cf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-57mrl" Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.054986 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8cbf6c6b-39fa-42bd-a722-0145c725d4cf-cert\") pod \"openstack-baremetal-operator-controller-manager-8c7444f48-57mrl\" (UID: \"8cbf6c6b-39fa-42bd-a722-0145c725d4cf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-57mrl" Nov 21 20:23:39 
crc kubenswrapper[4727]: I1121 20:23:39.055015 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjmxs\" (UniqueName: \"kubernetes.io/projected/9af4ba92-9c28-4836-94ab-bf82bbf14047-kube-api-access-fjmxs\") pod \"placement-operator-controller-manager-5b797b8dff-dhl6q\" (UID: \"9af4ba92-9c28-4836-94ab-bf82bbf14047\") " pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-dhl6q" Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.077914 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lwhz\" (UniqueName: \"kubernetes.io/projected/8a5a299a-95be-4bfa-b3f4-c5a8bf71667a-kube-api-access-9lwhz\") pod \"octavia-operator-controller-manager-54cfbf4c7d-649jq\" (UID: \"8a5a299a-95be-4bfa-b3f4-c5a8bf71667a\") " pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-649jq" Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.077998 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-d656998f4-d7cz4"] Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.079708 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-d656998f4-d7cz4" Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.086500 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-5cl9q" Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.102327 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-d656998f4-d7cz4"] Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.107434 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-99b499f4-xm2x9" Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.112166 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7454b96578-tlj2c" Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.132059 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-776668cd95-zhcl9"] Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.141416 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-58f887965d-r4gvw" Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.142908 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-776668cd95-zhcl9" Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.145686 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-wckm4" Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.159820 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-776668cd95-zhcl9"] Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.160093 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-jxb65" Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.160499 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkctl\" (UniqueName: \"kubernetes.io/projected/ac4c617b-d373-46ed-97e7-a8b1b8dae48b-kube-api-access-lkctl\") pod \"ovn-operator-controller-manager-54fc5f65b7-7j4bn\" (UID: \"ac4c617b-d373-46ed-97e7-a8b1b8dae48b\") " pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-7j4bn" Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.160551 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8sbb\" (UniqueName: \"kubernetes.io/projected/8cbf6c6b-39fa-42bd-a722-0145c725d4cf-kube-api-access-f8sbb\") pod \"openstack-baremetal-operator-controller-manager-8c7444f48-57mrl\" (UID: \"8cbf6c6b-39fa-42bd-a722-0145c725d4cf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-57mrl" Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.160588 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8cbf6c6b-39fa-42bd-a722-0145c725d4cf-cert\") pod \"openstack-baremetal-operator-controller-manager-8c7444f48-57mrl\" (UID: \"8cbf6c6b-39fa-42bd-a722-0145c725d4cf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-57mrl" Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.160620 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjmxs\" (UniqueName: \"kubernetes.io/projected/9af4ba92-9c28-4836-94ab-bf82bbf14047-kube-api-access-fjmxs\") pod \"placement-operator-controller-manager-5b797b8dff-dhl6q\" (UID: \"9af4ba92-9c28-4836-94ab-bf82bbf14047\") " pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-dhl6q" Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 
20:23:39.160668 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28zp8\" (UniqueName: \"kubernetes.io/projected/2968b604-6efc-421c-8435-12cc7303a604-kube-api-access-28zp8\") pod \"telemetry-operator-controller-manager-776668cd95-zhcl9\" (UID: \"2968b604-6efc-421c-8435-12cc7303a604\") " pod="openstack-operators/telemetry-operator-controller-manager-776668cd95-zhcl9" Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.160690 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dd7j4\" (UniqueName: \"kubernetes.io/projected/6e0f080f-5dac-4a2b-bad9-dd3d9d8ebefa-kube-api-access-dd7j4\") pod \"swift-operator-controller-manager-d656998f4-d7cz4\" (UID: \"6e0f080f-5dac-4a2b-bad9-dd3d9d8ebefa\") " pod="openstack-operators/swift-operator-controller-manager-d656998f4-d7cz4" Nov 21 20:23:39 crc kubenswrapper[4727]: E1121 20:23:39.160864 4727 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 21 20:23:39 crc kubenswrapper[4727]: E1121 20:23:39.161481 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8cbf6c6b-39fa-42bd-a722-0145c725d4cf-cert podName:8cbf6c6b-39fa-42bd-a722-0145c725d4cf nodeName:}" failed. No retries permitted until 2025-11-21 20:23:39.661459052 +0000 UTC m=+1024.847644096 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8cbf6c6b-39fa-42bd-a722-0145c725d4cf-cert") pod "openstack-baremetal-operator-controller-manager-8c7444f48-57mrl" (UID: "8cbf6c6b-39fa-42bd-a722-0145c725d4cf") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.201200 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-h792g" Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.202907 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjmxs\" (UniqueName: \"kubernetes.io/projected/9af4ba92-9c28-4836-94ab-bf82bbf14047-kube-api-access-fjmxs\") pod \"placement-operator-controller-manager-5b797b8dff-dhl6q\" (UID: \"9af4ba92-9c28-4836-94ab-bf82bbf14047\") " pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-dhl6q" Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.205357 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkctl\" (UniqueName: \"kubernetes.io/projected/ac4c617b-d373-46ed-97e7-a8b1b8dae48b-kube-api-access-lkctl\") pod \"ovn-operator-controller-manager-54fc5f65b7-7j4bn\" (UID: \"ac4c617b-d373-46ed-97e7-a8b1b8dae48b\") " pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-7j4bn" Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.235443 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8sbb\" (UniqueName: \"kubernetes.io/projected/8cbf6c6b-39fa-42bd-a722-0145c725d4cf-kube-api-access-f8sbb\") pod \"openstack-baremetal-operator-controller-manager-8c7444f48-57mrl\" (UID: \"8cbf6c6b-39fa-42bd-a722-0145c725d4cf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-57mrl" Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.238448 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-frkcb" Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.252804 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-b4c496f69-bn475"] Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.254175 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-b4c496f69-bn475" Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.256026 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-b4c496f69-bn475"] Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.266800 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-xb255" Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.268133 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-649jq" Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.270050 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4blq\" (UniqueName: \"kubernetes.io/projected/7090ae5b-e03d-470b-b87e-31623d9916b7-kube-api-access-b4blq\") pod \"test-operator-controller-manager-b4c496f69-bn475\" (UID: \"7090ae5b-e03d-470b-b87e-31623d9916b7\") " pod="openstack-operators/test-operator-controller-manager-b4c496f69-bn475" Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.270155 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28zp8\" (UniqueName: \"kubernetes.io/projected/2968b604-6efc-421c-8435-12cc7303a604-kube-api-access-28zp8\") pod \"telemetry-operator-controller-manager-776668cd95-zhcl9\" (UID: \"2968b604-6efc-421c-8435-12cc7303a604\") " pod="openstack-operators/telemetry-operator-controller-manager-776668cd95-zhcl9" Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.270176 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dd7j4\" (UniqueName: \"kubernetes.io/projected/6e0f080f-5dac-4a2b-bad9-dd3d9d8ebefa-kube-api-access-dd7j4\") pod \"swift-operator-controller-manager-d656998f4-d7cz4\" (UID: 
\"6e0f080f-5dac-4a2b-bad9-dd3d9d8ebefa\") " pod="openstack-operators/swift-operator-controller-manager-d656998f4-d7cz4" Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.301705 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dd7j4\" (UniqueName: \"kubernetes.io/projected/6e0f080f-5dac-4a2b-bad9-dd3d9d8ebefa-kube-api-access-dd7j4\") pod \"swift-operator-controller-manager-d656998f4-d7cz4\" (UID: \"6e0f080f-5dac-4a2b-bad9-dd3d9d8ebefa\") " pod="openstack-operators/swift-operator-controller-manager-d656998f4-d7cz4" Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.303601 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-8c6448b9f-gkz28"] Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.304359 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28zp8\" (UniqueName: \"kubernetes.io/projected/2968b604-6efc-421c-8435-12cc7303a604-kube-api-access-28zp8\") pod \"telemetry-operator-controller-manager-776668cd95-zhcl9\" (UID: \"2968b604-6efc-421c-8435-12cc7303a604\") " pod="openstack-operators/telemetry-operator-controller-manager-776668cd95-zhcl9" Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.305063 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-gkz28" Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.308024 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-sds6x" Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.328120 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-8c6448b9f-gkz28"] Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.366462 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-7j4bn" Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.368889 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-85c6dbf684-4l2bg"] Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.371999 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4blq\" (UniqueName: \"kubernetes.io/projected/7090ae5b-e03d-470b-b87e-31623d9916b7-kube-api-access-b4blq\") pod \"test-operator-controller-manager-b4c496f69-bn475\" (UID: \"7090ae5b-e03d-470b-b87e-31623d9916b7\") " pod="openstack-operators/test-operator-controller-manager-b4c496f69-bn475" Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.372082 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wczlw\" (UniqueName: \"kubernetes.io/projected/db9d7c73-bf27-4b19-8aa6-d0c006a74309-kube-api-access-wczlw\") pod \"watcher-operator-controller-manager-8c6448b9f-gkz28\" (UID: \"db9d7c73-bf27-4b19-8aa6-d0c006a74309\") " pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-gkz28" Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.372674 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-85c6dbf684-4l2bg" Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.375216 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.375449 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-6wf65" Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.403255 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-dhl6q" Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.407852 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4blq\" (UniqueName: \"kubernetes.io/projected/7090ae5b-e03d-470b-b87e-31623d9916b7-kube-api-access-b4blq\") pod \"test-operator-controller-manager-b4c496f69-bn475\" (UID: \"7090ae5b-e03d-470b-b87e-31623d9916b7\") " pod="openstack-operators/test-operator-controller-manager-b4c496f69-bn475" Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.430956 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-85c6dbf684-4l2bg"] Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.439931 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-d656998f4-d7cz4" Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.444424 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-z2t47"] Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.445463 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-z2t47" Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.449705 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-4pmbb" Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.462961 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-z2t47"] Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.475778 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wczlw\" (UniqueName: \"kubernetes.io/projected/db9d7c73-bf27-4b19-8aa6-d0c006a74309-kube-api-access-wczlw\") pod \"watcher-operator-controller-manager-8c6448b9f-gkz28\" (UID: \"db9d7c73-bf27-4b19-8aa6-d0c006a74309\") " pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-gkz28" Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.482934 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6498cbf48f-bcf5p"] Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.501070 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-776668cd95-zhcl9" Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.501656 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wczlw\" (UniqueName: \"kubernetes.io/projected/db9d7c73-bf27-4b19-8aa6-d0c006a74309-kube-api-access-wczlw\") pod \"watcher-operator-controller-manager-8c6448b9f-gkz28\" (UID: \"db9d7c73-bf27-4b19-8aa6-d0c006a74309\") " pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-gkz28" Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.555995 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-75fb479bcc-5lhbb"] Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.557622 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-767ccfd65f-gwm4n"] Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.596076 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-b4c496f69-bn475" Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.610116 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfgg6\" (UniqueName: \"kubernetes.io/projected/afbeb30f-0a67-4547-85e5-76cba72ee577-kube-api-access-tfgg6\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-z2t47\" (UID: \"afbeb30f-0a67-4547-85e5-76cba72ee577\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-z2t47" Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.610179 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bfsm\" (UniqueName: \"kubernetes.io/projected/bf6e47a4-6e9a-4c9c-b14c-72a8e7f9f5bf-kube-api-access-4bfsm\") pod \"openstack-operator-controller-manager-85c6dbf684-4l2bg\" (UID: \"bf6e47a4-6e9a-4c9c-b14c-72a8e7f9f5bf\") " pod="openstack-operators/openstack-operator-controller-manager-85c6dbf684-4l2bg" Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.610335 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bf6e47a4-6e9a-4c9c-b14c-72a8e7f9f5bf-cert\") pod \"openstack-operator-controller-manager-85c6dbf684-4l2bg\" (UID: \"bf6e47a4-6e9a-4c9c-b14c-72a8e7f9f5bf\") " pod="openstack-operators/openstack-operator-controller-manager-85c6dbf684-4l2bg" Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.635088 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-gkz28" Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.690744 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-bcf5p" event={"ID":"3a00eb31-9565-4aad-bcea-bb52bc5bebdc","Type":"ContainerStarted","Data":"158c6d4f9bd15ebbdc311d5c9818ea8b7991cb1183e0ff67cd418383fa67cf46"} Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.703930 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-5lhbb" event={"ID":"2413dc73-9b93-4670-ae64-86d116672e3c","Type":"ContainerStarted","Data":"0218863b6709d5b9b1779baba42453035545f5afdba8387d0c1f0cc1d8501ba5"} Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.705186 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-gwm4n" event={"ID":"644c8e80-e81c-45f1-b5f8-d561cccc89cb","Type":"ContainerStarted","Data":"ab136700d2febaaca6a0c416873b5a1b0874175beed929a8cf138976f5f2cc1a"} Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.712394 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8cbf6c6b-39fa-42bd-a722-0145c725d4cf-cert\") pod \"openstack-baremetal-operator-controller-manager-8c7444f48-57mrl\" (UID: \"8cbf6c6b-39fa-42bd-a722-0145c725d4cf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-57mrl" Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.712465 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bf6e47a4-6e9a-4c9c-b14c-72a8e7f9f5bf-cert\") pod \"openstack-operator-controller-manager-85c6dbf684-4l2bg\" (UID: \"bf6e47a4-6e9a-4c9c-b14c-72a8e7f9f5bf\") " 
pod="openstack-operators/openstack-operator-controller-manager-85c6dbf684-4l2bg" Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.712534 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfgg6\" (UniqueName: \"kubernetes.io/projected/afbeb30f-0a67-4547-85e5-76cba72ee577-kube-api-access-tfgg6\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-z2t47\" (UID: \"afbeb30f-0a67-4547-85e5-76cba72ee577\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-z2t47" Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.712556 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bfsm\" (UniqueName: \"kubernetes.io/projected/bf6e47a4-6e9a-4c9c-b14c-72a8e7f9f5bf-kube-api-access-4bfsm\") pod \"openstack-operator-controller-manager-85c6dbf684-4l2bg\" (UID: \"bf6e47a4-6e9a-4c9c-b14c-72a8e7f9f5bf\") " pod="openstack-operators/openstack-operator-controller-manager-85c6dbf684-4l2bg" Nov 21 20:23:39 crc kubenswrapper[4727]: E1121 20:23:39.714160 4727 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 21 20:23:39 crc kubenswrapper[4727]: E1121 20:23:39.714237 4727 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 21 20:23:39 crc kubenswrapper[4727]: E1121 20:23:39.714273 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bf6e47a4-6e9a-4c9c-b14c-72a8e7f9f5bf-cert podName:bf6e47a4-6e9a-4c9c-b14c-72a8e7f9f5bf nodeName:}" failed. No retries permitted until 2025-11-21 20:23:40.214237955 +0000 UTC m=+1025.400422999 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bf6e47a4-6e9a-4c9c-b14c-72a8e7f9f5bf-cert") pod "openstack-operator-controller-manager-85c6dbf684-4l2bg" (UID: "bf6e47a4-6e9a-4c9c-b14c-72a8e7f9f5bf") : secret "webhook-server-cert" not found Nov 21 20:23:39 crc kubenswrapper[4727]: E1121 20:23:39.714326 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8cbf6c6b-39fa-42bd-a722-0145c725d4cf-cert podName:8cbf6c6b-39fa-42bd-a722-0145c725d4cf nodeName:}" failed. No retries permitted until 2025-11-21 20:23:40.714302327 +0000 UTC m=+1025.900487371 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8cbf6c6b-39fa-42bd-a722-0145c725d4cf-cert") pod "openstack-baremetal-operator-controller-manager-8c7444f48-57mrl" (UID: "8cbf6c6b-39fa-42bd-a722-0145c725d4cf") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.748127 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bfsm\" (UniqueName: \"kubernetes.io/projected/bf6e47a4-6e9a-4c9c-b14c-72a8e7f9f5bf-kube-api-access-4bfsm\") pod \"openstack-operator-controller-manager-85c6dbf684-4l2bg\" (UID: \"bf6e47a4-6e9a-4c9c-b14c-72a8e7f9f5bf\") " pod="openstack-operators/openstack-operator-controller-manager-85c6dbf684-4l2bg" Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.750408 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfgg6\" (UniqueName: \"kubernetes.io/projected/afbeb30f-0a67-4547-85e5-76cba72ee577-kube-api-access-tfgg6\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-z2t47\" (UID: \"afbeb30f-0a67-4547-85e5-76cba72ee577\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-z2t47" Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.751948 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/glance-operator-controller-manager-7969689c84-4dtbh"] Nov 21 20:23:39 crc kubenswrapper[4727]: I1121 20:23:39.785875 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-z2t47" Nov 21 20:23:39 crc kubenswrapper[4727]: W1121 20:23:39.807500 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61cc6991_6299_4ec0_b51c_6130428804ac.slice/crio-72aad5d4445bd5b4002525cca9b619c5846c864042c7e67d9d7ca0fb471565bd WatchSource:0}: Error finding container 72aad5d4445bd5b4002525cca9b619c5846c864042c7e67d9d7ca0fb471565bd: Status 404 returned error can't find the container with id 72aad5d4445bd5b4002525cca9b619c5846c864042c7e67d9d7ca0fb471565bd Nov 21 20:23:40 crc kubenswrapper[4727]: I1121 20:23:40.225333 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bf6e47a4-6e9a-4c9c-b14c-72a8e7f9f5bf-cert\") pod \"openstack-operator-controller-manager-85c6dbf684-4l2bg\" (UID: \"bf6e47a4-6e9a-4c9c-b14c-72a8e7f9f5bf\") " pod="openstack-operators/openstack-operator-controller-manager-85c6dbf684-4l2bg" Nov 21 20:23:40 crc kubenswrapper[4727]: I1121 20:23:40.244620 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bf6e47a4-6e9a-4c9c-b14c-72a8e7f9f5bf-cert\") pod \"openstack-operator-controller-manager-85c6dbf684-4l2bg\" (UID: \"bf6e47a4-6e9a-4c9c-b14c-72a8e7f9f5bf\") " pod="openstack-operators/openstack-operator-controller-manager-85c6dbf684-4l2bg" Nov 21 20:23:40 crc kubenswrapper[4727]: I1121 20:23:40.326152 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-85c6dbf684-4l2bg" Nov 21 20:23:40 crc kubenswrapper[4727]: I1121 20:23:40.345970 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-56f54d6746-9g7x9"] Nov 21 20:23:40 crc kubenswrapper[4727]: I1121 20:23:40.354300 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-6dd8864d7c-w9lxm"] Nov 21 20:23:40 crc kubenswrapper[4727]: W1121 20:23:40.356870 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1898484_e606_4222_9cd0_221a437d815c.slice/crio-a7ac74b68dbe23e6ac7346ab0131d85de8b2bb991122867af8cfab5f77554856 WatchSource:0}: Error finding container a7ac74b68dbe23e6ac7346ab0131d85de8b2bb991122867af8cfab5f77554856: Status 404 returned error can't find the container with id a7ac74b68dbe23e6ac7346ab0131d85de8b2bb991122867af8cfab5f77554856 Nov 21 20:23:40 crc kubenswrapper[4727]: I1121 20:23:40.372875 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7454b96578-tlj2c"] Nov 21 20:23:40 crc kubenswrapper[4727]: W1121 20:23:40.378887 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5f9fd34_f218_4a3e_9456_fcd46dafb4ad.slice/crio-1a0c8297595788d1ec891fe6a774a61465139cd1c4bc6269c47d7bf44c447b8a WatchSource:0}: Error finding container 1a0c8297595788d1ec891fe6a774a61465139cd1c4bc6269c47d7bf44c447b8a: Status 404 returned error can't find the container with id 1a0c8297595788d1ec891fe6a774a61465139cd1c4bc6269c47d7bf44c447b8a Nov 21 20:23:40 crc kubenswrapper[4727]: I1121 20:23:40.383261 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-598f69df5d-rk7gg"] Nov 21 20:23:40 crc 
kubenswrapper[4727]: I1121 20:23:40.714235 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-rk7gg" event={"ID":"c5f9fd34-f218-4a3e-9456-fcd46dafb4ad","Type":"ContainerStarted","Data":"1a0c8297595788d1ec891fe6a774a61465139cd1c4bc6269c47d7bf44c447b8a"} Nov 21 20:23:40 crc kubenswrapper[4727]: I1121 20:23:40.715765 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7454b96578-tlj2c" event={"ID":"70eeb4c1-c8b3-4f4d-b4d6-4e8ddd50f2a8","Type":"ContainerStarted","Data":"e67ad6cc61ebfce8d7a38fd35a4901e551dd844e895911f8d2f76f4cda8b8ff6"} Nov 21 20:23:40 crc kubenswrapper[4727]: I1121 20:23:40.717079 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-7969689c84-4dtbh" event={"ID":"61cc6991-6299-4ec0-b51c-6130428804ac","Type":"ContainerStarted","Data":"72aad5d4445bd5b4002525cca9b619c5846c864042c7e67d9d7ca0fb471565bd"} Nov 21 20:23:40 crc kubenswrapper[4727]: I1121 20:23:40.717968 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-w9lxm" event={"ID":"bb18a2fd-7fdc-4bc7-94b4-7da1dd327817","Type":"ContainerStarted","Data":"ca3948e967eaf18d00a5d445cd111d5d08bede6ea4d555d3758bf6936da1f6dd"} Nov 21 20:23:40 crc kubenswrapper[4727]: I1121 20:23:40.719326 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-56f54d6746-9g7x9" event={"ID":"c1898484-e606-4222-9cd0-221a437d815c","Type":"ContainerStarted","Data":"a7ac74b68dbe23e6ac7346ab0131d85de8b2bb991122867af8cfab5f77554856"} Nov 21 20:23:40 crc kubenswrapper[4727]: I1121 20:23:40.735592 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8cbf6c6b-39fa-42bd-a722-0145c725d4cf-cert\") pod 
\"openstack-baremetal-operator-controller-manager-8c7444f48-57mrl\" (UID: \"8cbf6c6b-39fa-42bd-a722-0145c725d4cf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-57mrl" Nov 21 20:23:40 crc kubenswrapper[4727]: I1121 20:23:40.740787 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8cbf6c6b-39fa-42bd-a722-0145c725d4cf-cert\") pod \"openstack-baremetal-operator-controller-manager-8c7444f48-57mrl\" (UID: \"8cbf6c6b-39fa-42bd-a722-0145c725d4cf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-57mrl" Nov 21 20:23:40 crc kubenswrapper[4727]: I1121 20:23:40.834462 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-57mrl" Nov 21 20:23:41 crc kubenswrapper[4727]: I1121 20:23:41.004315 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-cfbb9c588-h792g"] Nov 21 20:23:41 crc kubenswrapper[4727]: I1121 20:23:41.015460 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b797b8dff-dhl6q"] Nov 21 20:23:41 crc kubenswrapper[4727]: I1121 20:23:41.038740 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-d656998f4-d7cz4"] Nov 21 20:23:41 crc kubenswrapper[4727]: W1121 20:23:41.066171 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ad6d8f3_0b4b_48bd_8cd7_c3fd216509bf.slice/crio-e83d8fbc1197d22f5fe7c1279b42d0aedc1e13671764e35566af48d3bf73b35b WatchSource:0}: Error finding container e83d8fbc1197d22f5fe7c1279b42d0aedc1e13671764e35566af48d3bf73b35b: Status 404 returned error can't find the container with id 
e83d8fbc1197d22f5fe7c1279b42d0aedc1e13671764e35566af48d3bf73b35b Nov 21 20:23:41 crc kubenswrapper[4727]: I1121 20:23:41.066645 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78bd47f458-frkcb"] Nov 21 20:23:41 crc kubenswrapper[4727]: W1121 20:23:41.068429 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9af4ba92_9c28_4836_94ab_bf82bbf14047.slice/crio-31350d2aaacc6a2d4457f7530bc7e8699cda846840d85612f172e9c05c410ed1 WatchSource:0}: Error finding container 31350d2aaacc6a2d4457f7530bc7e8699cda846840d85612f172e9c05c410ed1: Status 404 returned error can't find the container with id 31350d2aaacc6a2d4457f7530bc7e8699cda846840d85612f172e9c05c410ed1 Nov 21 20:23:41 crc kubenswrapper[4727]: W1121 20:23:41.071105 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ac64528_2d7a_466d_a0f9_5433cc0a482d.slice/crio-6b80954d9d7a20df268a8b2ea4bc1910dff4165d586448d39461ab872654e9c4 WatchSource:0}: Error finding container 6b80954d9d7a20df268a8b2ea4bc1910dff4165d586448d39461ab872654e9c4: Status 404 returned error can't find the container with id 6b80954d9d7a20df268a8b2ea4bc1910dff4165d586448d39461ab872654e9c4 Nov 21 20:23:41 crc kubenswrapper[4727]: I1121 20:23:41.072625 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-54b5986bb8-jxb65"] Nov 21 20:23:41 crc kubenswrapper[4727]: I1121 20:23:41.077760 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-58f887965d-r4gvw"] Nov 21 20:23:41 crc kubenswrapper[4727]: W1121 20:23:41.078227 4727 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff8b3ca8_d78f_448b_ace0_37bcc5408daf.slice/crio-69d6c88f9be4324bf9d772d70566ed4ae4142775ee8206ae5b446b74b96dd0b4 WatchSource:0}: Error finding container 69d6c88f9be4324bf9d772d70566ed4ae4142775ee8206ae5b446b74b96dd0b4: Status 404 returned error can't find the container with id 69d6c88f9be4324bf9d772d70566ed4ae4142775ee8206ae5b446b74b96dd0b4 Nov 21 20:23:41 crc kubenswrapper[4727]: W1121 20:23:41.081145 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7090ae5b_e03d_470b_b87e_31623d9916b7.slice/crio-73a5ef7b5b847fac636abb4b9663ab60e7ed18948b863ad78c0559bd1a1f99b2 WatchSource:0}: Error finding container 73a5ef7b5b847fac636abb4b9663ab60e7ed18948b863ad78c0559bd1a1f99b2: Status 404 returned error can't find the container with id 73a5ef7b5b847fac636abb4b9663ab60e7ed18948b863ad78c0559bd1a1f99b2 Nov 21 20:23:41 crc kubenswrapper[4727]: I1121 20:23:41.083103 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-99b499f4-xm2x9"] Nov 21 20:23:41 crc kubenswrapper[4727]: I1121 20:23:41.089339 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-b4c496f69-bn475"] Nov 21 20:23:41 crc kubenswrapper[4727]: I1121 20:23:41.289048 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-649jq"] Nov 21 20:23:41 crc kubenswrapper[4727]: W1121 20:23:41.292372 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac4c617b_d373_46ed_97e7_a8b1b8dae48b.slice/crio-e779ab4ea228d5f1cfeb68b79b9daba9c49923f8f27c45e538da74aef5949162 WatchSource:0}: Error finding container e779ab4ea228d5f1cfeb68b79b9daba9c49923f8f27c45e538da74aef5949162: Status 404 returned error can't find the 
container with id e779ab4ea228d5f1cfeb68b79b9daba9c49923f8f27c45e538da74aef5949162 Nov 21 20:23:41 crc kubenswrapper[4727]: W1121 20:23:41.302347 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb9d7c73_bf27_4b19_8aa6_d0c006a74309.slice/crio-c2d7142107f8348209c8679480437afd7de5e2ed09521ca36dc572752bedb77c WatchSource:0}: Error finding container c2d7142107f8348209c8679480437afd7de5e2ed09521ca36dc572752bedb77c: Status 404 returned error can't find the container with id c2d7142107f8348209c8679480437afd7de5e2ed09521ca36dc572752bedb77c Nov 21 20:23:41 crc kubenswrapper[4727]: I1121 20:23:41.303757 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-8c6448b9f-gkz28"] Nov 21 20:23:41 crc kubenswrapper[4727]: E1121 20:23:41.309295 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wczlw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-8c6448b9f-gkz28_openstack-operators(db9d7c73-bf27-4b19-8aa6-d0c006a74309): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 21 20:23:41 crc kubenswrapper[4727]: I1121 20:23:41.315948 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-54fc5f65b7-7j4bn"] Nov 21 20:23:41 crc kubenswrapper[4727]: I1121 20:23:41.324499 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/telemetry-operator-controller-manager-776668cd95-zhcl9"] Nov 21 20:23:41 crc kubenswrapper[4727]: I1121 20:23:41.333514 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-z2t47"] Nov 21 20:23:41 crc kubenswrapper[4727]: E1121 20:23:41.341382 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.36:5001/openstack-k8s-operators/telemetry-operator:6280f54c4d86e239852669b9aa334e584f1fe080,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-28zp8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-776668cd95-zhcl9_openstack-operators(2968b604-6efc-421c-8435-12cc7303a604): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 21 20:23:41 crc kubenswrapper[4727]: E1121 20:23:41.351313 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} 
{} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tfgg6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-5f97d8c699-z2t47_openstack-operators(afbeb30f-0a67-4547-85e5-76cba72ee577): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 21 20:23:41 crc kubenswrapper[4727]: E1121 20:23:41.352702 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-z2t47" podUID="afbeb30f-0a67-4547-85e5-76cba72ee577" Nov 21 20:23:41 crc kubenswrapper[4727]: I1121 20:23:41.481458 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-57mrl"] Nov 21 20:23:41 crc kubenswrapper[4727]: W1121 20:23:41.484591 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8cbf6c6b_39fa_42bd_a722_0145c725d4cf.slice/crio-651228a18e7d0675a7757dfc69c41e093b74b351793dc2b8e4992df3ff9810b2 WatchSource:0}: Error finding container 
651228a18e7d0675a7757dfc69c41e093b74b351793dc2b8e4992df3ff9810b2: Status 404 returned error can't find the container with id 651228a18e7d0675a7757dfc69c41e093b74b351793dc2b8e4992df3ff9810b2 Nov 21 20:23:41 crc kubenswrapper[4727]: E1121 20:23:41.536871 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-776668cd95-zhcl9" podUID="2968b604-6efc-421c-8435-12cc7303a604" Nov 21 20:23:41 crc kubenswrapper[4727]: E1121 20:23:41.543869 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-gkz28" podUID="db9d7c73-bf27-4b19-8aa6-d0c006a74309" Nov 21 20:23:41 crc kubenswrapper[4727]: I1121 20:23:41.545286 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-85c6dbf684-4l2bg"] Nov 21 20:23:41 crc kubenswrapper[4727]: I1121 20:23:41.732349 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-jxb65" event={"ID":"4ad6d8f3-0b4b-48bd-8cd7-c3fd216509bf","Type":"ContainerStarted","Data":"e83d8fbc1197d22f5fe7c1279b42d0aedc1e13671764e35566af48d3bf73b35b"} Nov 21 20:23:41 crc kubenswrapper[4727]: I1121 20:23:41.738148 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-99b499f4-xm2x9" event={"ID":"ff8b3ca8-d78f-448b-ace0-37bcc5408daf","Type":"ContainerStarted","Data":"69d6c88f9be4324bf9d772d70566ed4ae4142775ee8206ae5b446b74b96dd0b4"} Nov 21 20:23:41 crc kubenswrapper[4727]: I1121 20:23:41.740900 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-649jq" 
event={"ID":"8a5a299a-95be-4bfa-b3f4-c5a8bf71667a","Type":"ContainerStarted","Data":"0ce54e0da44c0776298ba58afefffdb5ac4b66e6389879b415794daac11ea945"} Nov 21 20:23:41 crc kubenswrapper[4727]: I1121 20:23:41.743756 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-57mrl" event={"ID":"8cbf6c6b-39fa-42bd-a722-0145c725d4cf","Type":"ContainerStarted","Data":"651228a18e7d0675a7757dfc69c41e093b74b351793dc2b8e4992df3ff9810b2"} Nov 21 20:23:41 crc kubenswrapper[4727]: I1121 20:23:41.747332 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-b4c496f69-bn475" event={"ID":"7090ae5b-e03d-470b-b87e-31623d9916b7","Type":"ContainerStarted","Data":"73a5ef7b5b847fac636abb4b9663ab60e7ed18948b863ad78c0559bd1a1f99b2"} Nov 21 20:23:41 crc kubenswrapper[4727]: I1121 20:23:41.749549 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-776668cd95-zhcl9" event={"ID":"2968b604-6efc-421c-8435-12cc7303a604","Type":"ContainerStarted","Data":"923de373c0e32800ddad7aed352ede18c09fb6d6da15b3bfc059495e7f5fe2bd"} Nov 21 20:23:41 crc kubenswrapper[4727]: I1121 20:23:41.749592 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-776668cd95-zhcl9" event={"ID":"2968b604-6efc-421c-8435-12cc7303a604","Type":"ContainerStarted","Data":"de0418b728d210db75c193c54e97856edd9a0a81ee851910699ef01b4838a646"} Nov 21 20:23:41 crc kubenswrapper[4727]: E1121 20:23:41.751036 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.36:5001/openstack-k8s-operators/telemetry-operator:6280f54c4d86e239852669b9aa334e584f1fe080\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-776668cd95-zhcl9" 
podUID="2968b604-6efc-421c-8435-12cc7303a604" Nov 21 20:23:41 crc kubenswrapper[4727]: I1121 20:23:41.752757 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-frkcb" event={"ID":"77869999-156b-4be3-a845-46914c095836","Type":"ContainerStarted","Data":"2a79c871fcdc987bb746a32af31b59ce038c58e37e87011c9c83c802d8f91c1a"} Nov 21 20:23:41 crc kubenswrapper[4727]: I1121 20:23:41.753915 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-dhl6q" event={"ID":"9af4ba92-9c28-4836-94ab-bf82bbf14047","Type":"ContainerStarted","Data":"31350d2aaacc6a2d4457f7530bc7e8699cda846840d85612f172e9c05c410ed1"} Nov 21 20:23:41 crc kubenswrapper[4727]: I1121 20:23:41.755539 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-z2t47" event={"ID":"afbeb30f-0a67-4547-85e5-76cba72ee577","Type":"ContainerStarted","Data":"e82b2bb92bcfe3c2e31f309f23b55233c8f525b7b0094cbf46bf6e2bfc262c3e"} Nov 21 20:23:41 crc kubenswrapper[4727]: E1121 20:23:41.756792 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-z2t47" podUID="afbeb30f-0a67-4547-85e5-76cba72ee577" Nov 21 20:23:41 crc kubenswrapper[4727]: I1121 20:23:41.758626 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-gkz28" event={"ID":"db9d7c73-bf27-4b19-8aa6-d0c006a74309","Type":"ContainerStarted","Data":"5fbe58032da1709303e25a0f9d6a736be1c54f2249f72aeff9426e82fab805d1"} Nov 21 20:23:41 crc kubenswrapper[4727]: I1121 20:23:41.758659 4727 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-gkz28" event={"ID":"db9d7c73-bf27-4b19-8aa6-d0c006a74309","Type":"ContainerStarted","Data":"c2d7142107f8348209c8679480437afd7de5e2ed09521ca36dc572752bedb77c"} Nov 21 20:23:41 crc kubenswrapper[4727]: I1121 20:23:41.759768 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58f887965d-r4gvw" event={"ID":"4ac64528-2d7a-466d-a0f9-5433cc0a482d","Type":"ContainerStarted","Data":"6b80954d9d7a20df268a8b2ea4bc1910dff4165d586448d39461ab872654e9c4"} Nov 21 20:23:41 crc kubenswrapper[4727]: E1121 20:23:41.760023 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-gkz28" podUID="db9d7c73-bf27-4b19-8aa6-d0c006a74309" Nov 21 20:23:41 crc kubenswrapper[4727]: I1121 20:23:41.761195 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-85c6dbf684-4l2bg" event={"ID":"bf6e47a4-6e9a-4c9c-b14c-72a8e7f9f5bf","Type":"ContainerStarted","Data":"d148c30eb8f1d983b468d81013b1c82a410fc87b55836ccbe36691b54d1feaff"} Nov 21 20:23:41 crc kubenswrapper[4727]: I1121 20:23:41.762475 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-h792g" event={"ID":"e89e3523-4eb5-4ea3-997f-132e08f6ef4d","Type":"ContainerStarted","Data":"785398d5141bf0c438520578aa4ae20bf7b7b20fee13b2746ae0e900bce0e043"} Nov 21 20:23:41 crc kubenswrapper[4727]: I1121 20:23:41.775259 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-d656998f4-d7cz4" 
event={"ID":"6e0f080f-5dac-4a2b-bad9-dd3d9d8ebefa","Type":"ContainerStarted","Data":"118fb8dac575ecf51e7f2e3b4d25141ca844e10a12939fc78f5e7dc07f7d79d4"} Nov 21 20:23:41 crc kubenswrapper[4727]: I1121 20:23:41.778418 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-7j4bn" event={"ID":"ac4c617b-d373-46ed-97e7-a8b1b8dae48b","Type":"ContainerStarted","Data":"e779ab4ea228d5f1cfeb68b79b9daba9c49923f8f27c45e538da74aef5949162"} Nov 21 20:23:42 crc kubenswrapper[4727]: I1121 20:23:42.805028 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-85c6dbf684-4l2bg" event={"ID":"bf6e47a4-6e9a-4c9c-b14c-72a8e7f9f5bf","Type":"ContainerStarted","Data":"e7ff32d6c068a184f020b8b51b953ae68bd2c2701c392e4611d7d500a703983d"} Nov 21 20:23:42 crc kubenswrapper[4727]: I1121 20:23:42.805147 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-85c6dbf684-4l2bg" event={"ID":"bf6e47a4-6e9a-4c9c-b14c-72a8e7f9f5bf","Type":"ContainerStarted","Data":"d437f83c63c90cdf001a3ad79fe53627b9cb608a51876da0c5c6f9c47348a73e"} Nov 21 20:23:42 crc kubenswrapper[4727]: I1121 20:23:42.806373 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-85c6dbf684-4l2bg" Nov 21 20:23:42 crc kubenswrapper[4727]: E1121 20:23:42.808976 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-z2t47" podUID="afbeb30f-0a67-4547-85e5-76cba72ee577" Nov 21 20:23:42 crc kubenswrapper[4727]: E1121 20:23:42.809294 4727 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-gkz28" podUID="db9d7c73-bf27-4b19-8aa6-d0c006a74309" Nov 21 20:23:42 crc kubenswrapper[4727]: E1121 20:23:42.809304 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.36:5001/openstack-k8s-operators/telemetry-operator:6280f54c4d86e239852669b9aa334e584f1fe080\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-776668cd95-zhcl9" podUID="2968b604-6efc-421c-8435-12cc7303a604" Nov 21 20:23:42 crc kubenswrapper[4727]: I1121 20:23:42.872802 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-85c6dbf684-4l2bg" podStartSLOduration=3.872784006 podStartE2EDuration="3.872784006s" podCreationTimestamp="2025-11-21 20:23:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:23:42.859635998 +0000 UTC m=+1028.045821102" watchObservedRunningTime="2025-11-21 20:23:42.872784006 +0000 UTC m=+1028.058969050" Nov 21 20:23:50 crc kubenswrapper[4727]: I1121 20:23:50.332743 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-85c6dbf684-4l2bg" Nov 21 20:24:02 crc kubenswrapper[4727]: E1121 20:24:02.255149 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:78852f8ba332a5756c1551c126157f735279101a0fc3277ba4aa4db3478789dd" Nov 21 20:24:02 
crc kubenswrapper[4727]: E1121 20:24:02.256165 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:78852f8ba332a5756c1551c126157f735279101a0fc3277ba4aa4db3478789dd,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-baremetal-operator-agent:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_ANSIBLEEE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_EVALUATOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-evaluator:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_NOTIFIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-notifier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_APACHE_IMAGE_URL_DEFAULT,Value:registry.redhat.io/ubi9/httpd-24:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_KEYSTONE_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-keystone-listener:current-podified,ValueFrom:nil,},EnvVar
{Name:RELATED_IMAGE_BARBICAN_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_IPMI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_MYSQLD_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/mysqld-exporter:v0.15.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_NOTIFICATION_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-notification:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_SGCORE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/sg-core:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_BACKUP_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-backup:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_VOLUME_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-volume:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_API_IMAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_PROC_IMAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-proce
ssor:current,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_BACKENDBIND9_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-backend-bind9:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_MDNS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-mdns:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_PRODUCER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-producer:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_UNBOUND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-unbound:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_FRR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-frr:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_ISCSID_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-iscsid:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_KEPLER_IMAGE_URL_DEFAULT,Value:quay.io/sustainable_computing_io/kepler:release-0.7.12,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_LOGROTATE_CROND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cron:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_MULTIPATHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-multipathd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_DHCP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack
-neutron-dhcp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_METADATA_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_OVN_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-ovn-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_SRIOV_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NODE_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/node-exporter:v1.5.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_OVN_BGP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-bgp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_PODMAN_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/navidys/prometheus-podman-exporter:v1.10.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_GLANCE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_CFNAPI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api-cfn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HORIZON_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_MEMCACHED_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_REDIS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-red
is:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_INSPECTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-inspector:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_NEUTRON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PXE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-pxe:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PYTHON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/ironic-python-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KEYSTONE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-keystone:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KSM_IMAGE_URL_DEFAULT,Value:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SHARE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-share:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MARIADB_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NET_UTILS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-netutils:current-podified,V
alueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NEUTRON_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_NOVNC_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-novncproxy:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HEALTHMANAGER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-health-manager:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HOUSEKEEPING_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-housekeeping:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_RSYSLOG_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rsyslog:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_CLIENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_MUST_GATHER_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-m
ust-gather:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_NETWORK_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OS_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/edpm-hardened-uefi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_OVS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NORTHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_SB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PLACEMENT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_RABBITMQ_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_ACCOUNT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-account:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-container:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_OBJECT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-object:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_PROXY_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/op
enstack-swift-proxy-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_TEST_TEMPEST_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_APPLIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-applier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_DECISION_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-decision-engine:current-podified,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f8sbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-baremetal-operator-controller-manager-8c7444f48-57mrl_openstack-operators(8cbf6c6b-39fa-42bd-a722-0145c725d4cf): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 21 20:24:02 crc kubenswrapper[4727]: E1121 20:24:02.834105 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:442c269d79163f8da75505019c02e9f0815837aaadcaddacb8e6c12df297ca13" Nov 21 20:24:02 crc kubenswrapper[4727]: E1121 20:24:02.834295 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:442c269d79163f8da75505019c02e9f0815837aaadcaddacb8e6c12df297ca13,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 
--leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9lwhz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
octavia-operator-controller-manager-54cfbf4c7d-649jq_openstack-operators(8a5a299a-95be-4bfa-b3f4-c5a8bf71667a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 21 20:24:03 crc kubenswrapper[4727]: E1121 20:24:03.690276 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d" Nov 21 20:24:03 crc kubenswrapper[4727]: E1121 20:24:03.690485 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b4blq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-b4c496f69-bn475_openstack-operators(7090ae5b-e03d-470b-b87e-31623d9916b7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 21 20:24:04 crc kubenswrapper[4727]: E1121 20:24:04.943939 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:4094e7fc11a33e8e2b6768a053cafaf5b122446d23f9113d43d520cb64e9776c" Nov 21 20:24:04 crc kubenswrapper[4727]: E1121 20:24:04.944511 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:4094e7fc11a33e8e2b6768a053cafaf5b122446d23f9113d43d520cb64e9776c,Command:[/manager],Args:[--health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fjmxs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
placement-operator-controller-manager-5b797b8dff-dhl6q_openstack-operators(9af4ba92-9c28-4836-94ab-bf82bbf14047): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 21 20:24:05 crc kubenswrapper[4727]: E1121 20:24:05.345021 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-649jq" podUID="8a5a299a-95be-4bfa-b3f4-c5a8bf71667a" Nov 21 20:24:05 crc kubenswrapper[4727]: E1121 20:24:05.528755 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-57mrl" podUID="8cbf6c6b-39fa-42bd-a722-0145c725d4cf" Nov 21 20:24:05 crc kubenswrapper[4727]: E1121 20:24:05.725772 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-dhl6q" podUID="9af4ba92-9c28-4836-94ab-bf82bbf14047" Nov 21 20:24:05 crc kubenswrapper[4727]: E1121 20:24:05.960061 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-b4c496f69-bn475" podUID="7090ae5b-e03d-470b-b87e-31623d9916b7" Nov 21 20:24:06 crc kubenswrapper[4727]: I1121 20:24:06.019135 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7454b96578-tlj2c" 
event={"ID":"70eeb4c1-c8b3-4f4d-b4d6-4e8ddd50f2a8","Type":"ContainerStarted","Data":"201dcae5d063c9616976a31f5fdfd4d6ce0fe6b0c37df3a4767bf0181b716764"} Nov 21 20:24:06 crc kubenswrapper[4727]: I1121 20:24:06.023260 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-57mrl" event={"ID":"8cbf6c6b-39fa-42bd-a722-0145c725d4cf","Type":"ContainerStarted","Data":"a1702f5c70bf2e820ae5d01b0af8699d541a25c23fccb0c94e7b19639e449d6b"} Nov 21 20:24:06 crc kubenswrapper[4727]: E1121 20:24:06.026924 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:78852f8ba332a5756c1551c126157f735279101a0fc3277ba4aa4db3478789dd\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-57mrl" podUID="8cbf6c6b-39fa-42bd-a722-0145c725d4cf" Nov 21 20:24:06 crc kubenswrapper[4727]: I1121 20:24:06.030163 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-bcf5p" event={"ID":"3a00eb31-9565-4aad-bcea-bb52bc5bebdc","Type":"ContainerStarted","Data":"722141428c96aacc809c56efc5c4bdd37dc130c0ca1877b3f1e77a74fd68c319"} Nov 21 20:24:06 crc kubenswrapper[4727]: I1121 20:24:06.032137 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-w9lxm" event={"ID":"bb18a2fd-7fdc-4bc7-94b4-7da1dd327817","Type":"ContainerStarted","Data":"ca53f440a7a8db871e8cba4790c175e35c93fe492005044a39e669d3da26c9ca"} Nov 21 20:24:06 crc kubenswrapper[4727]: I1121 20:24:06.051836 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-frkcb" 
event={"ID":"77869999-156b-4be3-a845-46914c095836","Type":"ContainerStarted","Data":"7bbce0d71eddec7d0f82fe6db91a82d08ceb139c08520eed35aa673f160c0418"} Nov 21 20:24:06 crc kubenswrapper[4727]: I1121 20:24:06.063450 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58f887965d-r4gvw" event={"ID":"4ac64528-2d7a-466d-a0f9-5433cc0a482d","Type":"ContainerStarted","Data":"9ebbd5f07d1fb2aa417f39aeac60f0921ceeb4d03726011a9bd14dc1191d8e8a"} Nov 21 20:24:06 crc kubenswrapper[4727]: I1121 20:24:06.073429 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-5lhbb" event={"ID":"2413dc73-9b93-4670-ae64-86d116672e3c","Type":"ContainerStarted","Data":"cc692e5e6cd9007a0026ae639871b82dad0c3e3fc477bb4963aa2da0f4b39ce5"} Nov 21 20:24:06 crc kubenswrapper[4727]: I1121 20:24:06.083903 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-jxb65" event={"ID":"4ad6d8f3-0b4b-48bd-8cd7-c3fd216509bf","Type":"ContainerStarted","Data":"89aa14d5c15da30f6175e9affaf1b2438e8c117cf4e934eb65cfb5eb6670ccb5"} Nov 21 20:24:06 crc kubenswrapper[4727]: I1121 20:24:06.093218 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-rk7gg" event={"ID":"c5f9fd34-f218-4a3e-9456-fcd46dafb4ad","Type":"ContainerStarted","Data":"334fdfb88527c4dc10ba390042bba14d02e3b8344fe686827308e6a49fefb3d0"} Nov 21 20:24:06 crc kubenswrapper[4727]: I1121 20:24:06.096329 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-649jq" event={"ID":"8a5a299a-95be-4bfa-b3f4-c5a8bf71667a","Type":"ContainerStarted","Data":"769746e923a401bbde66ff9f9f8946a34eea64a757fb492ab40ee930676bedc1"} Nov 21 20:24:06 crc kubenswrapper[4727]: E1121 20:24:06.101941 4727 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:442c269d79163f8da75505019c02e9f0815837aaadcaddacb8e6c12df297ca13\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-649jq" podUID="8a5a299a-95be-4bfa-b3f4-c5a8bf71667a" Nov 21 20:24:06 crc kubenswrapper[4727]: I1121 20:24:06.108183 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-gwm4n" event={"ID":"644c8e80-e81c-45f1-b5f8-d561cccc89cb","Type":"ContainerStarted","Data":"a0ad9fb7a12b7abd823e691ddcd088872ce0ff0ff455959ca603910bacafc99c"} Nov 21 20:24:06 crc kubenswrapper[4727]: I1121 20:24:06.121823 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-56f54d6746-9g7x9" event={"ID":"c1898484-e606-4222-9cd0-221a437d815c","Type":"ContainerStarted","Data":"3a0218d91e74da6f226dd53f630583d8382355e551ef765d1c3b439bd427e08d"} Nov 21 20:24:06 crc kubenswrapper[4727]: I1121 20:24:06.134317 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-dhl6q" event={"ID":"9af4ba92-9c28-4836-94ab-bf82bbf14047","Type":"ContainerStarted","Data":"81a86fbb07d6a9056fe5d7739f163091479545b7ac6a81bff1f43058899a2d02"} Nov 21 20:24:06 crc kubenswrapper[4727]: E1121 20:24:06.136247 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:4094e7fc11a33e8e2b6768a053cafaf5b122446d23f9113d43d520cb64e9776c\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-dhl6q" podUID="9af4ba92-9c28-4836-94ab-bf82bbf14047" Nov 21 20:24:06 crc kubenswrapper[4727]: I1121 20:24:06.142258 4727 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-d656998f4-d7cz4" event={"ID":"6e0f080f-5dac-4a2b-bad9-dd3d9d8ebefa","Type":"ContainerStarted","Data":"a5ee170ee0c42ededcae23d91c88247b68a9d493163283dc9ead605a5df5bc67"} Nov 21 20:24:06 crc kubenswrapper[4727]: I1121 20:24:06.163038 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-b4c496f69-bn475" event={"ID":"7090ae5b-e03d-470b-b87e-31623d9916b7","Type":"ContainerStarted","Data":"10c834d6d75b3e9807e0200c3622c4b0d7d52001a9c89cabb0abe26b1d94a0be"} Nov 21 20:24:06 crc kubenswrapper[4727]: E1121 20:24:06.166778 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d\\\"\"" pod="openstack-operators/test-operator-controller-manager-b4c496f69-bn475" podUID="7090ae5b-e03d-470b-b87e-31623d9916b7" Nov 21 20:24:06 crc kubenswrapper[4727]: I1121 20:24:06.171726 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-7969689c84-4dtbh" event={"ID":"61cc6991-6299-4ec0-b51c-6130428804ac","Type":"ContainerStarted","Data":"214d1ebbfc1aff46ff1a896419f830c9a68c1803e701ff705720df0a1b86a81e"} Nov 21 20:24:07 crc kubenswrapper[4727]: I1121 20:24:07.205465 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-bcf5p" event={"ID":"3a00eb31-9565-4aad-bcea-bb52bc5bebdc","Type":"ContainerStarted","Data":"615ba9264ddf8fdd42d72e1fec1a44df8adffdcc567d4043143d668082ed3c0f"} Nov 21 20:24:07 crc kubenswrapper[4727]: I1121 20:24:07.205811 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-bcf5p" Nov 21 20:24:07 crc 
kubenswrapper[4727]: I1121 20:24:07.207597 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-h792g" event={"ID":"e89e3523-4eb5-4ea3-997f-132e08f6ef4d","Type":"ContainerStarted","Data":"ec3c38748f18775bbe726bdf9914e909f5c1726a27af58b3e3a2fff788a721c9"} Nov 21 20:24:07 crc kubenswrapper[4727]: I1121 20:24:07.209919 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-99b499f4-xm2x9" event={"ID":"ff8b3ca8-d78f-448b-ace0-37bcc5408daf","Type":"ContainerStarted","Data":"dbe5192a802ece8d67c245ee898b01a2414d1b5b57463716a306a07c380b084d"} Nov 21 20:24:07 crc kubenswrapper[4727]: I1121 20:24:07.212581 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-7j4bn" event={"ID":"ac4c617b-d373-46ed-97e7-a8b1b8dae48b","Type":"ContainerStarted","Data":"a92c362aeb8bdbe4806f4fc000f012a76f5dd5f6da1a92d8f8e7d4cb42075019"} Nov 21 20:24:07 crc kubenswrapper[4727]: I1121 20:24:07.215194 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-rk7gg" event={"ID":"c5f9fd34-f218-4a3e-9456-fcd46dafb4ad","Type":"ContainerStarted","Data":"8ac300dc1ada5930cd29a26230a47414b73da862d802e50929087004c4d82c18"} Nov 21 20:24:07 crc kubenswrapper[4727]: E1121 20:24:07.218914 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:4094e7fc11a33e8e2b6768a053cafaf5b122446d23f9113d43d520cb64e9776c\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-dhl6q" podUID="9af4ba92-9c28-4836-94ab-bf82bbf14047" Nov 21 20:24:07 crc kubenswrapper[4727]: E1121 20:24:07.219348 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d\\\"\"" pod="openstack-operators/test-operator-controller-manager-b4c496f69-bn475" podUID="7090ae5b-e03d-470b-b87e-31623d9916b7" Nov 21 20:24:07 crc kubenswrapper[4727]: E1121 20:24:07.219431 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:442c269d79163f8da75505019c02e9f0815837aaadcaddacb8e6c12df297ca13\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-649jq" podUID="8a5a299a-95be-4bfa-b3f4-c5a8bf71667a" Nov 21 20:24:07 crc kubenswrapper[4727]: I1121 20:24:07.223789 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-bcf5p" podStartSLOduration=3.837028477 podStartE2EDuration="29.223778846s" podCreationTimestamp="2025-11-21 20:23:38 +0000 UTC" firstStartedPulling="2025-11-21 20:23:39.486298125 +0000 UTC m=+1024.672483169" lastFinishedPulling="2025-11-21 20:24:04.873048494 +0000 UTC m=+1050.059233538" observedRunningTime="2025-11-21 20:24:07.223465979 +0000 UTC m=+1052.409651033" watchObservedRunningTime="2025-11-21 20:24:07.223778846 +0000 UTC m=+1052.409963890" Nov 21 20:24:07 crc kubenswrapper[4727]: E1121 20:24:07.229501 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:78852f8ba332a5756c1551c126157f735279101a0fc3277ba4aa4db3478789dd\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-57mrl" podUID="8cbf6c6b-39fa-42bd-a722-0145c725d4cf" Nov 21 20:24:07 crc kubenswrapper[4727]: I1121 
20:24:07.257288 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-rk7gg" podStartSLOduration=4.773989011 podStartE2EDuration="29.257254969s" podCreationTimestamp="2025-11-21 20:23:38 +0000 UTC" firstStartedPulling="2025-11-21 20:23:40.382787812 +0000 UTC m=+1025.568972856" lastFinishedPulling="2025-11-21 20:24:04.86605377 +0000 UTC m=+1050.052238814" observedRunningTime="2025-11-21 20:24:07.242347729 +0000 UTC m=+1052.428532773" watchObservedRunningTime="2025-11-21 20:24:07.257254969 +0000 UTC m=+1052.443440013" Nov 21 20:24:08 crc kubenswrapper[4727]: I1121 20:24:08.226123 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-jxb65" event={"ID":"4ad6d8f3-0b4b-48bd-8cd7-c3fd216509bf","Type":"ContainerStarted","Data":"f6fc0a278e1ee77f7a143f5614aa4fd6f6e41614decd24096b6bf92a2d0e5463"} Nov 21 20:24:08 crc kubenswrapper[4727]: I1121 20:24:08.229034 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-jxb65" Nov 21 20:24:08 crc kubenswrapper[4727]: I1121 20:24:08.234291 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-7969689c84-4dtbh" event={"ID":"61cc6991-6299-4ec0-b51c-6130428804ac","Type":"ContainerStarted","Data":"6fc79acb5514fb4fcc067935036e9d47bf997d8314e14ce96e36dbfa071dfca5"} Nov 21 20:24:08 crc kubenswrapper[4727]: I1121 20:24:08.235070 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-rk7gg" Nov 21 20:24:08 crc kubenswrapper[4727]: I1121 20:24:08.235117 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-7969689c84-4dtbh" Nov 21 20:24:08 crc kubenswrapper[4727]: I1121 
20:24:08.248441 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-jxb65" podStartSLOduration=6.421540488 podStartE2EDuration="30.248418319s" podCreationTimestamp="2025-11-21 20:23:38 +0000 UTC" firstStartedPulling="2025-11-21 20:23:41.069831522 +0000 UTC m=+1026.256016566" lastFinishedPulling="2025-11-21 20:24:04.896709353 +0000 UTC m=+1050.082894397" observedRunningTime="2025-11-21 20:24:08.244260676 +0000 UTC m=+1053.430445730" watchObservedRunningTime="2025-11-21 20:24:08.248418319 +0000 UTC m=+1053.434603363" Nov 21 20:24:08 crc kubenswrapper[4727]: I1121 20:24:08.266842 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-7969689c84-4dtbh" podStartSLOduration=5.285059371 podStartE2EDuration="30.266821687s" podCreationTimestamp="2025-11-21 20:23:38 +0000 UTC" firstStartedPulling="2025-11-21 20:23:39.873385514 +0000 UTC m=+1025.059570558" lastFinishedPulling="2025-11-21 20:24:04.85514783 +0000 UTC m=+1050.041332874" observedRunningTime="2025-11-21 20:24:08.262357666 +0000 UTC m=+1053.448542710" watchObservedRunningTime="2025-11-21 20:24:08.266821687 +0000 UTC m=+1053.453006731" Nov 21 20:24:10 crc kubenswrapper[4727]: I1121 20:24:10.266559 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-frkcb" event={"ID":"77869999-156b-4be3-a845-46914c095836","Type":"ContainerStarted","Data":"9f12d4804fa21b0e4f63192bdd4602996f47252ea7ea0335214ed339f7ea900d"} Nov 21 20:24:10 crc kubenswrapper[4727]: I1121 20:24:10.267127 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-frkcb" Nov 21 20:24:10 crc kubenswrapper[4727]: I1121 20:24:10.269079 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/keystone-operator-controller-manager-7454b96578-tlj2c" event={"ID":"70eeb4c1-c8b3-4f4d-b4d6-4e8ddd50f2a8","Type":"ContainerStarted","Data":"5e8709e4712c9535c3f95a9427e6a441dfcbd037736459748f9a19ad0804c36d"} Nov 21 20:24:10 crc kubenswrapper[4727]: I1121 20:24:10.269424 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7454b96578-tlj2c" Nov 21 20:24:10 crc kubenswrapper[4727]: I1121 20:24:10.270132 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-frkcb" Nov 21 20:24:10 crc kubenswrapper[4727]: I1121 20:24:10.272523 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58f887965d-r4gvw" event={"ID":"4ac64528-2d7a-466d-a0f9-5433cc0a482d","Type":"ContainerStarted","Data":"ab3a278d22211fdcb7170e5958df17a42eed6577316d64c25b9db6db04571716"} Nov 21 20:24:10 crc kubenswrapper[4727]: I1121 20:24:10.272649 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7454b96578-tlj2c" Nov 21 20:24:10 crc kubenswrapper[4727]: I1121 20:24:10.272689 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-58f887965d-r4gvw" Nov 21 20:24:10 crc kubenswrapper[4727]: I1121 20:24:10.274335 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-w9lxm" event={"ID":"bb18a2fd-7fdc-4bc7-94b4-7da1dd327817","Type":"ContainerStarted","Data":"0427936770d4bd6fb3c6bf3711067d4482ba26585c7088ac8283ec041615f930"} Nov 21 20:24:10 crc kubenswrapper[4727]: I1121 20:24:10.274853 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-w9lxm" Nov 21 
20:24:10 crc kubenswrapper[4727]: I1121 20:24:10.275495 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-58f887965d-r4gvw" Nov 21 20:24:10 crc kubenswrapper[4727]: I1121 20:24:10.276117 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-56f54d6746-9g7x9" event={"ID":"c1898484-e606-4222-9cd0-221a437d815c","Type":"ContainerStarted","Data":"a479eef1858c21eb3a092a1f17e8b847f5b5e7c54231b43b27cd2ad2972fffe8"} Nov 21 20:24:10 crc kubenswrapper[4727]: I1121 20:24:10.276535 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-56f54d6746-9g7x9" Nov 21 20:24:10 crc kubenswrapper[4727]: I1121 20:24:10.280788 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-jxb65" Nov 21 20:24:10 crc kubenswrapper[4727]: I1121 20:24:10.281010 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-56f54d6746-9g7x9" Nov 21 20:24:10 crc kubenswrapper[4727]: I1121 20:24:10.285249 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-w9lxm" Nov 21 20:24:10 crc kubenswrapper[4727]: I1121 20:24:10.291899 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-frkcb" podStartSLOduration=8.473709992 podStartE2EDuration="32.29188161s" podCreationTimestamp="2025-11-21 20:23:38 +0000 UTC" firstStartedPulling="2025-11-21 20:23:41.052603371 +0000 UTC m=+1026.238788415" lastFinishedPulling="2025-11-21 20:24:04.870774989 +0000 UTC m=+1050.056960033" observedRunningTime="2025-11-21 20:24:10.286735591 +0000 UTC m=+1055.472920635" 
watchObservedRunningTime="2025-11-21 20:24:10.29188161 +0000 UTC m=+1055.478066644" Nov 21 20:24:10 crc kubenswrapper[4727]: I1121 20:24:10.311928 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7454b96578-tlj2c" podStartSLOduration=7.785819233 podStartE2EDuration="32.311912278s" podCreationTimestamp="2025-11-21 20:23:38 +0000 UTC" firstStartedPulling="2025-11-21 20:23:40.373037968 +0000 UTC m=+1025.559223012" lastFinishedPulling="2025-11-21 20:24:04.899131013 +0000 UTC m=+1050.085316057" observedRunningTime="2025-11-21 20:24:10.306345509 +0000 UTC m=+1055.492530563" watchObservedRunningTime="2025-11-21 20:24:10.311912278 +0000 UTC m=+1055.498097322" Nov 21 20:24:10 crc kubenswrapper[4727]: I1121 20:24:10.349437 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-58f887965d-r4gvw" podStartSLOduration=8.550626838 podStartE2EDuration="32.349419161s" podCreationTimestamp="2025-11-21 20:23:38 +0000 UTC" firstStartedPulling="2025-11-21 20:23:41.073150455 +0000 UTC m=+1026.259335499" lastFinishedPulling="2025-11-21 20:24:04.871942778 +0000 UTC m=+1050.058127822" observedRunningTime="2025-11-21 20:24:10.329452663 +0000 UTC m=+1055.515637707" watchObservedRunningTime="2025-11-21 20:24:10.349419161 +0000 UTC m=+1055.535604205" Nov 21 20:24:10 crc kubenswrapper[4727]: I1121 20:24:10.390375 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-56f54d6746-9g7x9" podStartSLOduration=7.8674696 podStartE2EDuration="32.390356988s" podCreationTimestamp="2025-11-21 20:23:38 +0000 UTC" firstStartedPulling="2025-11-21 20:23:40.359366296 +0000 UTC m=+1025.545551340" lastFinishedPulling="2025-11-21 20:24:04.882253684 +0000 UTC m=+1050.068438728" observedRunningTime="2025-11-21 20:24:10.347757188 +0000 UTC m=+1055.533942232" 
watchObservedRunningTime="2025-11-21 20:24:10.390356988 +0000 UTC m=+1055.576542022" Nov 21 20:24:10 crc kubenswrapper[4727]: I1121 20:24:10.394106 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-w9lxm" podStartSLOduration=7.863996075 podStartE2EDuration="32.394083941s" podCreationTimestamp="2025-11-21 20:23:38 +0000 UTC" firstStartedPulling="2025-11-21 20:23:40.365632763 +0000 UTC m=+1025.551817807" lastFinishedPulling="2025-11-21 20:24:04.895720629 +0000 UTC m=+1050.081905673" observedRunningTime="2025-11-21 20:24:10.370218578 +0000 UTC m=+1055.556403622" watchObservedRunningTime="2025-11-21 20:24:10.394083941 +0000 UTC m=+1055.580268985" Nov 21 20:24:13 crc kubenswrapper[4727]: I1121 20:24:13.298731 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-5lhbb" event={"ID":"2413dc73-9b93-4670-ae64-86d116672e3c","Type":"ContainerStarted","Data":"1053eff34201f20d764ea4e75c321658093b807307154399defc9cf242b585d6"} Nov 21 20:24:13 crc kubenswrapper[4727]: I1121 20:24:13.299212 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-5lhbb" Nov 21 20:24:13 crc kubenswrapper[4727]: I1121 20:24:13.300949 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-h792g" event={"ID":"e89e3523-4eb5-4ea3-997f-132e08f6ef4d","Type":"ContainerStarted","Data":"9fcda68d28259170ab9d081b436163b33b7d97f243bc1000f82db1ff7632aad0"} Nov 21 20:24:13 crc kubenswrapper[4727]: I1121 20:24:13.301083 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-h792g" Nov 21 20:24:13 crc kubenswrapper[4727]: I1121 20:24:13.301788 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-5lhbb" Nov 21 20:24:13 crc kubenswrapper[4727]: I1121 20:24:13.302698 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-z2t47" event={"ID":"afbeb30f-0a67-4547-85e5-76cba72ee577","Type":"ContainerStarted","Data":"a6f39812d43e9c2580289ade707b8c21e80c396271611bd55faa81ffc9852d89"} Nov 21 20:24:13 crc kubenswrapper[4727]: I1121 20:24:13.302749 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-h792g" Nov 21 20:24:13 crc kubenswrapper[4727]: I1121 20:24:13.304649 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-d656998f4-d7cz4" event={"ID":"6e0f080f-5dac-4a2b-bad9-dd3d9d8ebefa","Type":"ContainerStarted","Data":"91dc7691d8043e324ac4a508655b6380f7d66ce254f3488a2a555d0ac6456eef"} Nov 21 20:24:13 crc kubenswrapper[4727]: I1121 20:24:13.304863 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-d656998f4-d7cz4" Nov 21 20:24:13 crc kubenswrapper[4727]: I1121 20:24:13.306388 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-gkz28" event={"ID":"db9d7c73-bf27-4b19-8aa6-d0c006a74309","Type":"ContainerStarted","Data":"594b29cd640e10f190fa991d1177b1de2a5464d8dd0e3a996b45bb7f48cf45a1"} Nov 21 20:24:13 crc kubenswrapper[4727]: I1121 20:24:13.306617 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-gkz28" Nov 21 20:24:13 crc kubenswrapper[4727]: I1121 20:24:13.306706 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-d656998f4-d7cz4" Nov 21 20:24:13 crc 
kubenswrapper[4727]: I1121 20:24:13.308281 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-776668cd95-zhcl9" event={"ID":"2968b604-6efc-421c-8435-12cc7303a604","Type":"ContainerStarted","Data":"bcef7a2ca40ab9059fc112e02fbfbaef6970b442f57ebcd617b028bdc283f0c6"} Nov 21 20:24:13 crc kubenswrapper[4727]: I1121 20:24:13.308612 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-776668cd95-zhcl9" Nov 21 20:24:13 crc kubenswrapper[4727]: I1121 20:24:13.310046 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-gwm4n" event={"ID":"644c8e80-e81c-45f1-b5f8-d561cccc89cb","Type":"ContainerStarted","Data":"42d4370a788ee263e5a3cca457d6caaf802bd60ab0e201d5c47b2f26a9e53ca5"} Nov 21 20:24:13 crc kubenswrapper[4727]: I1121 20:24:13.310263 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-gwm4n" Nov 21 20:24:13 crc kubenswrapper[4727]: I1121 20:24:13.311832 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-gwm4n" Nov 21 20:24:13 crc kubenswrapper[4727]: I1121 20:24:13.312641 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-99b499f4-xm2x9" event={"ID":"ff8b3ca8-d78f-448b-ace0-37bcc5408daf","Type":"ContainerStarted","Data":"162452a82284b1ac76ffc2e35fa6a7eae460ae828ac600729d2de5a05da8f68d"} Nov 21 20:24:13 crc kubenswrapper[4727]: I1121 20:24:13.312798 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-99b499f4-xm2x9" Nov 21 20:24:13 crc kubenswrapper[4727]: I1121 20:24:13.314237 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-7j4bn" event={"ID":"ac4c617b-d373-46ed-97e7-a8b1b8dae48b","Type":"ContainerStarted","Data":"6babbebc02fb1d30803e38a51cd84e6251dfe6e4a9a519ea2264fe3400414da0"} Nov 21 20:24:13 crc kubenswrapper[4727]: I1121 20:24:13.314478 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-7j4bn" Nov 21 20:24:13 crc kubenswrapper[4727]: I1121 20:24:13.314678 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-99b499f4-xm2x9" Nov 21 20:24:13 crc kubenswrapper[4727]: I1121 20:24:13.316167 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-7j4bn" Nov 21 20:24:13 crc kubenswrapper[4727]: I1121 20:24:13.329534 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-5lhbb" podStartSLOduration=10.071214429 podStartE2EDuration="35.329513204s" podCreationTimestamp="2025-11-21 20:23:38 +0000 UTC" firstStartedPulling="2025-11-21 20:23:39.636191363 +0000 UTC m=+1024.822376407" lastFinishedPulling="2025-11-21 20:24:04.894490138 +0000 UTC m=+1050.080675182" observedRunningTime="2025-11-21 20:24:13.326865948 +0000 UTC m=+1058.513051002" watchObservedRunningTime="2025-11-21 20:24:13.329513204 +0000 UTC m=+1058.515698258" Nov 21 20:24:13 crc kubenswrapper[4727]: I1121 20:24:13.346523 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-h792g" podStartSLOduration=11.491209514 podStartE2EDuration="35.346479586s" podCreationTimestamp="2025-11-21 20:23:38 +0000 UTC" firstStartedPulling="2025-11-21 20:23:41.039142655 +0000 UTC m=+1026.225327699" lastFinishedPulling="2025-11-21 20:24:04.894412727 +0000 UTC 
m=+1050.080597771" observedRunningTime="2025-11-21 20:24:13.343390169 +0000 UTC m=+1058.529575213" watchObservedRunningTime="2025-11-21 20:24:13.346479586 +0000 UTC m=+1058.532664630" Nov 21 20:24:13 crc kubenswrapper[4727]: I1121 20:24:13.396375 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-gkz28" podStartSLOduration=4.479398501 podStartE2EDuration="35.396357397s" podCreationTimestamp="2025-11-21 20:23:38 +0000 UTC" firstStartedPulling="2025-11-21 20:23:41.309156387 +0000 UTC m=+1026.495341431" lastFinishedPulling="2025-11-21 20:24:12.226115283 +0000 UTC m=+1057.412300327" observedRunningTime="2025-11-21 20:24:13.373951859 +0000 UTC m=+1058.560136903" watchObservedRunningTime="2025-11-21 20:24:13.396357397 +0000 UTC m=+1058.582542441" Nov 21 20:24:13 crc kubenswrapper[4727]: I1121 20:24:13.432256 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-99b499f4-xm2x9" podStartSLOduration=11.582351618 podStartE2EDuration="35.432224658s" podCreationTimestamp="2025-11-21 20:23:38 +0000 UTC" firstStartedPulling="2025-11-21 20:23:41.080675304 +0000 UTC m=+1026.266860348" lastFinishedPulling="2025-11-21 20:24:04.930548314 +0000 UTC m=+1050.116733388" observedRunningTime="2025-11-21 20:24:13.395266979 +0000 UTC m=+1058.581452023" watchObservedRunningTime="2025-11-21 20:24:13.432224658 +0000 UTC m=+1058.618409702" Nov 21 20:24:13 crc kubenswrapper[4727]: I1121 20:24:13.434336 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-gwm4n" podStartSLOduration=10.175766759 podStartE2EDuration="35.434326511s" podCreationTimestamp="2025-11-21 20:23:38 +0000 UTC" firstStartedPulling="2025-11-21 20:23:39.636722656 +0000 UTC m=+1024.822907700" lastFinishedPulling="2025-11-21 20:24:04.895282408 +0000 UTC 
m=+1050.081467452" observedRunningTime="2025-11-21 20:24:13.423753427 +0000 UTC m=+1058.609938461" watchObservedRunningTime="2025-11-21 20:24:13.434326511 +0000 UTC m=+1058.620511555" Nov 21 20:24:13 crc kubenswrapper[4727]: I1121 20:24:13.447804 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-d656998f4-d7cz4" podStartSLOduration=11.611020868 podStartE2EDuration="35.447785206s" podCreationTimestamp="2025-11-21 20:23:38 +0000 UTC" firstStartedPulling="2025-11-21 20:23:41.060073818 +0000 UTC m=+1026.246258862" lastFinishedPulling="2025-11-21 20:24:04.896838156 +0000 UTC m=+1050.083023200" observedRunningTime="2025-11-21 20:24:13.441576731 +0000 UTC m=+1058.627761775" watchObservedRunningTime="2025-11-21 20:24:13.447785206 +0000 UTC m=+1058.633970260" Nov 21 20:24:13 crc kubenswrapper[4727]: I1121 20:24:13.478385 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-776668cd95-zhcl9" podStartSLOduration=4.667570565 podStartE2EDuration="35.478365456s" podCreationTimestamp="2025-11-21 20:23:38 +0000 UTC" firstStartedPulling="2025-11-21 20:23:41.341218069 +0000 UTC m=+1026.527403123" lastFinishedPulling="2025-11-21 20:24:12.15201297 +0000 UTC m=+1057.338198014" observedRunningTime="2025-11-21 20:24:13.472330076 +0000 UTC m=+1058.658515120" watchObservedRunningTime="2025-11-21 20:24:13.478365456 +0000 UTC m=+1058.664550500" Nov 21 20:24:13 crc kubenswrapper[4727]: I1121 20:24:13.522185 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-7j4bn" podStartSLOduration=11.945878508 podStartE2EDuration="35.522169136s" podCreationTimestamp="2025-11-21 20:23:38 +0000 UTC" firstStartedPulling="2025-11-21 20:23:41.294895901 +0000 UTC m=+1026.481080945" lastFinishedPulling="2025-11-21 20:24:04.871186529 +0000 UTC m=+1050.057371573" 
observedRunningTime="2025-11-21 20:24:13.520646278 +0000 UTC m=+1058.706831322" watchObservedRunningTime="2025-11-21 20:24:13.522169136 +0000 UTC m=+1058.708354180" Nov 21 20:24:13 crc kubenswrapper[4727]: I1121 20:24:13.574843 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-z2t47" podStartSLOduration=3.692633589 podStartE2EDuration="34.574821134s" podCreationTimestamp="2025-11-21 20:23:39 +0000 UTC" firstStartedPulling="2025-11-21 20:23:41.351179728 +0000 UTC m=+1026.537364772" lastFinishedPulling="2025-11-21 20:24:12.233367273 +0000 UTC m=+1057.419552317" observedRunningTime="2025-11-21 20:24:13.55733375 +0000 UTC m=+1058.743518794" watchObservedRunningTime="2025-11-21 20:24:13.574821134 +0000 UTC m=+1058.761006178" Nov 21 20:24:18 crc kubenswrapper[4727]: I1121 20:24:18.636279 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-bcf5p" Nov 21 20:24:18 crc kubenswrapper[4727]: I1121 20:24:18.783384 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-7969689c84-4dtbh" Nov 21 20:24:18 crc kubenswrapper[4727]: I1121 20:24:18.846977 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-rk7gg" Nov 21 20:24:19 crc kubenswrapper[4727]: I1121 20:24:19.509485 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-776668cd95-zhcl9" Nov 21 20:24:19 crc kubenswrapper[4727]: I1121 20:24:19.637556 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-gkz28" Nov 21 20:24:20 crc kubenswrapper[4727]: I1121 20:24:20.369600 4727 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-b4c496f69-bn475" event={"ID":"7090ae5b-e03d-470b-b87e-31623d9916b7","Type":"ContainerStarted","Data":"450a60a06f3f8757014af1c41d0601b1a42d74fc71eccb805e6217ddb9a22477"} Nov 21 20:24:20 crc kubenswrapper[4727]: I1121 20:24:20.370094 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-b4c496f69-bn475" Nov 21 20:24:20 crc kubenswrapper[4727]: I1121 20:24:20.382101 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-b4c496f69-bn475" podStartSLOduration=3.507923279 podStartE2EDuration="42.382085469s" podCreationTimestamp="2025-11-21 20:23:38 +0000 UTC" firstStartedPulling="2025-11-21 20:23:41.084016207 +0000 UTC m=+1026.270201251" lastFinishedPulling="2025-11-21 20:24:19.958178397 +0000 UTC m=+1065.144363441" observedRunningTime="2025-11-21 20:24:20.381369032 +0000 UTC m=+1065.567554086" watchObservedRunningTime="2025-11-21 20:24:20.382085469 +0000 UTC m=+1065.568270513" Nov 21 20:24:21 crc kubenswrapper[4727]: I1121 20:24:21.384488 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-57mrl" event={"ID":"8cbf6c6b-39fa-42bd-a722-0145c725d4cf","Type":"ContainerStarted","Data":"57d90954f9e185e8ff8799c5e53a65bd98b43985ebcf4ff08b506f03588a34ec"} Nov 21 20:24:21 crc kubenswrapper[4727]: I1121 20:24:21.385649 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-57mrl" Nov 21 20:24:21 crc kubenswrapper[4727]: I1121 20:24:21.426447 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-57mrl" podStartSLOduration=3.741540755 podStartE2EDuration="43.426426182s" 
podCreationTimestamp="2025-11-21 20:23:38 +0000 UTC" firstStartedPulling="2025-11-21 20:23:41.488572333 +0000 UTC m=+1026.674757377" lastFinishedPulling="2025-11-21 20:24:21.17345776 +0000 UTC m=+1066.359642804" observedRunningTime="2025-11-21 20:24:21.419761555 +0000 UTC m=+1066.605946619" watchObservedRunningTime="2025-11-21 20:24:21.426426182 +0000 UTC m=+1066.612611226" Nov 21 20:24:23 crc kubenswrapper[4727]: I1121 20:24:23.404276 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-dhl6q" event={"ID":"9af4ba92-9c28-4836-94ab-bf82bbf14047","Type":"ContainerStarted","Data":"5e8eeba59259097401839e1d8b35ddb2d123046a53e6f6e4822ae26d292a30b9"} Nov 21 20:24:23 crc kubenswrapper[4727]: I1121 20:24:23.404804 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-dhl6q" Nov 21 20:24:23 crc kubenswrapper[4727]: I1121 20:24:23.407423 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-649jq" event={"ID":"8a5a299a-95be-4bfa-b3f4-c5a8bf71667a","Type":"ContainerStarted","Data":"042a9f4429f0d1912ef2f34678e29e237c4ecd7388e527de10af2ae441443f8f"} Nov 21 20:24:23 crc kubenswrapper[4727]: I1121 20:24:23.407619 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-649jq" Nov 21 20:24:23 crc kubenswrapper[4727]: I1121 20:24:23.426338 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-dhl6q" podStartSLOduration=3.332053923 podStartE2EDuration="45.426320068s" podCreationTimestamp="2025-11-21 20:23:38 +0000 UTC" firstStartedPulling="2025-11-21 20:23:41.072057137 +0000 UTC m=+1026.258242181" lastFinishedPulling="2025-11-21 20:24:23.166323282 +0000 UTC m=+1068.352508326" 
observedRunningTime="2025-11-21 20:24:23.422509553 +0000 UTC m=+1068.608694597" watchObservedRunningTime="2025-11-21 20:24:23.426320068 +0000 UTC m=+1068.612505112" Nov 21 20:24:23 crc kubenswrapper[4727]: I1121 20:24:23.445545 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-649jq" podStartSLOduration=3.710413323 podStartE2EDuration="45.445522785s" podCreationTimestamp="2025-11-21 20:23:38 +0000 UTC" firstStartedPulling="2025-11-21 20:23:41.294323836 +0000 UTC m=+1026.480508880" lastFinishedPulling="2025-11-21 20:24:23.029433288 +0000 UTC m=+1068.215618342" observedRunningTime="2025-11-21 20:24:23.440811439 +0000 UTC m=+1068.626996483" watchObservedRunningTime="2025-11-21 20:24:23.445522785 +0000 UTC m=+1068.631707829" Nov 21 20:24:29 crc kubenswrapper[4727]: I1121 20:24:29.272231 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-649jq" Nov 21 20:24:29 crc kubenswrapper[4727]: I1121 20:24:29.408076 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-dhl6q" Nov 21 20:24:29 crc kubenswrapper[4727]: I1121 20:24:29.599037 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-b4c496f69-bn475" Nov 21 20:24:30 crc kubenswrapper[4727]: I1121 20:24:30.841777 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-57mrl" Nov 21 20:24:48 crc kubenswrapper[4727]: I1121 20:24:48.172904 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-hv2qp"] Nov 21 20:24:48 crc kubenswrapper[4727]: I1121 20:24:48.174816 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-hv2qp" Nov 21 20:24:48 crc kubenswrapper[4727]: I1121 20:24:48.178828 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Nov 21 20:24:48 crc kubenswrapper[4727]: I1121 20:24:48.179024 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-879kz" Nov 21 20:24:48 crc kubenswrapper[4727]: I1121 20:24:48.179412 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Nov 21 20:24:48 crc kubenswrapper[4727]: I1121 20:24:48.179778 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Nov 21 20:24:48 crc kubenswrapper[4727]: I1121 20:24:48.185550 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-hv2qp"] Nov 21 20:24:48 crc kubenswrapper[4727]: I1121 20:24:48.243469 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-tc4jf"] Nov 21 20:24:48 crc kubenswrapper[4727]: I1121 20:24:48.245063 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-tc4jf" Nov 21 20:24:48 crc kubenswrapper[4727]: I1121 20:24:48.248224 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Nov 21 20:24:48 crc kubenswrapper[4727]: I1121 20:24:48.263630 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-tc4jf"] Nov 21 20:24:48 crc kubenswrapper[4727]: I1121 20:24:48.279435 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc04b326-2fd8-4935-bffc-b7ac77a9541b-config\") pod \"dnsmasq-dns-78dd6ddcc-tc4jf\" (UID: \"fc04b326-2fd8-4935-bffc-b7ac77a9541b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-tc4jf" Nov 21 20:24:48 crc kubenswrapper[4727]: I1121 20:24:48.279736 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tlqf\" (UniqueName: \"kubernetes.io/projected/ab4ee6b9-7913-47e4-b7ba-ac8925ed8715-kube-api-access-2tlqf\") pod \"dnsmasq-dns-675f4bcbfc-hv2qp\" (UID: \"ab4ee6b9-7913-47e4-b7ba-ac8925ed8715\") " pod="openstack/dnsmasq-dns-675f4bcbfc-hv2qp" Nov 21 20:24:48 crc kubenswrapper[4727]: I1121 20:24:48.279888 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab4ee6b9-7913-47e4-b7ba-ac8925ed8715-config\") pod \"dnsmasq-dns-675f4bcbfc-hv2qp\" (UID: \"ab4ee6b9-7913-47e4-b7ba-ac8925ed8715\") " pod="openstack/dnsmasq-dns-675f4bcbfc-hv2qp" Nov 21 20:24:48 crc kubenswrapper[4727]: I1121 20:24:48.280159 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcmzs\" (UniqueName: \"kubernetes.io/projected/fc04b326-2fd8-4935-bffc-b7ac77a9541b-kube-api-access-hcmzs\") pod \"dnsmasq-dns-78dd6ddcc-tc4jf\" (UID: \"fc04b326-2fd8-4935-bffc-b7ac77a9541b\") " 
pod="openstack/dnsmasq-dns-78dd6ddcc-tc4jf" Nov 21 20:24:48 crc kubenswrapper[4727]: I1121 20:24:48.280304 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc04b326-2fd8-4935-bffc-b7ac77a9541b-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-tc4jf\" (UID: \"fc04b326-2fd8-4935-bffc-b7ac77a9541b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-tc4jf" Nov 21 20:24:48 crc kubenswrapper[4727]: I1121 20:24:48.381730 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc04b326-2fd8-4935-bffc-b7ac77a9541b-config\") pod \"dnsmasq-dns-78dd6ddcc-tc4jf\" (UID: \"fc04b326-2fd8-4935-bffc-b7ac77a9541b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-tc4jf" Nov 21 20:24:48 crc kubenswrapper[4727]: I1121 20:24:48.381785 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tlqf\" (UniqueName: \"kubernetes.io/projected/ab4ee6b9-7913-47e4-b7ba-ac8925ed8715-kube-api-access-2tlqf\") pod \"dnsmasq-dns-675f4bcbfc-hv2qp\" (UID: \"ab4ee6b9-7913-47e4-b7ba-ac8925ed8715\") " pod="openstack/dnsmasq-dns-675f4bcbfc-hv2qp" Nov 21 20:24:48 crc kubenswrapper[4727]: I1121 20:24:48.381810 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab4ee6b9-7913-47e4-b7ba-ac8925ed8715-config\") pod \"dnsmasq-dns-675f4bcbfc-hv2qp\" (UID: \"ab4ee6b9-7913-47e4-b7ba-ac8925ed8715\") " pod="openstack/dnsmasq-dns-675f4bcbfc-hv2qp" Nov 21 20:24:48 crc kubenswrapper[4727]: I1121 20:24:48.381857 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcmzs\" (UniqueName: \"kubernetes.io/projected/fc04b326-2fd8-4935-bffc-b7ac77a9541b-kube-api-access-hcmzs\") pod \"dnsmasq-dns-78dd6ddcc-tc4jf\" (UID: \"fc04b326-2fd8-4935-bffc-b7ac77a9541b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-tc4jf" Nov 21 20:24:48 
crc kubenswrapper[4727]: I1121 20:24:48.381879 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc04b326-2fd8-4935-bffc-b7ac77a9541b-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-tc4jf\" (UID: \"fc04b326-2fd8-4935-bffc-b7ac77a9541b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-tc4jf" Nov 21 20:24:48 crc kubenswrapper[4727]: I1121 20:24:48.382757 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc04b326-2fd8-4935-bffc-b7ac77a9541b-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-tc4jf\" (UID: \"fc04b326-2fd8-4935-bffc-b7ac77a9541b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-tc4jf" Nov 21 20:24:48 crc kubenswrapper[4727]: I1121 20:24:48.382856 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc04b326-2fd8-4935-bffc-b7ac77a9541b-config\") pod \"dnsmasq-dns-78dd6ddcc-tc4jf\" (UID: \"fc04b326-2fd8-4935-bffc-b7ac77a9541b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-tc4jf" Nov 21 20:24:48 crc kubenswrapper[4727]: I1121 20:24:48.383059 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab4ee6b9-7913-47e4-b7ba-ac8925ed8715-config\") pod \"dnsmasq-dns-675f4bcbfc-hv2qp\" (UID: \"ab4ee6b9-7913-47e4-b7ba-ac8925ed8715\") " pod="openstack/dnsmasq-dns-675f4bcbfc-hv2qp" Nov 21 20:24:48 crc kubenswrapper[4727]: I1121 20:24:48.403811 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcmzs\" (UniqueName: \"kubernetes.io/projected/fc04b326-2fd8-4935-bffc-b7ac77a9541b-kube-api-access-hcmzs\") pod \"dnsmasq-dns-78dd6ddcc-tc4jf\" (UID: \"fc04b326-2fd8-4935-bffc-b7ac77a9541b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-tc4jf" Nov 21 20:24:48 crc kubenswrapper[4727]: I1121 20:24:48.421876 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-2tlqf\" (UniqueName: \"kubernetes.io/projected/ab4ee6b9-7913-47e4-b7ba-ac8925ed8715-kube-api-access-2tlqf\") pod \"dnsmasq-dns-675f4bcbfc-hv2qp\" (UID: \"ab4ee6b9-7913-47e4-b7ba-ac8925ed8715\") " pod="openstack/dnsmasq-dns-675f4bcbfc-hv2qp" Nov 21 20:24:48 crc kubenswrapper[4727]: I1121 20:24:48.498140 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-hv2qp" Nov 21 20:24:48 crc kubenswrapper[4727]: I1121 20:24:48.569020 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-tc4jf" Nov 21 20:24:48 crc kubenswrapper[4727]: I1121 20:24:48.954232 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-hv2qp"] Nov 21 20:24:48 crc kubenswrapper[4727]: W1121 20:24:48.958326 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab4ee6b9_7913_47e4_b7ba_ac8925ed8715.slice/crio-701698034c82f2d902b03050799efe0cc88a0b9910b8f144fb49e7d8ae80eebf WatchSource:0}: Error finding container 701698034c82f2d902b03050799efe0cc88a0b9910b8f144fb49e7d8ae80eebf: Status 404 returned error can't find the container with id 701698034c82f2d902b03050799efe0cc88a0b9910b8f144fb49e7d8ae80eebf Nov 21 20:24:49 crc kubenswrapper[4727]: I1121 20:24:49.092682 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-tc4jf"] Nov 21 20:24:49 crc kubenswrapper[4727]: W1121 20:24:49.100478 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc04b326_2fd8_4935_bffc_b7ac77a9541b.slice/crio-c2e27037af1a3a0b431e34a99203922efcb2e84f70804917e671ee9c82757122 WatchSource:0}: Error finding container c2e27037af1a3a0b431e34a99203922efcb2e84f70804917e671ee9c82757122: Status 404 returned error can't find the container with id 
c2e27037af1a3a0b431e34a99203922efcb2e84f70804917e671ee9c82757122 Nov 21 20:24:49 crc kubenswrapper[4727]: I1121 20:24:49.634624 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-tc4jf" event={"ID":"fc04b326-2fd8-4935-bffc-b7ac77a9541b","Type":"ContainerStarted","Data":"c2e27037af1a3a0b431e34a99203922efcb2e84f70804917e671ee9c82757122"} Nov 21 20:24:49 crc kubenswrapper[4727]: I1121 20:24:49.636860 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-hv2qp" event={"ID":"ab4ee6b9-7913-47e4-b7ba-ac8925ed8715","Type":"ContainerStarted","Data":"701698034c82f2d902b03050799efe0cc88a0b9910b8f144fb49e7d8ae80eebf"} Nov 21 20:24:51 crc kubenswrapper[4727]: I1121 20:24:51.217058 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-hv2qp"] Nov 21 20:24:51 crc kubenswrapper[4727]: I1121 20:24:51.243135 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-wxjmx"] Nov 21 20:24:51 crc kubenswrapper[4727]: I1121 20:24:51.245138 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-wxjmx" Nov 21 20:24:51 crc kubenswrapper[4727]: I1121 20:24:51.266768 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-wxjmx"] Nov 21 20:24:51 crc kubenswrapper[4727]: I1121 20:24:51.333316 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxfx6\" (UniqueName: \"kubernetes.io/projected/37e9af95-0497-4e78-94f6-eb83cd6a6e80-kube-api-access-mxfx6\") pod \"dnsmasq-dns-666b6646f7-wxjmx\" (UID: \"37e9af95-0497-4e78-94f6-eb83cd6a6e80\") " pod="openstack/dnsmasq-dns-666b6646f7-wxjmx" Nov 21 20:24:51 crc kubenswrapper[4727]: I1121 20:24:51.333381 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37e9af95-0497-4e78-94f6-eb83cd6a6e80-dns-svc\") pod \"dnsmasq-dns-666b6646f7-wxjmx\" (UID: \"37e9af95-0497-4e78-94f6-eb83cd6a6e80\") " pod="openstack/dnsmasq-dns-666b6646f7-wxjmx" Nov 21 20:24:51 crc kubenswrapper[4727]: I1121 20:24:51.333462 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37e9af95-0497-4e78-94f6-eb83cd6a6e80-config\") pod \"dnsmasq-dns-666b6646f7-wxjmx\" (UID: \"37e9af95-0497-4e78-94f6-eb83cd6a6e80\") " pod="openstack/dnsmasq-dns-666b6646f7-wxjmx" Nov 21 20:24:51 crc kubenswrapper[4727]: I1121 20:24:51.436707 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxfx6\" (UniqueName: \"kubernetes.io/projected/37e9af95-0497-4e78-94f6-eb83cd6a6e80-kube-api-access-mxfx6\") pod \"dnsmasq-dns-666b6646f7-wxjmx\" (UID: \"37e9af95-0497-4e78-94f6-eb83cd6a6e80\") " pod="openstack/dnsmasq-dns-666b6646f7-wxjmx" Nov 21 20:24:51 crc kubenswrapper[4727]: I1121 20:24:51.436764 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37e9af95-0497-4e78-94f6-eb83cd6a6e80-dns-svc\") pod \"dnsmasq-dns-666b6646f7-wxjmx\" (UID: \"37e9af95-0497-4e78-94f6-eb83cd6a6e80\") " pod="openstack/dnsmasq-dns-666b6646f7-wxjmx" Nov 21 20:24:51 crc kubenswrapper[4727]: I1121 20:24:51.436797 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37e9af95-0497-4e78-94f6-eb83cd6a6e80-config\") pod \"dnsmasq-dns-666b6646f7-wxjmx\" (UID: \"37e9af95-0497-4e78-94f6-eb83cd6a6e80\") " pod="openstack/dnsmasq-dns-666b6646f7-wxjmx" Nov 21 20:24:51 crc kubenswrapper[4727]: I1121 20:24:51.438270 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37e9af95-0497-4e78-94f6-eb83cd6a6e80-dns-svc\") pod \"dnsmasq-dns-666b6646f7-wxjmx\" (UID: \"37e9af95-0497-4e78-94f6-eb83cd6a6e80\") " pod="openstack/dnsmasq-dns-666b6646f7-wxjmx" Nov 21 20:24:51 crc kubenswrapper[4727]: I1121 20:24:51.450462 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37e9af95-0497-4e78-94f6-eb83cd6a6e80-config\") pod \"dnsmasq-dns-666b6646f7-wxjmx\" (UID: \"37e9af95-0497-4e78-94f6-eb83cd6a6e80\") " pod="openstack/dnsmasq-dns-666b6646f7-wxjmx" Nov 21 20:24:51 crc kubenswrapper[4727]: I1121 20:24:51.489357 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxfx6\" (UniqueName: \"kubernetes.io/projected/37e9af95-0497-4e78-94f6-eb83cd6a6e80-kube-api-access-mxfx6\") pod \"dnsmasq-dns-666b6646f7-wxjmx\" (UID: \"37e9af95-0497-4e78-94f6-eb83cd6a6e80\") " pod="openstack/dnsmasq-dns-666b6646f7-wxjmx" Nov 21 20:24:51 crc kubenswrapper[4727]: I1121 20:24:51.490584 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-tc4jf"] Nov 21 20:24:51 crc kubenswrapper[4727]: I1121 20:24:51.527526 4727 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-gs87m"] Nov 21 20:24:51 crc kubenswrapper[4727]: I1121 20:24:51.529204 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-gs87m" Nov 21 20:24:51 crc kubenswrapper[4727]: I1121 20:24:51.537509 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2w97\" (UniqueName: \"kubernetes.io/projected/14bdeda2-a618-4f47-ae48-87a7c611401e-kube-api-access-g2w97\") pod \"dnsmasq-dns-57d769cc4f-gs87m\" (UID: \"14bdeda2-a618-4f47-ae48-87a7c611401e\") " pod="openstack/dnsmasq-dns-57d769cc4f-gs87m" Nov 21 20:24:51 crc kubenswrapper[4727]: I1121 20:24:51.537616 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/14bdeda2-a618-4f47-ae48-87a7c611401e-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-gs87m\" (UID: \"14bdeda2-a618-4f47-ae48-87a7c611401e\") " pod="openstack/dnsmasq-dns-57d769cc4f-gs87m" Nov 21 20:24:51 crc kubenswrapper[4727]: I1121 20:24:51.537657 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14bdeda2-a618-4f47-ae48-87a7c611401e-config\") pod \"dnsmasq-dns-57d769cc4f-gs87m\" (UID: \"14bdeda2-a618-4f47-ae48-87a7c611401e\") " pod="openstack/dnsmasq-dns-57d769cc4f-gs87m" Nov 21 20:24:51 crc kubenswrapper[4727]: I1121 20:24:51.543327 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-gs87m"] Nov 21 20:24:51 crc kubenswrapper[4727]: I1121 20:24:51.576064 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-wxjmx" Nov 21 20:24:51 crc kubenswrapper[4727]: I1121 20:24:51.641343 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2w97\" (UniqueName: \"kubernetes.io/projected/14bdeda2-a618-4f47-ae48-87a7c611401e-kube-api-access-g2w97\") pod \"dnsmasq-dns-57d769cc4f-gs87m\" (UID: \"14bdeda2-a618-4f47-ae48-87a7c611401e\") " pod="openstack/dnsmasq-dns-57d769cc4f-gs87m" Nov 21 20:24:51 crc kubenswrapper[4727]: I1121 20:24:51.641441 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/14bdeda2-a618-4f47-ae48-87a7c611401e-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-gs87m\" (UID: \"14bdeda2-a618-4f47-ae48-87a7c611401e\") " pod="openstack/dnsmasq-dns-57d769cc4f-gs87m" Nov 21 20:24:51 crc kubenswrapper[4727]: I1121 20:24:51.641483 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14bdeda2-a618-4f47-ae48-87a7c611401e-config\") pod \"dnsmasq-dns-57d769cc4f-gs87m\" (UID: \"14bdeda2-a618-4f47-ae48-87a7c611401e\") " pod="openstack/dnsmasq-dns-57d769cc4f-gs87m" Nov 21 20:24:51 crc kubenswrapper[4727]: I1121 20:24:51.642193 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14bdeda2-a618-4f47-ae48-87a7c611401e-config\") pod \"dnsmasq-dns-57d769cc4f-gs87m\" (UID: \"14bdeda2-a618-4f47-ae48-87a7c611401e\") " pod="openstack/dnsmasq-dns-57d769cc4f-gs87m" Nov 21 20:24:51 crc kubenswrapper[4727]: I1121 20:24:51.642687 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/14bdeda2-a618-4f47-ae48-87a7c611401e-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-gs87m\" (UID: \"14bdeda2-a618-4f47-ae48-87a7c611401e\") " pod="openstack/dnsmasq-dns-57d769cc4f-gs87m" Nov 21 20:24:51 crc kubenswrapper[4727]: I1121 
20:24:51.704885 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2w97\" (UniqueName: \"kubernetes.io/projected/14bdeda2-a618-4f47-ae48-87a7c611401e-kube-api-access-g2w97\") pod \"dnsmasq-dns-57d769cc4f-gs87m\" (UID: \"14bdeda2-a618-4f47-ae48-87a7c611401e\") " pod="openstack/dnsmasq-dns-57d769cc4f-gs87m" Nov 21 20:24:51 crc kubenswrapper[4727]: I1121 20:24:51.861516 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-gs87m" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.366055 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.368808 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.372033 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.372207 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.372243 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.372243 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.372328 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.372329 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.372888 4727 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"rabbitmq-server-dockercfg-c55kb" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.375609 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.560663 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/377d8548-a458-47c0-bd02-9904c8110d40-pod-info\") pod \"rabbitmq-server-0\" (UID: \"377d8548-a458-47c0-bd02-9904c8110d40\") " pod="openstack/rabbitmq-server-0" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.560704 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/377d8548-a458-47c0-bd02-9904c8110d40-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"377d8548-a458-47c0-bd02-9904c8110d40\") " pod="openstack/rabbitmq-server-0" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.560858 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/377d8548-a458-47c0-bd02-9904c8110d40-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"377d8548-a458-47c0-bd02-9904c8110d40\") " pod="openstack/rabbitmq-server-0" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.560991 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2hgc\" (UniqueName: \"kubernetes.io/projected/377d8548-a458-47c0-bd02-9904c8110d40-kube-api-access-g2hgc\") pod \"rabbitmq-server-0\" (UID: \"377d8548-a458-47c0-bd02-9904c8110d40\") " pod="openstack/rabbitmq-server-0" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.561189 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/377d8548-a458-47c0-bd02-9904c8110d40-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"377d8548-a458-47c0-bd02-9904c8110d40\") " pod="openstack/rabbitmq-server-0" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.561267 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/377d8548-a458-47c0-bd02-9904c8110d40-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"377d8548-a458-47c0-bd02-9904c8110d40\") " pod="openstack/rabbitmq-server-0" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.561363 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/377d8548-a458-47c0-bd02-9904c8110d40-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"377d8548-a458-47c0-bd02-9904c8110d40\") " pod="openstack/rabbitmq-server-0" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.561393 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/377d8548-a458-47c0-bd02-9904c8110d40-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"377d8548-a458-47c0-bd02-9904c8110d40\") " pod="openstack/rabbitmq-server-0" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.561449 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"377d8548-a458-47c0-bd02-9904c8110d40\") " pod="openstack/rabbitmq-server-0" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.561475 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/377d8548-a458-47c0-bd02-9904c8110d40-server-conf\") pod \"rabbitmq-server-0\" (UID: \"377d8548-a458-47c0-bd02-9904c8110d40\") " pod="openstack/rabbitmq-server-0" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.561506 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/377d8548-a458-47c0-bd02-9904c8110d40-config-data\") pod \"rabbitmq-server-0\" (UID: \"377d8548-a458-47c0-bd02-9904c8110d40\") " pod="openstack/rabbitmq-server-0" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.663478 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/377d8548-a458-47c0-bd02-9904c8110d40-pod-info\") pod \"rabbitmq-server-0\" (UID: \"377d8548-a458-47c0-bd02-9904c8110d40\") " pod="openstack/rabbitmq-server-0" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.663537 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/377d8548-a458-47c0-bd02-9904c8110d40-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"377d8548-a458-47c0-bd02-9904c8110d40\") " pod="openstack/rabbitmq-server-0" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.663587 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/377d8548-a458-47c0-bd02-9904c8110d40-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"377d8548-a458-47c0-bd02-9904c8110d40\") " pod="openstack/rabbitmq-server-0" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.663631 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2hgc\" (UniqueName: \"kubernetes.io/projected/377d8548-a458-47c0-bd02-9904c8110d40-kube-api-access-g2hgc\") pod \"rabbitmq-server-0\" (UID: \"377d8548-a458-47c0-bd02-9904c8110d40\") " 
pod="openstack/rabbitmq-server-0" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.663743 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/377d8548-a458-47c0-bd02-9904c8110d40-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"377d8548-a458-47c0-bd02-9904c8110d40\") " pod="openstack/rabbitmq-server-0" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.663778 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/377d8548-a458-47c0-bd02-9904c8110d40-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"377d8548-a458-47c0-bd02-9904c8110d40\") " pod="openstack/rabbitmq-server-0" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.663821 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/377d8548-a458-47c0-bd02-9904c8110d40-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"377d8548-a458-47c0-bd02-9904c8110d40\") " pod="openstack/rabbitmq-server-0" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.663843 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/377d8548-a458-47c0-bd02-9904c8110d40-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"377d8548-a458-47c0-bd02-9904c8110d40\") " pod="openstack/rabbitmq-server-0" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.663869 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"377d8548-a458-47c0-bd02-9904c8110d40\") " pod="openstack/rabbitmq-server-0" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.663885 4727 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/377d8548-a458-47c0-bd02-9904c8110d40-server-conf\") pod \"rabbitmq-server-0\" (UID: \"377d8548-a458-47c0-bd02-9904c8110d40\") " pod="openstack/rabbitmq-server-0" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.663903 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/377d8548-a458-47c0-bd02-9904c8110d40-config-data\") pod \"rabbitmq-server-0\" (UID: \"377d8548-a458-47c0-bd02-9904c8110d40\") " pod="openstack/rabbitmq-server-0" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.665391 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/377d8548-a458-47c0-bd02-9904c8110d40-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"377d8548-a458-47c0-bd02-9904c8110d40\") " pod="openstack/rabbitmq-server-0" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.667413 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/377d8548-a458-47c0-bd02-9904c8110d40-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"377d8548-a458-47c0-bd02-9904c8110d40\") " pod="openstack/rabbitmq-server-0" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.667891 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/377d8548-a458-47c0-bd02-9904c8110d40-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"377d8548-a458-47c0-bd02-9904c8110d40\") " pod="openstack/rabbitmq-server-0" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.668308 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"377d8548-a458-47c0-bd02-9904c8110d40\") 
device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-server-0" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.668561 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/377d8548-a458-47c0-bd02-9904c8110d40-server-conf\") pod \"rabbitmq-server-0\" (UID: \"377d8548-a458-47c0-bd02-9904c8110d40\") " pod="openstack/rabbitmq-server-0" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.668573 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/377d8548-a458-47c0-bd02-9904c8110d40-config-data\") pod \"rabbitmq-server-0\" (UID: \"377d8548-a458-47c0-bd02-9904c8110d40\") " pod="openstack/rabbitmq-server-0" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.669207 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/377d8548-a458-47c0-bd02-9904c8110d40-pod-info\") pod \"rabbitmq-server-0\" (UID: \"377d8548-a458-47c0-bd02-9904c8110d40\") " pod="openstack/rabbitmq-server-0" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.673305 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/377d8548-a458-47c0-bd02-9904c8110d40-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"377d8548-a458-47c0-bd02-9904c8110d40\") " pod="openstack/rabbitmq-server-0" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.683867 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/377d8548-a458-47c0-bd02-9904c8110d40-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"377d8548-a458-47c0-bd02-9904c8110d40\") " pod="openstack/rabbitmq-server-0" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.684647 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/377d8548-a458-47c0-bd02-9904c8110d40-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"377d8548-a458-47c0-bd02-9904c8110d40\") " pod="openstack/rabbitmq-server-0" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.690692 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2hgc\" (UniqueName: \"kubernetes.io/projected/377d8548-a458-47c0-bd02-9904c8110d40-kube-api-access-g2hgc\") pod \"rabbitmq-server-0\" (UID: \"377d8548-a458-47c0-bd02-9904c8110d40\") " pod="openstack/rabbitmq-server-0" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.696590 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"377d8548-a458-47c0-bd02-9904c8110d40\") " pod="openstack/rabbitmq-server-0" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.716016 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.753977 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.758820 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.759075 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.759447 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.759582 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.759726 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.759913 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.760106 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-6ht8p" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.773497 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.866879 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7fdf0962-de4d-4f58-87d3-a6458e4ff980-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"7fdf0962-de4d-4f58-87d3-a6458e4ff980\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.866937 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7fdf0962-de4d-4f58-87d3-a6458e4ff980-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"7fdf0962-de4d-4f58-87d3-a6458e4ff980\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.866972 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7fdf0962-de4d-4f58-87d3-a6458e4ff980-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"7fdf0962-de4d-4f58-87d3-a6458e4ff980\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.866997 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7fdf0962-de4d-4f58-87d3-a6458e4ff980-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"7fdf0962-de4d-4f58-87d3-a6458e4ff980\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.867016 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7fdf0962-de4d-4f58-87d3-a6458e4ff980-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"7fdf0962-de4d-4f58-87d3-a6458e4ff980\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.867042 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn55s\" (UniqueName: \"kubernetes.io/projected/7fdf0962-de4d-4f58-87d3-a6458e4ff980-kube-api-access-vn55s\") pod \"rabbitmq-cell1-server-0\" (UID: \"7fdf0962-de4d-4f58-87d3-a6458e4ff980\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.867083 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7fdf0962-de4d-4f58-87d3-a6458e4ff980-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"7fdf0962-de4d-4f58-87d3-a6458e4ff980\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.867107 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"7fdf0962-de4d-4f58-87d3-a6458e4ff980\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.867254 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7fdf0962-de4d-4f58-87d3-a6458e4ff980-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"7fdf0962-de4d-4f58-87d3-a6458e4ff980\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.867273 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7fdf0962-de4d-4f58-87d3-a6458e4ff980-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7fdf0962-de4d-4f58-87d3-a6458e4ff980\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.867332 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7fdf0962-de4d-4f58-87d3-a6458e4ff980-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7fdf0962-de4d-4f58-87d3-a6458e4ff980\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.968761 4727 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7fdf0962-de4d-4f58-87d3-a6458e4ff980-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7fdf0962-de4d-4f58-87d3-a6458e4ff980\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.969124 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7fdf0962-de4d-4f58-87d3-a6458e4ff980-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"7fdf0962-de4d-4f58-87d3-a6458e4ff980\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.969157 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7fdf0962-de4d-4f58-87d3-a6458e4ff980-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"7fdf0962-de4d-4f58-87d3-a6458e4ff980\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.969179 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7fdf0962-de4d-4f58-87d3-a6458e4ff980-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"7fdf0962-de4d-4f58-87d3-a6458e4ff980\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.969206 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7fdf0962-de4d-4f58-87d3-a6458e4ff980-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"7fdf0962-de4d-4f58-87d3-a6458e4ff980\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.969232 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/7fdf0962-de4d-4f58-87d3-a6458e4ff980-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"7fdf0962-de4d-4f58-87d3-a6458e4ff980\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.969262 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vn55s\" (UniqueName: \"kubernetes.io/projected/7fdf0962-de4d-4f58-87d3-a6458e4ff980-kube-api-access-vn55s\") pod \"rabbitmq-cell1-server-0\" (UID: \"7fdf0962-de4d-4f58-87d3-a6458e4ff980\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.969319 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7fdf0962-de4d-4f58-87d3-a6458e4ff980-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"7fdf0962-de4d-4f58-87d3-a6458e4ff980\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.969350 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"7fdf0962-de4d-4f58-87d3-a6458e4ff980\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.969380 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7fdf0962-de4d-4f58-87d3-a6458e4ff980-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"7fdf0962-de4d-4f58-87d3-a6458e4ff980\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.969401 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7fdf0962-de4d-4f58-87d3-a6458e4ff980-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"7fdf0962-de4d-4f58-87d3-a6458e4ff980\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.969822 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7fdf0962-de4d-4f58-87d3-a6458e4ff980-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7fdf0962-de4d-4f58-87d3-a6458e4ff980\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.969836 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7fdf0962-de4d-4f58-87d3-a6458e4ff980-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"7fdf0962-de4d-4f58-87d3-a6458e4ff980\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.970435 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"7fdf0962-de4d-4f58-87d3-a6458e4ff980\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.970662 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7fdf0962-de4d-4f58-87d3-a6458e4ff980-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7fdf0962-de4d-4f58-87d3-a6458e4ff980\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.970983 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7fdf0962-de4d-4f58-87d3-a6458e4ff980-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"7fdf0962-de4d-4f58-87d3-a6458e4ff980\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:24:52 crc 
kubenswrapper[4727]: I1121 20:24:52.971577 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7fdf0962-de4d-4f58-87d3-a6458e4ff980-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"7fdf0962-de4d-4f58-87d3-a6458e4ff980\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.973001 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7fdf0962-de4d-4f58-87d3-a6458e4ff980-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"7fdf0962-de4d-4f58-87d3-a6458e4ff980\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.974158 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7fdf0962-de4d-4f58-87d3-a6458e4ff980-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"7fdf0962-de4d-4f58-87d3-a6458e4ff980\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.978573 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7fdf0962-de4d-4f58-87d3-a6458e4ff980-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"7fdf0962-de4d-4f58-87d3-a6458e4ff980\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.988393 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7fdf0962-de4d-4f58-87d3-a6458e4ff980-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"7fdf0962-de4d-4f58-87d3-a6458e4ff980\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.994473 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vn55s\" (UniqueName: 
\"kubernetes.io/projected/7fdf0962-de4d-4f58-87d3-a6458e4ff980-kube-api-access-vn55s\") pod \"rabbitmq-cell1-server-0\" (UID: \"7fdf0962-de4d-4f58-87d3-a6458e4ff980\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:24:52 crc kubenswrapper[4727]: I1121 20:24:52.996365 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 21 20:24:53 crc kubenswrapper[4727]: I1121 20:24:53.006878 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"7fdf0962-de4d-4f58-87d3-a6458e4ff980\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:24:53 crc kubenswrapper[4727]: I1121 20:24:53.086416 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:24:53 crc kubenswrapper[4727]: I1121 20:24:53.871644 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Nov 21 20:24:53 crc kubenswrapper[4727]: I1121 20:24:53.873618 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Nov 21 20:24:53 crc kubenswrapper[4727]: I1121 20:24:53.882666 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Nov 21 20:24:53 crc kubenswrapper[4727]: I1121 20:24:53.882842 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Nov 21 20:24:53 crc kubenswrapper[4727]: I1121 20:24:53.882981 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Nov 21 20:24:53 crc kubenswrapper[4727]: I1121 20:24:53.882680 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 21 20:24:53 crc kubenswrapper[4727]: I1121 20:24:53.883747 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-6hh56" Nov 21 20:24:53 crc kubenswrapper[4727]: I1121 20:24:53.886270 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Nov 21 20:24:53 crc kubenswrapper[4727]: I1121 20:24:53.989512 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0901b5dd-6fbf-4a40-8d26-ab792ea7f110-operator-scripts\") pod \"openstack-galera-0\" (UID: \"0901b5dd-6fbf-4a40-8d26-ab792ea7f110\") " pod="openstack/openstack-galera-0" Nov 21 20:24:53 crc kubenswrapper[4727]: I1121 20:24:53.989562 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0901b5dd-6fbf-4a40-8d26-ab792ea7f110-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"0901b5dd-6fbf-4a40-8d26-ab792ea7f110\") " pod="openstack/openstack-galera-0" Nov 21 20:24:53 crc kubenswrapper[4727]: I1121 20:24:53.989586 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0901b5dd-6fbf-4a40-8d26-ab792ea7f110-kolla-config\") pod \"openstack-galera-0\" (UID: \"0901b5dd-6fbf-4a40-8d26-ab792ea7f110\") " pod="openstack/openstack-galera-0" Nov 21 20:24:53 crc kubenswrapper[4727]: I1121 20:24:53.989846 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0901b5dd-6fbf-4a40-8d26-ab792ea7f110-config-data-default\") pod \"openstack-galera-0\" (UID: \"0901b5dd-6fbf-4a40-8d26-ab792ea7f110\") " pod="openstack/openstack-galera-0" Nov 21 20:24:53 crc kubenswrapper[4727]: I1121 20:24:53.990033 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0901b5dd-6fbf-4a40-8d26-ab792ea7f110-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"0901b5dd-6fbf-4a40-8d26-ab792ea7f110\") " pod="openstack/openstack-galera-0" Nov 21 20:24:53 crc kubenswrapper[4727]: I1121 20:24:53.990082 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0901b5dd-6fbf-4a40-8d26-ab792ea7f110-config-data-generated\") pod \"openstack-galera-0\" (UID: \"0901b5dd-6fbf-4a40-8d26-ab792ea7f110\") " pod="openstack/openstack-galera-0" Nov 21 20:24:53 crc kubenswrapper[4727]: I1121 20:24:53.990146 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"0901b5dd-6fbf-4a40-8d26-ab792ea7f110\") " pod="openstack/openstack-galera-0" Nov 21 20:24:53 crc kubenswrapper[4727]: I1121 20:24:53.990203 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bz2qb\" (UniqueName: 
\"kubernetes.io/projected/0901b5dd-6fbf-4a40-8d26-ab792ea7f110-kube-api-access-bz2qb\") pod \"openstack-galera-0\" (UID: \"0901b5dd-6fbf-4a40-8d26-ab792ea7f110\") " pod="openstack/openstack-galera-0" Nov 21 20:24:54 crc kubenswrapper[4727]: I1121 20:24:54.092162 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0901b5dd-6fbf-4a40-8d26-ab792ea7f110-config-data-default\") pod \"openstack-galera-0\" (UID: \"0901b5dd-6fbf-4a40-8d26-ab792ea7f110\") " pod="openstack/openstack-galera-0" Nov 21 20:24:54 crc kubenswrapper[4727]: I1121 20:24:54.092231 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0901b5dd-6fbf-4a40-8d26-ab792ea7f110-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"0901b5dd-6fbf-4a40-8d26-ab792ea7f110\") " pod="openstack/openstack-galera-0" Nov 21 20:24:54 crc kubenswrapper[4727]: I1121 20:24:54.092266 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0901b5dd-6fbf-4a40-8d26-ab792ea7f110-config-data-generated\") pod \"openstack-galera-0\" (UID: \"0901b5dd-6fbf-4a40-8d26-ab792ea7f110\") " pod="openstack/openstack-galera-0" Nov 21 20:24:54 crc kubenswrapper[4727]: I1121 20:24:54.092307 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"0901b5dd-6fbf-4a40-8d26-ab792ea7f110\") " pod="openstack/openstack-galera-0" Nov 21 20:24:54 crc kubenswrapper[4727]: I1121 20:24:54.092340 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bz2qb\" (UniqueName: \"kubernetes.io/projected/0901b5dd-6fbf-4a40-8d26-ab792ea7f110-kube-api-access-bz2qb\") pod \"openstack-galera-0\" (UID: 
\"0901b5dd-6fbf-4a40-8d26-ab792ea7f110\") " pod="openstack/openstack-galera-0" Nov 21 20:24:54 crc kubenswrapper[4727]: I1121 20:24:54.092444 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0901b5dd-6fbf-4a40-8d26-ab792ea7f110-operator-scripts\") pod \"openstack-galera-0\" (UID: \"0901b5dd-6fbf-4a40-8d26-ab792ea7f110\") " pod="openstack/openstack-galera-0" Nov 21 20:24:54 crc kubenswrapper[4727]: I1121 20:24:54.092471 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0901b5dd-6fbf-4a40-8d26-ab792ea7f110-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"0901b5dd-6fbf-4a40-8d26-ab792ea7f110\") " pod="openstack/openstack-galera-0" Nov 21 20:24:54 crc kubenswrapper[4727]: I1121 20:24:54.092503 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0901b5dd-6fbf-4a40-8d26-ab792ea7f110-kolla-config\") pod \"openstack-galera-0\" (UID: \"0901b5dd-6fbf-4a40-8d26-ab792ea7f110\") " pod="openstack/openstack-galera-0" Nov 21 20:24:54 crc kubenswrapper[4727]: I1121 20:24:54.092984 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"0901b5dd-6fbf-4a40-8d26-ab792ea7f110\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/openstack-galera-0" Nov 21 20:24:54 crc kubenswrapper[4727]: I1121 20:24:54.093046 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0901b5dd-6fbf-4a40-8d26-ab792ea7f110-config-data-generated\") pod \"openstack-galera-0\" (UID: \"0901b5dd-6fbf-4a40-8d26-ab792ea7f110\") " pod="openstack/openstack-galera-0" Nov 21 20:24:54 crc kubenswrapper[4727]: I1121 
20:24:54.093811 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0901b5dd-6fbf-4a40-8d26-ab792ea7f110-kolla-config\") pod \"openstack-galera-0\" (UID: \"0901b5dd-6fbf-4a40-8d26-ab792ea7f110\") " pod="openstack/openstack-galera-0" Nov 21 20:24:54 crc kubenswrapper[4727]: I1121 20:24:54.094145 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0901b5dd-6fbf-4a40-8d26-ab792ea7f110-config-data-default\") pod \"openstack-galera-0\" (UID: \"0901b5dd-6fbf-4a40-8d26-ab792ea7f110\") " pod="openstack/openstack-galera-0" Nov 21 20:24:54 crc kubenswrapper[4727]: I1121 20:24:54.094313 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0901b5dd-6fbf-4a40-8d26-ab792ea7f110-operator-scripts\") pod \"openstack-galera-0\" (UID: \"0901b5dd-6fbf-4a40-8d26-ab792ea7f110\") " pod="openstack/openstack-galera-0" Nov 21 20:24:54 crc kubenswrapper[4727]: I1121 20:24:54.102706 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0901b5dd-6fbf-4a40-8d26-ab792ea7f110-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"0901b5dd-6fbf-4a40-8d26-ab792ea7f110\") " pod="openstack/openstack-galera-0" Nov 21 20:24:54 crc kubenswrapper[4727]: I1121 20:24:54.113337 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0901b5dd-6fbf-4a40-8d26-ab792ea7f110-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"0901b5dd-6fbf-4a40-8d26-ab792ea7f110\") " pod="openstack/openstack-galera-0" Nov 21 20:24:54 crc kubenswrapper[4727]: I1121 20:24:54.113778 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bz2qb\" (UniqueName: 
\"kubernetes.io/projected/0901b5dd-6fbf-4a40-8d26-ab792ea7f110-kube-api-access-bz2qb\") pod \"openstack-galera-0\" (UID: \"0901b5dd-6fbf-4a40-8d26-ab792ea7f110\") " pod="openstack/openstack-galera-0" Nov 21 20:24:54 crc kubenswrapper[4727]: I1121 20:24:54.117610 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"0901b5dd-6fbf-4a40-8d26-ab792ea7f110\") " pod="openstack/openstack-galera-0" Nov 21 20:24:54 crc kubenswrapper[4727]: I1121 20:24:54.204717 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 21 20:24:55 crc kubenswrapper[4727]: I1121 20:24:55.483026 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 21 20:24:55 crc kubenswrapper[4727]: I1121 20:24:55.485009 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 21 20:24:55 crc kubenswrapper[4727]: I1121 20:24:55.486859 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Nov 21 20:24:55 crc kubenswrapper[4727]: I1121 20:24:55.488652 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Nov 21 20:24:55 crc kubenswrapper[4727]: I1121 20:24:55.488945 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Nov 21 20:24:55 crc kubenswrapper[4727]: I1121 20:24:55.489160 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-crt6v" Nov 21 20:24:55 crc kubenswrapper[4727]: I1121 20:24:55.526771 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 21 20:24:55 crc kubenswrapper[4727]: I1121 20:24:55.620364 4727 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"3d83275c-cf9b-425e-8e63-6130e2866a49\") " pod="openstack/openstack-cell1-galera-0" Nov 21 20:24:55 crc kubenswrapper[4727]: I1121 20:24:55.620412 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3d83275c-cf9b-425e-8e63-6130e2866a49-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"3d83275c-cf9b-425e-8e63-6130e2866a49\") " pod="openstack/openstack-cell1-galera-0" Nov 21 20:24:55 crc kubenswrapper[4727]: I1121 20:24:55.620435 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3d83275c-cf9b-425e-8e63-6130e2866a49-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"3d83275c-cf9b-425e-8e63-6130e2866a49\") " pod="openstack/openstack-cell1-galera-0" Nov 21 20:24:55 crc kubenswrapper[4727]: I1121 20:24:55.620602 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3d83275c-cf9b-425e-8e63-6130e2866a49-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"3d83275c-cf9b-425e-8e63-6130e2866a49\") " pod="openstack/openstack-cell1-galera-0" Nov 21 20:24:55 crc kubenswrapper[4727]: I1121 20:24:55.620667 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmrgn\" (UniqueName: \"kubernetes.io/projected/3d83275c-cf9b-425e-8e63-6130e2866a49-kube-api-access-rmrgn\") pod \"openstack-cell1-galera-0\" (UID: \"3d83275c-cf9b-425e-8e63-6130e2866a49\") " pod="openstack/openstack-cell1-galera-0" Nov 21 20:24:55 crc kubenswrapper[4727]: I1121 
20:24:55.620731 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d83275c-cf9b-425e-8e63-6130e2866a49-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"3d83275c-cf9b-425e-8e63-6130e2866a49\") " pod="openstack/openstack-cell1-galera-0" Nov 21 20:24:55 crc kubenswrapper[4727]: I1121 20:24:55.620817 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d83275c-cf9b-425e-8e63-6130e2866a49-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"3d83275c-cf9b-425e-8e63-6130e2866a49\") " pod="openstack/openstack-cell1-galera-0" Nov 21 20:24:55 crc kubenswrapper[4727]: I1121 20:24:55.621070 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d83275c-cf9b-425e-8e63-6130e2866a49-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"3d83275c-cf9b-425e-8e63-6130e2866a49\") " pod="openstack/openstack-cell1-galera-0" Nov 21 20:24:55 crc kubenswrapper[4727]: I1121 20:24:55.650205 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Nov 21 20:24:55 crc kubenswrapper[4727]: I1121 20:24:55.651583 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Nov 21 20:24:55 crc kubenswrapper[4727]: I1121 20:24:55.654227 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Nov 21 20:24:55 crc kubenswrapper[4727]: I1121 20:24:55.654485 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-ptzgf" Nov 21 20:24:55 crc kubenswrapper[4727]: I1121 20:24:55.654563 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Nov 21 20:24:55 crc kubenswrapper[4727]: I1121 20:24:55.659331 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 21 20:24:55 crc kubenswrapper[4727]: I1121 20:24:55.722881 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3d83275c-cf9b-425e-8e63-6130e2866a49-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"3d83275c-cf9b-425e-8e63-6130e2866a49\") " pod="openstack/openstack-cell1-galera-0" Nov 21 20:24:55 crc kubenswrapper[4727]: I1121 20:24:55.722930 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2cd143a-46bd-4409-b5e9-91c1cb00e378-combined-ca-bundle\") pod \"memcached-0\" (UID: \"b2cd143a-46bd-4409-b5e9-91c1cb00e378\") " pod="openstack/memcached-0" Nov 21 20:24:55 crc kubenswrapper[4727]: I1121 20:24:55.722984 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmrgn\" (UniqueName: \"kubernetes.io/projected/3d83275c-cf9b-425e-8e63-6130e2866a49-kube-api-access-rmrgn\") pod \"openstack-cell1-galera-0\" (UID: \"3d83275c-cf9b-425e-8e63-6130e2866a49\") " pod="openstack/openstack-cell1-galera-0" Nov 21 20:24:55 crc kubenswrapper[4727]: I1121 20:24:55.723035 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d83275c-cf9b-425e-8e63-6130e2866a49-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"3d83275c-cf9b-425e-8e63-6130e2866a49\") " pod="openstack/openstack-cell1-galera-0" Nov 21 20:24:55 crc kubenswrapper[4727]: I1121 20:24:55.723079 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b2cd143a-46bd-4409-b5e9-91c1cb00e378-kolla-config\") pod \"memcached-0\" (UID: \"b2cd143a-46bd-4409-b5e9-91c1cb00e378\") " pod="openstack/memcached-0" Nov 21 20:24:55 crc kubenswrapper[4727]: I1121 20:24:55.723106 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d83275c-cf9b-425e-8e63-6130e2866a49-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"3d83275c-cf9b-425e-8e63-6130e2866a49\") " pod="openstack/openstack-cell1-galera-0" Nov 21 20:24:55 crc kubenswrapper[4727]: I1121 20:24:55.723157 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d83275c-cf9b-425e-8e63-6130e2866a49-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"3d83275c-cf9b-425e-8e63-6130e2866a49\") " pod="openstack/openstack-cell1-galera-0" Nov 21 20:24:55 crc kubenswrapper[4727]: I1121 20:24:55.723197 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2cd143a-46bd-4409-b5e9-91c1cb00e378-memcached-tls-certs\") pod \"memcached-0\" (UID: \"b2cd143a-46bd-4409-b5e9-91c1cb00e378\") " pod="openstack/memcached-0" Nov 21 20:24:55 crc kubenswrapper[4727]: I1121 20:24:55.723220 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/b2cd143a-46bd-4409-b5e9-91c1cb00e378-config-data\") pod \"memcached-0\" (UID: \"b2cd143a-46bd-4409-b5e9-91c1cb00e378\") " pod="openstack/memcached-0" Nov 21 20:24:55 crc kubenswrapper[4727]: I1121 20:24:55.723257 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bzw9\" (UniqueName: \"kubernetes.io/projected/b2cd143a-46bd-4409-b5e9-91c1cb00e378-kube-api-access-8bzw9\") pod \"memcached-0\" (UID: \"b2cd143a-46bd-4409-b5e9-91c1cb00e378\") " pod="openstack/memcached-0" Nov 21 20:24:55 crc kubenswrapper[4727]: I1121 20:24:55.723312 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"3d83275c-cf9b-425e-8e63-6130e2866a49\") " pod="openstack/openstack-cell1-galera-0" Nov 21 20:24:55 crc kubenswrapper[4727]: I1121 20:24:55.723347 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3d83275c-cf9b-425e-8e63-6130e2866a49-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"3d83275c-cf9b-425e-8e63-6130e2866a49\") " pod="openstack/openstack-cell1-galera-0" Nov 21 20:24:55 crc kubenswrapper[4727]: I1121 20:24:55.723387 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3d83275c-cf9b-425e-8e63-6130e2866a49-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"3d83275c-cf9b-425e-8e63-6130e2866a49\") " pod="openstack/openstack-cell1-galera-0" Nov 21 20:24:55 crc kubenswrapper[4727]: I1121 20:24:55.723747 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3d83275c-cf9b-425e-8e63-6130e2866a49-config-data-generated\") pod 
\"openstack-cell1-galera-0\" (UID: \"3d83275c-cf9b-425e-8e63-6130e2866a49\") " pod="openstack/openstack-cell1-galera-0" Nov 21 20:24:55 crc kubenswrapper[4727]: I1121 20:24:55.723783 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3d83275c-cf9b-425e-8e63-6130e2866a49-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"3d83275c-cf9b-425e-8e63-6130e2866a49\") " pod="openstack/openstack-cell1-galera-0" Nov 21 20:24:55 crc kubenswrapper[4727]: I1121 20:24:55.724168 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"3d83275c-cf9b-425e-8e63-6130e2866a49\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/openstack-cell1-galera-0" Nov 21 20:24:55 crc kubenswrapper[4727]: I1121 20:24:55.725555 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3d83275c-cf9b-425e-8e63-6130e2866a49-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"3d83275c-cf9b-425e-8e63-6130e2866a49\") " pod="openstack/openstack-cell1-galera-0" Nov 21 20:24:55 crc kubenswrapper[4727]: I1121 20:24:55.725557 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d83275c-cf9b-425e-8e63-6130e2866a49-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"3d83275c-cf9b-425e-8e63-6130e2866a49\") " pod="openstack/openstack-cell1-galera-0" Nov 21 20:24:55 crc kubenswrapper[4727]: I1121 20:24:55.728552 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d83275c-cf9b-425e-8e63-6130e2866a49-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"3d83275c-cf9b-425e-8e63-6130e2866a49\") " 
pod="openstack/openstack-cell1-galera-0" Nov 21 20:24:55 crc kubenswrapper[4727]: I1121 20:24:55.737982 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmrgn\" (UniqueName: \"kubernetes.io/projected/3d83275c-cf9b-425e-8e63-6130e2866a49-kube-api-access-rmrgn\") pod \"openstack-cell1-galera-0\" (UID: \"3d83275c-cf9b-425e-8e63-6130e2866a49\") " pod="openstack/openstack-cell1-galera-0" Nov 21 20:24:55 crc kubenswrapper[4727]: I1121 20:24:55.748936 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d83275c-cf9b-425e-8e63-6130e2866a49-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"3d83275c-cf9b-425e-8e63-6130e2866a49\") " pod="openstack/openstack-cell1-galera-0" Nov 21 20:24:55 crc kubenswrapper[4727]: I1121 20:24:55.770155 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"3d83275c-cf9b-425e-8e63-6130e2866a49\") " pod="openstack/openstack-cell1-galera-0" Nov 21 20:24:55 crc kubenswrapper[4727]: I1121 20:24:55.807672 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 21 20:24:55 crc kubenswrapper[4727]: I1121 20:24:55.824748 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b2cd143a-46bd-4409-b5e9-91c1cb00e378-config-data\") pod \"memcached-0\" (UID: \"b2cd143a-46bd-4409-b5e9-91c1cb00e378\") " pod="openstack/memcached-0" Nov 21 20:24:55 crc kubenswrapper[4727]: I1121 20:24:55.824804 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bzw9\" (UniqueName: \"kubernetes.io/projected/b2cd143a-46bd-4409-b5e9-91c1cb00e378-kube-api-access-8bzw9\") pod \"memcached-0\" (UID: \"b2cd143a-46bd-4409-b5e9-91c1cb00e378\") " pod="openstack/memcached-0" Nov 21 20:24:55 crc kubenswrapper[4727]: I1121 20:24:55.824895 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2cd143a-46bd-4409-b5e9-91c1cb00e378-combined-ca-bundle\") pod \"memcached-0\" (UID: \"b2cd143a-46bd-4409-b5e9-91c1cb00e378\") " pod="openstack/memcached-0" Nov 21 20:24:55 crc kubenswrapper[4727]: I1121 20:24:55.824940 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b2cd143a-46bd-4409-b5e9-91c1cb00e378-kolla-config\") pod \"memcached-0\" (UID: \"b2cd143a-46bd-4409-b5e9-91c1cb00e378\") " pod="openstack/memcached-0" Nov 21 20:24:55 crc kubenswrapper[4727]: I1121 20:24:55.825013 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2cd143a-46bd-4409-b5e9-91c1cb00e378-memcached-tls-certs\") pod \"memcached-0\" (UID: \"b2cd143a-46bd-4409-b5e9-91c1cb00e378\") " pod="openstack/memcached-0" Nov 21 20:24:55 crc kubenswrapper[4727]: I1121 20:24:55.826421 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b2cd143a-46bd-4409-b5e9-91c1cb00e378-kolla-config\") pod \"memcached-0\" (UID: \"b2cd143a-46bd-4409-b5e9-91c1cb00e378\") " pod="openstack/memcached-0" Nov 21 20:24:55 crc kubenswrapper[4727]: I1121 20:24:55.826639 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b2cd143a-46bd-4409-b5e9-91c1cb00e378-config-data\") pod \"memcached-0\" (UID: \"b2cd143a-46bd-4409-b5e9-91c1cb00e378\") " pod="openstack/memcached-0" Nov 21 20:24:55 crc kubenswrapper[4727]: I1121 20:24:55.829348 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2cd143a-46bd-4409-b5e9-91c1cb00e378-memcached-tls-certs\") pod \"memcached-0\" (UID: \"b2cd143a-46bd-4409-b5e9-91c1cb00e378\") " pod="openstack/memcached-0" Nov 21 20:24:55 crc kubenswrapper[4727]: I1121 20:24:55.829931 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2cd143a-46bd-4409-b5e9-91c1cb00e378-combined-ca-bundle\") pod \"memcached-0\" (UID: \"b2cd143a-46bd-4409-b5e9-91c1cb00e378\") " pod="openstack/memcached-0" Nov 21 20:24:55 crc kubenswrapper[4727]: I1121 20:24:55.857713 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bzw9\" (UniqueName: \"kubernetes.io/projected/b2cd143a-46bd-4409-b5e9-91c1cb00e378-kube-api-access-8bzw9\") pod \"memcached-0\" (UID: \"b2cd143a-46bd-4409-b5e9-91c1cb00e378\") " pod="openstack/memcached-0" Nov 21 20:24:55 crc kubenswrapper[4727]: I1121 20:24:55.991707 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Nov 21 20:24:57 crc kubenswrapper[4727]: I1121 20:24:57.564035 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 21 20:24:57 crc kubenswrapper[4727]: I1121 20:24:57.567724 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 21 20:24:57 crc kubenswrapper[4727]: I1121 20:24:57.571579 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-5rtzh" Nov 21 20:24:57 crc kubenswrapper[4727]: I1121 20:24:57.585440 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 21 20:24:57 crc kubenswrapper[4727]: I1121 20:24:57.678084 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhq9n\" (UniqueName: \"kubernetes.io/projected/5a7c7dad-b024-4e09-b455-662514be19f2-kube-api-access-xhq9n\") pod \"kube-state-metrics-0\" (UID: \"5a7c7dad-b024-4e09-b455-662514be19f2\") " pod="openstack/kube-state-metrics-0" Nov 21 20:24:57 crc kubenswrapper[4727]: I1121 20:24:57.779895 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhq9n\" (UniqueName: \"kubernetes.io/projected/5a7c7dad-b024-4e09-b455-662514be19f2-kube-api-access-xhq9n\") pod \"kube-state-metrics-0\" (UID: \"5a7c7dad-b024-4e09-b455-662514be19f2\") " pod="openstack/kube-state-metrics-0" Nov 21 20:24:57 crc kubenswrapper[4727]: I1121 20:24:57.834079 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhq9n\" (UniqueName: \"kubernetes.io/projected/5a7c7dad-b024-4e09-b455-662514be19f2-kube-api-access-xhq9n\") pod \"kube-state-metrics-0\" (UID: \"5a7c7dad-b024-4e09-b455-662514be19f2\") " pod="openstack/kube-state-metrics-0" Nov 21 20:24:57 crc kubenswrapper[4727]: I1121 20:24:57.910575 4727 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 21 20:24:58 crc kubenswrapper[4727]: I1121 20:24:58.415122 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-ui-dashboards-7d5fb4cbfb-pcbt2"] Nov 21 20:24:58 crc kubenswrapper[4727]: I1121 20:24:58.416399 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-pcbt2" Nov 21 20:24:58 crc kubenswrapper[4727]: I1121 20:24:58.419069 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards" Nov 21 20:24:58 crc kubenswrapper[4727]: I1121 20:24:58.420096 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards-sa-dockercfg-9g9s7" Nov 21 20:24:58 crc kubenswrapper[4727]: I1121 20:24:58.428545 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-7d5fb4cbfb-pcbt2"] Nov 21 20:24:58 crc kubenswrapper[4727]: I1121 20:24:58.493396 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a2240b49-b00a-45c4-94fa-3acd3cb0e953-serving-cert\") pod \"observability-ui-dashboards-7d5fb4cbfb-pcbt2\" (UID: \"a2240b49-b00a-45c4-94fa-3acd3cb0e953\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-pcbt2" Nov 21 20:24:58 crc kubenswrapper[4727]: I1121 20:24:58.493471 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzpsl\" (UniqueName: \"kubernetes.io/projected/a2240b49-b00a-45c4-94fa-3acd3cb0e953-kube-api-access-dzpsl\") pod \"observability-ui-dashboards-7d5fb4cbfb-pcbt2\" (UID: \"a2240b49-b00a-45c4-94fa-3acd3cb0e953\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-pcbt2" Nov 21 20:24:58 crc kubenswrapper[4727]: I1121 
20:24:58.596381 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a2240b49-b00a-45c4-94fa-3acd3cb0e953-serving-cert\") pod \"observability-ui-dashboards-7d5fb4cbfb-pcbt2\" (UID: \"a2240b49-b00a-45c4-94fa-3acd3cb0e953\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-pcbt2" Nov 21 20:24:58 crc kubenswrapper[4727]: E1121 20:24:58.596613 4727 secret.go:188] Couldn't get secret openshift-operators/observability-ui-dashboards: secret "observability-ui-dashboards" not found Nov 21 20:24:58 crc kubenswrapper[4727]: E1121 20:24:58.596702 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a2240b49-b00a-45c4-94fa-3acd3cb0e953-serving-cert podName:a2240b49-b00a-45c4-94fa-3acd3cb0e953 nodeName:}" failed. No retries permitted until 2025-11-21 20:24:59.096675366 +0000 UTC m=+1104.282860410 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a2240b49-b00a-45c4-94fa-3acd3cb0e953-serving-cert") pod "observability-ui-dashboards-7d5fb4cbfb-pcbt2" (UID: "a2240b49-b00a-45c4-94fa-3acd3cb0e953") : secret "observability-ui-dashboards" not found Nov 21 20:24:58 crc kubenswrapper[4727]: I1121 20:24:58.597309 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzpsl\" (UniqueName: \"kubernetes.io/projected/a2240b49-b00a-45c4-94fa-3acd3cb0e953-kube-api-access-dzpsl\") pod \"observability-ui-dashboards-7d5fb4cbfb-pcbt2\" (UID: \"a2240b49-b00a-45c4-94fa-3acd3cb0e953\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-pcbt2" Nov 21 20:24:58 crc kubenswrapper[4727]: I1121 20:24:58.649081 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzpsl\" (UniqueName: \"kubernetes.io/projected/a2240b49-b00a-45c4-94fa-3acd3cb0e953-kube-api-access-dzpsl\") pod 
\"observability-ui-dashboards-7d5fb4cbfb-pcbt2\" (UID: \"a2240b49-b00a-45c4-94fa-3acd3cb0e953\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-pcbt2" Nov 21 20:24:58 crc kubenswrapper[4727]: I1121 20:24:58.834612 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6c4bb9cdb5-2gj6h"] Nov 21 20:24:58 crc kubenswrapper[4727]: I1121 20:24:58.842596 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6c4bb9cdb5-2gj6h" Nov 21 20:24:58 crc kubenswrapper[4727]: I1121 20:24:58.852883 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6c4bb9cdb5-2gj6h"] Nov 21 20:24:58 crc kubenswrapper[4727]: I1121 20:24:58.908804 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/670b7614-70ca-48c1-9b70-53b2a7fb78e9-console-serving-cert\") pod \"console-6c4bb9cdb5-2gj6h\" (UID: \"670b7614-70ca-48c1-9b70-53b2a7fb78e9\") " pod="openshift-console/console-6c4bb9cdb5-2gj6h" Nov 21 20:24:58 crc kubenswrapper[4727]: I1121 20:24:58.908860 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/670b7614-70ca-48c1-9b70-53b2a7fb78e9-console-config\") pod \"console-6c4bb9cdb5-2gj6h\" (UID: \"670b7614-70ca-48c1-9b70-53b2a7fb78e9\") " pod="openshift-console/console-6c4bb9cdb5-2gj6h" Nov 21 20:24:58 crc kubenswrapper[4727]: I1121 20:24:58.908884 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/670b7614-70ca-48c1-9b70-53b2a7fb78e9-service-ca\") pod \"console-6c4bb9cdb5-2gj6h\" (UID: \"670b7614-70ca-48c1-9b70-53b2a7fb78e9\") " pod="openshift-console/console-6c4bb9cdb5-2gj6h" Nov 21 20:24:58 crc kubenswrapper[4727]: I1121 20:24:58.908908 4727 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pfnm\" (UniqueName: \"kubernetes.io/projected/670b7614-70ca-48c1-9b70-53b2a7fb78e9-kube-api-access-9pfnm\") pod \"console-6c4bb9cdb5-2gj6h\" (UID: \"670b7614-70ca-48c1-9b70-53b2a7fb78e9\") " pod="openshift-console/console-6c4bb9cdb5-2gj6h" Nov 21 20:24:58 crc kubenswrapper[4727]: I1121 20:24:58.908951 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/670b7614-70ca-48c1-9b70-53b2a7fb78e9-console-oauth-config\") pod \"console-6c4bb9cdb5-2gj6h\" (UID: \"670b7614-70ca-48c1-9b70-53b2a7fb78e9\") " pod="openshift-console/console-6c4bb9cdb5-2gj6h" Nov 21 20:24:58 crc kubenswrapper[4727]: I1121 20:24:58.909021 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/670b7614-70ca-48c1-9b70-53b2a7fb78e9-trusted-ca-bundle\") pod \"console-6c4bb9cdb5-2gj6h\" (UID: \"670b7614-70ca-48c1-9b70-53b2a7fb78e9\") " pod="openshift-console/console-6c4bb9cdb5-2gj6h" Nov 21 20:24:58 crc kubenswrapper[4727]: I1121 20:24:58.909053 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/670b7614-70ca-48c1-9b70-53b2a7fb78e9-oauth-serving-cert\") pod \"console-6c4bb9cdb5-2gj6h\" (UID: \"670b7614-70ca-48c1-9b70-53b2a7fb78e9\") " pod="openshift-console/console-6c4bb9cdb5-2gj6h" Nov 21 20:24:58 crc kubenswrapper[4727]: I1121 20:24:58.978294 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 21 20:24:58 crc kubenswrapper[4727]: I1121 20:24:58.981037 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 21 20:24:58 crc kubenswrapper[4727]: I1121 20:24:58.992750 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 21 20:24:59 crc kubenswrapper[4727]: I1121 20:24:59.000411 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-9k7cn" Nov 21 20:24:59 crc kubenswrapper[4727]: I1121 20:24:59.000676 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Nov 21 20:24:59 crc kubenswrapper[4727]: I1121 20:24:59.000827 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Nov 21 20:24:59 crc kubenswrapper[4727]: I1121 20:24:59.001444 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Nov 21 20:24:59 crc kubenswrapper[4727]: I1121 20:24:59.005360 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Nov 21 20:24:59 crc kubenswrapper[4727]: I1121 20:24:59.006412 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Nov 21 20:24:59 crc kubenswrapper[4727]: I1121 20:24:59.012270 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/670b7614-70ca-48c1-9b70-53b2a7fb78e9-console-oauth-config\") pod \"console-6c4bb9cdb5-2gj6h\" (UID: \"670b7614-70ca-48c1-9b70-53b2a7fb78e9\") " pod="openshift-console/console-6c4bb9cdb5-2gj6h" Nov 21 20:24:59 crc kubenswrapper[4727]: I1121 20:24:59.012356 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/670b7614-70ca-48c1-9b70-53b2a7fb78e9-trusted-ca-bundle\") pod \"console-6c4bb9cdb5-2gj6h\" (UID: \"670b7614-70ca-48c1-9b70-53b2a7fb78e9\") " pod="openshift-console/console-6c4bb9cdb5-2gj6h" Nov 21 20:24:59 crc kubenswrapper[4727]: I1121 20:24:59.012399 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/670b7614-70ca-48c1-9b70-53b2a7fb78e9-oauth-serving-cert\") pod \"console-6c4bb9cdb5-2gj6h\" (UID: \"670b7614-70ca-48c1-9b70-53b2a7fb78e9\") " pod="openshift-console/console-6c4bb9cdb5-2gj6h" Nov 21 20:24:59 crc kubenswrapper[4727]: I1121 20:24:59.012452 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/670b7614-70ca-48c1-9b70-53b2a7fb78e9-console-serving-cert\") pod \"console-6c4bb9cdb5-2gj6h\" (UID: \"670b7614-70ca-48c1-9b70-53b2a7fb78e9\") " pod="openshift-console/console-6c4bb9cdb5-2gj6h" Nov 21 20:24:59 crc kubenswrapper[4727]: I1121 20:24:59.012483 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/670b7614-70ca-48c1-9b70-53b2a7fb78e9-console-config\") pod \"console-6c4bb9cdb5-2gj6h\" (UID: \"670b7614-70ca-48c1-9b70-53b2a7fb78e9\") " pod="openshift-console/console-6c4bb9cdb5-2gj6h" Nov 21 20:24:59 crc kubenswrapper[4727]: I1121 20:24:59.012501 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/670b7614-70ca-48c1-9b70-53b2a7fb78e9-service-ca\") pod \"console-6c4bb9cdb5-2gj6h\" (UID: \"670b7614-70ca-48c1-9b70-53b2a7fb78e9\") " pod="openshift-console/console-6c4bb9cdb5-2gj6h" Nov 21 20:24:59 crc kubenswrapper[4727]: I1121 20:24:59.012526 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pfnm\" (UniqueName: 
\"kubernetes.io/projected/670b7614-70ca-48c1-9b70-53b2a7fb78e9-kube-api-access-9pfnm\") pod \"console-6c4bb9cdb5-2gj6h\" (UID: \"670b7614-70ca-48c1-9b70-53b2a7fb78e9\") " pod="openshift-console/console-6c4bb9cdb5-2gj6h" Nov 21 20:24:59 crc kubenswrapper[4727]: I1121 20:24:59.015109 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/670b7614-70ca-48c1-9b70-53b2a7fb78e9-console-config\") pod \"console-6c4bb9cdb5-2gj6h\" (UID: \"670b7614-70ca-48c1-9b70-53b2a7fb78e9\") " pod="openshift-console/console-6c4bb9cdb5-2gj6h" Nov 21 20:24:59 crc kubenswrapper[4727]: I1121 20:24:59.016106 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/670b7614-70ca-48c1-9b70-53b2a7fb78e9-trusted-ca-bundle\") pod \"console-6c4bb9cdb5-2gj6h\" (UID: \"670b7614-70ca-48c1-9b70-53b2a7fb78e9\") " pod="openshift-console/console-6c4bb9cdb5-2gj6h" Nov 21 20:24:59 crc kubenswrapper[4727]: I1121 20:24:59.018555 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/670b7614-70ca-48c1-9b70-53b2a7fb78e9-oauth-serving-cert\") pod \"console-6c4bb9cdb5-2gj6h\" (UID: \"670b7614-70ca-48c1-9b70-53b2a7fb78e9\") " pod="openshift-console/console-6c4bb9cdb5-2gj6h" Nov 21 20:24:59 crc kubenswrapper[4727]: I1121 20:24:59.023194 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/670b7614-70ca-48c1-9b70-53b2a7fb78e9-console-oauth-config\") pod \"console-6c4bb9cdb5-2gj6h\" (UID: \"670b7614-70ca-48c1-9b70-53b2a7fb78e9\") " pod="openshift-console/console-6c4bb9cdb5-2gj6h" Nov 21 20:24:59 crc kubenswrapper[4727]: I1121 20:24:59.025389 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/670b7614-70ca-48c1-9b70-53b2a7fb78e9-service-ca\") pod \"console-6c4bb9cdb5-2gj6h\" (UID: \"670b7614-70ca-48c1-9b70-53b2a7fb78e9\") " pod="openshift-console/console-6c4bb9cdb5-2gj6h" Nov 21 20:24:59 crc kubenswrapper[4727]: I1121 20:24:59.026063 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/670b7614-70ca-48c1-9b70-53b2a7fb78e9-console-serving-cert\") pod \"console-6c4bb9cdb5-2gj6h\" (UID: \"670b7614-70ca-48c1-9b70-53b2a7fb78e9\") " pod="openshift-console/console-6c4bb9cdb5-2gj6h" Nov 21 20:24:59 crc kubenswrapper[4727]: I1121 20:24:59.045734 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pfnm\" (UniqueName: \"kubernetes.io/projected/670b7614-70ca-48c1-9b70-53b2a7fb78e9-kube-api-access-9pfnm\") pod \"console-6c4bb9cdb5-2gj6h\" (UID: \"670b7614-70ca-48c1-9b70-53b2a7fb78e9\") " pod="openshift-console/console-6c4bb9cdb5-2gj6h" Nov 21 20:24:59 crc kubenswrapper[4727]: I1121 20:24:59.114035 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a2240b49-b00a-45c4-94fa-3acd3cb0e953-serving-cert\") pod \"observability-ui-dashboards-7d5fb4cbfb-pcbt2\" (UID: \"a2240b49-b00a-45c4-94fa-3acd3cb0e953\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-pcbt2" Nov 21 20:24:59 crc kubenswrapper[4727]: I1121 20:24:59.114114 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9\") " pod="openstack/prometheus-metric-storage-0" Nov 21 20:24:59 crc kubenswrapper[4727]: I1121 20:24:59.114146 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9\") " pod="openstack/prometheus-metric-storage-0" Nov 21 20:24:59 crc kubenswrapper[4727]: I1121 20:24:59.114176 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9\") " pod="openstack/prometheus-metric-storage-0" Nov 21 20:24:59 crc kubenswrapper[4727]: I1121 20:24:59.114198 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-28a56d6d-2191-4030-b4ae-b2b366390f3e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-28a56d6d-2191-4030-b4ae-b2b366390f3e\") pod \"prometheus-metric-storage-0\" (UID: \"6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9\") " pod="openstack/prometheus-metric-storage-0" Nov 21 20:24:59 crc kubenswrapper[4727]: I1121 20:24:59.114220 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9-config\") pod \"prometheus-metric-storage-0\" (UID: \"6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9\") " pod="openstack/prometheus-metric-storage-0" Nov 21 20:24:59 crc kubenswrapper[4727]: I1121 20:24:59.114242 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mddm9\" (UniqueName: \"kubernetes.io/projected/6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9-kube-api-access-mddm9\") pod \"prometheus-metric-storage-0\" (UID: \"6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9\") " pod="openstack/prometheus-metric-storage-0" Nov 21 
20:24:59 crc kubenswrapper[4727]: I1121 20:24:59.114274 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9\") " pod="openstack/prometheus-metric-storage-0" Nov 21 20:24:59 crc kubenswrapper[4727]: I1121 20:24:59.114294 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9\") " pod="openstack/prometheus-metric-storage-0" Nov 21 20:24:59 crc kubenswrapper[4727]: I1121 20:24:59.118449 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a2240b49-b00a-45c4-94fa-3acd3cb0e953-serving-cert\") pod \"observability-ui-dashboards-7d5fb4cbfb-pcbt2\" (UID: \"a2240b49-b00a-45c4-94fa-3acd3cb0e953\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-pcbt2" Nov 21 20:24:59 crc kubenswrapper[4727]: I1121 20:24:59.175250 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6c4bb9cdb5-2gj6h" Nov 21 20:24:59 crc kubenswrapper[4727]: I1121 20:24:59.215484 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9\") " pod="openstack/prometheus-metric-storage-0" Nov 21 20:24:59 crc kubenswrapper[4727]: I1121 20:24:59.215565 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9\") " pod="openstack/prometheus-metric-storage-0" Nov 21 20:24:59 crc kubenswrapper[4727]: I1121 20:24:59.215691 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9\") " pod="openstack/prometheus-metric-storage-0" Nov 21 20:24:59 crc kubenswrapper[4727]: I1121 20:24:59.215727 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9\") " pod="openstack/prometheus-metric-storage-0" Nov 21 20:24:59 crc kubenswrapper[4727]: I1121 20:24:59.215761 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: 
\"6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9\") " pod="openstack/prometheus-metric-storage-0" Nov 21 20:24:59 crc kubenswrapper[4727]: I1121 20:24:59.215782 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-28a56d6d-2191-4030-b4ae-b2b366390f3e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-28a56d6d-2191-4030-b4ae-b2b366390f3e\") pod \"prometheus-metric-storage-0\" (UID: \"6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9\") " pod="openstack/prometheus-metric-storage-0" Nov 21 20:24:59 crc kubenswrapper[4727]: I1121 20:24:59.215803 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9-config\") pod \"prometheus-metric-storage-0\" (UID: \"6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9\") " pod="openstack/prometheus-metric-storage-0" Nov 21 20:24:59 crc kubenswrapper[4727]: I1121 20:24:59.215825 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mddm9\" (UniqueName: \"kubernetes.io/projected/6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9-kube-api-access-mddm9\") pod \"prometheus-metric-storage-0\" (UID: \"6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9\") " pod="openstack/prometheus-metric-storage-0" Nov 21 20:24:59 crc kubenswrapper[4727]: I1121 20:24:59.217073 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9\") " pod="openstack/prometheus-metric-storage-0" Nov 21 20:24:59 crc kubenswrapper[4727]: I1121 20:24:59.220885 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: 
\"6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9\") " pod="openstack/prometheus-metric-storage-0" Nov 21 20:24:59 crc kubenswrapper[4727]: I1121 20:24:59.221256 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9\") " pod="openstack/prometheus-metric-storage-0" Nov 21 20:24:59 crc kubenswrapper[4727]: I1121 20:24:59.222638 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9\") " pod="openstack/prometheus-metric-storage-0" Nov 21 20:24:59 crc kubenswrapper[4727]: I1121 20:24:59.225833 4727 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 21 20:24:59 crc kubenswrapper[4727]: I1121 20:24:59.225875 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-28a56d6d-2191-4030-b4ae-b2b366390f3e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-28a56d6d-2191-4030-b4ae-b2b366390f3e\") pod \"prometheus-metric-storage-0\" (UID: \"6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/9c78e1afd9d89ab6f94fe88434c0105238785281c928164a17724110dcc72275/globalmount\"" pod="openstack/prometheus-metric-storage-0" Nov 21 20:24:59 crc kubenswrapper[4727]: I1121 20:24:59.227498 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9\") " pod="openstack/prometheus-metric-storage-0" Nov 21 20:24:59 crc kubenswrapper[4727]: I1121 20:24:59.237681 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9-config\") pod \"prometheus-metric-storage-0\" (UID: \"6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9\") " pod="openstack/prometheus-metric-storage-0" Nov 21 20:24:59 crc kubenswrapper[4727]: I1121 20:24:59.237729 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mddm9\" (UniqueName: \"kubernetes.io/projected/6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9-kube-api-access-mddm9\") pod \"prometheus-metric-storage-0\" (UID: \"6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9\") " pod="openstack/prometheus-metric-storage-0" Nov 21 20:24:59 crc kubenswrapper[4727]: I1121 20:24:59.296272 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-28a56d6d-2191-4030-b4ae-b2b366390f3e\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-28a56d6d-2191-4030-b4ae-b2b366390f3e\") pod \"prometheus-metric-storage-0\" (UID: \"6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9\") " pod="openstack/prometheus-metric-storage-0" Nov 21 20:24:59 crc kubenswrapper[4727]: I1121 20:24:59.302226 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 21 20:24:59 crc kubenswrapper[4727]: I1121 20:24:59.356284 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-pcbt2" Nov 21 20:25:01 crc kubenswrapper[4727]: I1121 20:25:01.542762 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-k8fk5"] Nov 21 20:25:01 crc kubenswrapper[4727]: I1121 20:25:01.544364 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-k8fk5" Nov 21 20:25:01 crc kubenswrapper[4727]: I1121 20:25:01.548212 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Nov 21 20:25:01 crc kubenswrapper[4727]: I1121 20:25:01.548529 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Nov 21 20:25:01 crc kubenswrapper[4727]: I1121 20:25:01.548774 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-qsf85" Nov 21 20:25:01 crc kubenswrapper[4727]: I1121 20:25:01.565432 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-w77gs"] Nov 21 20:25:01 crc kubenswrapper[4727]: I1121 20:25:01.573610 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-w77gs" Nov 21 20:25:01 crc kubenswrapper[4727]: I1121 20:25:01.574528 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-k8fk5"] Nov 21 20:25:01 crc kubenswrapper[4727]: I1121 20:25:01.586197 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-w77gs"] Nov 21 20:25:01 crc kubenswrapper[4727]: I1121 20:25:01.661089 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5668e228-8946-468d-94e0-fa77489e46b3-scripts\") pod \"ovn-controller-k8fk5\" (UID: \"5668e228-8946-468d-94e0-fa77489e46b3\") " pod="openstack/ovn-controller-k8fk5" Nov 21 20:25:01 crc kubenswrapper[4727]: I1121 20:25:01.661146 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5668e228-8946-468d-94e0-fa77489e46b3-var-run-ovn\") pod \"ovn-controller-k8fk5\" (UID: \"5668e228-8946-468d-94e0-fa77489e46b3\") " pod="openstack/ovn-controller-k8fk5" Nov 21 20:25:01 crc kubenswrapper[4727]: I1121 20:25:01.661202 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76529\" (UniqueName: \"kubernetes.io/projected/5668e228-8946-468d-94e0-fa77489e46b3-kube-api-access-76529\") pod \"ovn-controller-k8fk5\" (UID: \"5668e228-8946-468d-94e0-fa77489e46b3\") " pod="openstack/ovn-controller-k8fk5" Nov 21 20:25:01 crc kubenswrapper[4727]: I1121 20:25:01.661241 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f3f53875-1801-45d3-aa31-4c307c620eec-var-log\") pod \"ovn-controller-ovs-w77gs\" (UID: \"f3f53875-1801-45d3-aa31-4c307c620eec\") " pod="openstack/ovn-controller-ovs-w77gs" Nov 21 20:25:01 crc kubenswrapper[4727]: I1121 
20:25:01.661267 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/5668e228-8946-468d-94e0-fa77489e46b3-ovn-controller-tls-certs\") pod \"ovn-controller-k8fk5\" (UID: \"5668e228-8946-468d-94e0-fa77489e46b3\") " pod="openstack/ovn-controller-k8fk5" Nov 21 20:25:01 crc kubenswrapper[4727]: I1121 20:25:01.661428 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5668e228-8946-468d-94e0-fa77489e46b3-var-run\") pod \"ovn-controller-k8fk5\" (UID: \"5668e228-8946-468d-94e0-fa77489e46b3\") " pod="openstack/ovn-controller-k8fk5" Nov 21 20:25:01 crc kubenswrapper[4727]: I1121 20:25:01.661465 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f3f53875-1801-45d3-aa31-4c307c620eec-var-run\") pod \"ovn-controller-ovs-w77gs\" (UID: \"f3f53875-1801-45d3-aa31-4c307c620eec\") " pod="openstack/ovn-controller-ovs-w77gs" Nov 21 20:25:01 crc kubenswrapper[4727]: I1121 20:25:01.661493 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f3f53875-1801-45d3-aa31-4c307c620eec-scripts\") pod \"ovn-controller-ovs-w77gs\" (UID: \"f3f53875-1801-45d3-aa31-4c307c620eec\") " pod="openstack/ovn-controller-ovs-w77gs" Nov 21 20:25:01 crc kubenswrapper[4727]: I1121 20:25:01.661600 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/f3f53875-1801-45d3-aa31-4c307c620eec-var-lib\") pod \"ovn-controller-ovs-w77gs\" (UID: \"f3f53875-1801-45d3-aa31-4c307c620eec\") " pod="openstack/ovn-controller-ovs-w77gs" Nov 21 20:25:01 crc kubenswrapper[4727]: I1121 20:25:01.661752 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5668e228-8946-468d-94e0-fa77489e46b3-var-log-ovn\") pod \"ovn-controller-k8fk5\" (UID: \"5668e228-8946-468d-94e0-fa77489e46b3\") " pod="openstack/ovn-controller-k8fk5" Nov 21 20:25:01 crc kubenswrapper[4727]: I1121 20:25:01.661859 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhw5t\" (UniqueName: \"kubernetes.io/projected/f3f53875-1801-45d3-aa31-4c307c620eec-kube-api-access-dhw5t\") pod \"ovn-controller-ovs-w77gs\" (UID: \"f3f53875-1801-45d3-aa31-4c307c620eec\") " pod="openstack/ovn-controller-ovs-w77gs" Nov 21 20:25:01 crc kubenswrapper[4727]: I1121 20:25:01.661936 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/f3f53875-1801-45d3-aa31-4c307c620eec-etc-ovs\") pod \"ovn-controller-ovs-w77gs\" (UID: \"f3f53875-1801-45d3-aa31-4c307c620eec\") " pod="openstack/ovn-controller-ovs-w77gs" Nov 21 20:25:01 crc kubenswrapper[4727]: I1121 20:25:01.662095 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5668e228-8946-468d-94e0-fa77489e46b3-combined-ca-bundle\") pod \"ovn-controller-k8fk5\" (UID: \"5668e228-8946-468d-94e0-fa77489e46b3\") " pod="openstack/ovn-controller-k8fk5" Nov 21 20:25:01 crc kubenswrapper[4727]: I1121 20:25:01.764630 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5668e228-8946-468d-94e0-fa77489e46b3-combined-ca-bundle\") pod \"ovn-controller-k8fk5\" (UID: \"5668e228-8946-468d-94e0-fa77489e46b3\") " pod="openstack/ovn-controller-k8fk5" Nov 21 20:25:01 crc kubenswrapper[4727]: I1121 20:25:01.764728 4727 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5668e228-8946-468d-94e0-fa77489e46b3-scripts\") pod \"ovn-controller-k8fk5\" (UID: \"5668e228-8946-468d-94e0-fa77489e46b3\") " pod="openstack/ovn-controller-k8fk5" Nov 21 20:25:01 crc kubenswrapper[4727]: I1121 20:25:01.764757 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5668e228-8946-468d-94e0-fa77489e46b3-var-run-ovn\") pod \"ovn-controller-k8fk5\" (UID: \"5668e228-8946-468d-94e0-fa77489e46b3\") " pod="openstack/ovn-controller-k8fk5" Nov 21 20:25:01 crc kubenswrapper[4727]: I1121 20:25:01.764800 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76529\" (UniqueName: \"kubernetes.io/projected/5668e228-8946-468d-94e0-fa77489e46b3-kube-api-access-76529\") pod \"ovn-controller-k8fk5\" (UID: \"5668e228-8946-468d-94e0-fa77489e46b3\") " pod="openstack/ovn-controller-k8fk5" Nov 21 20:25:01 crc kubenswrapper[4727]: I1121 20:25:01.764835 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f3f53875-1801-45d3-aa31-4c307c620eec-var-log\") pod \"ovn-controller-ovs-w77gs\" (UID: \"f3f53875-1801-45d3-aa31-4c307c620eec\") " pod="openstack/ovn-controller-ovs-w77gs" Nov 21 20:25:01 crc kubenswrapper[4727]: I1121 20:25:01.764858 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/5668e228-8946-468d-94e0-fa77489e46b3-ovn-controller-tls-certs\") pod \"ovn-controller-k8fk5\" (UID: \"5668e228-8946-468d-94e0-fa77489e46b3\") " pod="openstack/ovn-controller-k8fk5" Nov 21 20:25:01 crc kubenswrapper[4727]: I1121 20:25:01.764887 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5668e228-8946-468d-94e0-fa77489e46b3-var-run\") pod 
\"ovn-controller-k8fk5\" (UID: \"5668e228-8946-468d-94e0-fa77489e46b3\") " pod="openstack/ovn-controller-k8fk5" Nov 21 20:25:01 crc kubenswrapper[4727]: I1121 20:25:01.764907 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f3f53875-1801-45d3-aa31-4c307c620eec-var-run\") pod \"ovn-controller-ovs-w77gs\" (UID: \"f3f53875-1801-45d3-aa31-4c307c620eec\") " pod="openstack/ovn-controller-ovs-w77gs" Nov 21 20:25:01 crc kubenswrapper[4727]: I1121 20:25:01.764926 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f3f53875-1801-45d3-aa31-4c307c620eec-scripts\") pod \"ovn-controller-ovs-w77gs\" (UID: \"f3f53875-1801-45d3-aa31-4c307c620eec\") " pod="openstack/ovn-controller-ovs-w77gs" Nov 21 20:25:01 crc kubenswrapper[4727]: I1121 20:25:01.764984 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/f3f53875-1801-45d3-aa31-4c307c620eec-var-lib\") pod \"ovn-controller-ovs-w77gs\" (UID: \"f3f53875-1801-45d3-aa31-4c307c620eec\") " pod="openstack/ovn-controller-ovs-w77gs" Nov 21 20:25:01 crc kubenswrapper[4727]: I1121 20:25:01.765023 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5668e228-8946-468d-94e0-fa77489e46b3-var-log-ovn\") pod \"ovn-controller-k8fk5\" (UID: \"5668e228-8946-468d-94e0-fa77489e46b3\") " pod="openstack/ovn-controller-k8fk5" Nov 21 20:25:01 crc kubenswrapper[4727]: I1121 20:25:01.765050 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhw5t\" (UniqueName: \"kubernetes.io/projected/f3f53875-1801-45d3-aa31-4c307c620eec-kube-api-access-dhw5t\") pod \"ovn-controller-ovs-w77gs\" (UID: \"f3f53875-1801-45d3-aa31-4c307c620eec\") " pod="openstack/ovn-controller-ovs-w77gs" Nov 21 20:25:01 crc 
kubenswrapper[4727]: I1121 20:25:01.765079 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/f3f53875-1801-45d3-aa31-4c307c620eec-etc-ovs\") pod \"ovn-controller-ovs-w77gs\" (UID: \"f3f53875-1801-45d3-aa31-4c307c620eec\") " pod="openstack/ovn-controller-ovs-w77gs" Nov 21 20:25:01 crc kubenswrapper[4727]: I1121 20:25:01.765718 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/f3f53875-1801-45d3-aa31-4c307c620eec-etc-ovs\") pod \"ovn-controller-ovs-w77gs\" (UID: \"f3f53875-1801-45d3-aa31-4c307c620eec\") " pod="openstack/ovn-controller-ovs-w77gs" Nov 21 20:25:01 crc kubenswrapper[4727]: I1121 20:25:01.767707 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5668e228-8946-468d-94e0-fa77489e46b3-var-run\") pod \"ovn-controller-k8fk5\" (UID: \"5668e228-8946-468d-94e0-fa77489e46b3\") " pod="openstack/ovn-controller-k8fk5" Nov 21 20:25:01 crc kubenswrapper[4727]: I1121 20:25:01.768561 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5668e228-8946-468d-94e0-fa77489e46b3-var-run-ovn\") pod \"ovn-controller-k8fk5\" (UID: \"5668e228-8946-468d-94e0-fa77489e46b3\") " pod="openstack/ovn-controller-k8fk5" Nov 21 20:25:01 crc kubenswrapper[4727]: I1121 20:25:01.768726 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f3f53875-1801-45d3-aa31-4c307c620eec-var-log\") pod \"ovn-controller-ovs-w77gs\" (UID: \"f3f53875-1801-45d3-aa31-4c307c620eec\") " pod="openstack/ovn-controller-ovs-w77gs" Nov 21 20:25:01 crc kubenswrapper[4727]: I1121 20:25:01.768862 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: 
\"kubernetes.io/host-path/f3f53875-1801-45d3-aa31-4c307c620eec-var-lib\") pod \"ovn-controller-ovs-w77gs\" (UID: \"f3f53875-1801-45d3-aa31-4c307c620eec\") " pod="openstack/ovn-controller-ovs-w77gs" Nov 21 20:25:01 crc kubenswrapper[4727]: I1121 20:25:01.768916 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f3f53875-1801-45d3-aa31-4c307c620eec-var-run\") pod \"ovn-controller-ovs-w77gs\" (UID: \"f3f53875-1801-45d3-aa31-4c307c620eec\") " pod="openstack/ovn-controller-ovs-w77gs" Nov 21 20:25:01 crc kubenswrapper[4727]: I1121 20:25:01.770197 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5668e228-8946-468d-94e0-fa77489e46b3-scripts\") pod \"ovn-controller-k8fk5\" (UID: \"5668e228-8946-468d-94e0-fa77489e46b3\") " pod="openstack/ovn-controller-k8fk5" Nov 21 20:25:01 crc kubenswrapper[4727]: I1121 20:25:01.771007 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f3f53875-1801-45d3-aa31-4c307c620eec-scripts\") pod \"ovn-controller-ovs-w77gs\" (UID: \"f3f53875-1801-45d3-aa31-4c307c620eec\") " pod="openstack/ovn-controller-ovs-w77gs" Nov 21 20:25:01 crc kubenswrapper[4727]: I1121 20:25:01.771417 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5668e228-8946-468d-94e0-fa77489e46b3-var-log-ovn\") pod \"ovn-controller-k8fk5\" (UID: \"5668e228-8946-468d-94e0-fa77489e46b3\") " pod="openstack/ovn-controller-k8fk5" Nov 21 20:25:01 crc kubenswrapper[4727]: I1121 20:25:01.773459 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5668e228-8946-468d-94e0-fa77489e46b3-combined-ca-bundle\") pod \"ovn-controller-k8fk5\" (UID: \"5668e228-8946-468d-94e0-fa77489e46b3\") " pod="openstack/ovn-controller-k8fk5" Nov 
21 20:25:01 crc kubenswrapper[4727]: I1121 20:25:01.773482 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/5668e228-8946-468d-94e0-fa77489e46b3-ovn-controller-tls-certs\") pod \"ovn-controller-k8fk5\" (UID: \"5668e228-8946-468d-94e0-fa77489e46b3\") " pod="openstack/ovn-controller-k8fk5" Nov 21 20:25:01 crc kubenswrapper[4727]: I1121 20:25:01.788068 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhw5t\" (UniqueName: \"kubernetes.io/projected/f3f53875-1801-45d3-aa31-4c307c620eec-kube-api-access-dhw5t\") pod \"ovn-controller-ovs-w77gs\" (UID: \"f3f53875-1801-45d3-aa31-4c307c620eec\") " pod="openstack/ovn-controller-ovs-w77gs" Nov 21 20:25:01 crc kubenswrapper[4727]: I1121 20:25:01.791078 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76529\" (UniqueName: \"kubernetes.io/projected/5668e228-8946-468d-94e0-fa77489e46b3-kube-api-access-76529\") pod \"ovn-controller-k8fk5\" (UID: \"5668e228-8946-468d-94e0-fa77489e46b3\") " pod="openstack/ovn-controller-k8fk5" Nov 21 20:25:01 crc kubenswrapper[4727]: I1121 20:25:01.874152 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-k8fk5" Nov 21 20:25:01 crc kubenswrapper[4727]: I1121 20:25:01.894700 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-w77gs" Nov 21 20:25:03 crc kubenswrapper[4727]: I1121 20:25:03.186305 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 21 20:25:03 crc kubenswrapper[4727]: I1121 20:25:03.188307 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 21 20:25:03 crc kubenswrapper[4727]: I1121 20:25:03.190724 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Nov 21 20:25:03 crc kubenswrapper[4727]: I1121 20:25:03.193310 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Nov 21 20:25:03 crc kubenswrapper[4727]: I1121 20:25:03.193428 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Nov 21 20:25:03 crc kubenswrapper[4727]: I1121 20:25:03.193657 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-j7cgn" Nov 21 20:25:03 crc kubenswrapper[4727]: I1121 20:25:03.193679 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Nov 21 20:25:03 crc kubenswrapper[4727]: I1121 20:25:03.223697 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 21 20:25:03 crc kubenswrapper[4727]: I1121 20:25:03.299661 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/40732896-dabf-47ad-bc39-236c51d78ef2-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"40732896-dabf-47ad-bc39-236c51d78ef2\") " pod="openstack/ovsdbserver-nb-0" Nov 21 20:25:03 crc kubenswrapper[4727]: I1121 20:25:03.299731 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8h9w2\" (UniqueName: \"kubernetes.io/projected/40732896-dabf-47ad-bc39-236c51d78ef2-kube-api-access-8h9w2\") pod \"ovsdbserver-nb-0\" (UID: \"40732896-dabf-47ad-bc39-236c51d78ef2\") " pod="openstack/ovsdbserver-nb-0" Nov 21 20:25:03 crc kubenswrapper[4727]: I1121 20:25:03.299860 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/40732896-dabf-47ad-bc39-236c51d78ef2-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"40732896-dabf-47ad-bc39-236c51d78ef2\") " pod="openstack/ovsdbserver-nb-0" Nov 21 20:25:03 crc kubenswrapper[4727]: I1121 20:25:03.299890 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/40732896-dabf-47ad-bc39-236c51d78ef2-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"40732896-dabf-47ad-bc39-236c51d78ef2\") " pod="openstack/ovsdbserver-nb-0" Nov 21 20:25:03 crc kubenswrapper[4727]: I1121 20:25:03.300072 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40732896-dabf-47ad-bc39-236c51d78ef2-config\") pod \"ovsdbserver-nb-0\" (UID: \"40732896-dabf-47ad-bc39-236c51d78ef2\") " pod="openstack/ovsdbserver-nb-0" Nov 21 20:25:03 crc kubenswrapper[4727]: I1121 20:25:03.300115 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"40732896-dabf-47ad-bc39-236c51d78ef2\") " pod="openstack/ovsdbserver-nb-0" Nov 21 20:25:03 crc kubenswrapper[4727]: I1121 20:25:03.300138 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/40732896-dabf-47ad-bc39-236c51d78ef2-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"40732896-dabf-47ad-bc39-236c51d78ef2\") " pod="openstack/ovsdbserver-nb-0" Nov 21 20:25:03 crc kubenswrapper[4727]: I1121 20:25:03.300341 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/40732896-dabf-47ad-bc39-236c51d78ef2-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"40732896-dabf-47ad-bc39-236c51d78ef2\") " pod="openstack/ovsdbserver-nb-0" Nov 21 20:25:03 crc kubenswrapper[4727]: I1121 20:25:03.402092 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/40732896-dabf-47ad-bc39-236c51d78ef2-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"40732896-dabf-47ad-bc39-236c51d78ef2\") " pod="openstack/ovsdbserver-nb-0" Nov 21 20:25:03 crc kubenswrapper[4727]: I1121 20:25:03.402162 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/40732896-dabf-47ad-bc39-236c51d78ef2-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"40732896-dabf-47ad-bc39-236c51d78ef2\") " pod="openstack/ovsdbserver-nb-0" Nov 21 20:25:03 crc kubenswrapper[4727]: I1121 20:25:03.402255 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40732896-dabf-47ad-bc39-236c51d78ef2-config\") pod \"ovsdbserver-nb-0\" (UID: \"40732896-dabf-47ad-bc39-236c51d78ef2\") " pod="openstack/ovsdbserver-nb-0" Nov 21 20:25:03 crc kubenswrapper[4727]: I1121 20:25:03.402311 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"40732896-dabf-47ad-bc39-236c51d78ef2\") " pod="openstack/ovsdbserver-nb-0" Nov 21 20:25:03 crc kubenswrapper[4727]: I1121 20:25:03.402332 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/40732896-dabf-47ad-bc39-236c51d78ef2-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"40732896-dabf-47ad-bc39-236c51d78ef2\") " pod="openstack/ovsdbserver-nb-0" Nov 21 20:25:03 crc 
kubenswrapper[4727]: I1121 20:25:03.402382 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40732896-dabf-47ad-bc39-236c51d78ef2-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"40732896-dabf-47ad-bc39-236c51d78ef2\") " pod="openstack/ovsdbserver-nb-0" Nov 21 20:25:03 crc kubenswrapper[4727]: I1121 20:25:03.402414 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/40732896-dabf-47ad-bc39-236c51d78ef2-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"40732896-dabf-47ad-bc39-236c51d78ef2\") " pod="openstack/ovsdbserver-nb-0" Nov 21 20:25:03 crc kubenswrapper[4727]: I1121 20:25:03.402444 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8h9w2\" (UniqueName: \"kubernetes.io/projected/40732896-dabf-47ad-bc39-236c51d78ef2-kube-api-access-8h9w2\") pod \"ovsdbserver-nb-0\" (UID: \"40732896-dabf-47ad-bc39-236c51d78ef2\") " pod="openstack/ovsdbserver-nb-0" Nov 21 20:25:03 crc kubenswrapper[4727]: I1121 20:25:03.402648 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/40732896-dabf-47ad-bc39-236c51d78ef2-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"40732896-dabf-47ad-bc39-236c51d78ef2\") " pod="openstack/ovsdbserver-nb-0" Nov 21 20:25:03 crc kubenswrapper[4727]: I1121 20:25:03.403129 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"40732896-dabf-47ad-bc39-236c51d78ef2\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/ovsdbserver-nb-0" Nov 21 20:25:03 crc kubenswrapper[4727]: I1121 20:25:03.403401 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/40732896-dabf-47ad-bc39-236c51d78ef2-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"40732896-dabf-47ad-bc39-236c51d78ef2\") " pod="openstack/ovsdbserver-nb-0" Nov 21 20:25:03 crc kubenswrapper[4727]: I1121 20:25:03.403469 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40732896-dabf-47ad-bc39-236c51d78ef2-config\") pod \"ovsdbserver-nb-0\" (UID: \"40732896-dabf-47ad-bc39-236c51d78ef2\") " pod="openstack/ovsdbserver-nb-0" Nov 21 20:25:03 crc kubenswrapper[4727]: I1121 20:25:03.408574 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/40732896-dabf-47ad-bc39-236c51d78ef2-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"40732896-dabf-47ad-bc39-236c51d78ef2\") " pod="openstack/ovsdbserver-nb-0" Nov 21 20:25:03 crc kubenswrapper[4727]: I1121 20:25:03.409873 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/40732896-dabf-47ad-bc39-236c51d78ef2-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"40732896-dabf-47ad-bc39-236c51d78ef2\") " pod="openstack/ovsdbserver-nb-0" Nov 21 20:25:03 crc kubenswrapper[4727]: I1121 20:25:03.413287 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40732896-dabf-47ad-bc39-236c51d78ef2-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"40732896-dabf-47ad-bc39-236c51d78ef2\") " pod="openstack/ovsdbserver-nb-0" Nov 21 20:25:03 crc kubenswrapper[4727]: I1121 20:25:03.422736 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8h9w2\" (UniqueName: \"kubernetes.io/projected/40732896-dabf-47ad-bc39-236c51d78ef2-kube-api-access-8h9w2\") pod \"ovsdbserver-nb-0\" (UID: \"40732896-dabf-47ad-bc39-236c51d78ef2\") " 
pod="openstack/ovsdbserver-nb-0" Nov 21 20:25:03 crc kubenswrapper[4727]: I1121 20:25:03.431425 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"40732896-dabf-47ad-bc39-236c51d78ef2\") " pod="openstack/ovsdbserver-nb-0" Nov 21 20:25:03 crc kubenswrapper[4727]: I1121 20:25:03.523452 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 21 20:25:04 crc kubenswrapper[4727]: I1121 20:25:04.145652 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 21 20:25:04 crc kubenswrapper[4727]: I1121 20:25:04.147328 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 21 20:25:04 crc kubenswrapper[4727]: I1121 20:25:04.151331 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Nov 21 20:25:04 crc kubenswrapper[4727]: I1121 20:25:04.151359 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Nov 21 20:25:04 crc kubenswrapper[4727]: I1121 20:25:04.153295 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-kpk9q" Nov 21 20:25:04 crc kubenswrapper[4727]: I1121 20:25:04.153708 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Nov 21 20:25:04 crc kubenswrapper[4727]: I1121 20:25:04.163210 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 21 20:25:04 crc kubenswrapper[4727]: I1121 20:25:04.220055 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/5adec12e-c3fd-4d6a-bf0b-c38ac063f06c-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"5adec12e-c3fd-4d6a-bf0b-c38ac063f06c\") " pod="openstack/ovsdbserver-sb-0" Nov 21 20:25:04 crc kubenswrapper[4727]: I1121 20:25:04.220177 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5adec12e-c3fd-4d6a-bf0b-c38ac063f06c-config\") pod \"ovsdbserver-sb-0\" (UID: \"5adec12e-c3fd-4d6a-bf0b-c38ac063f06c\") " pod="openstack/ovsdbserver-sb-0" Nov 21 20:25:04 crc kubenswrapper[4727]: I1121 20:25:04.220211 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5adec12e-c3fd-4d6a-bf0b-c38ac063f06c-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"5adec12e-c3fd-4d6a-bf0b-c38ac063f06c\") " pod="openstack/ovsdbserver-sb-0" Nov 21 20:25:04 crc kubenswrapper[4727]: I1121 20:25:04.220238 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5adec12e-c3fd-4d6a-bf0b-c38ac063f06c-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"5adec12e-c3fd-4d6a-bf0b-c38ac063f06c\") " pod="openstack/ovsdbserver-sb-0" Nov 21 20:25:04 crc kubenswrapper[4727]: I1121 20:25:04.220261 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5adec12e-c3fd-4d6a-bf0b-c38ac063f06c-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"5adec12e-c3fd-4d6a-bf0b-c38ac063f06c\") " pod="openstack/ovsdbserver-sb-0" Nov 21 20:25:04 crc kubenswrapper[4727]: I1121 20:25:04.220288 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5adec12e-c3fd-4d6a-bf0b-c38ac063f06c-metrics-certs-tls-certs\") pod 
\"ovsdbserver-sb-0\" (UID: \"5adec12e-c3fd-4d6a-bf0b-c38ac063f06c\") " pod="openstack/ovsdbserver-sb-0" Nov 21 20:25:04 crc kubenswrapper[4727]: I1121 20:25:04.220370 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hf6bf\" (UniqueName: \"kubernetes.io/projected/5adec12e-c3fd-4d6a-bf0b-c38ac063f06c-kube-api-access-hf6bf\") pod \"ovsdbserver-sb-0\" (UID: \"5adec12e-c3fd-4d6a-bf0b-c38ac063f06c\") " pod="openstack/ovsdbserver-sb-0" Nov 21 20:25:04 crc kubenswrapper[4727]: I1121 20:25:04.220559 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-sb-0\" (UID: \"5adec12e-c3fd-4d6a-bf0b-c38ac063f06c\") " pod="openstack/ovsdbserver-sb-0" Nov 21 20:25:04 crc kubenswrapper[4727]: I1121 20:25:04.321768 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5adec12e-c3fd-4d6a-bf0b-c38ac063f06c-config\") pod \"ovsdbserver-sb-0\" (UID: \"5adec12e-c3fd-4d6a-bf0b-c38ac063f06c\") " pod="openstack/ovsdbserver-sb-0" Nov 21 20:25:04 crc kubenswrapper[4727]: I1121 20:25:04.321814 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5adec12e-c3fd-4d6a-bf0b-c38ac063f06c-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"5adec12e-c3fd-4d6a-bf0b-c38ac063f06c\") " pod="openstack/ovsdbserver-sb-0" Nov 21 20:25:04 crc kubenswrapper[4727]: I1121 20:25:04.321835 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5adec12e-c3fd-4d6a-bf0b-c38ac063f06c-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"5adec12e-c3fd-4d6a-bf0b-c38ac063f06c\") " pod="openstack/ovsdbserver-sb-0" Nov 21 20:25:04 crc kubenswrapper[4727]: I1121 20:25:04.321854 4727 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5adec12e-c3fd-4d6a-bf0b-c38ac063f06c-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"5adec12e-c3fd-4d6a-bf0b-c38ac063f06c\") " pod="openstack/ovsdbserver-sb-0" Nov 21 20:25:04 crc kubenswrapper[4727]: I1121 20:25:04.321874 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5adec12e-c3fd-4d6a-bf0b-c38ac063f06c-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"5adec12e-c3fd-4d6a-bf0b-c38ac063f06c\") " pod="openstack/ovsdbserver-sb-0" Nov 21 20:25:04 crc kubenswrapper[4727]: I1121 20:25:04.321897 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hf6bf\" (UniqueName: \"kubernetes.io/projected/5adec12e-c3fd-4d6a-bf0b-c38ac063f06c-kube-api-access-hf6bf\") pod \"ovsdbserver-sb-0\" (UID: \"5adec12e-c3fd-4d6a-bf0b-c38ac063f06c\") " pod="openstack/ovsdbserver-sb-0" Nov 21 20:25:04 crc kubenswrapper[4727]: I1121 20:25:04.321979 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-sb-0\" (UID: \"5adec12e-c3fd-4d6a-bf0b-c38ac063f06c\") " pod="openstack/ovsdbserver-sb-0" Nov 21 20:25:04 crc kubenswrapper[4727]: I1121 20:25:04.322039 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5adec12e-c3fd-4d6a-bf0b-c38ac063f06c-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"5adec12e-c3fd-4d6a-bf0b-c38ac063f06c\") " pod="openstack/ovsdbserver-sb-0" Nov 21 20:25:04 crc kubenswrapper[4727]: I1121 20:25:04.322687 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5adec12e-c3fd-4d6a-bf0b-c38ac063f06c-config\") pod \"ovsdbserver-sb-0\" (UID: \"5adec12e-c3fd-4d6a-bf0b-c38ac063f06c\") " pod="openstack/ovsdbserver-sb-0" Nov 21 20:25:04 crc kubenswrapper[4727]: I1121 20:25:04.323015 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5adec12e-c3fd-4d6a-bf0b-c38ac063f06c-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"5adec12e-c3fd-4d6a-bf0b-c38ac063f06c\") " pod="openstack/ovsdbserver-sb-0" Nov 21 20:25:04 crc kubenswrapper[4727]: I1121 20:25:04.323455 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5adec12e-c3fd-4d6a-bf0b-c38ac063f06c-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"5adec12e-c3fd-4d6a-bf0b-c38ac063f06c\") " pod="openstack/ovsdbserver-sb-0" Nov 21 20:25:04 crc kubenswrapper[4727]: I1121 20:25:04.323629 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-sb-0\" (UID: \"5adec12e-c3fd-4d6a-bf0b-c38ac063f06c\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/ovsdbserver-sb-0" Nov 21 20:25:04 crc kubenswrapper[4727]: I1121 20:25:04.326409 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5adec12e-c3fd-4d6a-bf0b-c38ac063f06c-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"5adec12e-c3fd-4d6a-bf0b-c38ac063f06c\") " pod="openstack/ovsdbserver-sb-0" Nov 21 20:25:04 crc kubenswrapper[4727]: I1121 20:25:04.328589 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5adec12e-c3fd-4d6a-bf0b-c38ac063f06c-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"5adec12e-c3fd-4d6a-bf0b-c38ac063f06c\") " 
pod="openstack/ovsdbserver-sb-0" Nov 21 20:25:04 crc kubenswrapper[4727]: I1121 20:25:04.339443 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hf6bf\" (UniqueName: \"kubernetes.io/projected/5adec12e-c3fd-4d6a-bf0b-c38ac063f06c-kube-api-access-hf6bf\") pod \"ovsdbserver-sb-0\" (UID: \"5adec12e-c3fd-4d6a-bf0b-c38ac063f06c\") " pod="openstack/ovsdbserver-sb-0" Nov 21 20:25:04 crc kubenswrapper[4727]: I1121 20:25:04.341268 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5adec12e-c3fd-4d6a-bf0b-c38ac063f06c-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"5adec12e-c3fd-4d6a-bf0b-c38ac063f06c\") " pod="openstack/ovsdbserver-sb-0" Nov 21 20:25:04 crc kubenswrapper[4727]: I1121 20:25:04.345635 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-sb-0\" (UID: \"5adec12e-c3fd-4d6a-bf0b-c38ac063f06c\") " pod="openstack/ovsdbserver-sb-0" Nov 21 20:25:04 crc kubenswrapper[4727]: I1121 20:25:04.474747 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 21 20:25:07 crc kubenswrapper[4727]: E1121 20:25:07.515938 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 21 20:25:07 crc kubenswrapper[4727]: E1121 20:25:07.516461 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hcmzs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityCont
ext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-tc4jf_openstack(fc04b326-2fd8-4935-bffc-b7ac77a9541b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 21 20:25:07 crc kubenswrapper[4727]: E1121 20:25:07.517624 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-tc4jf" podUID="fc04b326-2fd8-4935-bffc-b7ac77a9541b" Nov 21 20:25:07 crc kubenswrapper[4727]: E1121 20:25:07.566891 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 21 20:25:07 crc kubenswrapper[4727]: E1121 20:25:07.567106 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2tlqf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-hv2qp_openstack(ab4ee6b9-7913-47e4-b7ba-ac8925ed8715): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 21 20:25:07 crc kubenswrapper[4727]: E1121 20:25:07.568462 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-hv2qp" podUID="ab4ee6b9-7913-47e4-b7ba-ac8925ed8715" Nov 21 20:25:08 crc kubenswrapper[4727]: I1121 20:25:08.148054 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-gs87m"] Nov 21 20:25:08 crc kubenswrapper[4727]: I1121 20:25:08.163418 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 21 20:25:08 crc kubenswrapper[4727]: I1121 20:25:08.465084 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-tc4jf" Nov 21 20:25:08 crc kubenswrapper[4727]: I1121 20:25:08.490440 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-hv2qp" Nov 21 20:25:08 crc kubenswrapper[4727]: I1121 20:25:08.513259 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc04b326-2fd8-4935-bffc-b7ac77a9541b-config\") pod \"fc04b326-2fd8-4935-bffc-b7ac77a9541b\" (UID: \"fc04b326-2fd8-4935-bffc-b7ac77a9541b\") " Nov 21 20:25:08 crc kubenswrapper[4727]: I1121 20:25:08.514090 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc04b326-2fd8-4935-bffc-b7ac77a9541b-config" (OuterVolumeSpecName: "config") pod "fc04b326-2fd8-4935-bffc-b7ac77a9541b" (UID: "fc04b326-2fd8-4935-bffc-b7ac77a9541b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:25:08 crc kubenswrapper[4727]: I1121 20:25:08.514488 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcmzs\" (UniqueName: \"kubernetes.io/projected/fc04b326-2fd8-4935-bffc-b7ac77a9541b-kube-api-access-hcmzs\") pod \"fc04b326-2fd8-4935-bffc-b7ac77a9541b\" (UID: \"fc04b326-2fd8-4935-bffc-b7ac77a9541b\") " Nov 21 20:25:08 crc kubenswrapper[4727]: I1121 20:25:08.514743 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc04b326-2fd8-4935-bffc-b7ac77a9541b-dns-svc\") pod \"fc04b326-2fd8-4935-bffc-b7ac77a9541b\" (UID: \"fc04b326-2fd8-4935-bffc-b7ac77a9541b\") " Nov 21 20:25:08 crc kubenswrapper[4727]: I1121 20:25:08.515219 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc04b326-2fd8-4935-bffc-b7ac77a9541b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fc04b326-2fd8-4935-bffc-b7ac77a9541b" (UID: "fc04b326-2fd8-4935-bffc-b7ac77a9541b"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:25:08 crc kubenswrapper[4727]: I1121 20:25:08.516095 4727 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc04b326-2fd8-4935-bffc-b7ac77a9541b-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 20:25:08 crc kubenswrapper[4727]: I1121 20:25:08.516293 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc04b326-2fd8-4935-bffc-b7ac77a9541b-config\") on node \"crc\" DevicePath \"\"" Nov 21 20:25:08 crc kubenswrapper[4727]: I1121 20:25:08.520130 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc04b326-2fd8-4935-bffc-b7ac77a9541b-kube-api-access-hcmzs" (OuterVolumeSpecName: "kube-api-access-hcmzs") pod "fc04b326-2fd8-4935-bffc-b7ac77a9541b" (UID: "fc04b326-2fd8-4935-bffc-b7ac77a9541b"). InnerVolumeSpecName "kube-api-access-hcmzs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:25:08 crc kubenswrapper[4727]: I1121 20:25:08.617265 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab4ee6b9-7913-47e4-b7ba-ac8925ed8715-config\") pod \"ab4ee6b9-7913-47e4-b7ba-ac8925ed8715\" (UID: \"ab4ee6b9-7913-47e4-b7ba-ac8925ed8715\") " Nov 21 20:25:08 crc kubenswrapper[4727]: I1121 20:25:08.617493 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2tlqf\" (UniqueName: \"kubernetes.io/projected/ab4ee6b9-7913-47e4-b7ba-ac8925ed8715-kube-api-access-2tlqf\") pod \"ab4ee6b9-7913-47e4-b7ba-ac8925ed8715\" (UID: \"ab4ee6b9-7913-47e4-b7ba-ac8925ed8715\") " Nov 21 20:25:08 crc kubenswrapper[4727]: I1121 20:25:08.617801 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab4ee6b9-7913-47e4-b7ba-ac8925ed8715-config" (OuterVolumeSpecName: "config") pod 
"ab4ee6b9-7913-47e4-b7ba-ac8925ed8715" (UID: "ab4ee6b9-7913-47e4-b7ba-ac8925ed8715"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:25:08 crc kubenswrapper[4727]: I1121 20:25:08.618225 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab4ee6b9-7913-47e4-b7ba-ac8925ed8715-config\") on node \"crc\" DevicePath \"\"" Nov 21 20:25:08 crc kubenswrapper[4727]: I1121 20:25:08.618245 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hcmzs\" (UniqueName: \"kubernetes.io/projected/fc04b326-2fd8-4935-bffc-b7ac77a9541b-kube-api-access-hcmzs\") on node \"crc\" DevicePath \"\"" Nov 21 20:25:08 crc kubenswrapper[4727]: I1121 20:25:08.622821 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab4ee6b9-7913-47e4-b7ba-ac8925ed8715-kube-api-access-2tlqf" (OuterVolumeSpecName: "kube-api-access-2tlqf") pod "ab4ee6b9-7913-47e4-b7ba-ac8925ed8715" (UID: "ab4ee6b9-7913-47e4-b7ba-ac8925ed8715"). InnerVolumeSpecName "kube-api-access-2tlqf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:25:08 crc kubenswrapper[4727]: I1121 20:25:08.720354 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2tlqf\" (UniqueName: \"kubernetes.io/projected/ab4ee6b9-7913-47e4-b7ba-ac8925ed8715-kube-api-access-2tlqf\") on node \"crc\" DevicePath \"\"" Nov 21 20:25:08 crc kubenswrapper[4727]: I1121 20:25:08.857219 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"377d8548-a458-47c0-bd02-9904c8110d40","Type":"ContainerStarted","Data":"acd2fc25c5a5ee64561dfad705e8ecb57cdfebd9dd90410833b2de95b4ca193a"} Nov 21 20:25:08 crc kubenswrapper[4727]: I1121 20:25:08.859593 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-hv2qp" event={"ID":"ab4ee6b9-7913-47e4-b7ba-ac8925ed8715","Type":"ContainerDied","Data":"701698034c82f2d902b03050799efe0cc88a0b9910b8f144fb49e7d8ae80eebf"} Nov 21 20:25:08 crc kubenswrapper[4727]: I1121 20:25:08.859671 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-hv2qp" Nov 21 20:25:08 crc kubenswrapper[4727]: I1121 20:25:08.879053 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-gs87m" event={"ID":"14bdeda2-a618-4f47-ae48-87a7c611401e","Type":"ContainerStarted","Data":"caf725d3ef2b173e2aa4692163f0690ae3f4e2b567beb0d255b9dbddde1bf0ca"} Nov 21 20:25:08 crc kubenswrapper[4727]: I1121 20:25:08.880846 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-tc4jf" event={"ID":"fc04b326-2fd8-4935-bffc-b7ac77a9541b","Type":"ContainerDied","Data":"c2e27037af1a3a0b431e34a99203922efcb2e84f70804917e671ee9c82757122"} Nov 21 20:25:08 crc kubenswrapper[4727]: I1121 20:25:08.880902 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-tc4jf" Nov 21 20:25:08 crc kubenswrapper[4727]: I1121 20:25:08.949841 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 21 20:25:08 crc kubenswrapper[4727]: I1121 20:25:08.971490 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 21 20:25:08 crc kubenswrapper[4727]: W1121 20:25:08.989630 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda2240b49_b00a_45c4_94fa_3acd3cb0e953.slice/crio-ceac52d5977546c3cc997d2872c2baedc32aff7b897b5998bb3939e8a13833e5 WatchSource:0}: Error finding container ceac52d5977546c3cc997d2872c2baedc32aff7b897b5998bb3939e8a13833e5: Status 404 returned error can't find the container with id ceac52d5977546c3cc997d2872c2baedc32aff7b897b5998bb3939e8a13833e5 Nov 21 20:25:08 crc kubenswrapper[4727]: I1121 20:25:08.991584 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 21 20:25:08 crc kubenswrapper[4727]: W1121 20:25:08.992210 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0901b5dd_6fbf_4a40_8d26_ab792ea7f110.slice/crio-1c037291ce4de98280fb3e5b9994cb6ee77db3b430cf190afe264cc962c579e9 WatchSource:0}: Error finding container 1c037291ce4de98280fb3e5b9994cb6ee77db3b430cf190afe264cc962c579e9: Status 404 returned error can't find the container with id 1c037291ce4de98280fb3e5b9994cb6ee77db3b430cf190afe264cc962c579e9 Nov 21 20:25:08 crc kubenswrapper[4727]: W1121 20:25:08.996939 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37e9af95_0497_4e78_94f6_eb83cd6a6e80.slice/crio-eea3e0c9e49a53d2104c10f3f641f441702560e7c11e5d2f87019f8255632192 WatchSource:0}: Error finding container 
eea3e0c9e49a53d2104c10f3f641f441702560e7c11e5d2f87019f8255632192: Status 404 returned error can't find the container with id eea3e0c9e49a53d2104c10f3f641f441702560e7c11e5d2f87019f8255632192 Nov 21 20:25:08 crc kubenswrapper[4727]: I1121 20:25:08.999102 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-7d5fb4cbfb-pcbt2"] Nov 21 20:25:09 crc kubenswrapper[4727]: I1121 20:25:09.009239 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-wxjmx"] Nov 21 20:25:09 crc kubenswrapper[4727]: W1121 20:25:09.010563 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7fdf0962_de4d_4f58_87d3_a6458e4ff980.slice/crio-1bd8f5195b78bd84a76e189ce623ccc459f059599030b216ee6e3b625235219d WatchSource:0}: Error finding container 1bd8f5195b78bd84a76e189ce623ccc459f059599030b216ee6e3b625235219d: Status 404 returned error can't find the container with id 1bd8f5195b78bd84a76e189ce623ccc459f059599030b216ee6e3b625235219d Nov 21 20:25:09 crc kubenswrapper[4727]: I1121 20:25:09.020985 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 21 20:25:09 crc kubenswrapper[4727]: W1121 20:25:09.036841 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2cd143a_46bd_4409_b5e9_91c1cb00e378.slice/crio-c7c9f422db28d737cdd06f01fe35844955b078cc5d3abcb3f8b0e35d04ee5b4e WatchSource:0}: Error finding container c7c9f422db28d737cdd06f01fe35844955b078cc5d3abcb3f8b0e35d04ee5b4e: Status 404 returned error can't find the container with id c7c9f422db28d737cdd06f01fe35844955b078cc5d3abcb3f8b0e35d04ee5b4e Nov 21 20:25:09 crc kubenswrapper[4727]: I1121 20:25:09.040558 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-hv2qp"] Nov 21 20:25:09 crc kubenswrapper[4727]: I1121 20:25:09.048155 
4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-hv2qp"] Nov 21 20:25:09 crc kubenswrapper[4727]: I1121 20:25:09.065453 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-tc4jf"] Nov 21 20:25:09 crc kubenswrapper[4727]: I1121 20:25:09.104685 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-tc4jf"] Nov 21 20:25:09 crc kubenswrapper[4727]: I1121 20:25:09.371721 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 21 20:25:09 crc kubenswrapper[4727]: I1121 20:25:09.386842 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6c4bb9cdb5-2gj6h"] Nov 21 20:25:09 crc kubenswrapper[4727]: W1121 20:25:09.389407 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a7c7dad_b024_4e09_b455_662514be19f2.slice/crio-fd5ccff63ed5accc23567efcf005a49220024c2ac57f198c8ed10cf905c517f3 WatchSource:0}: Error finding container fd5ccff63ed5accc23567efcf005a49220024c2ac57f198c8ed10cf905c517f3: Status 404 returned error can't find the container with id fd5ccff63ed5accc23567efcf005a49220024c2ac57f198c8ed10cf905c517f3 Nov 21 20:25:09 crc kubenswrapper[4727]: I1121 20:25:09.396113 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 21 20:25:09 crc kubenswrapper[4727]: I1121 20:25:09.411566 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-k8fk5"] Nov 21 20:25:09 crc kubenswrapper[4727]: W1121 20:25:09.411623 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5668e228_8946_468d_94e0_fa77489e46b3.slice/crio-ad1bc3897738fa80f795de05618d56cd1887fff36338fa855405e6817a3d8a30 WatchSource:0}: Error finding container 
ad1bc3897738fa80f795de05618d56cd1887fff36338fa855405e6817a3d8a30: Status 404 returned error can't find the container with id ad1bc3897738fa80f795de05618d56cd1887fff36338fa855405e6817a3d8a30 Nov 21 20:25:09 crc kubenswrapper[4727]: W1121 20:25:09.415564 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod670b7614_70ca_48c1_9b70_53b2a7fb78e9.slice/crio-a62641e90946d4a1cab8783e688d0d4a4ef0359be0ae03531cf79c48f9975ab7 WatchSource:0}: Error finding container a62641e90946d4a1cab8783e688d0d4a4ef0359be0ae03531cf79c48f9975ab7: Status 404 returned error can't find the container with id a62641e90946d4a1cab8783e688d0d4a4ef0359be0ae03531cf79c48f9975ab7 Nov 21 20:25:09 crc kubenswrapper[4727]: I1121 20:25:09.511917 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab4ee6b9-7913-47e4-b7ba-ac8925ed8715" path="/var/lib/kubelet/pods/ab4ee6b9-7913-47e4-b7ba-ac8925ed8715/volumes" Nov 21 20:25:09 crc kubenswrapper[4727]: I1121 20:25:09.512507 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc04b326-2fd8-4935-bffc-b7ac77a9541b" path="/var/lib/kubelet/pods/fc04b326-2fd8-4935-bffc-b7ac77a9541b/volumes" Nov 21 20:25:09 crc kubenswrapper[4727]: I1121 20:25:09.763016 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-w77gs"] Nov 21 20:25:09 crc kubenswrapper[4727]: I1121 20:25:09.893992 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"0901b5dd-6fbf-4a40-8d26-ab792ea7f110","Type":"ContainerStarted","Data":"1c037291ce4de98280fb3e5b9994cb6ee77db3b430cf190afe264cc962c579e9"} Nov 21 20:25:09 crc kubenswrapper[4727]: I1121 20:25:09.896287 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9","Type":"ContainerStarted","Data":"b90d3e3a3d6e2a2ffac39a1a06436dbdeb46b9ad9f2cefb6cbdab9dd0657a43e"} Nov 21 20:25:09 crc kubenswrapper[4727]: I1121 20:25:09.900096 4727 generic.go:334] "Generic (PLEG): container finished" podID="37e9af95-0497-4e78-94f6-eb83cd6a6e80" containerID="21819707af53e92c5fa9c544045ea6c1bdcb1dafbd03f63e5195af097ebbe096" exitCode=0 Nov 21 20:25:09 crc kubenswrapper[4727]: I1121 20:25:09.900151 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-wxjmx" event={"ID":"37e9af95-0497-4e78-94f6-eb83cd6a6e80","Type":"ContainerDied","Data":"21819707af53e92c5fa9c544045ea6c1bdcb1dafbd03f63e5195af097ebbe096"} Nov 21 20:25:09 crc kubenswrapper[4727]: I1121 20:25:09.900173 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-wxjmx" event={"ID":"37e9af95-0497-4e78-94f6-eb83cd6a6e80","Type":"ContainerStarted","Data":"eea3e0c9e49a53d2104c10f3f641f441702560e7c11e5d2f87019f8255632192"} Nov 21 20:25:09 crc kubenswrapper[4727]: I1121 20:25:09.902565 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"3d83275c-cf9b-425e-8e63-6130e2866a49","Type":"ContainerStarted","Data":"0d0dd8fd5a488129b2423ede319bf088fe38fa80514ef2461ed78af1acccce30"} Nov 21 20:25:09 crc kubenswrapper[4727]: I1121 20:25:09.904163 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-k8fk5" event={"ID":"5668e228-8946-468d-94e0-fa77489e46b3","Type":"ContainerStarted","Data":"ad1bc3897738fa80f795de05618d56cd1887fff36338fa855405e6817a3d8a30"} Nov 21 20:25:09 crc kubenswrapper[4727]: I1121 20:25:09.906635 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"5a7c7dad-b024-4e09-b455-662514be19f2","Type":"ContainerStarted","Data":"fd5ccff63ed5accc23567efcf005a49220024c2ac57f198c8ed10cf905c517f3"} Nov 21 20:25:09 crc 
kubenswrapper[4727]: I1121 20:25:09.909371 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6c4bb9cdb5-2gj6h" event={"ID":"670b7614-70ca-48c1-9b70-53b2a7fb78e9","Type":"ContainerStarted","Data":"c4608eaa6a8a0e4f3800fc0a0d5aea096263a096b928a4d189139528bc0163dd"} Nov 21 20:25:09 crc kubenswrapper[4727]: I1121 20:25:09.909410 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6c4bb9cdb5-2gj6h" event={"ID":"670b7614-70ca-48c1-9b70-53b2a7fb78e9","Type":"ContainerStarted","Data":"a62641e90946d4a1cab8783e688d0d4a4ef0359be0ae03531cf79c48f9975ab7"} Nov 21 20:25:09 crc kubenswrapper[4727]: I1121 20:25:09.911815 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-w77gs" event={"ID":"f3f53875-1801-45d3-aa31-4c307c620eec","Type":"ContainerStarted","Data":"5885dd92cba9981b685caf8f49e85a2539209047fdb4d007db9254b80324c624"} Nov 21 20:25:09 crc kubenswrapper[4727]: I1121 20:25:09.918370 4727 generic.go:334] "Generic (PLEG): container finished" podID="14bdeda2-a618-4f47-ae48-87a7c611401e" containerID="8bf93c7428e1f62c9c25d622de36721cb9ab8c1a37101cec0965f7dc46596bdd" exitCode=0 Nov 21 20:25:09 crc kubenswrapper[4727]: I1121 20:25:09.918415 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-gs87m" event={"ID":"14bdeda2-a618-4f47-ae48-87a7c611401e","Type":"ContainerDied","Data":"8bf93c7428e1f62c9c25d622de36721cb9ab8c1a37101cec0965f7dc46596bdd"} Nov 21 20:25:09 crc kubenswrapper[4727]: I1121 20:25:09.924624 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-pcbt2" event={"ID":"a2240b49-b00a-45c4-94fa-3acd3cb0e953","Type":"ContainerStarted","Data":"ceac52d5977546c3cc997d2872c2baedc32aff7b897b5998bb3939e8a13833e5"} Nov 21 20:25:09 crc kubenswrapper[4727]: I1121 20:25:09.928921 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"7fdf0962-de4d-4f58-87d3-a6458e4ff980","Type":"ContainerStarted","Data":"1bd8f5195b78bd84a76e189ce623ccc459f059599030b216ee6e3b625235219d"} Nov 21 20:25:09 crc kubenswrapper[4727]: I1121 20:25:09.930930 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"b2cd143a-46bd-4409-b5e9-91c1cb00e378","Type":"ContainerStarted","Data":"c7c9f422db28d737cdd06f01fe35844955b078cc5d3abcb3f8b0e35d04ee5b4e"} Nov 21 20:25:09 crc kubenswrapper[4727]: I1121 20:25:09.968757 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6c4bb9cdb5-2gj6h" podStartSLOduration=11.968736427 podStartE2EDuration="11.968736427s" podCreationTimestamp="2025-11-21 20:24:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:25:09.956850543 +0000 UTC m=+1115.143035587" watchObservedRunningTime="2025-11-21 20:25:09.968736427 +0000 UTC m=+1115.154921471" Nov 21 20:25:10 crc kubenswrapper[4727]: I1121 20:25:10.340208 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 21 20:25:10 crc kubenswrapper[4727]: I1121 20:25:10.625006 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 21 20:25:10 crc kubenswrapper[4727]: I1121 20:25:10.945132 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-wxjmx" event={"ID":"37e9af95-0497-4e78-94f6-eb83cd6a6e80","Type":"ContainerStarted","Data":"710ac09994d88e3c5facef60e72c1f99721b3ddb9a56ab45313a483f5d300b96"} Nov 21 20:25:10 crc kubenswrapper[4727]: I1121 20:25:10.968874 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-wxjmx" podStartSLOduration=19.96884941 podStartE2EDuration="19.96884941s" podCreationTimestamp="2025-11-21 20:24:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:25:10.96357972 +0000 UTC m=+1116.149764774" watchObservedRunningTime="2025-11-21 20:25:10.96884941 +0000 UTC m=+1116.155034454" Nov 21 20:25:11 crc kubenswrapper[4727]: I1121 20:25:11.577671 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-wxjmx" Nov 21 20:25:12 crc kubenswrapper[4727]: I1121 20:25:12.962639 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"5adec12e-c3fd-4d6a-bf0b-c38ac063f06c","Type":"ContainerStarted","Data":"675cf30c0501681afc3bb78d16d9a9ded50905b63c438e401e049d1abf876ce8"} Nov 21 20:25:12 crc kubenswrapper[4727]: I1121 20:25:12.963807 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"40732896-dabf-47ad-bc39-236c51d78ef2","Type":"ContainerStarted","Data":"8b8fc0617dc32bbdd0d7d2dfe0edd9b803f8c54f8df550f28632532880337846"} Nov 21 20:25:12 crc kubenswrapper[4727]: I1121 20:25:12.966595 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-gs87m" event={"ID":"14bdeda2-a618-4f47-ae48-87a7c611401e","Type":"ContainerStarted","Data":"ae9aac57a0af8f07a45f31f441084e53370dbc27fee31ad29728a72cb262d25b"} Nov 21 20:25:12 crc kubenswrapper[4727]: I1121 20:25:12.966663 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-gs87m" Nov 21 20:25:12 crc kubenswrapper[4727]: I1121 20:25:12.987759 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-gs87m" podStartSLOduration=21.386749903 podStartE2EDuration="21.987741627s" podCreationTimestamp="2025-11-21 20:24:51 +0000 UTC" firstStartedPulling="2025-11-21 20:25:08.178108657 +0000 UTC m=+1113.364293701" lastFinishedPulling="2025-11-21 20:25:08.779100381 +0000 UTC m=+1113.965285425" observedRunningTime="2025-11-21 
20:25:12.982103998 +0000 UTC m=+1118.168289062" watchObservedRunningTime="2025-11-21 20:25:12.987741627 +0000 UTC m=+1118.173926671" Nov 21 20:25:16 crc kubenswrapper[4727]: I1121 20:25:16.578337 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-666b6646f7-wxjmx" Nov 21 20:25:19 crc kubenswrapper[4727]: I1121 20:25:19.175771 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6c4bb9cdb5-2gj6h" Nov 21 20:25:19 crc kubenswrapper[4727]: I1121 20:25:19.176117 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6c4bb9cdb5-2gj6h" Nov 21 20:25:19 crc kubenswrapper[4727]: I1121 20:25:19.180020 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6c4bb9cdb5-2gj6h" Nov 21 20:25:20 crc kubenswrapper[4727]: I1121 20:25:20.050570 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6c4bb9cdb5-2gj6h" Nov 21 20:25:20 crc kubenswrapper[4727]: I1121 20:25:20.114537 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-777bc7647-cdqwj"] Nov 21 20:25:20 crc kubenswrapper[4727]: E1121 20:25:20.848036 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified" Nov 21 20:25:20 crc kubenswrapper[4727]: E1121 20:25:20.849111 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovn-controller,Image:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,Command:[ovn-controller --pidfile unix:/run/openvswitch/db.sock --certificate=/etc/pki/tls/certs/ovndb.crt --private-key=/etc/pki/tls/private/ovndb.key 
--ca-cert=/etc/pki/tls/certs/ovndbca.crt],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n575h8dh597h68h644h8dhc6h648h56h547h6dhc8hd8hfbh56h646h5b5h699h688hc5h645h5bbh588h5bch566h555hd8h654h568h67dh586h66bq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-ovn,ReadOnly:false,MountPath:/var/run/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log-ovn,ReadOnly:false,MountPath:/var/log/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-76529,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_liveness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:
nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_readiness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/share/ovn/scripts/ovn-ctl stop_controller],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-k8fk5_openstack(5668e228-8946-468d-94e0-fa77489e46b3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 21 20:25:20 crc kubenswrapper[4727]: E1121 20:25:20.850297 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-k8fk5" podUID="5668e228-8946-468d-94e0-fa77489e46b3" Nov 21 20:25:21 crc kubenswrapper[4727]: E1121 20:25:21.059533 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified\\\"\"" pod="openstack/ovn-controller-k8fk5" podUID="5668e228-8946-468d-94e0-fa77489e46b3" Nov 21 20:25:21 crc kubenswrapper[4727]: I1121 20:25:21.863108 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-gs87m" Nov 21 20:25:21 crc kubenswrapper[4727]: I1121 20:25:21.910224 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-wxjmx"] Nov 21 20:25:21 crc kubenswrapper[4727]: I1121 20:25:21.910489 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-wxjmx" podUID="37e9af95-0497-4e78-94f6-eb83cd6a6e80" containerName="dnsmasq-dns" containerID="cri-o://710ac09994d88e3c5facef60e72c1f99721b3ddb9a56ab45313a483f5d300b96" gracePeriod=10 Nov 21 20:25:22 crc kubenswrapper[4727]: I1121 20:25:22.071452 4727 generic.go:334] "Generic (PLEG): container finished" podID="37e9af95-0497-4e78-94f6-eb83cd6a6e80" containerID="710ac09994d88e3c5facef60e72c1f99721b3ddb9a56ab45313a483f5d300b96" exitCode=0 Nov 21 20:25:22 crc kubenswrapper[4727]: I1121 20:25:22.071792 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-wxjmx" event={"ID":"37e9af95-0497-4e78-94f6-eb83cd6a6e80","Type":"ContainerDied","Data":"710ac09994d88e3c5facef60e72c1f99721b3ddb9a56ab45313a483f5d300b96"} Nov 21 20:25:22 crc kubenswrapper[4727]: E1121 20:25:22.667627 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Nov 21 20:25:22 crc kubenswrapper[4727]: E1121 20:25:22.667871 4727 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Nov 21 20:25:22 crc kubenswrapper[4727]: E1121 20:25:22.668027 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xhq9n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(5a7c7dad-b024-4e09-b455-662514be19f2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 21 20:25:22 crc kubenswrapper[4727]: E1121 20:25:22.669228 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="5a7c7dad-b024-4e09-b455-662514be19f2" Nov 21 20:25:22 crc kubenswrapper[4727]: I1121 20:25:22.946362 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-wxjmx" Nov 21 20:25:23 crc kubenswrapper[4727]: I1121 20:25:23.061177 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37e9af95-0497-4e78-94f6-eb83cd6a6e80-config\") pod \"37e9af95-0497-4e78-94f6-eb83cd6a6e80\" (UID: \"37e9af95-0497-4e78-94f6-eb83cd6a6e80\") " Nov 21 20:25:23 crc kubenswrapper[4727]: I1121 20:25:23.061565 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxfx6\" (UniqueName: \"kubernetes.io/projected/37e9af95-0497-4e78-94f6-eb83cd6a6e80-kube-api-access-mxfx6\") pod \"37e9af95-0497-4e78-94f6-eb83cd6a6e80\" (UID: \"37e9af95-0497-4e78-94f6-eb83cd6a6e80\") " Nov 21 20:25:23 crc kubenswrapper[4727]: I1121 20:25:23.061815 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37e9af95-0497-4e78-94f6-eb83cd6a6e80-dns-svc\") pod \"37e9af95-0497-4e78-94f6-eb83cd6a6e80\" (UID: \"37e9af95-0497-4e78-94f6-eb83cd6a6e80\") " Nov 21 20:25:23 crc kubenswrapper[4727]: I1121 20:25:23.081712 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-wxjmx" event={"ID":"37e9af95-0497-4e78-94f6-eb83cd6a6e80","Type":"ContainerDied","Data":"eea3e0c9e49a53d2104c10f3f641f441702560e7c11e5d2f87019f8255632192"} Nov 21 20:25:23 crc kubenswrapper[4727]: I1121 20:25:23.082342 4727 scope.go:117] "RemoveContainer" containerID="710ac09994d88e3c5facef60e72c1f99721b3ddb9a56ab45313a483f5d300b96" Nov 21 20:25:23 crc kubenswrapper[4727]: I1121 20:25:23.081748 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-wxjmx" Nov 21 20:25:23 crc kubenswrapper[4727]: E1121 20:25:23.085385 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="5a7c7dad-b024-4e09-b455-662514be19f2" Nov 21 20:25:23 crc kubenswrapper[4727]: I1121 20:25:23.109987 4727 scope.go:117] "RemoveContainer" containerID="21819707af53e92c5fa9c544045ea6c1bdcb1dafbd03f63e5195af097ebbe096" Nov 21 20:25:23 crc kubenswrapper[4727]: I1121 20:25:23.180658 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37e9af95-0497-4e78-94f6-eb83cd6a6e80-kube-api-access-mxfx6" (OuterVolumeSpecName: "kube-api-access-mxfx6") pod "37e9af95-0497-4e78-94f6-eb83cd6a6e80" (UID: "37e9af95-0497-4e78-94f6-eb83cd6a6e80"). InnerVolumeSpecName "kube-api-access-mxfx6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:25:23 crc kubenswrapper[4727]: I1121 20:25:23.265660 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxfx6\" (UniqueName: \"kubernetes.io/projected/37e9af95-0497-4e78-94f6-eb83cd6a6e80-kube-api-access-mxfx6\") on node \"crc\" DevicePath \"\"" Nov 21 20:25:24 crc kubenswrapper[4727]: I1121 20:25:24.112792 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=16.969480905 podStartE2EDuration="29.112767905s" podCreationTimestamp="2025-11-21 20:24:55 +0000 UTC" firstStartedPulling="2025-11-21 20:25:09.043525999 +0000 UTC m=+1114.229711043" lastFinishedPulling="2025-11-21 20:25:21.186812989 +0000 UTC m=+1126.372998043" observedRunningTime="2025-11-21 20:25:24.105538077 +0000 UTC m=+1129.291723121" watchObservedRunningTime="2025-11-21 20:25:24.112767905 +0000 UTC m=+1129.298952949" Nov 21 20:25:25 crc kubenswrapper[4727]: I1121 20:25:25.126908 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-pcbt2" podStartSLOduration=14.942133464 podStartE2EDuration="27.126889235s" podCreationTimestamp="2025-11-21 20:24:58 +0000 UTC" firstStartedPulling="2025-11-21 20:25:09.002544319 +0000 UTC m=+1114.188729364" lastFinishedPulling="2025-11-21 20:25:21.187300091 +0000 UTC m=+1126.373485135" observedRunningTime="2025-11-21 20:25:25.121933252 +0000 UTC m=+1130.308118296" watchObservedRunningTime="2025-11-21 20:25:25.126889235 +0000 UTC m=+1130.313074279" Nov 21 20:25:25 crc kubenswrapper[4727]: E1121 20:25:25.278425 4727 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.78s" Nov 21 20:25:25 crc kubenswrapper[4727]: I1121 20:25:25.278729 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Nov 21 20:25:25 crc kubenswrapper[4727]: I1121 
20:25:25.278808 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"b2cd143a-46bd-4409-b5e9-91c1cb00e378","Type":"ContainerStarted","Data":"d573f83a0920d0735ce037b8fc281cf278ac58e3b348bc87cb037ef2dbbcbc99"} Nov 21 20:25:25 crc kubenswrapper[4727]: I1121 20:25:25.278826 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-pcbt2" event={"ID":"a2240b49-b00a-45c4-94fa-3acd3cb0e953","Type":"ContainerStarted","Data":"149c5ce4fc2de0ccfff9ca54be6bfafd76ff485d74e5af732bd698d5dd821724"} Nov 21 20:25:25 crc kubenswrapper[4727]: I1121 20:25:25.278842 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"3d83275c-cf9b-425e-8e63-6130e2866a49","Type":"ContainerStarted","Data":"c4065034445160df4160570e43371c2734f8131bb3f950c5431988ce203876dc"} Nov 21 20:25:25 crc kubenswrapper[4727]: I1121 20:25:25.278861 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"5adec12e-c3fd-4d6a-bf0b-c38ac063f06c","Type":"ContainerStarted","Data":"aa527cab0665b1cf345595108d59acb0cf55da71a4e54459a95a25864efb26a8"} Nov 21 20:25:25 crc kubenswrapper[4727]: I1121 20:25:25.278872 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"40732896-dabf-47ad-bc39-236c51d78ef2","Type":"ContainerStarted","Data":"be2762fc87e4172f856a91beb4baab4426223aafd942ff0e9b1838fef9d5cb80"} Nov 21 20:25:25 crc kubenswrapper[4727]: I1121 20:25:25.278884 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"0901b5dd-6fbf-4a40-8d26-ab792ea7f110","Type":"ContainerStarted","Data":"5f1695b4d5a3d60438131451f2be4365bd6a0ed1e2f0ac7eef1d3e4dbbcbf06d"} Nov 21 20:25:25 crc kubenswrapper[4727]: I1121 20:25:25.279112 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-w77gs" 
event={"ID":"f3f53875-1801-45d3-aa31-4c307c620eec","Type":"ContainerStarted","Data":"bbc1410a89519775a01790104beca124c00e2c69726dfc13a5b4f7d82c67347e"} Nov 21 20:25:25 crc kubenswrapper[4727]: I1121 20:25:25.880730 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37e9af95-0497-4e78-94f6-eb83cd6a6e80-config" (OuterVolumeSpecName: "config") pod "37e9af95-0497-4e78-94f6-eb83cd6a6e80" (UID: "37e9af95-0497-4e78-94f6-eb83cd6a6e80"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:25:25 crc kubenswrapper[4727]: I1121 20:25:25.907521 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37e9af95-0497-4e78-94f6-eb83cd6a6e80-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "37e9af95-0497-4e78-94f6-eb83cd6a6e80" (UID: "37e9af95-0497-4e78-94f6-eb83cd6a6e80"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:25:25 crc kubenswrapper[4727]: I1121 20:25:25.920558 4727 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37e9af95-0497-4e78-94f6-eb83cd6a6e80-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 20:25:25 crc kubenswrapper[4727]: I1121 20:25:25.920580 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37e9af95-0497-4e78-94f6-eb83cd6a6e80-config\") on node \"crc\" DevicePath \"\"" Nov 21 20:25:26 crc kubenswrapper[4727]: I1121 20:25:26.126284 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-wxjmx"] Nov 21 20:25:26 crc kubenswrapper[4727]: I1121 20:25:26.131797 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-wxjmx"] Nov 21 20:25:26 crc kubenswrapper[4727]: I1121 20:25:26.138487 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"377d8548-a458-47c0-bd02-9904c8110d40","Type":"ContainerStarted","Data":"510940682d0d6a54421c5fcafc52c7deab4e7c172ef1e5d0680ab402b9779eb2"} Nov 21 20:25:26 crc kubenswrapper[4727]: I1121 20:25:26.142281 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7fdf0962-de4d-4f58-87d3-a6458e4ff980","Type":"ContainerStarted","Data":"1dcba278f2a6bdc24a7a739c9adfdd59cbdf5803ac695e1fccac73fad26f31fa"} Nov 21 20:25:27 crc kubenswrapper[4727]: I1121 20:25:27.156062 4727 generic.go:334] "Generic (PLEG): container finished" podID="f3f53875-1801-45d3-aa31-4c307c620eec" containerID="bbc1410a89519775a01790104beca124c00e2c69726dfc13a5b4f7d82c67347e" exitCode=0 Nov 21 20:25:27 crc kubenswrapper[4727]: I1121 20:25:27.156103 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-w77gs" event={"ID":"f3f53875-1801-45d3-aa31-4c307c620eec","Type":"ContainerDied","Data":"bbc1410a89519775a01790104beca124c00e2c69726dfc13a5b4f7d82c67347e"} Nov 21 20:25:27 crc kubenswrapper[4727]: I1121 20:25:27.159896 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9","Type":"ContainerStarted","Data":"a60035ecfc2b7d39e6d8ab8abf474eb842e9c0917ab85f7855e713d1e2f4b603"} Nov 21 20:25:27 crc kubenswrapper[4727]: I1121 20:25:27.515551 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37e9af95-0497-4e78-94f6-eb83cd6a6e80" path="/var/lib/kubelet/pods/37e9af95-0497-4e78-94f6-eb83cd6a6e80/volumes" Nov 21 20:25:29 crc kubenswrapper[4727]: I1121 20:25:29.178643 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-w77gs" event={"ID":"f3f53875-1801-45d3-aa31-4c307c620eec","Type":"ContainerStarted","Data":"58df4fcde207bc1b9d426bd44922fae0e611dfc757d77071f1f306adf777db82"} Nov 21 20:25:29 crc kubenswrapper[4727]: I1121 20:25:29.179011 4727 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ovn-controller-ovs-w77gs" event={"ID":"f3f53875-1801-45d3-aa31-4c307c620eec","Type":"ContainerStarted","Data":"a80caaa62700d94265ff0a502f616c4d8775201a1446d986056db4a41b0b03ff"} Nov 21 20:25:29 crc kubenswrapper[4727]: I1121 20:25:29.179229 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-w77gs" Nov 21 20:25:29 crc kubenswrapper[4727]: I1121 20:25:29.181325 4727 generic.go:334] "Generic (PLEG): container finished" podID="3d83275c-cf9b-425e-8e63-6130e2866a49" containerID="c4065034445160df4160570e43371c2734f8131bb3f950c5431988ce203876dc" exitCode=0 Nov 21 20:25:29 crc kubenswrapper[4727]: I1121 20:25:29.181377 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"3d83275c-cf9b-425e-8e63-6130e2866a49","Type":"ContainerDied","Data":"c4065034445160df4160570e43371c2734f8131bb3f950c5431988ce203876dc"} Nov 21 20:25:29 crc kubenswrapper[4727]: I1121 20:25:29.183989 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"5adec12e-c3fd-4d6a-bf0b-c38ac063f06c","Type":"ContainerStarted","Data":"d0af6d7f8c4e5f8b7513962697c52e0097a91ed89c43d2bd5e5ba3edd457ecb1"} Nov 21 20:25:29 crc kubenswrapper[4727]: I1121 20:25:29.185762 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"40732896-dabf-47ad-bc39-236c51d78ef2","Type":"ContainerStarted","Data":"78478d1bb798002910d828631d9099823841febe61e50b6080661e414888ce12"} Nov 21 20:25:29 crc kubenswrapper[4727]: I1121 20:25:29.187485 4727 generic.go:334] "Generic (PLEG): container finished" podID="0901b5dd-6fbf-4a40-8d26-ab792ea7f110" containerID="5f1695b4d5a3d60438131451f2be4365bd6a0ed1e2f0ac7eef1d3e4dbbcbf06d" exitCode=0 Nov 21 20:25:29 crc kubenswrapper[4727]: I1121 20:25:29.187526 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"0901b5dd-6fbf-4a40-8d26-ab792ea7f110","Type":"ContainerDied","Data":"5f1695b4d5a3d60438131451f2be4365bd6a0ed1e2f0ac7eef1d3e4dbbcbf06d"} Nov 21 20:25:29 crc kubenswrapper[4727]: I1121 20:25:29.209217 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-w77gs" podStartSLOduration=16.405841305 podStartE2EDuration="28.209198505s" podCreationTimestamp="2025-11-21 20:25:01 +0000 UTC" firstStartedPulling="2025-11-21 20:25:09.797554956 +0000 UTC m=+1114.983740000" lastFinishedPulling="2025-11-21 20:25:21.600912166 +0000 UTC m=+1126.787097200" observedRunningTime="2025-11-21 20:25:29.198656966 +0000 UTC m=+1134.384842010" watchObservedRunningTime="2025-11-21 20:25:29.209198505 +0000 UTC m=+1134.395383549" Nov 21 20:25:29 crc kubenswrapper[4727]: I1121 20:25:29.225449 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=11.410232271 podStartE2EDuration="27.225427146s" podCreationTimestamp="2025-11-21 20:25:02 +0000 UTC" firstStartedPulling="2025-11-21 20:25:12.302328691 +0000 UTC m=+1117.488513735" lastFinishedPulling="2025-11-21 20:25:28.117523566 +0000 UTC m=+1133.303708610" observedRunningTime="2025-11-21 20:25:29.217362677 +0000 UTC m=+1134.403547711" watchObservedRunningTime="2025-11-21 20:25:29.225427146 +0000 UTC m=+1134.411612190" Nov 21 20:25:29 crc kubenswrapper[4727]: I1121 20:25:29.287744 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=10.487087625000001 podStartE2EDuration="26.287722401s" podCreationTimestamp="2025-11-21 20:25:03 +0000 UTC" firstStartedPulling="2025-11-21 20:25:12.303067029 +0000 UTC m=+1117.489252073" lastFinishedPulling="2025-11-21 20:25:28.103701805 +0000 UTC m=+1133.289886849" observedRunningTime="2025-11-21 20:25:29.286803979 +0000 UTC m=+1134.472989033" watchObservedRunningTime="2025-11-21 20:25:29.287722401 +0000 UTC 
m=+1134.473907445" Nov 21 20:25:29 crc kubenswrapper[4727]: I1121 20:25:29.475794 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Nov 21 20:25:30 crc kubenswrapper[4727]: I1121 20:25:30.200031 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"0901b5dd-6fbf-4a40-8d26-ab792ea7f110","Type":"ContainerStarted","Data":"879ac2e5bc7ef047b16c96bc7dde6ec118310b3f63b2429dd18c51aa11a22204"} Nov 21 20:25:30 crc kubenswrapper[4727]: I1121 20:25:30.204541 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"3d83275c-cf9b-425e-8e63-6130e2866a49","Type":"ContainerStarted","Data":"7892085858f2612614747a028f1710e03a05c4a51c687a7ac4461b460ce942fd"} Nov 21 20:25:30 crc kubenswrapper[4727]: I1121 20:25:30.204716 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-w77gs" Nov 21 20:25:30 crc kubenswrapper[4727]: I1121 20:25:30.232727 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=25.373672211 podStartE2EDuration="38.232703525s" podCreationTimestamp="2025-11-21 20:24:52 +0000 UTC" firstStartedPulling="2025-11-21 20:25:08.994354307 +0000 UTC m=+1114.180539351" lastFinishedPulling="2025-11-21 20:25:21.853385621 +0000 UTC m=+1127.039570665" observedRunningTime="2025-11-21 20:25:30.230486831 +0000 UTC m=+1135.416671905" watchObservedRunningTime="2025-11-21 20:25:30.232703525 +0000 UTC m=+1135.418888609" Nov 21 20:25:30 crc kubenswrapper[4727]: I1121 20:25:30.265499 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=23.631215152 podStartE2EDuration="36.265480064s" podCreationTimestamp="2025-11-21 20:24:54 +0000 UTC" firstStartedPulling="2025-11-21 20:25:08.966550222 +0000 UTC m=+1114.152735266" 
lastFinishedPulling="2025-11-21 20:25:21.600815144 +0000 UTC m=+1126.787000178" observedRunningTime="2025-11-21 20:25:30.260221194 +0000 UTC m=+1135.446406238" watchObservedRunningTime="2025-11-21 20:25:30.265480064 +0000 UTC m=+1135.451665108" Nov 21 20:25:30 crc kubenswrapper[4727]: I1121 20:25:30.524370 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Nov 21 20:25:30 crc kubenswrapper[4727]: I1121 20:25:30.605755 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Nov 21 20:25:30 crc kubenswrapper[4727]: I1121 20:25:30.993823 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Nov 21 20:25:31 crc kubenswrapper[4727]: I1121 20:25:31.213337 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Nov 21 20:25:31 crc kubenswrapper[4727]: I1121 20:25:31.257422 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Nov 21 20:25:31 crc kubenswrapper[4727]: I1121 20:25:31.475779 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Nov 21 20:25:31 crc kubenswrapper[4727]: I1121 20:25:31.520571 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Nov 21 20:25:31 crc kubenswrapper[4727]: I1121 20:25:31.543810 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-rh5mr"] Nov 21 20:25:31 crc kubenswrapper[4727]: E1121 20:25:31.544400 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37e9af95-0497-4e78-94f6-eb83cd6a6e80" containerName="init" Nov 21 20:25:31 crc kubenswrapper[4727]: I1121 20:25:31.544425 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="37e9af95-0497-4e78-94f6-eb83cd6a6e80" containerName="init" Nov 21 20:25:31 crc 
kubenswrapper[4727]: E1121 20:25:31.544445 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37e9af95-0497-4e78-94f6-eb83cd6a6e80" containerName="dnsmasq-dns" Nov 21 20:25:31 crc kubenswrapper[4727]: I1121 20:25:31.544454 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="37e9af95-0497-4e78-94f6-eb83cd6a6e80" containerName="dnsmasq-dns" Nov 21 20:25:31 crc kubenswrapper[4727]: I1121 20:25:31.544690 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="37e9af95-0497-4e78-94f6-eb83cd6a6e80" containerName="dnsmasq-dns" Nov 21 20:25:31 crc kubenswrapper[4727]: I1121 20:25:31.545908 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-rh5mr" Nov 21 20:25:31 crc kubenswrapper[4727]: I1121 20:25:31.551405 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Nov 21 20:25:31 crc kubenswrapper[4727]: I1121 20:25:31.572628 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-rh5mr"] Nov 21 20:25:31 crc kubenswrapper[4727]: I1121 20:25:31.649762 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-kl4qp"] Nov 21 20:25:31 crc kubenswrapper[4727]: I1121 20:25:31.651214 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-kl4qp" Nov 21 20:25:31 crc kubenswrapper[4727]: I1121 20:25:31.655756 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Nov 21 20:25:31 crc kubenswrapper[4727]: I1121 20:25:31.659573 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-kl4qp"] Nov 21 20:25:31 crc kubenswrapper[4727]: I1121 20:25:31.662719 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmkmg\" (UniqueName: \"kubernetes.io/projected/9318e137-c813-46d7-a1f4-e87b38ad8411-kube-api-access-gmkmg\") pod \"dnsmasq-dns-5bf47b49b7-rh5mr\" (UID: \"9318e137-c813-46d7-a1f4-e87b38ad8411\") " pod="openstack/dnsmasq-dns-5bf47b49b7-rh5mr" Nov 21 20:25:31 crc kubenswrapper[4727]: I1121 20:25:31.662823 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9318e137-c813-46d7-a1f4-e87b38ad8411-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-rh5mr\" (UID: \"9318e137-c813-46d7-a1f4-e87b38ad8411\") " pod="openstack/dnsmasq-dns-5bf47b49b7-rh5mr" Nov 21 20:25:31 crc kubenswrapper[4727]: I1121 20:25:31.662863 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9318e137-c813-46d7-a1f4-e87b38ad8411-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-rh5mr\" (UID: \"9318e137-c813-46d7-a1f4-e87b38ad8411\") " pod="openstack/dnsmasq-dns-5bf47b49b7-rh5mr" Nov 21 20:25:31 crc kubenswrapper[4727]: I1121 20:25:31.662893 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9318e137-c813-46d7-a1f4-e87b38ad8411-config\") pod \"dnsmasq-dns-5bf47b49b7-rh5mr\" (UID: \"9318e137-c813-46d7-a1f4-e87b38ad8411\") " 
pod="openstack/dnsmasq-dns-5bf47b49b7-rh5mr" Nov 21 20:25:31 crc kubenswrapper[4727]: I1121 20:25:31.765415 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9318e137-c813-46d7-a1f4-e87b38ad8411-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-rh5mr\" (UID: \"9318e137-c813-46d7-a1f4-e87b38ad8411\") " pod="openstack/dnsmasq-dns-5bf47b49b7-rh5mr" Nov 21 20:25:31 crc kubenswrapper[4727]: I1121 20:25:31.765529 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/2e37b08f-0bcc-4c28-8113-65a424a49717-ovn-rundir\") pod \"ovn-controller-metrics-kl4qp\" (UID: \"2e37b08f-0bcc-4c28-8113-65a424a49717\") " pod="openstack/ovn-controller-metrics-kl4qp" Nov 21 20:25:31 crc kubenswrapper[4727]: I1121 20:25:31.765563 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9318e137-c813-46d7-a1f4-e87b38ad8411-config\") pod \"dnsmasq-dns-5bf47b49b7-rh5mr\" (UID: \"9318e137-c813-46d7-a1f4-e87b38ad8411\") " pod="openstack/dnsmasq-dns-5bf47b49b7-rh5mr" Nov 21 20:25:31 crc kubenswrapper[4727]: I1121 20:25:31.765625 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e37b08f-0bcc-4c28-8113-65a424a49717-combined-ca-bundle\") pod \"ovn-controller-metrics-kl4qp\" (UID: \"2e37b08f-0bcc-4c28-8113-65a424a49717\") " pod="openstack/ovn-controller-metrics-kl4qp" Nov 21 20:25:31 crc kubenswrapper[4727]: I1121 20:25:31.765688 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/2e37b08f-0bcc-4c28-8113-65a424a49717-ovs-rundir\") pod \"ovn-controller-metrics-kl4qp\" (UID: \"2e37b08f-0bcc-4c28-8113-65a424a49717\") " 
pod="openstack/ovn-controller-metrics-kl4qp" Nov 21 20:25:31 crc kubenswrapper[4727]: I1121 20:25:31.765897 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9n8p\" (UniqueName: \"kubernetes.io/projected/2e37b08f-0bcc-4c28-8113-65a424a49717-kube-api-access-b9n8p\") pod \"ovn-controller-metrics-kl4qp\" (UID: \"2e37b08f-0bcc-4c28-8113-65a424a49717\") " pod="openstack/ovn-controller-metrics-kl4qp" Nov 21 20:25:31 crc kubenswrapper[4727]: I1121 20:25:31.765999 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmkmg\" (UniqueName: \"kubernetes.io/projected/9318e137-c813-46d7-a1f4-e87b38ad8411-kube-api-access-gmkmg\") pod \"dnsmasq-dns-5bf47b49b7-rh5mr\" (UID: \"9318e137-c813-46d7-a1f4-e87b38ad8411\") " pod="openstack/dnsmasq-dns-5bf47b49b7-rh5mr" Nov 21 20:25:31 crc kubenswrapper[4727]: I1121 20:25:31.766192 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e37b08f-0bcc-4c28-8113-65a424a49717-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-kl4qp\" (UID: \"2e37b08f-0bcc-4c28-8113-65a424a49717\") " pod="openstack/ovn-controller-metrics-kl4qp" Nov 21 20:25:31 crc kubenswrapper[4727]: I1121 20:25:31.766311 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9318e137-c813-46d7-a1f4-e87b38ad8411-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-rh5mr\" (UID: \"9318e137-c813-46d7-a1f4-e87b38ad8411\") " pod="openstack/dnsmasq-dns-5bf47b49b7-rh5mr" Nov 21 20:25:31 crc kubenswrapper[4727]: I1121 20:25:31.766339 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e37b08f-0bcc-4c28-8113-65a424a49717-config\") pod \"ovn-controller-metrics-kl4qp\" (UID: 
\"2e37b08f-0bcc-4c28-8113-65a424a49717\") " pod="openstack/ovn-controller-metrics-kl4qp" Nov 21 20:25:31 crc kubenswrapper[4727]: I1121 20:25:31.766350 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9318e137-c813-46d7-a1f4-e87b38ad8411-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-rh5mr\" (UID: \"9318e137-c813-46d7-a1f4-e87b38ad8411\") " pod="openstack/dnsmasq-dns-5bf47b49b7-rh5mr" Nov 21 20:25:31 crc kubenswrapper[4727]: I1121 20:25:31.766429 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9318e137-c813-46d7-a1f4-e87b38ad8411-config\") pod \"dnsmasq-dns-5bf47b49b7-rh5mr\" (UID: \"9318e137-c813-46d7-a1f4-e87b38ad8411\") " pod="openstack/dnsmasq-dns-5bf47b49b7-rh5mr" Nov 21 20:25:31 crc kubenswrapper[4727]: I1121 20:25:31.767158 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9318e137-c813-46d7-a1f4-e87b38ad8411-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-rh5mr\" (UID: \"9318e137-c813-46d7-a1f4-e87b38ad8411\") " pod="openstack/dnsmasq-dns-5bf47b49b7-rh5mr" Nov 21 20:25:31 crc kubenswrapper[4727]: I1121 20:25:31.815262 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmkmg\" (UniqueName: \"kubernetes.io/projected/9318e137-c813-46d7-a1f4-e87b38ad8411-kube-api-access-gmkmg\") pod \"dnsmasq-dns-5bf47b49b7-rh5mr\" (UID: \"9318e137-c813-46d7-a1f4-e87b38ad8411\") " pod="openstack/dnsmasq-dns-5bf47b49b7-rh5mr" Nov 21 20:25:31 crc kubenswrapper[4727]: I1121 20:25:31.869239 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e37b08f-0bcc-4c28-8113-65a424a49717-combined-ca-bundle\") pod \"ovn-controller-metrics-kl4qp\" (UID: \"2e37b08f-0bcc-4c28-8113-65a424a49717\") " pod="openstack/ovn-controller-metrics-kl4qp" Nov 21 
20:25:31 crc kubenswrapper[4727]: I1121 20:25:31.869313 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/2e37b08f-0bcc-4c28-8113-65a424a49717-ovs-rundir\") pod \"ovn-controller-metrics-kl4qp\" (UID: \"2e37b08f-0bcc-4c28-8113-65a424a49717\") " pod="openstack/ovn-controller-metrics-kl4qp" Nov 21 20:25:31 crc kubenswrapper[4727]: I1121 20:25:31.869408 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9n8p\" (UniqueName: \"kubernetes.io/projected/2e37b08f-0bcc-4c28-8113-65a424a49717-kube-api-access-b9n8p\") pod \"ovn-controller-metrics-kl4qp\" (UID: \"2e37b08f-0bcc-4c28-8113-65a424a49717\") " pod="openstack/ovn-controller-metrics-kl4qp" Nov 21 20:25:31 crc kubenswrapper[4727]: I1121 20:25:31.869480 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e37b08f-0bcc-4c28-8113-65a424a49717-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-kl4qp\" (UID: \"2e37b08f-0bcc-4c28-8113-65a424a49717\") " pod="openstack/ovn-controller-metrics-kl4qp" Nov 21 20:25:31 crc kubenswrapper[4727]: I1121 20:25:31.869534 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e37b08f-0bcc-4c28-8113-65a424a49717-config\") pod \"ovn-controller-metrics-kl4qp\" (UID: \"2e37b08f-0bcc-4c28-8113-65a424a49717\") " pod="openstack/ovn-controller-metrics-kl4qp" Nov 21 20:25:31 crc kubenswrapper[4727]: I1121 20:25:31.869571 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/2e37b08f-0bcc-4c28-8113-65a424a49717-ovn-rundir\") pod \"ovn-controller-metrics-kl4qp\" (UID: \"2e37b08f-0bcc-4c28-8113-65a424a49717\") " pod="openstack/ovn-controller-metrics-kl4qp" Nov 21 20:25:31 crc kubenswrapper[4727]: I1121 20:25:31.869899 
4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/2e37b08f-0bcc-4c28-8113-65a424a49717-ovn-rundir\") pod \"ovn-controller-metrics-kl4qp\" (UID: \"2e37b08f-0bcc-4c28-8113-65a424a49717\") " pod="openstack/ovn-controller-metrics-kl4qp" Nov 21 20:25:31 crc kubenswrapper[4727]: I1121 20:25:31.872408 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-rh5mr" Nov 21 20:25:31 crc kubenswrapper[4727]: I1121 20:25:31.873171 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/2e37b08f-0bcc-4c28-8113-65a424a49717-ovs-rundir\") pod \"ovn-controller-metrics-kl4qp\" (UID: \"2e37b08f-0bcc-4c28-8113-65a424a49717\") " pod="openstack/ovn-controller-metrics-kl4qp" Nov 21 20:25:31 crc kubenswrapper[4727]: I1121 20:25:31.876690 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e37b08f-0bcc-4c28-8113-65a424a49717-config\") pod \"ovn-controller-metrics-kl4qp\" (UID: \"2e37b08f-0bcc-4c28-8113-65a424a49717\") " pod="openstack/ovn-controller-metrics-kl4qp" Nov 21 20:25:31 crc kubenswrapper[4727]: I1121 20:25:31.883999 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e37b08f-0bcc-4c28-8113-65a424a49717-combined-ca-bundle\") pod \"ovn-controller-metrics-kl4qp\" (UID: \"2e37b08f-0bcc-4c28-8113-65a424a49717\") " pod="openstack/ovn-controller-metrics-kl4qp" Nov 21 20:25:31 crc kubenswrapper[4727]: I1121 20:25:31.887313 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e37b08f-0bcc-4c28-8113-65a424a49717-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-kl4qp\" (UID: \"2e37b08f-0bcc-4c28-8113-65a424a49717\") " 
pod="openstack/ovn-controller-metrics-kl4qp" Nov 21 20:25:31 crc kubenswrapper[4727]: I1121 20:25:31.894013 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9n8p\" (UniqueName: \"kubernetes.io/projected/2e37b08f-0bcc-4c28-8113-65a424a49717-kube-api-access-b9n8p\") pod \"ovn-controller-metrics-kl4qp\" (UID: \"2e37b08f-0bcc-4c28-8113-65a424a49717\") " pod="openstack/ovn-controller-metrics-kl4qp" Nov 21 20:25:31 crc kubenswrapper[4727]: I1121 20:25:31.973559 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-kl4qp" Nov 21 20:25:32 crc kubenswrapper[4727]: I1121 20:25:32.050204 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-rh5mr"] Nov 21 20:25:32 crc kubenswrapper[4727]: I1121 20:25:32.066727 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-bhnxh"] Nov 21 20:25:32 crc kubenswrapper[4727]: I1121 20:25:32.068853 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-bhnxh" Nov 21 20:25:32 crc kubenswrapper[4727]: I1121 20:25:32.070653 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Nov 21 20:25:32 crc kubenswrapper[4727]: I1121 20:25:32.084213 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-bhnxh"] Nov 21 20:25:32 crc kubenswrapper[4727]: I1121 20:25:32.185094 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-494q2\" (UniqueName: \"kubernetes.io/projected/93c80b8a-f7fd-4877-a7de-aeaf68575dde-kube-api-access-494q2\") pod \"dnsmasq-dns-8554648995-bhnxh\" (UID: \"93c80b8a-f7fd-4877-a7de-aeaf68575dde\") " pod="openstack/dnsmasq-dns-8554648995-bhnxh" Nov 21 20:25:32 crc kubenswrapper[4727]: I1121 20:25:32.185141 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/93c80b8a-f7fd-4877-a7de-aeaf68575dde-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-bhnxh\" (UID: \"93c80b8a-f7fd-4877-a7de-aeaf68575dde\") " pod="openstack/dnsmasq-dns-8554648995-bhnxh" Nov 21 20:25:32 crc kubenswrapper[4727]: I1121 20:25:32.185334 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/93c80b8a-f7fd-4877-a7de-aeaf68575dde-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-bhnxh\" (UID: \"93c80b8a-f7fd-4877-a7de-aeaf68575dde\") " pod="openstack/dnsmasq-dns-8554648995-bhnxh" Nov 21 20:25:32 crc kubenswrapper[4727]: I1121 20:25:32.185382 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/93c80b8a-f7fd-4877-a7de-aeaf68575dde-dns-svc\") pod \"dnsmasq-dns-8554648995-bhnxh\" (UID: \"93c80b8a-f7fd-4877-a7de-aeaf68575dde\") " 
pod="openstack/dnsmasq-dns-8554648995-bhnxh" Nov 21 20:25:32 crc kubenswrapper[4727]: I1121 20:25:32.185430 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93c80b8a-f7fd-4877-a7de-aeaf68575dde-config\") pod \"dnsmasq-dns-8554648995-bhnxh\" (UID: \"93c80b8a-f7fd-4877-a7de-aeaf68575dde\") " pod="openstack/dnsmasq-dns-8554648995-bhnxh" Nov 21 20:25:32 crc kubenswrapper[4727]: I1121 20:25:32.287025 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/93c80b8a-f7fd-4877-a7de-aeaf68575dde-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-bhnxh\" (UID: \"93c80b8a-f7fd-4877-a7de-aeaf68575dde\") " pod="openstack/dnsmasq-dns-8554648995-bhnxh" Nov 21 20:25:32 crc kubenswrapper[4727]: I1121 20:25:32.287074 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/93c80b8a-f7fd-4877-a7de-aeaf68575dde-dns-svc\") pod \"dnsmasq-dns-8554648995-bhnxh\" (UID: \"93c80b8a-f7fd-4877-a7de-aeaf68575dde\") " pod="openstack/dnsmasq-dns-8554648995-bhnxh" Nov 21 20:25:32 crc kubenswrapper[4727]: I1121 20:25:32.287125 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93c80b8a-f7fd-4877-a7de-aeaf68575dde-config\") pod \"dnsmasq-dns-8554648995-bhnxh\" (UID: \"93c80b8a-f7fd-4877-a7de-aeaf68575dde\") " pod="openstack/dnsmasq-dns-8554648995-bhnxh" Nov 21 20:25:32 crc kubenswrapper[4727]: I1121 20:25:32.287327 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-494q2\" (UniqueName: \"kubernetes.io/projected/93c80b8a-f7fd-4877-a7de-aeaf68575dde-kube-api-access-494q2\") pod \"dnsmasq-dns-8554648995-bhnxh\" (UID: \"93c80b8a-f7fd-4877-a7de-aeaf68575dde\") " pod="openstack/dnsmasq-dns-8554648995-bhnxh" Nov 21 20:25:32 crc 
kubenswrapper[4727]: I1121 20:25:32.287351 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/93c80b8a-f7fd-4877-a7de-aeaf68575dde-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-bhnxh\" (UID: \"93c80b8a-f7fd-4877-a7de-aeaf68575dde\") " pod="openstack/dnsmasq-dns-8554648995-bhnxh" Nov 21 20:25:32 crc kubenswrapper[4727]: I1121 20:25:32.288205 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/93c80b8a-f7fd-4877-a7de-aeaf68575dde-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-bhnxh\" (UID: \"93c80b8a-f7fd-4877-a7de-aeaf68575dde\") " pod="openstack/dnsmasq-dns-8554648995-bhnxh" Nov 21 20:25:32 crc kubenswrapper[4727]: I1121 20:25:32.289817 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/93c80b8a-f7fd-4877-a7de-aeaf68575dde-dns-svc\") pod \"dnsmasq-dns-8554648995-bhnxh\" (UID: \"93c80b8a-f7fd-4877-a7de-aeaf68575dde\") " pod="openstack/dnsmasq-dns-8554648995-bhnxh" Nov 21 20:25:32 crc kubenswrapper[4727]: I1121 20:25:32.290361 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93c80b8a-f7fd-4877-a7de-aeaf68575dde-config\") pod \"dnsmasq-dns-8554648995-bhnxh\" (UID: \"93c80b8a-f7fd-4877-a7de-aeaf68575dde\") " pod="openstack/dnsmasq-dns-8554648995-bhnxh" Nov 21 20:25:32 crc kubenswrapper[4727]: I1121 20:25:32.290416 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/93c80b8a-f7fd-4877-a7de-aeaf68575dde-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-bhnxh\" (UID: \"93c80b8a-f7fd-4877-a7de-aeaf68575dde\") " pod="openstack/dnsmasq-dns-8554648995-bhnxh" Nov 21 20:25:32 crc kubenswrapper[4727]: I1121 20:25:32.290455 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/ovsdbserver-sb-0" Nov 21 20:25:32 crc kubenswrapper[4727]: I1121 20:25:32.307727 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-494q2\" (UniqueName: \"kubernetes.io/projected/93c80b8a-f7fd-4877-a7de-aeaf68575dde-kube-api-access-494q2\") pod \"dnsmasq-dns-8554648995-bhnxh\" (UID: \"93c80b8a-f7fd-4877-a7de-aeaf68575dde\") " pod="openstack/dnsmasq-dns-8554648995-bhnxh" Nov 21 20:25:32 crc kubenswrapper[4727]: I1121 20:25:32.429546 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-bhnxh" Nov 21 20:25:32 crc kubenswrapper[4727]: I1121 20:25:32.478738 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Nov 21 20:25:32 crc kubenswrapper[4727]: I1121 20:25:32.480463 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Nov 21 20:25:32 crc kubenswrapper[4727]: I1121 20:25:32.483572 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Nov 21 20:25:32 crc kubenswrapper[4727]: I1121 20:25:32.483911 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Nov 21 20:25:32 crc kubenswrapper[4727]: I1121 20:25:32.484034 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Nov 21 20:25:32 crc kubenswrapper[4727]: I1121 20:25:32.484435 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-mm84t" Nov 21 20:25:32 crc kubenswrapper[4727]: I1121 20:25:32.491925 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 21 20:25:32 crc kubenswrapper[4727]: W1121 20:25:32.565579 4727 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9318e137_c813_46d7_a1f4_e87b38ad8411.slice/crio-32c968add95bfaa512c6d9f63a54da6f5aead429eefe02e7cf08f93fe5cdb883 WatchSource:0}: Error finding container 32c968add95bfaa512c6d9f63a54da6f5aead429eefe02e7cf08f93fe5cdb883: Status 404 returned error can't find the container with id 32c968add95bfaa512c6d9f63a54da6f5aead429eefe02e7cf08f93fe5cdb883 Nov 21 20:25:32 crc kubenswrapper[4727]: I1121 20:25:32.568114 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-rh5mr"] Nov 21 20:25:32 crc kubenswrapper[4727]: I1121 20:25:32.591099 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57623363-7d60-4219-93b9-c8678ed13f8f-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"57623363-7d60-4219-93b9-c8678ed13f8f\") " pod="openstack/ovn-northd-0" Nov 21 20:25:32 crc kubenswrapper[4727]: I1121 20:25:32.591150 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/57623363-7d60-4219-93b9-c8678ed13f8f-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"57623363-7d60-4219-93b9-c8678ed13f8f\") " pod="openstack/ovn-northd-0" Nov 21 20:25:32 crc kubenswrapper[4727]: I1121 20:25:32.591199 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/57623363-7d60-4219-93b9-c8678ed13f8f-scripts\") pod \"ovn-northd-0\" (UID: \"57623363-7d60-4219-93b9-c8678ed13f8f\") " pod="openstack/ovn-northd-0" Nov 21 20:25:32 crc kubenswrapper[4727]: I1121 20:25:32.591275 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmtzh\" (UniqueName: 
\"kubernetes.io/projected/57623363-7d60-4219-93b9-c8678ed13f8f-kube-api-access-pmtzh\") pod \"ovn-northd-0\" (UID: \"57623363-7d60-4219-93b9-c8678ed13f8f\") " pod="openstack/ovn-northd-0" Nov 21 20:25:32 crc kubenswrapper[4727]: I1121 20:25:32.591304 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/57623363-7d60-4219-93b9-c8678ed13f8f-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"57623363-7d60-4219-93b9-c8678ed13f8f\") " pod="openstack/ovn-northd-0" Nov 21 20:25:32 crc kubenswrapper[4727]: I1121 20:25:32.591371 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57623363-7d60-4219-93b9-c8678ed13f8f-config\") pod \"ovn-northd-0\" (UID: \"57623363-7d60-4219-93b9-c8678ed13f8f\") " pod="openstack/ovn-northd-0" Nov 21 20:25:32 crc kubenswrapper[4727]: I1121 20:25:32.591418 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/57623363-7d60-4219-93b9-c8678ed13f8f-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"57623363-7d60-4219-93b9-c8678ed13f8f\") " pod="openstack/ovn-northd-0" Nov 21 20:25:32 crc kubenswrapper[4727]: I1121 20:25:32.650406 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-kl4qp"] Nov 21 20:25:32 crc kubenswrapper[4727]: I1121 20:25:32.693109 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57623363-7d60-4219-93b9-c8678ed13f8f-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"57623363-7d60-4219-93b9-c8678ed13f8f\") " pod="openstack/ovn-northd-0" Nov 21 20:25:32 crc kubenswrapper[4727]: I1121 20:25:32.693163 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/57623363-7d60-4219-93b9-c8678ed13f8f-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"57623363-7d60-4219-93b9-c8678ed13f8f\") " pod="openstack/ovn-northd-0" Nov 21 20:25:32 crc kubenswrapper[4727]: I1121 20:25:32.693194 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/57623363-7d60-4219-93b9-c8678ed13f8f-scripts\") pod \"ovn-northd-0\" (UID: \"57623363-7d60-4219-93b9-c8678ed13f8f\") " pod="openstack/ovn-northd-0" Nov 21 20:25:32 crc kubenswrapper[4727]: I1121 20:25:32.693237 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmtzh\" (UniqueName: \"kubernetes.io/projected/57623363-7d60-4219-93b9-c8678ed13f8f-kube-api-access-pmtzh\") pod \"ovn-northd-0\" (UID: \"57623363-7d60-4219-93b9-c8678ed13f8f\") " pod="openstack/ovn-northd-0" Nov 21 20:25:32 crc kubenswrapper[4727]: I1121 20:25:32.693259 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/57623363-7d60-4219-93b9-c8678ed13f8f-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"57623363-7d60-4219-93b9-c8678ed13f8f\") " pod="openstack/ovn-northd-0" Nov 21 20:25:32 crc kubenswrapper[4727]: I1121 20:25:32.693288 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57623363-7d60-4219-93b9-c8678ed13f8f-config\") pod \"ovn-northd-0\" (UID: \"57623363-7d60-4219-93b9-c8678ed13f8f\") " pod="openstack/ovn-northd-0" Nov 21 20:25:32 crc kubenswrapper[4727]: I1121 20:25:32.693885 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/57623363-7d60-4219-93b9-c8678ed13f8f-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"57623363-7d60-4219-93b9-c8678ed13f8f\") " pod="openstack/ovn-northd-0" Nov 21 20:25:32 crc 
kubenswrapper[4727]: I1121 20:25:32.695785 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/57623363-7d60-4219-93b9-c8678ed13f8f-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"57623363-7d60-4219-93b9-c8678ed13f8f\") " pod="openstack/ovn-northd-0" Nov 21 20:25:32 crc kubenswrapper[4727]: I1121 20:25:32.702413 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57623363-7d60-4219-93b9-c8678ed13f8f-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"57623363-7d60-4219-93b9-c8678ed13f8f\") " pod="openstack/ovn-northd-0" Nov 21 20:25:32 crc kubenswrapper[4727]: I1121 20:25:32.703190 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57623363-7d60-4219-93b9-c8678ed13f8f-config\") pod \"ovn-northd-0\" (UID: \"57623363-7d60-4219-93b9-c8678ed13f8f\") " pod="openstack/ovn-northd-0" Nov 21 20:25:32 crc kubenswrapper[4727]: I1121 20:25:32.703360 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/57623363-7d60-4219-93b9-c8678ed13f8f-scripts\") pod \"ovn-northd-0\" (UID: \"57623363-7d60-4219-93b9-c8678ed13f8f\") " pod="openstack/ovn-northd-0" Nov 21 20:25:32 crc kubenswrapper[4727]: I1121 20:25:32.709706 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/57623363-7d60-4219-93b9-c8678ed13f8f-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"57623363-7d60-4219-93b9-c8678ed13f8f\") " pod="openstack/ovn-northd-0" Nov 21 20:25:32 crc kubenswrapper[4727]: I1121 20:25:32.720929 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/57623363-7d60-4219-93b9-c8678ed13f8f-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: 
\"57623363-7d60-4219-93b9-c8678ed13f8f\") " pod="openstack/ovn-northd-0" Nov 21 20:25:32 crc kubenswrapper[4727]: I1121 20:25:32.721457 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmtzh\" (UniqueName: \"kubernetes.io/projected/57623363-7d60-4219-93b9-c8678ed13f8f-kube-api-access-pmtzh\") pod \"ovn-northd-0\" (UID: \"57623363-7d60-4219-93b9-c8678ed13f8f\") " pod="openstack/ovn-northd-0" Nov 21 20:25:32 crc kubenswrapper[4727]: I1121 20:25:32.797582 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Nov 21 20:25:32 crc kubenswrapper[4727]: I1121 20:25:32.920005 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-bhnxh"] Nov 21 20:25:33 crc kubenswrapper[4727]: I1121 20:25:33.238824 4727 generic.go:334] "Generic (PLEG): container finished" podID="9318e137-c813-46d7-a1f4-e87b38ad8411" containerID="4e9390532b2d71cc61c56d83e7607bc9c7544ccb483196d8bd20af027d6a6511" exitCode=0 Nov 21 20:25:33 crc kubenswrapper[4727]: I1121 20:25:33.238868 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-rh5mr" event={"ID":"9318e137-c813-46d7-a1f4-e87b38ad8411","Type":"ContainerDied","Data":"4e9390532b2d71cc61c56d83e7607bc9c7544ccb483196d8bd20af027d6a6511"} Nov 21 20:25:33 crc kubenswrapper[4727]: I1121 20:25:33.239106 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-rh5mr" event={"ID":"9318e137-c813-46d7-a1f4-e87b38ad8411","Type":"ContainerStarted","Data":"32c968add95bfaa512c6d9f63a54da6f5aead429eefe02e7cf08f93fe5cdb883"} Nov 21 20:25:33 crc kubenswrapper[4727]: I1121 20:25:33.241124 4727 generic.go:334] "Generic (PLEG): container finished" podID="93c80b8a-f7fd-4877-a7de-aeaf68575dde" containerID="1f2068061b5d263cc2e641c0a12f5a56d57761a71ec8c362ade7ed7ee5c4fede" exitCode=0 Nov 21 20:25:33 crc kubenswrapper[4727]: I1121 20:25:33.241219 4727 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-bhnxh" event={"ID":"93c80b8a-f7fd-4877-a7de-aeaf68575dde","Type":"ContainerDied","Data":"1f2068061b5d263cc2e641c0a12f5a56d57761a71ec8c362ade7ed7ee5c4fede"} Nov 21 20:25:33 crc kubenswrapper[4727]: I1121 20:25:33.241252 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-bhnxh" event={"ID":"93c80b8a-f7fd-4877-a7de-aeaf68575dde","Type":"ContainerStarted","Data":"8f57da04cca2c9e19aea8d26154aa9431d7c395a081514c88ba80ec4fd4d5f1c"} Nov 21 20:25:33 crc kubenswrapper[4727]: I1121 20:25:33.249544 4727 generic.go:334] "Generic (PLEG): container finished" podID="6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9" containerID="a60035ecfc2b7d39e6d8ab8abf474eb842e9c0917ab85f7855e713d1e2f4b603" exitCode=0 Nov 21 20:25:33 crc kubenswrapper[4727]: I1121 20:25:33.249587 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9","Type":"ContainerDied","Data":"a60035ecfc2b7d39e6d8ab8abf474eb842e9c0917ab85f7855e713d1e2f4b603"} Nov 21 20:25:33 crc kubenswrapper[4727]: I1121 20:25:33.299443 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-kl4qp" event={"ID":"2e37b08f-0bcc-4c28-8113-65a424a49717","Type":"ContainerStarted","Data":"59e0ed1ad7b641d0579cf895641676e7788b9691e2a93bb4bc581d7c7068ed00"} Nov 21 20:25:33 crc kubenswrapper[4727]: I1121 20:25:33.299671 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-kl4qp" event={"ID":"2e37b08f-0bcc-4c28-8113-65a424a49717","Type":"ContainerStarted","Data":"3ded1eebb9a3896999b188af5a1661175d8628acb8f8fc2e43a82534a067d2d1"} Nov 21 20:25:33 crc kubenswrapper[4727]: I1121 20:25:33.311672 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 21 20:25:33 crc kubenswrapper[4727]: I1121 20:25:33.351704 4727 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/ovn-controller-metrics-kl4qp" podStartSLOduration=2.351682841 podStartE2EDuration="2.351682841s" podCreationTimestamp="2025-11-21 20:25:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:25:33.329277748 +0000 UTC m=+1138.515462792" watchObservedRunningTime="2025-11-21 20:25:33.351682841 +0000 UTC m=+1138.537867885" Nov 21 20:25:33 crc kubenswrapper[4727]: I1121 20:25:33.555684 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-rh5mr" Nov 21 20:25:33 crc kubenswrapper[4727]: I1121 20:25:33.714863 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9318e137-c813-46d7-a1f4-e87b38ad8411-dns-svc\") pod \"9318e137-c813-46d7-a1f4-e87b38ad8411\" (UID: \"9318e137-c813-46d7-a1f4-e87b38ad8411\") " Nov 21 20:25:33 crc kubenswrapper[4727]: I1121 20:25:33.715301 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmkmg\" (UniqueName: \"kubernetes.io/projected/9318e137-c813-46d7-a1f4-e87b38ad8411-kube-api-access-gmkmg\") pod \"9318e137-c813-46d7-a1f4-e87b38ad8411\" (UID: \"9318e137-c813-46d7-a1f4-e87b38ad8411\") " Nov 21 20:25:33 crc kubenswrapper[4727]: I1121 20:25:33.715323 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9318e137-c813-46d7-a1f4-e87b38ad8411-ovsdbserver-nb\") pod \"9318e137-c813-46d7-a1f4-e87b38ad8411\" (UID: \"9318e137-c813-46d7-a1f4-e87b38ad8411\") " Nov 21 20:25:33 crc kubenswrapper[4727]: I1121 20:25:33.715421 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9318e137-c813-46d7-a1f4-e87b38ad8411-config\") pod 
\"9318e137-c813-46d7-a1f4-e87b38ad8411\" (UID: \"9318e137-c813-46d7-a1f4-e87b38ad8411\") " Nov 21 20:25:33 crc kubenswrapper[4727]: I1121 20:25:33.721670 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9318e137-c813-46d7-a1f4-e87b38ad8411-kube-api-access-gmkmg" (OuterVolumeSpecName: "kube-api-access-gmkmg") pod "9318e137-c813-46d7-a1f4-e87b38ad8411" (UID: "9318e137-c813-46d7-a1f4-e87b38ad8411"). InnerVolumeSpecName "kube-api-access-gmkmg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:25:33 crc kubenswrapper[4727]: I1121 20:25:33.739895 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9318e137-c813-46d7-a1f4-e87b38ad8411-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9318e137-c813-46d7-a1f4-e87b38ad8411" (UID: "9318e137-c813-46d7-a1f4-e87b38ad8411"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:25:33 crc kubenswrapper[4727]: I1121 20:25:33.741003 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9318e137-c813-46d7-a1f4-e87b38ad8411-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9318e137-c813-46d7-a1f4-e87b38ad8411" (UID: "9318e137-c813-46d7-a1f4-e87b38ad8411"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:25:33 crc kubenswrapper[4727]: I1121 20:25:33.751761 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9318e137-c813-46d7-a1f4-e87b38ad8411-config" (OuterVolumeSpecName: "config") pod "9318e137-c813-46d7-a1f4-e87b38ad8411" (UID: "9318e137-c813-46d7-a1f4-e87b38ad8411"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:25:33 crc kubenswrapper[4727]: I1121 20:25:33.818740 4727 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9318e137-c813-46d7-a1f4-e87b38ad8411-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 20:25:33 crc kubenswrapper[4727]: I1121 20:25:33.818779 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gmkmg\" (UniqueName: \"kubernetes.io/projected/9318e137-c813-46d7-a1f4-e87b38ad8411-kube-api-access-gmkmg\") on node \"crc\" DevicePath \"\"" Nov 21 20:25:33 crc kubenswrapper[4727]: I1121 20:25:33.818793 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9318e137-c813-46d7-a1f4-e87b38ad8411-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 21 20:25:33 crc kubenswrapper[4727]: I1121 20:25:33.818808 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9318e137-c813-46d7-a1f4-e87b38ad8411-config\") on node \"crc\" DevicePath \"\"" Nov 21 20:25:34 crc kubenswrapper[4727]: I1121 20:25:34.211940 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Nov 21 20:25:34 crc kubenswrapper[4727]: I1121 20:25:34.212019 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Nov 21 20:25:34 crc kubenswrapper[4727]: I1121 20:25:34.307494 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"57623363-7d60-4219-93b9-c8678ed13f8f","Type":"ContainerStarted","Data":"114945fcb99e8c5231579f742e4b43397509633f64dd72b74efbb33a1d85acbf"} Nov 21 20:25:34 crc kubenswrapper[4727]: I1121 20:25:34.310301 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-rh5mr" 
event={"ID":"9318e137-c813-46d7-a1f4-e87b38ad8411","Type":"ContainerDied","Data":"32c968add95bfaa512c6d9f63a54da6f5aead429eefe02e7cf08f93fe5cdb883"} Nov 21 20:25:34 crc kubenswrapper[4727]: I1121 20:25:34.310353 4727 scope.go:117] "RemoveContainer" containerID="4e9390532b2d71cc61c56d83e7607bc9c7544ccb483196d8bd20af027d6a6511" Nov 21 20:25:34 crc kubenswrapper[4727]: I1121 20:25:34.310392 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-rh5mr" Nov 21 20:25:34 crc kubenswrapper[4727]: I1121 20:25:34.313150 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-k8fk5" event={"ID":"5668e228-8946-468d-94e0-fa77489e46b3","Type":"ContainerStarted","Data":"b5d18ea1c2e9db2cc8e16fa4936f7324e91a58d463cfbd2c19233f377ecd4035"} Nov 21 20:25:34 crc kubenswrapper[4727]: I1121 20:25:34.314143 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-k8fk5" Nov 21 20:25:34 crc kubenswrapper[4727]: I1121 20:25:34.317314 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-bhnxh" event={"ID":"93c80b8a-f7fd-4877-a7de-aeaf68575dde","Type":"ContainerStarted","Data":"3d888b49ca276c7340d6823a156dd93d76ef10256e9c4a551829a9325712add9"} Nov 21 20:25:34 crc kubenswrapper[4727]: I1121 20:25:34.317516 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-bhnxh" Nov 21 20:25:34 crc kubenswrapper[4727]: I1121 20:25:34.349076 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-k8fk5" podStartSLOduration=9.782166158 podStartE2EDuration="33.349045516s" podCreationTimestamp="2025-11-21 20:25:01 +0000 UTC" firstStartedPulling="2025-11-21 20:25:09.413970091 +0000 UTC m=+1114.600155135" lastFinishedPulling="2025-11-21 20:25:32.980849449 +0000 UTC m=+1138.167034493" observedRunningTime="2025-11-21 20:25:34.337725777 +0000 
UTC m=+1139.523911051" watchObservedRunningTime="2025-11-21 20:25:34.349045516 +0000 UTC m=+1139.535230570" Nov 21 20:25:34 crc kubenswrapper[4727]: I1121 20:25:34.375911 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-bhnxh" podStartSLOduration=2.375891259 podStartE2EDuration="2.375891259s" podCreationTimestamp="2025-11-21 20:25:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:25:34.370073314 +0000 UTC m=+1139.556258368" watchObservedRunningTime="2025-11-21 20:25:34.375891259 +0000 UTC m=+1139.562076303" Nov 21 20:25:34 crc kubenswrapper[4727]: I1121 20:25:34.422913 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-rh5mr"] Nov 21 20:25:34 crc kubenswrapper[4727]: I1121 20:25:34.428980 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-rh5mr"] Nov 21 20:25:35 crc kubenswrapper[4727]: I1121 20:25:35.548393 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9318e137-c813-46d7-a1f4-e87b38ad8411" path="/var/lib/kubelet/pods/9318e137-c813-46d7-a1f4-e87b38ad8411/volumes" Nov 21 20:25:35 crc kubenswrapper[4727]: I1121 20:25:35.808601 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Nov 21 20:25:35 crc kubenswrapper[4727]: I1121 20:25:35.808666 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Nov 21 20:25:35 crc kubenswrapper[4727]: I1121 20:25:35.892494 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Nov 21 20:25:36 crc kubenswrapper[4727]: I1121 20:25:36.345375 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" 
event={"ID":"57623363-7d60-4219-93b9-c8678ed13f8f","Type":"ContainerStarted","Data":"d51c1def38fc95625a80bd0c2767b7d13fce4ce3146b891350804556d63a658d"} Nov 21 20:25:36 crc kubenswrapper[4727]: I1121 20:25:36.449325 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Nov 21 20:25:36 crc kubenswrapper[4727]: I1121 20:25:36.505436 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Nov 21 20:25:36 crc kubenswrapper[4727]: I1121 20:25:36.567315 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Nov 21 20:25:37 crc kubenswrapper[4727]: I1121 20:25:37.367606 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"57623363-7d60-4219-93b9-c8678ed13f8f","Type":"ContainerStarted","Data":"bcee58c02abcf19c56f2c993beb436327ff22aaa6514fdac3ad7e13b7d07786c"} Nov 21 20:25:37 crc kubenswrapper[4727]: I1121 20:25:37.798447 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Nov 21 20:25:37 crc kubenswrapper[4727]: I1121 20:25:37.915292 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.217984746 podStartE2EDuration="5.915264435s" podCreationTimestamp="2025-11-21 20:25:32 +0000 UTC" firstStartedPulling="2025-11-21 20:25:33.323467885 +0000 UTC m=+1138.509652929" lastFinishedPulling="2025-11-21 20:25:36.020747574 +0000 UTC m=+1141.206932618" observedRunningTime="2025-11-21 20:25:37.404462555 +0000 UTC m=+1142.590647599" watchObservedRunningTime="2025-11-21 20:25:37.915264435 +0000 UTC m=+1143.101449489" Nov 21 20:25:37 crc kubenswrapper[4727]: I1121 20:25:37.945398 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-bhnxh"] Nov 21 20:25:37 crc kubenswrapper[4727]: I1121 20:25:37.948664 4727 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-bhnxh" podUID="93c80b8a-f7fd-4877-a7de-aeaf68575dde" containerName="dnsmasq-dns" containerID="cri-o://3d888b49ca276c7340d6823a156dd93d76ef10256e9c4a551829a9325712add9" gracePeriod=10 Nov 21 20:25:37 crc kubenswrapper[4727]: I1121 20:25:37.993050 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-kg8vm"] Nov 21 20:25:37 crc kubenswrapper[4727]: E1121 20:25:37.993469 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9318e137-c813-46d7-a1f4-e87b38ad8411" containerName="init" Nov 21 20:25:37 crc kubenswrapper[4727]: I1121 20:25:37.993486 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="9318e137-c813-46d7-a1f4-e87b38ad8411" containerName="init" Nov 21 20:25:37 crc kubenswrapper[4727]: I1121 20:25:37.993689 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="9318e137-c813-46d7-a1f4-e87b38ad8411" containerName="init" Nov 21 20:25:37 crc kubenswrapper[4727]: I1121 20:25:37.994702 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-kg8vm" Nov 21 20:25:38 crc kubenswrapper[4727]: I1121 20:25:38.056253 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-drvbw"] Nov 21 20:25:38 crc kubenswrapper[4727]: I1121 20:25:38.057584 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-drvbw" Nov 21 20:25:38 crc kubenswrapper[4727]: I1121 20:25:38.143008 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9lt8\" (UniqueName: \"kubernetes.io/projected/03533ce4-f69e-4a18-8b64-754b2ed7f789-kube-api-access-c9lt8\") pod \"dnsmasq-dns-b8fbc5445-kg8vm\" (UID: \"03533ce4-f69e-4a18-8b64-754b2ed7f789\") " pod="openstack/dnsmasq-dns-b8fbc5445-kg8vm" Nov 21 20:25:38 crc kubenswrapper[4727]: I1121 20:25:38.143075 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03533ce4-f69e-4a18-8b64-754b2ed7f789-config\") pod \"dnsmasq-dns-b8fbc5445-kg8vm\" (UID: \"03533ce4-f69e-4a18-8b64-754b2ed7f789\") " pod="openstack/dnsmasq-dns-b8fbc5445-kg8vm" Nov 21 20:25:38 crc kubenswrapper[4727]: I1121 20:25:38.143108 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/03533ce4-f69e-4a18-8b64-754b2ed7f789-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-kg8vm\" (UID: \"03533ce4-f69e-4a18-8b64-754b2ed7f789\") " pod="openstack/dnsmasq-dns-b8fbc5445-kg8vm" Nov 21 20:25:38 crc kubenswrapper[4727]: I1121 20:25:38.143210 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/03533ce4-f69e-4a18-8b64-754b2ed7f789-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-kg8vm\" (UID: \"03533ce4-f69e-4a18-8b64-754b2ed7f789\") " pod="openstack/dnsmasq-dns-b8fbc5445-kg8vm" Nov 21 20:25:38 crc kubenswrapper[4727]: I1121 20:25:38.143234 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mxpq\" (UniqueName: \"kubernetes.io/projected/2ce9ffea-4675-4ed2-9b15-8a584708e173-kube-api-access-5mxpq\") pod 
\"mysqld-exporter-openstack-db-create-drvbw\" (UID: \"2ce9ffea-4675-4ed2-9b15-8a584708e173\") " pod="openstack/mysqld-exporter-openstack-db-create-drvbw" Nov 21 20:25:38 crc kubenswrapper[4727]: I1121 20:25:38.143257 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03533ce4-f69e-4a18-8b64-754b2ed7f789-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-kg8vm\" (UID: \"03533ce4-f69e-4a18-8b64-754b2ed7f789\") " pod="openstack/dnsmasq-dns-b8fbc5445-kg8vm" Nov 21 20:25:38 crc kubenswrapper[4727]: I1121 20:25:38.143294 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ce9ffea-4675-4ed2-9b15-8a584708e173-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-drvbw\" (UID: \"2ce9ffea-4675-4ed2-9b15-8a584708e173\") " pod="openstack/mysqld-exporter-openstack-db-create-drvbw" Nov 21 20:25:38 crc kubenswrapper[4727]: I1121 20:25:38.154095 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-drvbw"] Nov 21 20:25:38 crc kubenswrapper[4727]: I1121 20:25:38.226045 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-c5eb-account-create-94gh4"] Nov 21 20:25:38 crc kubenswrapper[4727]: I1121 20:25:38.227587 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-kg8vm"] Nov 21 20:25:38 crc kubenswrapper[4727]: I1121 20:25:38.227690 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-c5eb-account-create-94gh4" Nov 21 20:25:38 crc kubenswrapper[4727]: I1121 20:25:38.233130 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-db-secret" Nov 21 20:25:38 crc kubenswrapper[4727]: I1121 20:25:38.252287 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ce9ffea-4675-4ed2-9b15-8a584708e173-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-drvbw\" (UID: \"2ce9ffea-4675-4ed2-9b15-8a584708e173\") " pod="openstack/mysqld-exporter-openstack-db-create-drvbw" Nov 21 20:25:38 crc kubenswrapper[4727]: I1121 20:25:38.252639 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9lt8\" (UniqueName: \"kubernetes.io/projected/03533ce4-f69e-4a18-8b64-754b2ed7f789-kube-api-access-c9lt8\") pod \"dnsmasq-dns-b8fbc5445-kg8vm\" (UID: \"03533ce4-f69e-4a18-8b64-754b2ed7f789\") " pod="openstack/dnsmasq-dns-b8fbc5445-kg8vm" Nov 21 20:25:38 crc kubenswrapper[4727]: I1121 20:25:38.252770 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03533ce4-f69e-4a18-8b64-754b2ed7f789-config\") pod \"dnsmasq-dns-b8fbc5445-kg8vm\" (UID: \"03533ce4-f69e-4a18-8b64-754b2ed7f789\") " pod="openstack/dnsmasq-dns-b8fbc5445-kg8vm" Nov 21 20:25:38 crc kubenswrapper[4727]: I1121 20:25:38.252896 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/03533ce4-f69e-4a18-8b64-754b2ed7f789-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-kg8vm\" (UID: \"03533ce4-f69e-4a18-8b64-754b2ed7f789\") " pod="openstack/dnsmasq-dns-b8fbc5445-kg8vm" Nov 21 20:25:38 crc kubenswrapper[4727]: I1121 20:25:38.253138 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/03533ce4-f69e-4a18-8b64-754b2ed7f789-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-kg8vm\" (UID: \"03533ce4-f69e-4a18-8b64-754b2ed7f789\") " pod="openstack/dnsmasq-dns-b8fbc5445-kg8vm" Nov 21 20:25:38 crc kubenswrapper[4727]: I1121 20:25:38.253250 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mxpq\" (UniqueName: \"kubernetes.io/projected/2ce9ffea-4675-4ed2-9b15-8a584708e173-kube-api-access-5mxpq\") pod \"mysqld-exporter-openstack-db-create-drvbw\" (UID: \"2ce9ffea-4675-4ed2-9b15-8a584708e173\") " pod="openstack/mysqld-exporter-openstack-db-create-drvbw" Nov 21 20:25:38 crc kubenswrapper[4727]: I1121 20:25:38.253356 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03533ce4-f69e-4a18-8b64-754b2ed7f789-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-kg8vm\" (UID: \"03533ce4-f69e-4a18-8b64-754b2ed7f789\") " pod="openstack/dnsmasq-dns-b8fbc5445-kg8vm" Nov 21 20:25:38 crc kubenswrapper[4727]: I1121 20:25:38.254772 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03533ce4-f69e-4a18-8b64-754b2ed7f789-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-kg8vm\" (UID: \"03533ce4-f69e-4a18-8b64-754b2ed7f789\") " pod="openstack/dnsmasq-dns-b8fbc5445-kg8vm" Nov 21 20:25:38 crc kubenswrapper[4727]: I1121 20:25:38.262106 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03533ce4-f69e-4a18-8b64-754b2ed7f789-config\") pod \"dnsmasq-dns-b8fbc5445-kg8vm\" (UID: \"03533ce4-f69e-4a18-8b64-754b2ed7f789\") " pod="openstack/dnsmasq-dns-b8fbc5445-kg8vm" Nov 21 20:25:38 crc kubenswrapper[4727]: I1121 20:25:38.262784 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/2ce9ffea-4675-4ed2-9b15-8a584708e173-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-drvbw\" (UID: \"2ce9ffea-4675-4ed2-9b15-8a584708e173\") " pod="openstack/mysqld-exporter-openstack-db-create-drvbw" Nov 21 20:25:38 crc kubenswrapper[4727]: I1121 20:25:38.264177 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/03533ce4-f69e-4a18-8b64-754b2ed7f789-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-kg8vm\" (UID: \"03533ce4-f69e-4a18-8b64-754b2ed7f789\") " pod="openstack/dnsmasq-dns-b8fbc5445-kg8vm" Nov 21 20:25:38 crc kubenswrapper[4727]: I1121 20:25:38.267868 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/03533ce4-f69e-4a18-8b64-754b2ed7f789-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-kg8vm\" (UID: \"03533ce4-f69e-4a18-8b64-754b2ed7f789\") " pod="openstack/dnsmasq-dns-b8fbc5445-kg8vm" Nov 21 20:25:38 crc kubenswrapper[4727]: I1121 20:25:38.292900 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-c5eb-account-create-94gh4"] Nov 21 20:25:38 crc kubenswrapper[4727]: I1121 20:25:38.299920 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9lt8\" (UniqueName: \"kubernetes.io/projected/03533ce4-f69e-4a18-8b64-754b2ed7f789-kube-api-access-c9lt8\") pod \"dnsmasq-dns-b8fbc5445-kg8vm\" (UID: \"03533ce4-f69e-4a18-8b64-754b2ed7f789\") " pod="openstack/dnsmasq-dns-b8fbc5445-kg8vm" Nov 21 20:25:38 crc kubenswrapper[4727]: I1121 20:25:38.317001 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mxpq\" (UniqueName: \"kubernetes.io/projected/2ce9ffea-4675-4ed2-9b15-8a584708e173-kube-api-access-5mxpq\") pod \"mysqld-exporter-openstack-db-create-drvbw\" (UID: \"2ce9ffea-4675-4ed2-9b15-8a584708e173\") " pod="openstack/mysqld-exporter-openstack-db-create-drvbw" Nov 21 
20:25:38 crc kubenswrapper[4727]: I1121 20:25:38.356262 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-527jj\" (UniqueName: \"kubernetes.io/projected/7c9240ef-7d44-4563-a271-6a540b902f9b-kube-api-access-527jj\") pod \"mysqld-exporter-c5eb-account-create-94gh4\" (UID: \"7c9240ef-7d44-4563-a271-6a540b902f9b\") " pod="openstack/mysqld-exporter-c5eb-account-create-94gh4" Nov 21 20:25:38 crc kubenswrapper[4727]: I1121 20:25:38.356648 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c9240ef-7d44-4563-a271-6a540b902f9b-operator-scripts\") pod \"mysqld-exporter-c5eb-account-create-94gh4\" (UID: \"7c9240ef-7d44-4563-a271-6a540b902f9b\") " pod="openstack/mysqld-exporter-c5eb-account-create-94gh4" Nov 21 20:25:38 crc kubenswrapper[4727]: I1121 20:25:38.358674 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-kg8vm" Nov 21 20:25:38 crc kubenswrapper[4727]: I1121 20:25:38.392083 4727 generic.go:334] "Generic (PLEG): container finished" podID="93c80b8a-f7fd-4877-a7de-aeaf68575dde" containerID="3d888b49ca276c7340d6823a156dd93d76ef10256e9c4a551829a9325712add9" exitCode=0 Nov 21 20:25:38 crc kubenswrapper[4727]: I1121 20:25:38.392369 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-bhnxh" event={"ID":"93c80b8a-f7fd-4877-a7de-aeaf68575dde","Type":"ContainerDied","Data":"3d888b49ca276c7340d6823a156dd93d76ef10256e9c4a551829a9325712add9"} Nov 21 20:25:38 crc kubenswrapper[4727]: I1121 20:25:38.438259 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-drvbw" Nov 21 20:25:38 crc kubenswrapper[4727]: I1121 20:25:38.458038 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c9240ef-7d44-4563-a271-6a540b902f9b-operator-scripts\") pod \"mysqld-exporter-c5eb-account-create-94gh4\" (UID: \"7c9240ef-7d44-4563-a271-6a540b902f9b\") " pod="openstack/mysqld-exporter-c5eb-account-create-94gh4" Nov 21 20:25:38 crc kubenswrapper[4727]: I1121 20:25:38.458158 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-527jj\" (UniqueName: \"kubernetes.io/projected/7c9240ef-7d44-4563-a271-6a540b902f9b-kube-api-access-527jj\") pod \"mysqld-exporter-c5eb-account-create-94gh4\" (UID: \"7c9240ef-7d44-4563-a271-6a540b902f9b\") " pod="openstack/mysqld-exporter-c5eb-account-create-94gh4" Nov 21 20:25:38 crc kubenswrapper[4727]: I1121 20:25:38.459363 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c9240ef-7d44-4563-a271-6a540b902f9b-operator-scripts\") pod \"mysqld-exporter-c5eb-account-create-94gh4\" (UID: \"7c9240ef-7d44-4563-a271-6a540b902f9b\") " pod="openstack/mysqld-exporter-c5eb-account-create-94gh4" Nov 21 20:25:38 crc kubenswrapper[4727]: I1121 20:25:38.485828 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-527jj\" (UniqueName: \"kubernetes.io/projected/7c9240ef-7d44-4563-a271-6a540b902f9b-kube-api-access-527jj\") pod \"mysqld-exporter-c5eb-account-create-94gh4\" (UID: \"7c9240ef-7d44-4563-a271-6a540b902f9b\") " pod="openstack/mysqld-exporter-c5eb-account-create-94gh4" Nov 21 20:25:38 crc kubenswrapper[4727]: I1121 20:25:38.583834 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-c5eb-account-create-94gh4" Nov 21 20:25:38 crc kubenswrapper[4727]: I1121 20:25:38.588818 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-bhnxh" Nov 21 20:25:38 crc kubenswrapper[4727]: I1121 20:25:38.665477 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/93c80b8a-f7fd-4877-a7de-aeaf68575dde-dns-svc\") pod \"93c80b8a-f7fd-4877-a7de-aeaf68575dde\" (UID: \"93c80b8a-f7fd-4877-a7de-aeaf68575dde\") " Nov 21 20:25:38 crc kubenswrapper[4727]: I1121 20:25:38.665674 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-494q2\" (UniqueName: \"kubernetes.io/projected/93c80b8a-f7fd-4877-a7de-aeaf68575dde-kube-api-access-494q2\") pod \"93c80b8a-f7fd-4877-a7de-aeaf68575dde\" (UID: \"93c80b8a-f7fd-4877-a7de-aeaf68575dde\") " Nov 21 20:25:38 crc kubenswrapper[4727]: I1121 20:25:38.665858 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/93c80b8a-f7fd-4877-a7de-aeaf68575dde-ovsdbserver-sb\") pod \"93c80b8a-f7fd-4877-a7de-aeaf68575dde\" (UID: \"93c80b8a-f7fd-4877-a7de-aeaf68575dde\") " Nov 21 20:25:38 crc kubenswrapper[4727]: I1121 20:25:38.665899 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93c80b8a-f7fd-4877-a7de-aeaf68575dde-config\") pod \"93c80b8a-f7fd-4877-a7de-aeaf68575dde\" (UID: \"93c80b8a-f7fd-4877-a7de-aeaf68575dde\") " Nov 21 20:25:38 crc kubenswrapper[4727]: I1121 20:25:38.665921 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/93c80b8a-f7fd-4877-a7de-aeaf68575dde-ovsdbserver-nb\") pod \"93c80b8a-f7fd-4877-a7de-aeaf68575dde\" (UID: 
\"93c80b8a-f7fd-4877-a7de-aeaf68575dde\") " Nov 21 20:25:38 crc kubenswrapper[4727]: I1121 20:25:38.682932 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93c80b8a-f7fd-4877-a7de-aeaf68575dde-kube-api-access-494q2" (OuterVolumeSpecName: "kube-api-access-494q2") pod "93c80b8a-f7fd-4877-a7de-aeaf68575dde" (UID: "93c80b8a-f7fd-4877-a7de-aeaf68575dde"). InnerVolumeSpecName "kube-api-access-494q2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:25:38 crc kubenswrapper[4727]: I1121 20:25:38.738420 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93c80b8a-f7fd-4877-a7de-aeaf68575dde-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "93c80b8a-f7fd-4877-a7de-aeaf68575dde" (UID: "93c80b8a-f7fd-4877-a7de-aeaf68575dde"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:25:38 crc kubenswrapper[4727]: I1121 20:25:38.767108 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93c80b8a-f7fd-4877-a7de-aeaf68575dde-config" (OuterVolumeSpecName: "config") pod "93c80b8a-f7fd-4877-a7de-aeaf68575dde" (UID: "93c80b8a-f7fd-4877-a7de-aeaf68575dde"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:25:38 crc kubenswrapper[4727]: I1121 20:25:38.769222 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93c80b8a-f7fd-4877-a7de-aeaf68575dde-config\") on node \"crc\" DevicePath \"\"" Nov 21 20:25:38 crc kubenswrapper[4727]: I1121 20:25:38.769245 4727 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/93c80b8a-f7fd-4877-a7de-aeaf68575dde-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 20:25:38 crc kubenswrapper[4727]: I1121 20:25:38.769254 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-494q2\" (UniqueName: \"kubernetes.io/projected/93c80b8a-f7fd-4877-a7de-aeaf68575dde-kube-api-access-494q2\") on node \"crc\" DevicePath \"\"" Nov 21 20:25:38 crc kubenswrapper[4727]: I1121 20:25:38.772273 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93c80b8a-f7fd-4877-a7de-aeaf68575dde-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "93c80b8a-f7fd-4877-a7de-aeaf68575dde" (UID: "93c80b8a-f7fd-4877-a7de-aeaf68575dde"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:25:38 crc kubenswrapper[4727]: I1121 20:25:38.801482 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93c80b8a-f7fd-4877-a7de-aeaf68575dde-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "93c80b8a-f7fd-4877-a7de-aeaf68575dde" (UID: "93c80b8a-f7fd-4877-a7de-aeaf68575dde"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:25:38 crc kubenswrapper[4727]: I1121 20:25:38.871367 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/93c80b8a-f7fd-4877-a7de-aeaf68575dde-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 21 20:25:38 crc kubenswrapper[4727]: I1121 20:25:38.871989 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/93c80b8a-f7fd-4877-a7de-aeaf68575dde-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 21 20:25:38 crc kubenswrapper[4727]: W1121 20:25:38.930120 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod03533ce4_f69e_4a18_8b64_754b2ed7f789.slice/crio-5690b1528f414fa3630e617530a3081e873080b52d4add019205cf8c5ac1cedd WatchSource:0}: Error finding container 5690b1528f414fa3630e617530a3081e873080b52d4add019205cf8c5ac1cedd: Status 404 returned error can't find the container with id 5690b1528f414fa3630e617530a3081e873080b52d4add019205cf8c5ac1cedd Nov 21 20:25:38 crc kubenswrapper[4727]: I1121 20:25:38.931883 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-kg8vm"] Nov 21 20:25:39 crc kubenswrapper[4727]: I1121 20:25:39.101453 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-drvbw"] Nov 21 20:25:39 crc kubenswrapper[4727]: W1121 20:25:39.108465 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ce9ffea_4675_4ed2_9b15_8a584708e173.slice/crio-f0db0171f4ba778f22bbc079e1c0a077402d327da26774d13faa358acd781ed8 WatchSource:0}: Error finding container f0db0171f4ba778f22bbc079e1c0a077402d327da26774d13faa358acd781ed8: Status 404 returned error can't find the container with id 
f0db0171f4ba778f22bbc079e1c0a077402d327da26774d13faa358acd781ed8 Nov 21 20:25:39 crc kubenswrapper[4727]: I1121 20:25:39.179685 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Nov 21 20:25:39 crc kubenswrapper[4727]: E1121 20:25:39.180344 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93c80b8a-f7fd-4877-a7de-aeaf68575dde" containerName="dnsmasq-dns" Nov 21 20:25:39 crc kubenswrapper[4727]: I1121 20:25:39.180387 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="93c80b8a-f7fd-4877-a7de-aeaf68575dde" containerName="dnsmasq-dns" Nov 21 20:25:39 crc kubenswrapper[4727]: E1121 20:25:39.180409 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93c80b8a-f7fd-4877-a7de-aeaf68575dde" containerName="init" Nov 21 20:25:39 crc kubenswrapper[4727]: I1121 20:25:39.180417 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="93c80b8a-f7fd-4877-a7de-aeaf68575dde" containerName="init" Nov 21 20:25:39 crc kubenswrapper[4727]: I1121 20:25:39.180793 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="93c80b8a-f7fd-4877-a7de-aeaf68575dde" containerName="dnsmasq-dns" Nov 21 20:25:39 crc kubenswrapper[4727]: I1121 20:25:39.192326 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Nov 21 20:25:39 crc kubenswrapper[4727]: I1121 20:25:39.193428 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 21 20:25:39 crc kubenswrapper[4727]: I1121 20:25:39.196371 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Nov 21 20:25:39 crc kubenswrapper[4727]: I1121 20:25:39.196383 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Nov 21 20:25:39 crc kubenswrapper[4727]: I1121 20:25:39.196491 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Nov 21 20:25:39 crc kubenswrapper[4727]: I1121 20:25:39.196536 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-b8v2p" Nov 21 20:25:39 crc kubenswrapper[4727]: I1121 20:25:39.236615 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-c5eb-account-create-94gh4"] Nov 21 20:25:39 crc kubenswrapper[4727]: I1121 20:25:39.282182 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"29faf340-95f4-4bd3-bd87-f2e971a0e494\") " pod="openstack/swift-storage-0" Nov 21 20:25:39 crc kubenswrapper[4727]: I1121 20:25:39.282453 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/29faf340-95f4-4bd3-bd87-f2e971a0e494-lock\") pod \"swift-storage-0\" (UID: \"29faf340-95f4-4bd3-bd87-f2e971a0e494\") " pod="openstack/swift-storage-0" Nov 21 20:25:39 crc kubenswrapper[4727]: I1121 20:25:39.282499 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sckfv\" (UniqueName: 
\"kubernetes.io/projected/29faf340-95f4-4bd3-bd87-f2e971a0e494-kube-api-access-sckfv\") pod \"swift-storage-0\" (UID: \"29faf340-95f4-4bd3-bd87-f2e971a0e494\") " pod="openstack/swift-storage-0" Nov 21 20:25:39 crc kubenswrapper[4727]: I1121 20:25:39.282524 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/29faf340-95f4-4bd3-bd87-f2e971a0e494-cache\") pod \"swift-storage-0\" (UID: \"29faf340-95f4-4bd3-bd87-f2e971a0e494\") " pod="openstack/swift-storage-0" Nov 21 20:25:39 crc kubenswrapper[4727]: I1121 20:25:39.282667 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/29faf340-95f4-4bd3-bd87-f2e971a0e494-etc-swift\") pod \"swift-storage-0\" (UID: \"29faf340-95f4-4bd3-bd87-f2e971a0e494\") " pod="openstack/swift-storage-0" Nov 21 20:25:39 crc kubenswrapper[4727]: I1121 20:25:39.384043 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/29faf340-95f4-4bd3-bd87-f2e971a0e494-etc-swift\") pod \"swift-storage-0\" (UID: \"29faf340-95f4-4bd3-bd87-f2e971a0e494\") " pod="openstack/swift-storage-0" Nov 21 20:25:39 crc kubenswrapper[4727]: I1121 20:25:39.384139 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"29faf340-95f4-4bd3-bd87-f2e971a0e494\") " pod="openstack/swift-storage-0" Nov 21 20:25:39 crc kubenswrapper[4727]: I1121 20:25:39.384176 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/29faf340-95f4-4bd3-bd87-f2e971a0e494-lock\") pod \"swift-storage-0\" (UID: \"29faf340-95f4-4bd3-bd87-f2e971a0e494\") " pod="openstack/swift-storage-0" Nov 21 20:25:39 crc 
kubenswrapper[4727]: I1121 20:25:39.384218 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sckfv\" (UniqueName: \"kubernetes.io/projected/29faf340-95f4-4bd3-bd87-f2e971a0e494-kube-api-access-sckfv\") pod \"swift-storage-0\" (UID: \"29faf340-95f4-4bd3-bd87-f2e971a0e494\") " pod="openstack/swift-storage-0" Nov 21 20:25:39 crc kubenswrapper[4727]: I1121 20:25:39.384241 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/29faf340-95f4-4bd3-bd87-f2e971a0e494-cache\") pod \"swift-storage-0\" (UID: \"29faf340-95f4-4bd3-bd87-f2e971a0e494\") " pod="openstack/swift-storage-0" Nov 21 20:25:39 crc kubenswrapper[4727]: I1121 20:25:39.384717 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/29faf340-95f4-4bd3-bd87-f2e971a0e494-cache\") pod \"swift-storage-0\" (UID: \"29faf340-95f4-4bd3-bd87-f2e971a0e494\") " pod="openstack/swift-storage-0" Nov 21 20:25:39 crc kubenswrapper[4727]: I1121 20:25:39.384721 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/29faf340-95f4-4bd3-bd87-f2e971a0e494-lock\") pod \"swift-storage-0\" (UID: \"29faf340-95f4-4bd3-bd87-f2e971a0e494\") " pod="openstack/swift-storage-0" Nov 21 20:25:39 crc kubenswrapper[4727]: I1121 20:25:39.384747 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"29faf340-95f4-4bd3-bd87-f2e971a0e494\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/swift-storage-0" Nov 21 20:25:39 crc kubenswrapper[4727]: E1121 20:25:39.384817 4727 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 21 20:25:39 crc kubenswrapper[4727]: E1121 20:25:39.384837 4727 
projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 21 20:25:39 crc kubenswrapper[4727]: E1121 20:25:39.384899 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/29faf340-95f4-4bd3-bd87-f2e971a0e494-etc-swift podName:29faf340-95f4-4bd3-bd87-f2e971a0e494 nodeName:}" failed. No retries permitted until 2025-11-21 20:25:39.884884773 +0000 UTC m=+1145.071069817 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/29faf340-95f4-4bd3-bd87-f2e971a0e494-etc-swift") pod "swift-storage-0" (UID: "29faf340-95f4-4bd3-bd87-f2e971a0e494") : configmap "swift-ring-files" not found Nov 21 20:25:39 crc kubenswrapper[4727]: I1121 20:25:39.404984 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-c5eb-account-create-94gh4" event={"ID":"7c9240ef-7d44-4563-a271-6a540b902f9b","Type":"ContainerStarted","Data":"b29aa67b5f23e323c0f5936569afbb768f5d67741461f4d4addac2265051eed7"} Nov 21 20:25:39 crc kubenswrapper[4727]: I1121 20:25:39.405251 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sckfv\" (UniqueName: \"kubernetes.io/projected/29faf340-95f4-4bd3-bd87-f2e971a0e494-kube-api-access-sckfv\") pod \"swift-storage-0\" (UID: \"29faf340-95f4-4bd3-bd87-f2e971a0e494\") " pod="openstack/swift-storage-0" Nov 21 20:25:39 crc kubenswrapper[4727]: I1121 20:25:39.407256 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-kg8vm" event={"ID":"03533ce4-f69e-4a18-8b64-754b2ed7f789","Type":"ContainerStarted","Data":"5690b1528f414fa3630e617530a3081e873080b52d4add019205cf8c5ac1cedd"} Nov 21 20:25:39 crc kubenswrapper[4727]: I1121 20:25:39.410151 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-drvbw" 
event={"ID":"2ce9ffea-4675-4ed2-9b15-8a584708e173","Type":"ContainerStarted","Data":"f0db0171f4ba778f22bbc079e1c0a077402d327da26774d13faa358acd781ed8"} Nov 21 20:25:39 crc kubenswrapper[4727]: I1121 20:25:39.412508 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-bhnxh" event={"ID":"93c80b8a-f7fd-4877-a7de-aeaf68575dde","Type":"ContainerDied","Data":"8f57da04cca2c9e19aea8d26154aa9431d7c395a081514c88ba80ec4fd4d5f1c"} Nov 21 20:25:39 crc kubenswrapper[4727]: I1121 20:25:39.412556 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-bhnxh" Nov 21 20:25:39 crc kubenswrapper[4727]: I1121 20:25:39.412567 4727 scope.go:117] "RemoveContainer" containerID="3d888b49ca276c7340d6823a156dd93d76ef10256e9c4a551829a9325712add9" Nov 21 20:25:39 crc kubenswrapper[4727]: I1121 20:25:39.416805 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"29faf340-95f4-4bd3-bd87-f2e971a0e494\") " pod="openstack/swift-storage-0" Nov 21 20:25:39 crc kubenswrapper[4727]: I1121 20:25:39.457023 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-bhnxh"] Nov 21 20:25:39 crc kubenswrapper[4727]: I1121 20:25:39.464077 4727 scope.go:117] "RemoveContainer" containerID="1f2068061b5d263cc2e641c0a12f5a56d57761a71ec8c362ade7ed7ee5c4fede" Nov 21 20:25:39 crc kubenswrapper[4727]: I1121 20:25:39.465822 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-bhnxh"] Nov 21 20:25:39 crc kubenswrapper[4727]: I1121 20:25:39.525745 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93c80b8a-f7fd-4877-a7de-aeaf68575dde" path="/var/lib/kubelet/pods/93c80b8a-f7fd-4877-a7de-aeaf68575dde/volumes" Nov 21 20:25:39 crc kubenswrapper[4727]: I1121 20:25:39.895238 4727 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/29faf340-95f4-4bd3-bd87-f2e971a0e494-etc-swift\") pod \"swift-storage-0\" (UID: \"29faf340-95f4-4bd3-bd87-f2e971a0e494\") " pod="openstack/swift-storage-0" Nov 21 20:25:39 crc kubenswrapper[4727]: E1121 20:25:39.895448 4727 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 21 20:25:39 crc kubenswrapper[4727]: E1121 20:25:39.895483 4727 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 21 20:25:39 crc kubenswrapper[4727]: E1121 20:25:39.895575 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/29faf340-95f4-4bd3-bd87-f2e971a0e494-etc-swift podName:29faf340-95f4-4bd3-bd87-f2e971a0e494 nodeName:}" failed. No retries permitted until 2025-11-21 20:25:40.895544471 +0000 UTC m=+1146.081729555 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/29faf340-95f4-4bd3-bd87-f2e971a0e494-etc-swift") pod "swift-storage-0" (UID: "29faf340-95f4-4bd3-bd87-f2e971a0e494") : configmap "swift-ring-files" not found Nov 21 20:25:40 crc kubenswrapper[4727]: I1121 20:25:40.428463 4727 generic.go:334] "Generic (PLEG): container finished" podID="7c9240ef-7d44-4563-a271-6a540b902f9b" containerID="6c090a3bf269c18126a4d238684515c66ea87e07d08f2df7d5485ee1bf6fc342" exitCode=0 Nov 21 20:25:40 crc kubenswrapper[4727]: I1121 20:25:40.428543 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-c5eb-account-create-94gh4" event={"ID":"7c9240ef-7d44-4563-a271-6a540b902f9b","Type":"ContainerDied","Data":"6c090a3bf269c18126a4d238684515c66ea87e07d08f2df7d5485ee1bf6fc342"} Nov 21 20:25:40 crc kubenswrapper[4727]: I1121 20:25:40.438091 4727 generic.go:334] "Generic (PLEG): container finished" podID="03533ce4-f69e-4a18-8b64-754b2ed7f789" containerID="a515b1751c90c0c2004c2cd74dc7c6abcc533f0a38fc3a43101c0a0aac29e16f" exitCode=0 Nov 21 20:25:40 crc kubenswrapper[4727]: I1121 20:25:40.438393 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-kg8vm" event={"ID":"03533ce4-f69e-4a18-8b64-754b2ed7f789","Type":"ContainerDied","Data":"a515b1751c90c0c2004c2cd74dc7c6abcc533f0a38fc3a43101c0a0aac29e16f"} Nov 21 20:25:40 crc kubenswrapper[4727]: I1121 20:25:40.452942 4727 generic.go:334] "Generic (PLEG): container finished" podID="2ce9ffea-4675-4ed2-9b15-8a584708e173" containerID="6a2900f22baa23f3ffecb622d0c79aa7628f8931967192960f4bfbed77378dce" exitCode=0 Nov 21 20:25:40 crc kubenswrapper[4727]: I1121 20:25:40.453097 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-drvbw" event={"ID":"2ce9ffea-4675-4ed2-9b15-8a584708e173","Type":"ContainerDied","Data":"6a2900f22baa23f3ffecb622d0c79aa7628f8931967192960f4bfbed77378dce"} Nov 21 20:25:40 crc 
kubenswrapper[4727]: I1121 20:25:40.913784 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/29faf340-95f4-4bd3-bd87-f2e971a0e494-etc-swift\") pod \"swift-storage-0\" (UID: \"29faf340-95f4-4bd3-bd87-f2e971a0e494\") " pod="openstack/swift-storage-0" Nov 21 20:25:40 crc kubenswrapper[4727]: E1121 20:25:40.914008 4727 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 21 20:25:40 crc kubenswrapper[4727]: E1121 20:25:40.914029 4727 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 21 20:25:40 crc kubenswrapper[4727]: E1121 20:25:40.914087 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/29faf340-95f4-4bd3-bd87-f2e971a0e494-etc-swift podName:29faf340-95f4-4bd3-bd87-f2e971a0e494 nodeName:}" failed. No retries permitted until 2025-11-21 20:25:42.914070038 +0000 UTC m=+1148.100255082 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/29faf340-95f4-4bd3-bd87-f2e971a0e494-etc-swift") pod "swift-storage-0" (UID: "29faf340-95f4-4bd3-bd87-f2e971a0e494") : configmap "swift-ring-files" not found Nov 21 20:25:41 crc kubenswrapper[4727]: I1121 20:25:41.144460 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-spj24"] Nov 21 20:25:41 crc kubenswrapper[4727]: I1121 20:25:41.149765 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-spj24" Nov 21 20:25:41 crc kubenswrapper[4727]: I1121 20:25:41.154579 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-spj24"] Nov 21 20:25:41 crc kubenswrapper[4727]: I1121 20:25:41.221737 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vx8v2\" (UniqueName: \"kubernetes.io/projected/be3adfa2-622c-432c-b9e7-7530926b2ec4-kube-api-access-vx8v2\") pod \"glance-db-create-spj24\" (UID: \"be3adfa2-622c-432c-b9e7-7530926b2ec4\") " pod="openstack/glance-db-create-spj24" Nov 21 20:25:41 crc kubenswrapper[4727]: I1121 20:25:41.221839 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be3adfa2-622c-432c-b9e7-7530926b2ec4-operator-scripts\") pod \"glance-db-create-spj24\" (UID: \"be3adfa2-622c-432c-b9e7-7530926b2ec4\") " pod="openstack/glance-db-create-spj24" Nov 21 20:25:41 crc kubenswrapper[4727]: I1121 20:25:41.254520 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-8ec8-account-create-26qv7"] Nov 21 20:25:41 crc kubenswrapper[4727]: I1121 20:25:41.256497 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-8ec8-account-create-26qv7" Nov 21 20:25:41 crc kubenswrapper[4727]: I1121 20:25:41.262707 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Nov 21 20:25:41 crc kubenswrapper[4727]: I1121 20:25:41.266911 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-8ec8-account-create-26qv7"] Nov 21 20:25:41 crc kubenswrapper[4727]: I1121 20:25:41.323585 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vx8v2\" (UniqueName: \"kubernetes.io/projected/be3adfa2-622c-432c-b9e7-7530926b2ec4-kube-api-access-vx8v2\") pod \"glance-db-create-spj24\" (UID: \"be3adfa2-622c-432c-b9e7-7530926b2ec4\") " pod="openstack/glance-db-create-spj24" Nov 21 20:25:41 crc kubenswrapper[4727]: I1121 20:25:41.323661 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be3adfa2-622c-432c-b9e7-7530926b2ec4-operator-scripts\") pod \"glance-db-create-spj24\" (UID: \"be3adfa2-622c-432c-b9e7-7530926b2ec4\") " pod="openstack/glance-db-create-spj24" Nov 21 20:25:41 crc kubenswrapper[4727]: I1121 20:25:41.323685 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glpcp\" (UniqueName: \"kubernetes.io/projected/79b69708-cf69-43e5-9299-bbe3fb5b72f4-kube-api-access-glpcp\") pod \"glance-8ec8-account-create-26qv7\" (UID: \"79b69708-cf69-43e5-9299-bbe3fb5b72f4\") " pod="openstack/glance-8ec8-account-create-26qv7" Nov 21 20:25:41 crc kubenswrapper[4727]: I1121 20:25:41.323772 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79b69708-cf69-43e5-9299-bbe3fb5b72f4-operator-scripts\") pod \"glance-8ec8-account-create-26qv7\" (UID: \"79b69708-cf69-43e5-9299-bbe3fb5b72f4\") " 
pod="openstack/glance-8ec8-account-create-26qv7" Nov 21 20:25:41 crc kubenswrapper[4727]: I1121 20:25:41.324717 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be3adfa2-622c-432c-b9e7-7530926b2ec4-operator-scripts\") pod \"glance-db-create-spj24\" (UID: \"be3adfa2-622c-432c-b9e7-7530926b2ec4\") " pod="openstack/glance-db-create-spj24" Nov 21 20:25:41 crc kubenswrapper[4727]: I1121 20:25:41.345615 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vx8v2\" (UniqueName: \"kubernetes.io/projected/be3adfa2-622c-432c-b9e7-7530926b2ec4-kube-api-access-vx8v2\") pod \"glance-db-create-spj24\" (UID: \"be3adfa2-622c-432c-b9e7-7530926b2ec4\") " pod="openstack/glance-db-create-spj24" Nov 21 20:25:41 crc kubenswrapper[4727]: I1121 20:25:41.428261 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glpcp\" (UniqueName: \"kubernetes.io/projected/79b69708-cf69-43e5-9299-bbe3fb5b72f4-kube-api-access-glpcp\") pod \"glance-8ec8-account-create-26qv7\" (UID: \"79b69708-cf69-43e5-9299-bbe3fb5b72f4\") " pod="openstack/glance-8ec8-account-create-26qv7" Nov 21 20:25:41 crc kubenswrapper[4727]: I1121 20:25:41.428398 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79b69708-cf69-43e5-9299-bbe3fb5b72f4-operator-scripts\") pod \"glance-8ec8-account-create-26qv7\" (UID: \"79b69708-cf69-43e5-9299-bbe3fb5b72f4\") " pod="openstack/glance-8ec8-account-create-26qv7" Nov 21 20:25:41 crc kubenswrapper[4727]: I1121 20:25:41.429985 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79b69708-cf69-43e5-9299-bbe3fb5b72f4-operator-scripts\") pod \"glance-8ec8-account-create-26qv7\" (UID: \"79b69708-cf69-43e5-9299-bbe3fb5b72f4\") " 
pod="openstack/glance-8ec8-account-create-26qv7" Nov 21 20:25:41 crc kubenswrapper[4727]: I1121 20:25:41.448632 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glpcp\" (UniqueName: \"kubernetes.io/projected/79b69708-cf69-43e5-9299-bbe3fb5b72f4-kube-api-access-glpcp\") pod \"glance-8ec8-account-create-26qv7\" (UID: \"79b69708-cf69-43e5-9299-bbe3fb5b72f4\") " pod="openstack/glance-8ec8-account-create-26qv7" Nov 21 20:25:41 crc kubenswrapper[4727]: I1121 20:25:41.469835 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-spj24" Nov 21 20:25:41 crc kubenswrapper[4727]: I1121 20:25:41.588083 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-8ec8-account-create-26qv7" Nov 21 20:25:42 crc kubenswrapper[4727]: I1121 20:25:42.974703 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/29faf340-95f4-4bd3-bd87-f2e971a0e494-etc-swift\") pod \"swift-storage-0\" (UID: \"29faf340-95f4-4bd3-bd87-f2e971a0e494\") " pod="openstack/swift-storage-0" Nov 21 20:25:42 crc kubenswrapper[4727]: E1121 20:25:42.975170 4727 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 21 20:25:42 crc kubenswrapper[4727]: E1121 20:25:42.975383 4727 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 21 20:25:42 crc kubenswrapper[4727]: E1121 20:25:42.975441 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/29faf340-95f4-4bd3-bd87-f2e971a0e494-etc-swift podName:29faf340-95f4-4bd3-bd87-f2e971a0e494 nodeName:}" failed. No retries permitted until 2025-11-21 20:25:46.975423401 +0000 UTC m=+1152.161608445 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/29faf340-95f4-4bd3-bd87-f2e971a0e494-etc-swift") pod "swift-storage-0" (UID: "29faf340-95f4-4bd3-bd87-f2e971a0e494") : configmap "swift-ring-files" not found Nov 21 20:25:43 crc kubenswrapper[4727]: I1121 20:25:43.046288 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-c5eb-account-create-94gh4" Nov 21 20:25:43 crc kubenswrapper[4727]: I1121 20:25:43.059721 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-vrqcf"] Nov 21 20:25:43 crc kubenswrapper[4727]: E1121 20:25:43.061387 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c9240ef-7d44-4563-a271-6a540b902f9b" containerName="mariadb-account-create" Nov 21 20:25:43 crc kubenswrapper[4727]: I1121 20:25:43.061411 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c9240ef-7d44-4563-a271-6a540b902f9b" containerName="mariadb-account-create" Nov 21 20:25:43 crc kubenswrapper[4727]: I1121 20:25:43.061599 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-drvbw" Nov 21 20:25:43 crc kubenswrapper[4727]: I1121 20:25:43.061778 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c9240ef-7d44-4563-a271-6a540b902f9b" containerName="mariadb-account-create" Nov 21 20:25:43 crc kubenswrapper[4727]: I1121 20:25:43.062706 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-vrqcf" Nov 21 20:25:43 crc kubenswrapper[4727]: I1121 20:25:43.065679 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Nov 21 20:25:43 crc kubenswrapper[4727]: I1121 20:25:43.065836 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Nov 21 20:25:43 crc kubenswrapper[4727]: I1121 20:25:43.065940 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 21 20:25:43 crc kubenswrapper[4727]: I1121 20:25:43.071819 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-vrqcf"] Nov 21 20:25:43 crc kubenswrapper[4727]: I1121 20:25:43.178811 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c9240ef-7d44-4563-a271-6a540b902f9b-operator-scripts\") pod \"7c9240ef-7d44-4563-a271-6a540b902f9b\" (UID: \"7c9240ef-7d44-4563-a271-6a540b902f9b\") " Nov 21 20:25:43 crc kubenswrapper[4727]: I1121 20:25:43.178877 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mxpq\" (UniqueName: \"kubernetes.io/projected/2ce9ffea-4675-4ed2-9b15-8a584708e173-kube-api-access-5mxpq\") pod \"2ce9ffea-4675-4ed2-9b15-8a584708e173\" (UID: \"2ce9ffea-4675-4ed2-9b15-8a584708e173\") " Nov 21 20:25:43 crc kubenswrapper[4727]: I1121 20:25:43.179008 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-527jj\" (UniqueName: \"kubernetes.io/projected/7c9240ef-7d44-4563-a271-6a540b902f9b-kube-api-access-527jj\") pod \"7c9240ef-7d44-4563-a271-6a540b902f9b\" (UID: \"7c9240ef-7d44-4563-a271-6a540b902f9b\") " Nov 21 20:25:43 crc kubenswrapper[4727]: I1121 20:25:43.179136 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/2ce9ffea-4675-4ed2-9b15-8a584708e173-operator-scripts\") pod \"2ce9ffea-4675-4ed2-9b15-8a584708e173\" (UID: \"2ce9ffea-4675-4ed2-9b15-8a584708e173\") " Nov 21 20:25:43 crc kubenswrapper[4727]: I1121 20:25:43.179471 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c9240ef-7d44-4563-a271-6a540b902f9b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7c9240ef-7d44-4563-a271-6a540b902f9b" (UID: "7c9240ef-7d44-4563-a271-6a540b902f9b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:25:43 crc kubenswrapper[4727]: I1121 20:25:43.179577 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7c26\" (UniqueName: \"kubernetes.io/projected/bf989668-3fb7-491d-abf0-e2991e327690-kube-api-access-v7c26\") pod \"swift-ring-rebalance-vrqcf\" (UID: \"bf989668-3fb7-491d-abf0-e2991e327690\") " pod="openstack/swift-ring-rebalance-vrqcf" Nov 21 20:25:43 crc kubenswrapper[4727]: I1121 20:25:43.179643 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf989668-3fb7-491d-abf0-e2991e327690-combined-ca-bundle\") pod \"swift-ring-rebalance-vrqcf\" (UID: \"bf989668-3fb7-491d-abf0-e2991e327690\") " pod="openstack/swift-ring-rebalance-vrqcf" Nov 21 20:25:43 crc kubenswrapper[4727]: I1121 20:25:43.179802 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bf989668-3fb7-491d-abf0-e2991e327690-scripts\") pod \"swift-ring-rebalance-vrqcf\" (UID: \"bf989668-3fb7-491d-abf0-e2991e327690\") " pod="openstack/swift-ring-rebalance-vrqcf" Nov 21 20:25:43 crc kubenswrapper[4727]: I1121 20:25:43.179852 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bf989668-3fb7-491d-abf0-e2991e327690-etc-swift\") pod \"swift-ring-rebalance-vrqcf\" (UID: \"bf989668-3fb7-491d-abf0-e2991e327690\") " pod="openstack/swift-ring-rebalance-vrqcf" Nov 21 20:25:43 crc kubenswrapper[4727]: I1121 20:25:43.179891 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bf989668-3fb7-491d-abf0-e2991e327690-ring-data-devices\") pod \"swift-ring-rebalance-vrqcf\" (UID: \"bf989668-3fb7-491d-abf0-e2991e327690\") " pod="openstack/swift-ring-rebalance-vrqcf" Nov 21 20:25:43 crc kubenswrapper[4727]: I1121 20:25:43.179949 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bf989668-3fb7-491d-abf0-e2991e327690-swiftconf\") pod \"swift-ring-rebalance-vrqcf\" (UID: \"bf989668-3fb7-491d-abf0-e2991e327690\") " pod="openstack/swift-ring-rebalance-vrqcf" Nov 21 20:25:43 crc kubenswrapper[4727]: I1121 20:25:43.180008 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bf989668-3fb7-491d-abf0-e2991e327690-dispersionconf\") pod \"swift-ring-rebalance-vrqcf\" (UID: \"bf989668-3fb7-491d-abf0-e2991e327690\") " pod="openstack/swift-ring-rebalance-vrqcf" Nov 21 20:25:43 crc kubenswrapper[4727]: I1121 20:25:43.180034 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ce9ffea-4675-4ed2-9b15-8a584708e173-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2ce9ffea-4675-4ed2-9b15-8a584708e173" (UID: "2ce9ffea-4675-4ed2-9b15-8a584708e173"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:25:43 crc kubenswrapper[4727]: I1121 20:25:43.180165 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c9240ef-7d44-4563-a271-6a540b902f9b-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 20:25:43 crc kubenswrapper[4727]: I1121 20:25:43.188205 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c9240ef-7d44-4563-a271-6a540b902f9b-kube-api-access-527jj" (OuterVolumeSpecName: "kube-api-access-527jj") pod "7c9240ef-7d44-4563-a271-6a540b902f9b" (UID: "7c9240ef-7d44-4563-a271-6a540b902f9b"). InnerVolumeSpecName "kube-api-access-527jj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:25:43 crc kubenswrapper[4727]: I1121 20:25:43.195372 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ce9ffea-4675-4ed2-9b15-8a584708e173-kube-api-access-5mxpq" (OuterVolumeSpecName: "kube-api-access-5mxpq") pod "2ce9ffea-4675-4ed2-9b15-8a584708e173" (UID: "2ce9ffea-4675-4ed2-9b15-8a584708e173"). InnerVolumeSpecName "kube-api-access-5mxpq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:25:43 crc kubenswrapper[4727]: I1121 20:25:43.282247 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7c26\" (UniqueName: \"kubernetes.io/projected/bf989668-3fb7-491d-abf0-e2991e327690-kube-api-access-v7c26\") pod \"swift-ring-rebalance-vrqcf\" (UID: \"bf989668-3fb7-491d-abf0-e2991e327690\") " pod="openstack/swift-ring-rebalance-vrqcf" Nov 21 20:25:43 crc kubenswrapper[4727]: I1121 20:25:43.282314 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf989668-3fb7-491d-abf0-e2991e327690-combined-ca-bundle\") pod \"swift-ring-rebalance-vrqcf\" (UID: \"bf989668-3fb7-491d-abf0-e2991e327690\") " pod="openstack/swift-ring-rebalance-vrqcf" Nov 21 20:25:43 crc kubenswrapper[4727]: I1121 20:25:43.282422 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bf989668-3fb7-491d-abf0-e2991e327690-scripts\") pod \"swift-ring-rebalance-vrqcf\" (UID: \"bf989668-3fb7-491d-abf0-e2991e327690\") " pod="openstack/swift-ring-rebalance-vrqcf" Nov 21 20:25:43 crc kubenswrapper[4727]: I1121 20:25:43.282459 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bf989668-3fb7-491d-abf0-e2991e327690-etc-swift\") pod \"swift-ring-rebalance-vrqcf\" (UID: \"bf989668-3fb7-491d-abf0-e2991e327690\") " pod="openstack/swift-ring-rebalance-vrqcf" Nov 21 20:25:43 crc kubenswrapper[4727]: I1121 20:25:43.282504 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bf989668-3fb7-491d-abf0-e2991e327690-ring-data-devices\") pod \"swift-ring-rebalance-vrqcf\" (UID: \"bf989668-3fb7-491d-abf0-e2991e327690\") " pod="openstack/swift-ring-rebalance-vrqcf" Nov 21 20:25:43 
crc kubenswrapper[4727]: I1121 20:25:43.282570 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bf989668-3fb7-491d-abf0-e2991e327690-swiftconf\") pod \"swift-ring-rebalance-vrqcf\" (UID: \"bf989668-3fb7-491d-abf0-e2991e327690\") " pod="openstack/swift-ring-rebalance-vrqcf" Nov 21 20:25:43 crc kubenswrapper[4727]: I1121 20:25:43.282591 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bf989668-3fb7-491d-abf0-e2991e327690-dispersionconf\") pod \"swift-ring-rebalance-vrqcf\" (UID: \"bf989668-3fb7-491d-abf0-e2991e327690\") " pod="openstack/swift-ring-rebalance-vrqcf" Nov 21 20:25:43 crc kubenswrapper[4727]: I1121 20:25:43.282676 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mxpq\" (UniqueName: \"kubernetes.io/projected/2ce9ffea-4675-4ed2-9b15-8a584708e173-kube-api-access-5mxpq\") on node \"crc\" DevicePath \"\"" Nov 21 20:25:43 crc kubenswrapper[4727]: I1121 20:25:43.282687 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-527jj\" (UniqueName: \"kubernetes.io/projected/7c9240ef-7d44-4563-a271-6a540b902f9b-kube-api-access-527jj\") on node \"crc\" DevicePath \"\"" Nov 21 20:25:43 crc kubenswrapper[4727]: I1121 20:25:43.282697 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ce9ffea-4675-4ed2-9b15-8a584708e173-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 20:25:43 crc kubenswrapper[4727]: I1121 20:25:43.283024 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bf989668-3fb7-491d-abf0-e2991e327690-etc-swift\") pod \"swift-ring-rebalance-vrqcf\" (UID: \"bf989668-3fb7-491d-abf0-e2991e327690\") " pod="openstack/swift-ring-rebalance-vrqcf" Nov 21 20:25:43 crc kubenswrapper[4727]: I1121 
20:25:43.283541 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bf989668-3fb7-491d-abf0-e2991e327690-ring-data-devices\") pod \"swift-ring-rebalance-vrqcf\" (UID: \"bf989668-3fb7-491d-abf0-e2991e327690\") " pod="openstack/swift-ring-rebalance-vrqcf"
Nov 21 20:25:43 crc kubenswrapper[4727]: I1121 20:25:43.283762 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bf989668-3fb7-491d-abf0-e2991e327690-scripts\") pod \"swift-ring-rebalance-vrqcf\" (UID: \"bf989668-3fb7-491d-abf0-e2991e327690\") " pod="openstack/swift-ring-rebalance-vrqcf"
Nov 21 20:25:43 crc kubenswrapper[4727]: I1121 20:25:43.285696 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bf989668-3fb7-491d-abf0-e2991e327690-dispersionconf\") pod \"swift-ring-rebalance-vrqcf\" (UID: \"bf989668-3fb7-491d-abf0-e2991e327690\") " pod="openstack/swift-ring-rebalance-vrqcf"
Nov 21 20:25:43 crc kubenswrapper[4727]: I1121 20:25:43.285842 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bf989668-3fb7-491d-abf0-e2991e327690-swiftconf\") pod \"swift-ring-rebalance-vrqcf\" (UID: \"bf989668-3fb7-491d-abf0-e2991e327690\") " pod="openstack/swift-ring-rebalance-vrqcf"
Nov 21 20:25:43 crc kubenswrapper[4727]: I1121 20:25:43.286665 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf989668-3fb7-491d-abf0-e2991e327690-combined-ca-bundle\") pod \"swift-ring-rebalance-vrqcf\" (UID: \"bf989668-3fb7-491d-abf0-e2991e327690\") " pod="openstack/swift-ring-rebalance-vrqcf"
Nov 21 20:25:43 crc kubenswrapper[4727]: I1121 20:25:43.301045 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7c26\" (UniqueName: \"kubernetes.io/projected/bf989668-3fb7-491d-abf0-e2991e327690-kube-api-access-v7c26\") pod \"swift-ring-rebalance-vrqcf\" (UID: \"bf989668-3fb7-491d-abf0-e2991e327690\") " pod="openstack/swift-ring-rebalance-vrqcf"
Nov 21 20:25:43 crc kubenswrapper[4727]: I1121 20:25:43.326906 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-spj24"]
Nov 21 20:25:43 crc kubenswrapper[4727]: W1121 20:25:43.330029 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe3adfa2_622c_432c_b9e7_7530926b2ec4.slice/crio-0b94209f18beb157d6f4f9dbd1b778d4856264a0b62dd95ea36f8156623b165a WatchSource:0}: Error finding container 0b94209f18beb157d6f4f9dbd1b778d4856264a0b62dd95ea36f8156623b165a: Status 404 returned error can't find the container with id 0b94209f18beb157d6f4f9dbd1b778d4856264a0b62dd95ea36f8156623b165a
Nov 21 20:25:43 crc kubenswrapper[4727]: I1121 20:25:43.335430 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 21 20:25:43 crc kubenswrapper[4727]: I1121 20:25:43.335506 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 21 20:25:43 crc kubenswrapper[4727]: I1121 20:25:43.350890 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-8ec8-account-create-26qv7"]
Nov 21 20:25:43 crc kubenswrapper[4727]: W1121 20:25:43.363299 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79b69708_cf69_43e5_9299_bbe3fb5b72f4.slice/crio-548eea13c1d8ab641e71b585b4ff40dc34a1defe0acfdce54b8e96c79718887d WatchSource:0}: Error finding container 548eea13c1d8ab641e71b585b4ff40dc34a1defe0acfdce54b8e96c79718887d: Status 404 returned error can't find the container with id 548eea13c1d8ab641e71b585b4ff40dc34a1defe0acfdce54b8e96c79718887d
Nov 21 20:25:43 crc kubenswrapper[4727]: I1121 20:25:43.403895 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-vrqcf"
Nov 21 20:25:43 crc kubenswrapper[4727]: I1121 20:25:43.485706 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-kg8vm" event={"ID":"03533ce4-f69e-4a18-8b64-754b2ed7f789","Type":"ContainerStarted","Data":"fae0c0ac521e31710d16630121b77ae9ac4f430333fabdb8ac71af8ca90d13f3"}
Nov 21 20:25:43 crc kubenswrapper[4727]: I1121 20:25:43.486083 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-kg8vm"
Nov 21 20:25:43 crc kubenswrapper[4727]: I1121 20:25:43.487314 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-drvbw"
Nov 21 20:25:43 crc kubenswrapper[4727]: I1121 20:25:43.487344 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-drvbw" event={"ID":"2ce9ffea-4675-4ed2-9b15-8a584708e173","Type":"ContainerDied","Data":"f0db0171f4ba778f22bbc079e1c0a077402d327da26774d13faa358acd781ed8"}
Nov 21 20:25:43 crc kubenswrapper[4727]: I1121 20:25:43.487374 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0db0171f4ba778f22bbc079e1c0a077402d327da26774d13faa358acd781ed8"
Nov 21 20:25:43 crc kubenswrapper[4727]: I1121 20:25:43.490671 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8ec8-account-create-26qv7" event={"ID":"79b69708-cf69-43e5-9299-bbe3fb5b72f4","Type":"ContainerStarted","Data":"548eea13c1d8ab641e71b585b4ff40dc34a1defe0acfdce54b8e96c79718887d"}
Nov 21 20:25:43 crc kubenswrapper[4727]: I1121 20:25:43.492950 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"5a7c7dad-b024-4e09-b455-662514be19f2","Type":"ContainerStarted","Data":"282045bc3e701d3d33f8e3229e2c6828ddc9877abf750efa7398a0a1b99c73d3"}
Nov 21 20:25:43 crc kubenswrapper[4727]: I1121 20:25:43.493257 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Nov 21 20:25:43 crc kubenswrapper[4727]: I1121 20:25:43.511924 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-kg8vm" podStartSLOduration=6.511904787 podStartE2EDuration="6.511904787s" podCreationTimestamp="2025-11-21 20:25:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:25:43.508454471 +0000 UTC m=+1148.694639515" watchObservedRunningTime="2025-11-21 20:25:43.511904787 +0000 UTC m=+1148.698089831"
Nov 21 20:25:43 crc kubenswrapper[4727]: I1121 20:25:43.521557 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-c5eb-account-create-94gh4"
Nov 21 20:25:43 crc kubenswrapper[4727]: I1121 20:25:43.540944 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=13.059465622 podStartE2EDuration="46.540923721s" podCreationTimestamp="2025-11-21 20:24:57 +0000 UTC" firstStartedPulling="2025-11-21 20:25:09.391435135 +0000 UTC m=+1114.577620169" lastFinishedPulling="2025-11-21 20:25:42.872893234 +0000 UTC m=+1148.059078268" observedRunningTime="2025-11-21 20:25:43.521197305 +0000 UTC m=+1148.707382349" watchObservedRunningTime="2025-11-21 20:25:43.540923721 +0000 UTC m=+1148.727108765"
Nov 21 20:25:43 crc kubenswrapper[4727]: I1121 20:25:43.548531 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-spj24" podStartSLOduration=2.548516099 podStartE2EDuration="2.548516099s" podCreationTimestamp="2025-11-21 20:25:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:25:43.540305827 +0000 UTC m=+1148.726490871" watchObservedRunningTime="2025-11-21 20:25:43.548516099 +0000 UTC m=+1148.734701143"
Nov 21 20:25:43 crc kubenswrapper[4727]: I1121 20:25:43.555082 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-spj24" event={"ID":"be3adfa2-622c-432c-b9e7-7530926b2ec4","Type":"ContainerStarted","Data":"0b94209f18beb157d6f4f9dbd1b778d4856264a0b62dd95ea36f8156623b165a"}
Nov 21 20:25:43 crc kubenswrapper[4727]: I1121 20:25:43.555125 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9","Type":"ContainerStarted","Data":"35535ba58a5198d1fa8adf03965f14564b087d25988ac8bdcdb7845859f4cc36"}
Nov 21 20:25:43 crc kubenswrapper[4727]: I1121 20:25:43.555137 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-c5eb-account-create-94gh4" event={"ID":"7c9240ef-7d44-4563-a271-6a540b902f9b","Type":"ContainerDied","Data":"b29aa67b5f23e323c0f5936569afbb768f5d67741461f4d4addac2265051eed7"}
Nov 21 20:25:43 crc kubenswrapper[4727]: I1121 20:25:43.555413 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b29aa67b5f23e323c0f5936569afbb768f5d67741461f4d4addac2265051eed7"
Nov 21 20:25:43 crc kubenswrapper[4727]: I1121 20:25:43.919384 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-vrqcf"]
Nov 21 20:25:43 crc kubenswrapper[4727]: W1121 20:25:43.919453 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf989668_3fb7_491d_abf0_e2991e327690.slice/crio-8af7e10c566f667abb81b849180834b5392de01d4a9cb932cda761b099ec2571 WatchSource:0}: Error finding container 8af7e10c566f667abb81b849180834b5392de01d4a9cb932cda761b099ec2571: Status 404 returned error can't find the container with id 8af7e10c566f667abb81b849180834b5392de01d4a9cb932cda761b099ec2571
Nov 21 20:25:44 crc kubenswrapper[4727]: I1121 20:25:44.536010 4727 generic.go:334] "Generic (PLEG): container finished" podID="79b69708-cf69-43e5-9299-bbe3fb5b72f4" containerID="5c8c3dc4d59053173950e7b38bab1c8e5215124d824584e09975bf28437d8869" exitCode=0
Nov 21 20:25:44 crc kubenswrapper[4727]: I1121 20:25:44.536153 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8ec8-account-create-26qv7" event={"ID":"79b69708-cf69-43e5-9299-bbe3fb5b72f4","Type":"ContainerDied","Data":"5c8c3dc4d59053173950e7b38bab1c8e5215124d824584e09975bf28437d8869"}
Nov 21 20:25:44 crc kubenswrapper[4727]: I1121 20:25:44.538015 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-vrqcf" event={"ID":"bf989668-3fb7-491d-abf0-e2991e327690","Type":"ContainerStarted","Data":"8af7e10c566f667abb81b849180834b5392de01d4a9cb932cda761b099ec2571"}
Nov 21 20:25:44 crc kubenswrapper[4727]: I1121 20:25:44.540874 4727 generic.go:334] "Generic (PLEG): container finished" podID="be3adfa2-622c-432c-b9e7-7530926b2ec4" containerID="79faaabeadf7ac09ea73c9e84dcaed2da66b92e28754d85d566fd0ae1d59ff90" exitCode=0
Nov 21 20:25:44 crc kubenswrapper[4727]: I1121 20:25:44.540937 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-spj24" event={"ID":"be3adfa2-622c-432c-b9e7-7530926b2ec4","Type":"ContainerDied","Data":"79faaabeadf7ac09ea73c9e84dcaed2da66b92e28754d85d566fd0ae1d59ff90"}
Nov 21 20:25:45 crc kubenswrapper[4727]: I1121 20:25:45.158467 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-777bc7647-cdqwj" podUID="4d5b5d37-a53b-4d18-ac63-f53e823dab2c" containerName="console" containerID="cri-o://f9d24eaa330e498f4977caf8a713f76ce84d446343a6b7a1e0b61e7c651d57db" gracePeriod=15
Nov 21 20:25:45 crc kubenswrapper[4727]: I1121 20:25:45.568528 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-777bc7647-cdqwj_4d5b5d37-a53b-4d18-ac63-f53e823dab2c/console/0.log"
Nov 21 20:25:45 crc kubenswrapper[4727]: I1121 20:25:45.568598 4727 generic.go:334] "Generic (PLEG): container finished" podID="4d5b5d37-a53b-4d18-ac63-f53e823dab2c" containerID="f9d24eaa330e498f4977caf8a713f76ce84d446343a6b7a1e0b61e7c651d57db" exitCode=2
Nov 21 20:25:45 crc kubenswrapper[4727]: I1121 20:25:45.568815 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-777bc7647-cdqwj" event={"ID":"4d5b5d37-a53b-4d18-ac63-f53e823dab2c","Type":"ContainerDied","Data":"f9d24eaa330e498f4977caf8a713f76ce84d446343a6b7a1e0b61e7c651d57db"}
Nov 21 20:25:45 crc kubenswrapper[4727]: I1121 20:25:45.581872 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-fbkmd"]
Nov 21 20:25:45 crc kubenswrapper[4727]: E1121 20:25:45.582611 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ce9ffea-4675-4ed2-9b15-8a584708e173" containerName="mariadb-database-create"
Nov 21 20:25:45 crc kubenswrapper[4727]: I1121 20:25:45.582634 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ce9ffea-4675-4ed2-9b15-8a584708e173" containerName="mariadb-database-create"
Nov 21 20:25:45 crc kubenswrapper[4727]: I1121 20:25:45.582875 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ce9ffea-4675-4ed2-9b15-8a584708e173" containerName="mariadb-database-create"
Nov 21 20:25:45 crc kubenswrapper[4727]: I1121 20:25:45.584266 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-fbkmd"
Nov 21 20:25:45 crc kubenswrapper[4727]: I1121 20:25:45.601670 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-fbkmd"]
Nov 21 20:25:45 crc kubenswrapper[4727]: I1121 20:25:45.700016 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-8b1a-account-create-gf7rm"]
Nov 21 20:25:45 crc kubenswrapper[4727]: I1121 20:25:45.701606 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-8b1a-account-create-gf7rm"
Nov 21 20:25:45 crc kubenswrapper[4727]: I1121 20:25:45.703825 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Nov 21 20:25:45 crc kubenswrapper[4727]: I1121 20:25:45.709982 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-8b1a-account-create-gf7rm"]
Nov 21 20:25:45 crc kubenswrapper[4727]: I1121 20:25:45.735168 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a3aca9be-2c44-48aa-b561-ad796cee0014-operator-scripts\") pod \"keystone-db-create-fbkmd\" (UID: \"a3aca9be-2c44-48aa-b561-ad796cee0014\") " pod="openstack/keystone-db-create-fbkmd"
Nov 21 20:25:45 crc kubenswrapper[4727]: I1121 20:25:45.735308 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wk4xw\" (UniqueName: \"kubernetes.io/projected/a3aca9be-2c44-48aa-b561-ad796cee0014-kube-api-access-wk4xw\") pod \"keystone-db-create-fbkmd\" (UID: \"a3aca9be-2c44-48aa-b561-ad796cee0014\") " pod="openstack/keystone-db-create-fbkmd"
Nov 21 20:25:45 crc kubenswrapper[4727]: I1121 20:25:45.836683 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lq26\" (UniqueName: \"kubernetes.io/projected/810aff49-a935-4a69-a3ad-2c26ab66ead0-kube-api-access-9lq26\") pod \"keystone-8b1a-account-create-gf7rm\" (UID: \"810aff49-a935-4a69-a3ad-2c26ab66ead0\") " pod="openstack/keystone-8b1a-account-create-gf7rm"
Nov 21 20:25:45 crc kubenswrapper[4727]: I1121 20:25:45.836753 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wk4xw\" (UniqueName: \"kubernetes.io/projected/a3aca9be-2c44-48aa-b561-ad796cee0014-kube-api-access-wk4xw\") pod \"keystone-db-create-fbkmd\" (UID: \"a3aca9be-2c44-48aa-b561-ad796cee0014\") " pod="openstack/keystone-db-create-fbkmd"
Nov 21 20:25:45 crc kubenswrapper[4727]: I1121 20:25:45.836889 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a3aca9be-2c44-48aa-b561-ad796cee0014-operator-scripts\") pod \"keystone-db-create-fbkmd\" (UID: \"a3aca9be-2c44-48aa-b561-ad796cee0014\") " pod="openstack/keystone-db-create-fbkmd"
Nov 21 20:25:45 crc kubenswrapper[4727]: I1121 20:25:45.837015 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/810aff49-a935-4a69-a3ad-2c26ab66ead0-operator-scripts\") pod \"keystone-8b1a-account-create-gf7rm\" (UID: \"810aff49-a935-4a69-a3ad-2c26ab66ead0\") " pod="openstack/keystone-8b1a-account-create-gf7rm"
Nov 21 20:25:45 crc kubenswrapper[4727]: I1121 20:25:45.838125 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a3aca9be-2c44-48aa-b561-ad796cee0014-operator-scripts\") pod \"keystone-db-create-fbkmd\" (UID: \"a3aca9be-2c44-48aa-b561-ad796cee0014\") " pod="openstack/keystone-db-create-fbkmd"
Nov 21 20:25:45 crc kubenswrapper[4727]: I1121 20:25:45.868529 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wk4xw\" (UniqueName: \"kubernetes.io/projected/a3aca9be-2c44-48aa-b561-ad796cee0014-kube-api-access-wk4xw\") pod \"keystone-db-create-fbkmd\" (UID: \"a3aca9be-2c44-48aa-b561-ad796cee0014\") " pod="openstack/keystone-db-create-fbkmd"
Nov 21 20:25:45 crc kubenswrapper[4727]: I1121 20:25:45.912853 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-fbkmd"
Nov 21 20:25:45 crc kubenswrapper[4727]: I1121 20:25:45.939299 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/810aff49-a935-4a69-a3ad-2c26ab66ead0-operator-scripts\") pod \"keystone-8b1a-account-create-gf7rm\" (UID: \"810aff49-a935-4a69-a3ad-2c26ab66ead0\") " pod="openstack/keystone-8b1a-account-create-gf7rm"
Nov 21 20:25:45 crc kubenswrapper[4727]: I1121 20:25:45.939371 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lq26\" (UniqueName: \"kubernetes.io/projected/810aff49-a935-4a69-a3ad-2c26ab66ead0-kube-api-access-9lq26\") pod \"keystone-8b1a-account-create-gf7rm\" (UID: \"810aff49-a935-4a69-a3ad-2c26ab66ead0\") " pod="openstack/keystone-8b1a-account-create-gf7rm"
Nov 21 20:25:45 crc kubenswrapper[4727]: I1121 20:25:45.940451 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/810aff49-a935-4a69-a3ad-2c26ab66ead0-operator-scripts\") pod \"keystone-8b1a-account-create-gf7rm\" (UID: \"810aff49-a935-4a69-a3ad-2c26ab66ead0\") " pod="openstack/keystone-8b1a-account-create-gf7rm"
Nov 21 20:25:45 crc kubenswrapper[4727]: I1121 20:25:45.948177 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-cm9v5"]
Nov 21 20:25:45 crc kubenswrapper[4727]: I1121 20:25:45.949518 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-cm9v5"
Nov 21 20:25:46 crc kubenswrapper[4727]: I1121 20:25:46.026250 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-cm9v5"]
Nov 21 20:25:46 crc kubenswrapper[4727]: I1121 20:25:46.059285 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lq26\" (UniqueName: \"kubernetes.io/projected/810aff49-a935-4a69-a3ad-2c26ab66ead0-kube-api-access-9lq26\") pod \"keystone-8b1a-account-create-gf7rm\" (UID: \"810aff49-a935-4a69-a3ad-2c26ab66ead0\") " pod="openstack/keystone-8b1a-account-create-gf7rm"
Nov 21 20:25:46 crc kubenswrapper[4727]: I1121 20:25:46.075363 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c94f4\" (UniqueName: \"kubernetes.io/projected/74333548-19eb-4237-8b46-2fb41cc613ae-kube-api-access-c94f4\") pod \"placement-db-create-cm9v5\" (UID: \"74333548-19eb-4237-8b46-2fb41cc613ae\") " pod="openstack/placement-db-create-cm9v5"
Nov 21 20:25:46 crc kubenswrapper[4727]: I1121 20:25:46.075815 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/74333548-19eb-4237-8b46-2fb41cc613ae-operator-scripts\") pod \"placement-db-create-cm9v5\" (UID: \"74333548-19eb-4237-8b46-2fb41cc613ae\") " pod="openstack/placement-db-create-cm9v5"
Nov 21 20:25:46 crc kubenswrapper[4727]: I1121 20:25:46.100102 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-ec7d-account-create-lzk75"]
Nov 21 20:25:46 crc kubenswrapper[4727]: I1121 20:25:46.101313 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-ec7d-account-create-lzk75"
Nov 21 20:25:46 crc kubenswrapper[4727]: I1121 20:25:46.103622 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Nov 21 20:25:46 crc kubenswrapper[4727]: I1121 20:25:46.105561 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-ec7d-account-create-lzk75"]
Nov 21 20:25:46 crc kubenswrapper[4727]: I1121 20:25:46.177274 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c94f4\" (UniqueName: \"kubernetes.io/projected/74333548-19eb-4237-8b46-2fb41cc613ae-kube-api-access-c94f4\") pod \"placement-db-create-cm9v5\" (UID: \"74333548-19eb-4237-8b46-2fb41cc613ae\") " pod="openstack/placement-db-create-cm9v5"
Nov 21 20:25:46 crc kubenswrapper[4727]: I1121 20:25:46.177414 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/74333548-19eb-4237-8b46-2fb41cc613ae-operator-scripts\") pod \"placement-db-create-cm9v5\" (UID: \"74333548-19eb-4237-8b46-2fb41cc613ae\") " pod="openstack/placement-db-create-cm9v5"
Nov 21 20:25:46 crc kubenswrapper[4727]: I1121 20:25:46.178232 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/74333548-19eb-4237-8b46-2fb41cc613ae-operator-scripts\") pod \"placement-db-create-cm9v5\" (UID: \"74333548-19eb-4237-8b46-2fb41cc613ae\") " pod="openstack/placement-db-create-cm9v5"
Nov 21 20:25:46 crc kubenswrapper[4727]: I1121 20:25:46.199566 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c94f4\" (UniqueName: \"kubernetes.io/projected/74333548-19eb-4237-8b46-2fb41cc613ae-kube-api-access-c94f4\") pod \"placement-db-create-cm9v5\" (UID: \"74333548-19eb-4237-8b46-2fb41cc613ae\") " pod="openstack/placement-db-create-cm9v5"
Nov 21 20:25:46 crc kubenswrapper[4727]: I1121 20:25:46.280553 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzmzf\" (UniqueName: \"kubernetes.io/projected/12ed8782-7884-43ef-98dc-7889dc0c4429-kube-api-access-zzmzf\") pod \"placement-ec7d-account-create-lzk75\" (UID: \"12ed8782-7884-43ef-98dc-7889dc0c4429\") " pod="openstack/placement-ec7d-account-create-lzk75"
Nov 21 20:25:46 crc kubenswrapper[4727]: I1121 20:25:46.280630 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12ed8782-7884-43ef-98dc-7889dc0c4429-operator-scripts\") pod \"placement-ec7d-account-create-lzk75\" (UID: \"12ed8782-7884-43ef-98dc-7889dc0c4429\") " pod="openstack/placement-ec7d-account-create-lzk75"
Nov 21 20:25:46 crc kubenswrapper[4727]: I1121 20:25:46.317099 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-8b1a-account-create-gf7rm"
Nov 21 20:25:46 crc kubenswrapper[4727]: I1121 20:25:46.382416 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzmzf\" (UniqueName: \"kubernetes.io/projected/12ed8782-7884-43ef-98dc-7889dc0c4429-kube-api-access-zzmzf\") pod \"placement-ec7d-account-create-lzk75\" (UID: \"12ed8782-7884-43ef-98dc-7889dc0c4429\") " pod="openstack/placement-ec7d-account-create-lzk75"
Nov 21 20:25:46 crc kubenswrapper[4727]: I1121 20:25:46.382475 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12ed8782-7884-43ef-98dc-7889dc0c4429-operator-scripts\") pod \"placement-ec7d-account-create-lzk75\" (UID: \"12ed8782-7884-43ef-98dc-7889dc0c4429\") " pod="openstack/placement-ec7d-account-create-lzk75"
Nov 21 20:25:46 crc kubenswrapper[4727]: I1121 20:25:46.383537 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12ed8782-7884-43ef-98dc-7889dc0c4429-operator-scripts\") pod \"placement-ec7d-account-create-lzk75\" (UID: \"12ed8782-7884-43ef-98dc-7889dc0c4429\") " pod="openstack/placement-ec7d-account-create-lzk75"
Nov 21 20:25:46 crc kubenswrapper[4727]: I1121 20:25:46.393571 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-cm9v5"
Nov 21 20:25:46 crc kubenswrapper[4727]: I1121 20:25:46.402153 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzmzf\" (UniqueName: \"kubernetes.io/projected/12ed8782-7884-43ef-98dc-7889dc0c4429-kube-api-access-zzmzf\") pod \"placement-ec7d-account-create-lzk75\" (UID: \"12ed8782-7884-43ef-98dc-7889dc0c4429\") " pod="openstack/placement-ec7d-account-create-lzk75"
Nov 21 20:25:46 crc kubenswrapper[4727]: I1121 20:25:46.423606 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-ec7d-account-create-lzk75"
Nov 21 20:25:46 crc kubenswrapper[4727]: I1121 20:25:46.580006 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9","Type":"ContainerStarted","Data":"4700fc126259d977a4e2bff9636aae21565da7a6c14974ec5ad1f684fbb10b2b"}
Nov 21 20:25:46 crc kubenswrapper[4727]: I1121 20:25:46.995072 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/29faf340-95f4-4bd3-bd87-f2e971a0e494-etc-swift\") pod \"swift-storage-0\" (UID: \"29faf340-95f4-4bd3-bd87-f2e971a0e494\") " pod="openstack/swift-storage-0"
Nov 21 20:25:46 crc kubenswrapper[4727]: E1121 20:25:46.995324 4727 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Nov 21 20:25:46 crc kubenswrapper[4727]: E1121 20:25:46.995370 4727 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Nov 21 20:25:46 crc kubenswrapper[4727]: E1121 20:25:46.995517 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/29faf340-95f4-4bd3-bd87-f2e971a0e494-etc-swift podName:29faf340-95f4-4bd3-bd87-f2e971a0e494 nodeName:}" failed. No retries permitted until 2025-11-21 20:25:54.995479699 +0000 UTC m=+1160.181664783 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/29faf340-95f4-4bd3-bd87-f2e971a0e494-etc-swift") pod "swift-storage-0" (UID: "29faf340-95f4-4bd3-bd87-f2e971a0e494") : configmap "swift-ring-files" not found
Nov 21 20:25:47 crc kubenswrapper[4727]: I1121 20:25:47.861680 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0"
Nov 21 20:25:48 crc kubenswrapper[4727]: I1121 20:25:48.335288 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-q7bm7"]
Nov 21 20:25:48 crc kubenswrapper[4727]: I1121 20:25:48.336860 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-q7bm7"
Nov 21 20:25:48 crc kubenswrapper[4727]: I1121 20:25:48.344157 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-q7bm7"]
Nov 21 20:25:48 crc kubenswrapper[4727]: I1121 20:25:48.362123 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-kg8vm"
Nov 21 20:25:48 crc kubenswrapper[4727]: I1121 20:25:48.416516 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-gs87m"]
Nov 21 20:25:48 crc kubenswrapper[4727]: I1121 20:25:48.416742 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-gs87m" podUID="14bdeda2-a618-4f47-ae48-87a7c611401e" containerName="dnsmasq-dns" containerID="cri-o://ae9aac57a0af8f07a45f31f441084e53370dbc27fee31ad29728a72cb262d25b" gracePeriod=10
Nov 21 20:25:48 crc kubenswrapper[4727]: I1121 20:25:48.533260 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrng9\" (UniqueName: \"kubernetes.io/projected/44911d7b-3990-408a-9143-c4735c2e2b0b-kube-api-access-lrng9\") pod \"mysqld-exporter-openstack-cell1-db-create-q7bm7\" (UID: \"44911d7b-3990-408a-9143-c4735c2e2b0b\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-q7bm7"
Nov 21 20:25:48 crc kubenswrapper[4727]: I1121 20:25:48.534193 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44911d7b-3990-408a-9143-c4735c2e2b0b-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-q7bm7\" (UID: \"44911d7b-3990-408a-9143-c4735c2e2b0b\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-q7bm7"
Nov 21 20:25:48 crc kubenswrapper[4727]: I1121 20:25:48.581817 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-8738-account-create-8br5t"]
Nov 21 20:25:48 crc kubenswrapper[4727]: I1121 20:25:48.587212 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-8738-account-create-8br5t"
Nov 21 20:25:48 crc kubenswrapper[4727]: I1121 20:25:48.591523 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-cell1-db-secret"
Nov 21 20:25:48 crc kubenswrapper[4727]: I1121 20:25:48.607714 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-8738-account-create-8br5t"]
Nov 21 20:25:48 crc kubenswrapper[4727]: I1121 20:25:48.640387 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0343619-7950-468b-8f82-146e68397de5-operator-scripts\") pod \"mysqld-exporter-8738-account-create-8br5t\" (UID: \"d0343619-7950-468b-8f82-146e68397de5\") " pod="openstack/mysqld-exporter-8738-account-create-8br5t"
Nov 21 20:25:48 crc kubenswrapper[4727]: I1121 20:25:48.640445 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44911d7b-3990-408a-9143-c4735c2e2b0b-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-q7bm7\" (UID: \"44911d7b-3990-408a-9143-c4735c2e2b0b\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-q7bm7"
Nov 21 20:25:48 crc kubenswrapper[4727]: I1121 20:25:48.640529 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrng9\" (UniqueName: \"kubernetes.io/projected/44911d7b-3990-408a-9143-c4735c2e2b0b-kube-api-access-lrng9\") pod \"mysqld-exporter-openstack-cell1-db-create-q7bm7\" (UID: \"44911d7b-3990-408a-9143-c4735c2e2b0b\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-q7bm7"
Nov 21 20:25:48 crc kubenswrapper[4727]: I1121 20:25:48.640595 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ghdb\" (UniqueName: \"kubernetes.io/projected/d0343619-7950-468b-8f82-146e68397de5-kube-api-access-8ghdb\") pod \"mysqld-exporter-8738-account-create-8br5t\" (UID: \"d0343619-7950-468b-8f82-146e68397de5\") " pod="openstack/mysqld-exporter-8738-account-create-8br5t"
Nov 21 20:25:48 crc kubenswrapper[4727]: I1121 20:25:48.641450 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44911d7b-3990-408a-9143-c4735c2e2b0b-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-q7bm7\" (UID: \"44911d7b-3990-408a-9143-c4735c2e2b0b\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-q7bm7"
Nov 21 20:25:48 crc kubenswrapper[4727]: I1121 20:25:48.665639 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrng9\" (UniqueName: \"kubernetes.io/projected/44911d7b-3990-408a-9143-c4735c2e2b0b-kube-api-access-lrng9\") pod \"mysqld-exporter-openstack-cell1-db-create-q7bm7\" (UID: \"44911d7b-3990-408a-9143-c4735c2e2b0b\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-q7bm7"
Nov 21 20:25:48 crc kubenswrapper[4727]: I1121 20:25:48.669760 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-q7bm7"
Nov 21 20:25:48 crc kubenswrapper[4727]: I1121 20:25:48.747218 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0343619-7950-468b-8f82-146e68397de5-operator-scripts\") pod \"mysqld-exporter-8738-account-create-8br5t\" (UID: \"d0343619-7950-468b-8f82-146e68397de5\") " pod="openstack/mysqld-exporter-8738-account-create-8br5t"
Nov 21 20:25:48 crc kubenswrapper[4727]: I1121 20:25:48.747340 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ghdb\" (UniqueName: \"kubernetes.io/projected/d0343619-7950-468b-8f82-146e68397de5-kube-api-access-8ghdb\") pod \"mysqld-exporter-8738-account-create-8br5t\" (UID: \"d0343619-7950-468b-8f82-146e68397de5\") " pod="openstack/mysqld-exporter-8738-account-create-8br5t"
Nov 21 20:25:48 crc kubenswrapper[4727]: I1121 20:25:48.748938 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0343619-7950-468b-8f82-146e68397de5-operator-scripts\") pod \"mysqld-exporter-8738-account-create-8br5t\" (UID: \"d0343619-7950-468b-8f82-146e68397de5\") " pod="openstack/mysqld-exporter-8738-account-create-8br5t"
Nov 21 20:25:48 crc kubenswrapper[4727]: I1121 20:25:48.773699 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ghdb\" (UniqueName: \"kubernetes.io/projected/d0343619-7950-468b-8f82-146e68397de5-kube-api-access-8ghdb\") pod \"mysqld-exporter-8738-account-create-8br5t\" (UID: \"d0343619-7950-468b-8f82-146e68397de5\") " pod="openstack/mysqld-exporter-8738-account-create-8br5t"
Nov 21 20:25:48 crc kubenswrapper[4727]: I1121 20:25:48.906624 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-8738-account-create-8br5t"
Nov 21 20:25:49 crc kubenswrapper[4727]: I1121 20:25:49.623403 4727 generic.go:334] "Generic (PLEG): container finished" podID="14bdeda2-a618-4f47-ae48-87a7c611401e" containerID="ae9aac57a0af8f07a45f31f441084e53370dbc27fee31ad29728a72cb262d25b" exitCode=0
Nov 21 20:25:49 crc kubenswrapper[4727]: I1121 20:25:49.623466 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-gs87m" event={"ID":"14bdeda2-a618-4f47-ae48-87a7c611401e","Type":"ContainerDied","Data":"ae9aac57a0af8f07a45f31f441084e53370dbc27fee31ad29728a72cb262d25b"}
Nov 21 20:25:49 crc kubenswrapper[4727]: I1121 20:25:49.951451 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-spj24"
Nov 21 20:25:49 crc kubenswrapper[4727]: I1121 20:25:49.980014 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be3adfa2-622c-432c-b9e7-7530926b2ec4-operator-scripts\") pod \"be3adfa2-622c-432c-b9e7-7530926b2ec4\" (UID: \"be3adfa2-622c-432c-b9e7-7530926b2ec4\") "
Nov 21 20:25:49 crc kubenswrapper[4727]: I1121 20:25:49.980110 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vx8v2\" (UniqueName: \"kubernetes.io/projected/be3adfa2-622c-432c-b9e7-7530926b2ec4-kube-api-access-vx8v2\") pod \"be3adfa2-622c-432c-b9e7-7530926b2ec4\" (UID: \"be3adfa2-622c-432c-b9e7-7530926b2ec4\") "
Nov 21 20:25:49 crc kubenswrapper[4727]: I1121 20:25:49.982160 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be3adfa2-622c-432c-b9e7-7530926b2ec4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "be3adfa2-622c-432c-b9e7-7530926b2ec4" (UID: "be3adfa2-622c-432c-b9e7-7530926b2ec4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 21 20:25:49 crc kubenswrapper[4727]: I1121 20:25:49.990124 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be3adfa2-622c-432c-b9e7-7530926b2ec4-kube-api-access-vx8v2" (OuterVolumeSpecName: "kube-api-access-vx8v2") pod "be3adfa2-622c-432c-b9e7-7530926b2ec4" (UID: "be3adfa2-622c-432c-b9e7-7530926b2ec4"). InnerVolumeSpecName "kube-api-access-vx8v2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 20:25:50 crc kubenswrapper[4727]: I1121 20:25:50.051614 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-8ec8-account-create-26qv7"
Nov 21 20:25:50 crc kubenswrapper[4727]: I1121 20:25:50.081740 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79b69708-cf69-43e5-9299-bbe3fb5b72f4-operator-scripts\") pod \"79b69708-cf69-43e5-9299-bbe3fb5b72f4\" (UID: \"79b69708-cf69-43e5-9299-bbe3fb5b72f4\") "
Nov 21 20:25:50 crc kubenswrapper[4727]: I1121 20:25:50.081836 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-glpcp\" (UniqueName: \"kubernetes.io/projected/79b69708-cf69-43e5-9299-bbe3fb5b72f4-kube-api-access-glpcp\") pod \"79b69708-cf69-43e5-9299-bbe3fb5b72f4\" (UID: \"79b69708-cf69-43e5-9299-bbe3fb5b72f4\") "
Nov 21 20:25:50 crc kubenswrapper[4727]: I1121 20:25:50.082387 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be3adfa2-622c-432c-b9e7-7530926b2ec4-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 21 20:25:50 crc kubenswrapper[4727]: I1121 20:25:50.082399 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vx8v2\" (UniqueName: \"kubernetes.io/projected/be3adfa2-622c-432c-b9e7-7530926b2ec4-kube-api-access-vx8v2\") on node \"crc\" DevicePath \"\""
Nov 21
20:25:50 crc kubenswrapper[4727]: I1121 20:25:50.082399 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79b69708-cf69-43e5-9299-bbe3fb5b72f4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "79b69708-cf69-43e5-9299-bbe3fb5b72f4" (UID: "79b69708-cf69-43e5-9299-bbe3fb5b72f4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:25:50 crc kubenswrapper[4727]: I1121 20:25:50.107747 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79b69708-cf69-43e5-9299-bbe3fb5b72f4-kube-api-access-glpcp" (OuterVolumeSpecName: "kube-api-access-glpcp") pod "79b69708-cf69-43e5-9299-bbe3fb5b72f4" (UID: "79b69708-cf69-43e5-9299-bbe3fb5b72f4"). InnerVolumeSpecName "kube-api-access-glpcp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:25:50 crc kubenswrapper[4727]: I1121 20:25:50.184149 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79b69708-cf69-43e5-9299-bbe3fb5b72f4-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 20:25:50 crc kubenswrapper[4727]: I1121 20:25:50.184180 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-glpcp\" (UniqueName: \"kubernetes.io/projected/79b69708-cf69-43e5-9299-bbe3fb5b72f4-kube-api-access-glpcp\") on node \"crc\" DevicePath \"\"" Nov 21 20:25:50 crc kubenswrapper[4727]: I1121 20:25:50.370812 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-gs87m" Nov 21 20:25:50 crc kubenswrapper[4727]: I1121 20:25:50.391093 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2w97\" (UniqueName: \"kubernetes.io/projected/14bdeda2-a618-4f47-ae48-87a7c611401e-kube-api-access-g2w97\") pod \"14bdeda2-a618-4f47-ae48-87a7c611401e\" (UID: \"14bdeda2-a618-4f47-ae48-87a7c611401e\") " Nov 21 20:25:50 crc kubenswrapper[4727]: I1121 20:25:50.391427 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14bdeda2-a618-4f47-ae48-87a7c611401e-config\") pod \"14bdeda2-a618-4f47-ae48-87a7c611401e\" (UID: \"14bdeda2-a618-4f47-ae48-87a7c611401e\") " Nov 21 20:25:50 crc kubenswrapper[4727]: I1121 20:25:50.391534 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/14bdeda2-a618-4f47-ae48-87a7c611401e-dns-svc\") pod \"14bdeda2-a618-4f47-ae48-87a7c611401e\" (UID: \"14bdeda2-a618-4f47-ae48-87a7c611401e\") " Nov 21 20:25:50 crc kubenswrapper[4727]: I1121 20:25:50.421837 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14bdeda2-a618-4f47-ae48-87a7c611401e-kube-api-access-g2w97" (OuterVolumeSpecName: "kube-api-access-g2w97") pod "14bdeda2-a618-4f47-ae48-87a7c611401e" (UID: "14bdeda2-a618-4f47-ae48-87a7c611401e"). InnerVolumeSpecName "kube-api-access-g2w97". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:25:50 crc kubenswrapper[4727]: I1121 20:25:50.494929 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2w97\" (UniqueName: \"kubernetes.io/projected/14bdeda2-a618-4f47-ae48-87a7c611401e-kube-api-access-g2w97\") on node \"crc\" DevicePath \"\"" Nov 21 20:25:50 crc kubenswrapper[4727]: I1121 20:25:50.517235 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-777bc7647-cdqwj_4d5b5d37-a53b-4d18-ac63-f53e823dab2c/console/0.log" Nov 21 20:25:50 crc kubenswrapper[4727]: I1121 20:25:50.517312 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-777bc7647-cdqwj" Nov 21 20:25:50 crc kubenswrapper[4727]: I1121 20:25:50.542478 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14bdeda2-a618-4f47-ae48-87a7c611401e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "14bdeda2-a618-4f47-ae48-87a7c611401e" (UID: "14bdeda2-a618-4f47-ae48-87a7c611401e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:25:50 crc kubenswrapper[4727]: I1121 20:25:50.567210 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14bdeda2-a618-4f47-ae48-87a7c611401e-config" (OuterVolumeSpecName: "config") pod "14bdeda2-a618-4f47-ae48-87a7c611401e" (UID: "14bdeda2-a618-4f47-ae48-87a7c611401e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:25:50 crc kubenswrapper[4727]: I1121 20:25:50.596064 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4d5b5d37-a53b-4d18-ac63-f53e823dab2c-console-serving-cert\") pod \"4d5b5d37-a53b-4d18-ac63-f53e823dab2c\" (UID: \"4d5b5d37-a53b-4d18-ac63-f53e823dab2c\") " Nov 21 20:25:50 crc kubenswrapper[4727]: I1121 20:25:50.596186 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4d5b5d37-a53b-4d18-ac63-f53e823dab2c-oauth-serving-cert\") pod \"4d5b5d37-a53b-4d18-ac63-f53e823dab2c\" (UID: \"4d5b5d37-a53b-4d18-ac63-f53e823dab2c\") " Nov 21 20:25:50 crc kubenswrapper[4727]: I1121 20:25:50.596444 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d5b5d37-a53b-4d18-ac63-f53e823dab2c-trusted-ca-bundle\") pod \"4d5b5d37-a53b-4d18-ac63-f53e823dab2c\" (UID: \"4d5b5d37-a53b-4d18-ac63-f53e823dab2c\") " Nov 21 20:25:50 crc kubenswrapper[4727]: I1121 20:25:50.596524 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4d5b5d37-a53b-4d18-ac63-f53e823dab2c-console-config\") pod \"4d5b5d37-a53b-4d18-ac63-f53e823dab2c\" (UID: \"4d5b5d37-a53b-4d18-ac63-f53e823dab2c\") " Nov 21 20:25:50 crc kubenswrapper[4727]: I1121 20:25:50.596560 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4d5b5d37-a53b-4d18-ac63-f53e823dab2c-console-oauth-config\") pod \"4d5b5d37-a53b-4d18-ac63-f53e823dab2c\" (UID: \"4d5b5d37-a53b-4d18-ac63-f53e823dab2c\") " Nov 21 20:25:50 crc kubenswrapper[4727]: I1121 20:25:50.596602 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4d5b5d37-a53b-4d18-ac63-f53e823dab2c-service-ca\") pod \"4d5b5d37-a53b-4d18-ac63-f53e823dab2c\" (UID: \"4d5b5d37-a53b-4d18-ac63-f53e823dab2c\") " Nov 21 20:25:50 crc kubenswrapper[4727]: I1121 20:25:50.596687 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbqwf\" (UniqueName: \"kubernetes.io/projected/4d5b5d37-a53b-4d18-ac63-f53e823dab2c-kube-api-access-hbqwf\") pod \"4d5b5d37-a53b-4d18-ac63-f53e823dab2c\" (UID: \"4d5b5d37-a53b-4d18-ac63-f53e823dab2c\") " Nov 21 20:25:50 crc kubenswrapper[4727]: I1121 20:25:50.597037 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d5b5d37-a53b-4d18-ac63-f53e823dab2c-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "4d5b5d37-a53b-4d18-ac63-f53e823dab2c" (UID: "4d5b5d37-a53b-4d18-ac63-f53e823dab2c"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:25:50 crc kubenswrapper[4727]: I1121 20:25:50.597379 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d5b5d37-a53b-4d18-ac63-f53e823dab2c-console-config" (OuterVolumeSpecName: "console-config") pod "4d5b5d37-a53b-4d18-ac63-f53e823dab2c" (UID: "4d5b5d37-a53b-4d18-ac63-f53e823dab2c"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:25:50 crc kubenswrapper[4727]: I1121 20:25:50.597483 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d5b5d37-a53b-4d18-ac63-f53e823dab2c-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "4d5b5d37-a53b-4d18-ac63-f53e823dab2c" (UID: "4d5b5d37-a53b-4d18-ac63-f53e823dab2c"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:25:50 crc kubenswrapper[4727]: I1121 20:25:50.598010 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d5b5d37-a53b-4d18-ac63-f53e823dab2c-service-ca" (OuterVolumeSpecName: "service-ca") pod "4d5b5d37-a53b-4d18-ac63-f53e823dab2c" (UID: "4d5b5d37-a53b-4d18-ac63-f53e823dab2c"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:25:50 crc kubenswrapper[4727]: I1121 20:25:50.599682 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14bdeda2-a618-4f47-ae48-87a7c611401e-config\") on node \"crc\" DevicePath \"\"" Nov 21 20:25:50 crc kubenswrapper[4727]: I1121 20:25:50.599701 4727 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d5b5d37-a53b-4d18-ac63-f53e823dab2c-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:25:50 crc kubenswrapper[4727]: I1121 20:25:50.599717 4727 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/14bdeda2-a618-4f47-ae48-87a7c611401e-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 20:25:50 crc kubenswrapper[4727]: I1121 20:25:50.599728 4727 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4d5b5d37-a53b-4d18-ac63-f53e823dab2c-console-config\") on node \"crc\" DevicePath \"\"" Nov 21 20:25:50 crc kubenswrapper[4727]: I1121 20:25:50.599740 4727 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4d5b5d37-a53b-4d18-ac63-f53e823dab2c-service-ca\") on node \"crc\" DevicePath \"\"" Nov 21 20:25:50 crc kubenswrapper[4727]: I1121 20:25:50.599754 4727 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/4d5b5d37-a53b-4d18-ac63-f53e823dab2c-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 20:25:50 crc kubenswrapper[4727]: I1121 20:25:50.604076 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d5b5d37-a53b-4d18-ac63-f53e823dab2c-kube-api-access-hbqwf" (OuterVolumeSpecName: "kube-api-access-hbqwf") pod "4d5b5d37-a53b-4d18-ac63-f53e823dab2c" (UID: "4d5b5d37-a53b-4d18-ac63-f53e823dab2c"). InnerVolumeSpecName "kube-api-access-hbqwf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:25:50 crc kubenswrapper[4727]: I1121 20:25:50.604090 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d5b5d37-a53b-4d18-ac63-f53e823dab2c-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "4d5b5d37-a53b-4d18-ac63-f53e823dab2c" (UID: "4d5b5d37-a53b-4d18-ac63-f53e823dab2c"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:25:50 crc kubenswrapper[4727]: I1121 20:25:50.604130 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d5b5d37-a53b-4d18-ac63-f53e823dab2c-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "4d5b5d37-a53b-4d18-ac63-f53e823dab2c" (UID: "4d5b5d37-a53b-4d18-ac63-f53e823dab2c"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:25:50 crc kubenswrapper[4727]: I1121 20:25:50.634427 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-spj24" event={"ID":"be3adfa2-622c-432c-b9e7-7530926b2ec4","Type":"ContainerDied","Data":"0b94209f18beb157d6f4f9dbd1b778d4856264a0b62dd95ea36f8156623b165a"} Nov 21 20:25:50 crc kubenswrapper[4727]: I1121 20:25:50.634492 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b94209f18beb157d6f4f9dbd1b778d4856264a0b62dd95ea36f8156623b165a" Nov 21 20:25:50 crc kubenswrapper[4727]: I1121 20:25:50.634462 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-spj24" Nov 21 20:25:50 crc kubenswrapper[4727]: I1121 20:25:50.636925 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-gs87m" Nov 21 20:25:50 crc kubenswrapper[4727]: I1121 20:25:50.636927 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-gs87m" event={"ID":"14bdeda2-a618-4f47-ae48-87a7c611401e","Type":"ContainerDied","Data":"caf725d3ef2b173e2aa4692163f0690ae3f4e2b567beb0d255b9dbddde1bf0ca"} Nov 21 20:25:50 crc kubenswrapper[4727]: I1121 20:25:50.636992 4727 scope.go:117] "RemoveContainer" containerID="ae9aac57a0af8f07a45f31f441084e53370dbc27fee31ad29728a72cb262d25b" Nov 21 20:25:50 crc kubenswrapper[4727]: I1121 20:25:50.638219 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-777bc7647-cdqwj_4d5b5d37-a53b-4d18-ac63-f53e823dab2c/console/0.log" Nov 21 20:25:50 crc kubenswrapper[4727]: I1121 20:25:50.638276 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-777bc7647-cdqwj" event={"ID":"4d5b5d37-a53b-4d18-ac63-f53e823dab2c","Type":"ContainerDied","Data":"87278ac7aba7dd967de12712f206fc4a7ddaf5955bd22197563739658400a094"} Nov 21 20:25:50 crc 
kubenswrapper[4727]: I1121 20:25:50.638360 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-777bc7647-cdqwj" Nov 21 20:25:50 crc kubenswrapper[4727]: I1121 20:25:50.640082 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8ec8-account-create-26qv7" event={"ID":"79b69708-cf69-43e5-9299-bbe3fb5b72f4","Type":"ContainerDied","Data":"548eea13c1d8ab641e71b585b4ff40dc34a1defe0acfdce54b8e96c79718887d"} Nov 21 20:25:50 crc kubenswrapper[4727]: I1121 20:25:50.640110 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="548eea13c1d8ab641e71b585b4ff40dc34a1defe0acfdce54b8e96c79718887d" Nov 21 20:25:50 crc kubenswrapper[4727]: I1121 20:25:50.640145 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-8ec8-account-create-26qv7" Nov 21 20:25:50 crc kubenswrapper[4727]: I1121 20:25:50.642666 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-vrqcf" event={"ID":"bf989668-3fb7-491d-abf0-e2991e327690","Type":"ContainerStarted","Data":"919036f94f3a96783cd975b0acc08b60fe5a835a07293243e5212935a00c525d"} Nov 21 20:25:50 crc kubenswrapper[4727]: I1121 20:25:50.667071 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-vrqcf" podStartSLOduration=1.390646589 podStartE2EDuration="7.667052476s" podCreationTimestamp="2025-11-21 20:25:43 +0000 UTC" firstStartedPulling="2025-11-21 20:25:43.922302253 +0000 UTC m=+1149.108487297" lastFinishedPulling="2025-11-21 20:25:50.19870814 +0000 UTC m=+1155.384893184" observedRunningTime="2025-11-21 20:25:50.663220571 +0000 UTC m=+1155.849405615" watchObservedRunningTime="2025-11-21 20:25:50.667052476 +0000 UTC m=+1155.853237510" Nov 21 20:25:50 crc kubenswrapper[4727]: I1121 20:25:50.681364 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-57d769cc4f-gs87m"] Nov 21 20:25:50 crc kubenswrapper[4727]: I1121 20:25:50.690766 4727 scope.go:117] "RemoveContainer" containerID="8bf93c7428e1f62c9c25d622de36721cb9ab8c1a37101cec0965f7dc46596bdd" Nov 21 20:25:50 crc kubenswrapper[4727]: I1121 20:25:50.701902 4727 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4d5b5d37-a53b-4d18-ac63-f53e823dab2c-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 20:25:50 crc kubenswrapper[4727]: I1121 20:25:50.702310 4727 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4d5b5d37-a53b-4d18-ac63-f53e823dab2c-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 21 20:25:50 crc kubenswrapper[4727]: I1121 20:25:50.702346 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbqwf\" (UniqueName: \"kubernetes.io/projected/4d5b5d37-a53b-4d18-ac63-f53e823dab2c-kube-api-access-hbqwf\") on node \"crc\" DevicePath \"\"" Nov 21 20:25:50 crc kubenswrapper[4727]: I1121 20:25:50.709596 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-gs87m"] Nov 21 20:25:50 crc kubenswrapper[4727]: I1121 20:25:50.720631 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-777bc7647-cdqwj"] Nov 21 20:25:50 crc kubenswrapper[4727]: I1121 20:25:50.728263 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-777bc7647-cdqwj"] Nov 21 20:25:50 crc kubenswrapper[4727]: I1121 20:25:50.740229 4727 scope.go:117] "RemoveContainer" containerID="f9d24eaa330e498f4977caf8a713f76ce84d446343a6b7a1e0b61e7c651d57db" Nov 21 20:25:50 crc kubenswrapper[4727]: I1121 20:25:50.777505 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-fbkmd"] Nov 21 20:25:50 crc kubenswrapper[4727]: I1121 20:25:50.788393 4727 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/keystone-8b1a-account-create-gf7rm"] Nov 21 20:25:50 crc kubenswrapper[4727]: W1121 20:25:50.788652 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3aca9be_2c44_48aa_b561_ad796cee0014.slice/crio-162aeacfd2cae51fb0429a9f5bdb2d57205353f5f43e57541e957d345dd8d92a WatchSource:0}: Error finding container 162aeacfd2cae51fb0429a9f5bdb2d57205353f5f43e57541e957d345dd8d92a: Status 404 returned error can't find the container with id 162aeacfd2cae51fb0429a9f5bdb2d57205353f5f43e57541e957d345dd8d92a Nov 21 20:25:50 crc kubenswrapper[4727]: W1121 20:25:50.792451 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod810aff49_a935_4a69_a3ad_2c26ab66ead0.slice/crio-6b23e9f20a7fa1ff07d692996767a918e2d9ae2a6dd0673fcaaf95692382c346 WatchSource:0}: Error finding container 6b23e9f20a7fa1ff07d692996767a918e2d9ae2a6dd0673fcaaf95692382c346: Status 404 returned error can't find the container with id 6b23e9f20a7fa1ff07d692996767a918e2d9ae2a6dd0673fcaaf95692382c346 Nov 21 20:25:50 crc kubenswrapper[4727]: I1121 20:25:50.796365 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-ec7d-account-create-lzk75"] Nov 21 20:25:50 crc kubenswrapper[4727]: W1121 20:25:50.797251 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod12ed8782_7884_43ef_98dc_7889dc0c4429.slice/crio-1c12cebc6c829ef5f375feef3bfc34a897647aa0cbdfbd8e28bd1fd4856763e2 WatchSource:0}: Error finding container 1c12cebc6c829ef5f375feef3bfc34a897647aa0cbdfbd8e28bd1fd4856763e2: Status 404 returned error can't find the container with id 1c12cebc6c829ef5f375feef3bfc34a897647aa0cbdfbd8e28bd1fd4856763e2 Nov 21 20:25:51 crc kubenswrapper[4727]: W1121 20:25:51.012131 4727 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod44911d7b_3990_408a_9143_c4735c2e2b0b.slice/crio-d8a703e24416869b7ab1064978c6a5aeb237833ddd7419da322c22077c7f20ee WatchSource:0}: Error finding container d8a703e24416869b7ab1064978c6a5aeb237833ddd7419da322c22077c7f20ee: Status 404 returned error can't find the container with id d8a703e24416869b7ab1064978c6a5aeb237833ddd7419da322c22077c7f20ee Nov 21 20:25:51 crc kubenswrapper[4727]: I1121 20:25:51.017091 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-q7bm7"] Nov 21 20:25:51 crc kubenswrapper[4727]: I1121 20:25:51.024630 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-cm9v5"] Nov 21 20:25:51 crc kubenswrapper[4727]: I1121 20:25:51.032181 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-8738-account-create-8br5t"] Nov 21 20:25:51 crc kubenswrapper[4727]: I1121 20:25:51.447028 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-rpv9s"] Nov 21 20:25:51 crc kubenswrapper[4727]: E1121 20:25:51.447678 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14bdeda2-a618-4f47-ae48-87a7c611401e" containerName="dnsmasq-dns" Nov 21 20:25:51 crc kubenswrapper[4727]: I1121 20:25:51.447694 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="14bdeda2-a618-4f47-ae48-87a7c611401e" containerName="dnsmasq-dns" Nov 21 20:25:51 crc kubenswrapper[4727]: E1121 20:25:51.447715 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14bdeda2-a618-4f47-ae48-87a7c611401e" containerName="init" Nov 21 20:25:51 crc kubenswrapper[4727]: I1121 20:25:51.447722 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="14bdeda2-a618-4f47-ae48-87a7c611401e" containerName="init" Nov 21 20:25:51 crc kubenswrapper[4727]: E1121 20:25:51.447744 4727 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="be3adfa2-622c-432c-b9e7-7530926b2ec4" containerName="mariadb-database-create" Nov 21 20:25:51 crc kubenswrapper[4727]: I1121 20:25:51.447750 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="be3adfa2-622c-432c-b9e7-7530926b2ec4" containerName="mariadb-database-create" Nov 21 20:25:51 crc kubenswrapper[4727]: E1121 20:25:51.447762 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d5b5d37-a53b-4d18-ac63-f53e823dab2c" containerName="console" Nov 21 20:25:51 crc kubenswrapper[4727]: I1121 20:25:51.447768 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d5b5d37-a53b-4d18-ac63-f53e823dab2c" containerName="console" Nov 21 20:25:51 crc kubenswrapper[4727]: E1121 20:25:51.447783 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79b69708-cf69-43e5-9299-bbe3fb5b72f4" containerName="mariadb-account-create" Nov 21 20:25:51 crc kubenswrapper[4727]: I1121 20:25:51.447789 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="79b69708-cf69-43e5-9299-bbe3fb5b72f4" containerName="mariadb-account-create" Nov 21 20:25:51 crc kubenswrapper[4727]: I1121 20:25:51.447973 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="14bdeda2-a618-4f47-ae48-87a7c611401e" containerName="dnsmasq-dns" Nov 21 20:25:51 crc kubenswrapper[4727]: I1121 20:25:51.447997 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d5b5d37-a53b-4d18-ac63-f53e823dab2c" containerName="console" Nov 21 20:25:51 crc kubenswrapper[4727]: I1121 20:25:51.448006 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="79b69708-cf69-43e5-9299-bbe3fb5b72f4" containerName="mariadb-account-create" Nov 21 20:25:51 crc kubenswrapper[4727]: I1121 20:25:51.448014 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="be3adfa2-622c-432c-b9e7-7530926b2ec4" containerName="mariadb-database-create" Nov 21 20:25:51 crc kubenswrapper[4727]: I1121 20:25:51.448664 4727 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/glance-db-sync-rpv9s" Nov 21 20:25:51 crc kubenswrapper[4727]: I1121 20:25:51.453376 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-24s9s" Nov 21 20:25:51 crc kubenswrapper[4727]: I1121 20:25:51.455821 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Nov 21 20:25:51 crc kubenswrapper[4727]: I1121 20:25:51.464127 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-rpv9s"] Nov 21 20:25:51 crc kubenswrapper[4727]: I1121 20:25:51.519600 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14bdeda2-a618-4f47-ae48-87a7c611401e" path="/var/lib/kubelet/pods/14bdeda2-a618-4f47-ae48-87a7c611401e/volumes" Nov 21 20:25:51 crc kubenswrapper[4727]: I1121 20:25:51.519784 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e0672730-e181-488f-8472-a20c75dcb285-db-sync-config-data\") pod \"glance-db-sync-rpv9s\" (UID: \"e0672730-e181-488f-8472-a20c75dcb285\") " pod="openstack/glance-db-sync-rpv9s" Nov 21 20:25:51 crc kubenswrapper[4727]: I1121 20:25:51.519939 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f7vr\" (UniqueName: \"kubernetes.io/projected/e0672730-e181-488f-8472-a20c75dcb285-kube-api-access-2f7vr\") pod \"glance-db-sync-rpv9s\" (UID: \"e0672730-e181-488f-8472-a20c75dcb285\") " pod="openstack/glance-db-sync-rpv9s" Nov 21 20:25:51 crc kubenswrapper[4727]: I1121 20:25:51.519982 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0672730-e181-488f-8472-a20c75dcb285-combined-ca-bundle\") pod \"glance-db-sync-rpv9s\" (UID: \"e0672730-e181-488f-8472-a20c75dcb285\") " 
pod="openstack/glance-db-sync-rpv9s" Nov 21 20:25:51 crc kubenswrapper[4727]: I1121 20:25:51.520000 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0672730-e181-488f-8472-a20c75dcb285-config-data\") pod \"glance-db-sync-rpv9s\" (UID: \"e0672730-e181-488f-8472-a20c75dcb285\") " pod="openstack/glance-db-sync-rpv9s" Nov 21 20:25:51 crc kubenswrapper[4727]: I1121 20:25:51.520318 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d5b5d37-a53b-4d18-ac63-f53e823dab2c" path="/var/lib/kubelet/pods/4d5b5d37-a53b-4d18-ac63-f53e823dab2c/volumes" Nov 21 20:25:51 crc kubenswrapper[4727]: I1121 20:25:51.621823 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2f7vr\" (UniqueName: \"kubernetes.io/projected/e0672730-e181-488f-8472-a20c75dcb285-kube-api-access-2f7vr\") pod \"glance-db-sync-rpv9s\" (UID: \"e0672730-e181-488f-8472-a20c75dcb285\") " pod="openstack/glance-db-sync-rpv9s" Nov 21 20:25:51 crc kubenswrapper[4727]: I1121 20:25:51.621874 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0672730-e181-488f-8472-a20c75dcb285-combined-ca-bundle\") pod \"glance-db-sync-rpv9s\" (UID: \"e0672730-e181-488f-8472-a20c75dcb285\") " pod="openstack/glance-db-sync-rpv9s" Nov 21 20:25:51 crc kubenswrapper[4727]: I1121 20:25:51.621900 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0672730-e181-488f-8472-a20c75dcb285-config-data\") pod \"glance-db-sync-rpv9s\" (UID: \"e0672730-e181-488f-8472-a20c75dcb285\") " pod="openstack/glance-db-sync-rpv9s" Nov 21 20:25:51 crc kubenswrapper[4727]: I1121 20:25:51.621972 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/e0672730-e181-488f-8472-a20c75dcb285-db-sync-config-data\") pod \"glance-db-sync-rpv9s\" (UID: \"e0672730-e181-488f-8472-a20c75dcb285\") " pod="openstack/glance-db-sync-rpv9s" Nov 21 20:25:51 crc kubenswrapper[4727]: I1121 20:25:51.626944 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0672730-e181-488f-8472-a20c75dcb285-config-data\") pod \"glance-db-sync-rpv9s\" (UID: \"e0672730-e181-488f-8472-a20c75dcb285\") " pod="openstack/glance-db-sync-rpv9s" Nov 21 20:25:51 crc kubenswrapper[4727]: I1121 20:25:51.629982 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e0672730-e181-488f-8472-a20c75dcb285-db-sync-config-data\") pod \"glance-db-sync-rpv9s\" (UID: \"e0672730-e181-488f-8472-a20c75dcb285\") " pod="openstack/glance-db-sync-rpv9s" Nov 21 20:25:51 crc kubenswrapper[4727]: I1121 20:25:51.630447 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0672730-e181-488f-8472-a20c75dcb285-combined-ca-bundle\") pod \"glance-db-sync-rpv9s\" (UID: \"e0672730-e181-488f-8472-a20c75dcb285\") " pod="openstack/glance-db-sync-rpv9s" Nov 21 20:25:51 crc kubenswrapper[4727]: I1121 20:25:51.641589 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2f7vr\" (UniqueName: \"kubernetes.io/projected/e0672730-e181-488f-8472-a20c75dcb285-kube-api-access-2f7vr\") pod \"glance-db-sync-rpv9s\" (UID: \"e0672730-e181-488f-8472-a20c75dcb285\") " pod="openstack/glance-db-sync-rpv9s" Nov 21 20:25:51 crc kubenswrapper[4727]: I1121 20:25:51.656802 4727 generic.go:334] "Generic (PLEG): container finished" podID="d0343619-7950-468b-8f82-146e68397de5" containerID="047a3507d50b64b70668af59fd85f0862ca6994c638c99feb1f7428d2a7a65cf" exitCode=0 Nov 21 20:25:51 crc kubenswrapper[4727]: I1121 20:25:51.656863 4727 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-8738-account-create-8br5t" event={"ID":"d0343619-7950-468b-8f82-146e68397de5","Type":"ContainerDied","Data":"047a3507d50b64b70668af59fd85f0862ca6994c638c99feb1f7428d2a7a65cf"} Nov 21 20:25:51 crc kubenswrapper[4727]: I1121 20:25:51.656888 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-8738-account-create-8br5t" event={"ID":"d0343619-7950-468b-8f82-146e68397de5","Type":"ContainerStarted","Data":"685644b76facb3ee8fb576b192dc8ccf4db7d8459097c08dd1eeef1942defef3"} Nov 21 20:25:51 crc kubenswrapper[4727]: I1121 20:25:51.661041 4727 generic.go:334] "Generic (PLEG): container finished" podID="74333548-19eb-4237-8b46-2fb41cc613ae" containerID="757dc795dea2449240dde0c8a170fce2c2b4c8abe40bde11b75791208f31fe02" exitCode=0 Nov 21 20:25:51 crc kubenswrapper[4727]: I1121 20:25:51.661120 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-cm9v5" event={"ID":"74333548-19eb-4237-8b46-2fb41cc613ae","Type":"ContainerDied","Data":"757dc795dea2449240dde0c8a170fce2c2b4c8abe40bde11b75791208f31fe02"} Nov 21 20:25:51 crc kubenswrapper[4727]: I1121 20:25:51.661148 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-cm9v5" event={"ID":"74333548-19eb-4237-8b46-2fb41cc613ae","Type":"ContainerStarted","Data":"efc45f8dc2bfb9ed94c43bd606dd3f5006072a5dc6fb1c6d7f09674b612d2c25"} Nov 21 20:25:51 crc kubenswrapper[4727]: I1121 20:25:51.663719 4727 generic.go:334] "Generic (PLEG): container finished" podID="a3aca9be-2c44-48aa-b561-ad796cee0014" containerID="346afd2a27e8cc2bf3b553ed228627ae9af773e2eb3fabdff002aa269cb95aff" exitCode=0 Nov 21 20:25:51 crc kubenswrapper[4727]: I1121 20:25:51.663795 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-fbkmd" 
event={"ID":"a3aca9be-2c44-48aa-b561-ad796cee0014","Type":"ContainerDied","Data":"346afd2a27e8cc2bf3b553ed228627ae9af773e2eb3fabdff002aa269cb95aff"} Nov 21 20:25:51 crc kubenswrapper[4727]: I1121 20:25:51.663835 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-fbkmd" event={"ID":"a3aca9be-2c44-48aa-b561-ad796cee0014","Type":"ContainerStarted","Data":"162aeacfd2cae51fb0429a9f5bdb2d57205353f5f43e57541e957d345dd8d92a"} Nov 21 20:25:51 crc kubenswrapper[4727]: I1121 20:25:51.665539 4727 generic.go:334] "Generic (PLEG): container finished" podID="44911d7b-3990-408a-9143-c4735c2e2b0b" containerID="e3fcd2d247363cd1f71bbfbf409c9fe070c88cee775fb3b0592b4f97bbe0180f" exitCode=0 Nov 21 20:25:51 crc kubenswrapper[4727]: I1121 20:25:51.665587 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-q7bm7" event={"ID":"44911d7b-3990-408a-9143-c4735c2e2b0b","Type":"ContainerDied","Data":"e3fcd2d247363cd1f71bbfbf409c9fe070c88cee775fb3b0592b4f97bbe0180f"} Nov 21 20:25:51 crc kubenswrapper[4727]: I1121 20:25:51.665602 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-q7bm7" event={"ID":"44911d7b-3990-408a-9143-c4735c2e2b0b","Type":"ContainerStarted","Data":"d8a703e24416869b7ab1064978c6a5aeb237833ddd7419da322c22077c7f20ee"} Nov 21 20:25:51 crc kubenswrapper[4727]: I1121 20:25:51.668547 4727 generic.go:334] "Generic (PLEG): container finished" podID="810aff49-a935-4a69-a3ad-2c26ab66ead0" containerID="2714fc1dd88715afa9668fd952f36430b4de018be177301aa6e994633755df1d" exitCode=0 Nov 21 20:25:51 crc kubenswrapper[4727]: I1121 20:25:51.668596 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-8b1a-account-create-gf7rm" event={"ID":"810aff49-a935-4a69-a3ad-2c26ab66ead0","Type":"ContainerDied","Data":"2714fc1dd88715afa9668fd952f36430b4de018be177301aa6e994633755df1d"} Nov 21 20:25:51 crc kubenswrapper[4727]: 
I1121 20:25:51.668614 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-8b1a-account-create-gf7rm" event={"ID":"810aff49-a935-4a69-a3ad-2c26ab66ead0","Type":"ContainerStarted","Data":"6b23e9f20a7fa1ff07d692996767a918e2d9ae2a6dd0673fcaaf95692382c346"} Nov 21 20:25:51 crc kubenswrapper[4727]: I1121 20:25:51.676340 4727 generic.go:334] "Generic (PLEG): container finished" podID="12ed8782-7884-43ef-98dc-7889dc0c4429" containerID="01e0394c13c7cebef5eb7403e7c0ebfae2acea8d43e1b9af2fb5d19f89a76801" exitCode=0 Nov 21 20:25:51 crc kubenswrapper[4727]: I1121 20:25:51.676416 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ec7d-account-create-lzk75" event={"ID":"12ed8782-7884-43ef-98dc-7889dc0c4429","Type":"ContainerDied","Data":"01e0394c13c7cebef5eb7403e7c0ebfae2acea8d43e1b9af2fb5d19f89a76801"} Nov 21 20:25:51 crc kubenswrapper[4727]: I1121 20:25:51.676439 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ec7d-account-create-lzk75" event={"ID":"12ed8782-7884-43ef-98dc-7889dc0c4429","Type":"ContainerStarted","Data":"1c12cebc6c829ef5f375feef3bfc34a897647aa0cbdfbd8e28bd1fd4856763e2"} Nov 21 20:25:51 crc kubenswrapper[4727]: I1121 20:25:51.782863 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-rpv9s" Nov 21 20:25:52 crc kubenswrapper[4727]: I1121 20:25:52.324029 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-rpv9s"] Nov 21 20:25:53 crc kubenswrapper[4727]: I1121 20:25:53.618209 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-q7bm7" Nov 21 20:25:53 crc kubenswrapper[4727]: I1121 20:25:53.634510 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-8b1a-account-create-gf7rm" Nov 21 20:25:53 crc kubenswrapper[4727]: I1121 20:25:53.645763 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-fbkmd" Nov 21 20:25:53 crc kubenswrapper[4727]: I1121 20:25:53.652772 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-8738-account-create-8br5t" Nov 21 20:25:53 crc kubenswrapper[4727]: I1121 20:25:53.660422 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-cm9v5" Nov 21 20:25:53 crc kubenswrapper[4727]: I1121 20:25:53.666850 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-ec7d-account-create-lzk75" Nov 21 20:25:53 crc kubenswrapper[4727]: I1121 20:25:53.692374 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/74333548-19eb-4237-8b46-2fb41cc613ae-operator-scripts\") pod \"74333548-19eb-4237-8b46-2fb41cc613ae\" (UID: \"74333548-19eb-4237-8b46-2fb41cc613ae\") " Nov 21 20:25:53 crc kubenswrapper[4727]: I1121 20:25:53.692417 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12ed8782-7884-43ef-98dc-7889dc0c4429-operator-scripts\") pod \"12ed8782-7884-43ef-98dc-7889dc0c4429\" (UID: \"12ed8782-7884-43ef-98dc-7889dc0c4429\") " Nov 21 20:25:53 crc kubenswrapper[4727]: I1121 20:25:53.692449 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44911d7b-3990-408a-9143-c4735c2e2b0b-operator-scripts\") pod \"44911d7b-3990-408a-9143-c4735c2e2b0b\" (UID: \"44911d7b-3990-408a-9143-c4735c2e2b0b\") " Nov 21 20:25:53 crc kubenswrapper[4727]: I1121 20:25:53.692477 4727 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wk4xw\" (UniqueName: \"kubernetes.io/projected/a3aca9be-2c44-48aa-b561-ad796cee0014-kube-api-access-wk4xw\") pod \"a3aca9be-2c44-48aa-b561-ad796cee0014\" (UID: \"a3aca9be-2c44-48aa-b561-ad796cee0014\") " Nov 21 20:25:53 crc kubenswrapper[4727]: I1121 20:25:53.692497 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzmzf\" (UniqueName: \"kubernetes.io/projected/12ed8782-7884-43ef-98dc-7889dc0c4429-kube-api-access-zzmzf\") pod \"12ed8782-7884-43ef-98dc-7889dc0c4429\" (UID: \"12ed8782-7884-43ef-98dc-7889dc0c4429\") " Nov 21 20:25:53 crc kubenswrapper[4727]: I1121 20:25:53.692550 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0343619-7950-468b-8f82-146e68397de5-operator-scripts\") pod \"d0343619-7950-468b-8f82-146e68397de5\" (UID: \"d0343619-7950-468b-8f82-146e68397de5\") " Nov 21 20:25:53 crc kubenswrapper[4727]: I1121 20:25:53.692597 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/810aff49-a935-4a69-a3ad-2c26ab66ead0-operator-scripts\") pod \"810aff49-a935-4a69-a3ad-2c26ab66ead0\" (UID: \"810aff49-a935-4a69-a3ad-2c26ab66ead0\") " Nov 21 20:25:53 crc kubenswrapper[4727]: I1121 20:25:53.692645 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ghdb\" (UniqueName: \"kubernetes.io/projected/d0343619-7950-468b-8f82-146e68397de5-kube-api-access-8ghdb\") pod \"d0343619-7950-468b-8f82-146e68397de5\" (UID: \"d0343619-7950-468b-8f82-146e68397de5\") " Nov 21 20:25:53 crc kubenswrapper[4727]: I1121 20:25:53.692672 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/a3aca9be-2c44-48aa-b561-ad796cee0014-operator-scripts\") pod \"a3aca9be-2c44-48aa-b561-ad796cee0014\" (UID: \"a3aca9be-2c44-48aa-b561-ad796cee0014\") " Nov 21 20:25:53 crc kubenswrapper[4727]: I1121 20:25:53.692732 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrng9\" (UniqueName: \"kubernetes.io/projected/44911d7b-3990-408a-9143-c4735c2e2b0b-kube-api-access-lrng9\") pod \"44911d7b-3990-408a-9143-c4735c2e2b0b\" (UID: \"44911d7b-3990-408a-9143-c4735c2e2b0b\") " Nov 21 20:25:53 crc kubenswrapper[4727]: I1121 20:25:53.692788 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lq26\" (UniqueName: \"kubernetes.io/projected/810aff49-a935-4a69-a3ad-2c26ab66ead0-kube-api-access-9lq26\") pod \"810aff49-a935-4a69-a3ad-2c26ab66ead0\" (UID: \"810aff49-a935-4a69-a3ad-2c26ab66ead0\") " Nov 21 20:25:53 crc kubenswrapper[4727]: I1121 20:25:53.692822 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c94f4\" (UniqueName: \"kubernetes.io/projected/74333548-19eb-4237-8b46-2fb41cc613ae-kube-api-access-c94f4\") pod \"74333548-19eb-4237-8b46-2fb41cc613ae\" (UID: \"74333548-19eb-4237-8b46-2fb41cc613ae\") " Nov 21 20:25:53 crc kubenswrapper[4727]: I1121 20:25:53.693780 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0343619-7950-468b-8f82-146e68397de5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d0343619-7950-468b-8f82-146e68397de5" (UID: "d0343619-7950-468b-8f82-146e68397de5"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:25:53 crc kubenswrapper[4727]: I1121 20:25:53.694113 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74333548-19eb-4237-8b46-2fb41cc613ae-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "74333548-19eb-4237-8b46-2fb41cc613ae" (UID: "74333548-19eb-4237-8b46-2fb41cc613ae"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:25:53 crc kubenswrapper[4727]: I1121 20:25:53.694291 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44911d7b-3990-408a-9143-c4735c2e2b0b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "44911d7b-3990-408a-9143-c4735c2e2b0b" (UID: "44911d7b-3990-408a-9143-c4735c2e2b0b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:25:53 crc kubenswrapper[4727]: I1121 20:25:53.694515 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12ed8782-7884-43ef-98dc-7889dc0c4429-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "12ed8782-7884-43ef-98dc-7889dc0c4429" (UID: "12ed8782-7884-43ef-98dc-7889dc0c4429"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:25:53 crc kubenswrapper[4727]: I1121 20:25:53.694738 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3aca9be-2c44-48aa-b561-ad796cee0014-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a3aca9be-2c44-48aa-b561-ad796cee0014" (UID: "a3aca9be-2c44-48aa-b561-ad796cee0014"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:25:53 crc kubenswrapper[4727]: I1121 20:25:53.695016 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/810aff49-a935-4a69-a3ad-2c26ab66ead0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "810aff49-a935-4a69-a3ad-2c26ab66ead0" (UID: "810aff49-a935-4a69-a3ad-2c26ab66ead0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:25:53 crc kubenswrapper[4727]: I1121 20:25:53.702937 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3aca9be-2c44-48aa-b561-ad796cee0014-kube-api-access-wk4xw" (OuterVolumeSpecName: "kube-api-access-wk4xw") pod "a3aca9be-2c44-48aa-b561-ad796cee0014" (UID: "a3aca9be-2c44-48aa-b561-ad796cee0014"). InnerVolumeSpecName "kube-api-access-wk4xw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:25:53 crc kubenswrapper[4727]: I1121 20:25:53.703852 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12ed8782-7884-43ef-98dc-7889dc0c4429-kube-api-access-zzmzf" (OuterVolumeSpecName: "kube-api-access-zzmzf") pod "12ed8782-7884-43ef-98dc-7889dc0c4429" (UID: "12ed8782-7884-43ef-98dc-7889dc0c4429"). InnerVolumeSpecName "kube-api-access-zzmzf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:25:53 crc kubenswrapper[4727]: I1121 20:25:53.705108 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74333548-19eb-4237-8b46-2fb41cc613ae-kube-api-access-c94f4" (OuterVolumeSpecName: "kube-api-access-c94f4") pod "74333548-19eb-4237-8b46-2fb41cc613ae" (UID: "74333548-19eb-4237-8b46-2fb41cc613ae"). InnerVolumeSpecName "kube-api-access-c94f4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:25:53 crc kubenswrapper[4727]: I1121 20:25:53.705335 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0343619-7950-468b-8f82-146e68397de5-kube-api-access-8ghdb" (OuterVolumeSpecName: "kube-api-access-8ghdb") pod "d0343619-7950-468b-8f82-146e68397de5" (UID: "d0343619-7950-468b-8f82-146e68397de5"). InnerVolumeSpecName "kube-api-access-8ghdb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:25:53 crc kubenswrapper[4727]: I1121 20:25:53.718495 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44911d7b-3990-408a-9143-c4735c2e2b0b-kube-api-access-lrng9" (OuterVolumeSpecName: "kube-api-access-lrng9") pod "44911d7b-3990-408a-9143-c4735c2e2b0b" (UID: "44911d7b-3990-408a-9143-c4735c2e2b0b"). InnerVolumeSpecName "kube-api-access-lrng9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:25:53 crc kubenswrapper[4727]: I1121 20:25:53.723192 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/810aff49-a935-4a69-a3ad-2c26ab66ead0-kube-api-access-9lq26" (OuterVolumeSpecName: "kube-api-access-9lq26") pod "810aff49-a935-4a69-a3ad-2c26ab66ead0" (UID: "810aff49-a935-4a69-a3ad-2c26ab66ead0"). InnerVolumeSpecName "kube-api-access-9lq26". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:25:53 crc kubenswrapper[4727]: I1121 20:25:53.825012 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-rpv9s" event={"ID":"e0672730-e181-488f-8472-a20c75dcb285","Type":"ContainerStarted","Data":"43635039285fa40bbed51545a6aac04365ccf5445cb08aa55fee7660bf4d089f"} Nov 21 20:25:53 crc kubenswrapper[4727]: I1121 20:25:53.854602 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a3aca9be-2c44-48aa-b561-ad796cee0014-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 20:25:53 crc kubenswrapper[4727]: I1121 20:25:53.854661 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrng9\" (UniqueName: \"kubernetes.io/projected/44911d7b-3990-408a-9143-c4735c2e2b0b-kube-api-access-lrng9\") on node \"crc\" DevicePath \"\"" Nov 21 20:25:53 crc kubenswrapper[4727]: I1121 20:25:53.854678 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9lq26\" (UniqueName: \"kubernetes.io/projected/810aff49-a935-4a69-a3ad-2c26ab66ead0-kube-api-access-9lq26\") on node \"crc\" DevicePath \"\"" Nov 21 20:25:53 crc kubenswrapper[4727]: I1121 20:25:53.854691 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c94f4\" (UniqueName: \"kubernetes.io/projected/74333548-19eb-4237-8b46-2fb41cc613ae-kube-api-access-c94f4\") on node \"crc\" DevicePath \"\"" Nov 21 20:25:53 crc kubenswrapper[4727]: I1121 20:25:53.854709 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/74333548-19eb-4237-8b46-2fb41cc613ae-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 20:25:53 crc kubenswrapper[4727]: I1121 20:25:53.854722 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12ed8782-7884-43ef-98dc-7889dc0c4429-operator-scripts\") on node 
\"crc\" DevicePath \"\"" Nov 21 20:25:53 crc kubenswrapper[4727]: I1121 20:25:53.854735 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44911d7b-3990-408a-9143-c4735c2e2b0b-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 20:25:53 crc kubenswrapper[4727]: I1121 20:25:53.854749 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzmzf\" (UniqueName: \"kubernetes.io/projected/12ed8782-7884-43ef-98dc-7889dc0c4429-kube-api-access-zzmzf\") on node \"crc\" DevicePath \"\"" Nov 21 20:25:53 crc kubenswrapper[4727]: I1121 20:25:53.854766 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wk4xw\" (UniqueName: \"kubernetes.io/projected/a3aca9be-2c44-48aa-b561-ad796cee0014-kube-api-access-wk4xw\") on node \"crc\" DevicePath \"\"" Nov 21 20:25:53 crc kubenswrapper[4727]: I1121 20:25:53.854781 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0343619-7950-468b-8f82-146e68397de5-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 20:25:53 crc kubenswrapper[4727]: I1121 20:25:53.854795 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/810aff49-a935-4a69-a3ad-2c26ab66ead0-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 20:25:53 crc kubenswrapper[4727]: I1121 20:25:53.854808 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ghdb\" (UniqueName: \"kubernetes.io/projected/d0343619-7950-468b-8f82-146e68397de5-kube-api-access-8ghdb\") on node \"crc\" DevicePath \"\"" Nov 21 20:25:53 crc kubenswrapper[4727]: I1121 20:25:53.856272 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-q7bm7" 
event={"ID":"44911d7b-3990-408a-9143-c4735c2e2b0b","Type":"ContainerDied","Data":"d8a703e24416869b7ab1064978c6a5aeb237833ddd7419da322c22077c7f20ee"} Nov 21 20:25:53 crc kubenswrapper[4727]: I1121 20:25:53.856317 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8a703e24416869b7ab1064978c6a5aeb237833ddd7419da322c22077c7f20ee" Nov 21 20:25:53 crc kubenswrapper[4727]: I1121 20:25:53.856423 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-q7bm7" Nov 21 20:25:53 crc kubenswrapper[4727]: I1121 20:25:53.872069 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-8b1a-account-create-gf7rm" event={"ID":"810aff49-a935-4a69-a3ad-2c26ab66ead0","Type":"ContainerDied","Data":"6b23e9f20a7fa1ff07d692996767a918e2d9ae2a6dd0673fcaaf95692382c346"} Nov 21 20:25:53 crc kubenswrapper[4727]: I1121 20:25:53.872121 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b23e9f20a7fa1ff07d692996767a918e2d9ae2a6dd0673fcaaf95692382c346" Nov 21 20:25:53 crc kubenswrapper[4727]: I1121 20:25:53.872250 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-8b1a-account-create-gf7rm" Nov 21 20:25:53 crc kubenswrapper[4727]: I1121 20:25:53.888692 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ec7d-account-create-lzk75" event={"ID":"12ed8782-7884-43ef-98dc-7889dc0c4429","Type":"ContainerDied","Data":"1c12cebc6c829ef5f375feef3bfc34a897647aa0cbdfbd8e28bd1fd4856763e2"} Nov 21 20:25:53 crc kubenswrapper[4727]: I1121 20:25:53.888740 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c12cebc6c829ef5f375feef3bfc34a897647aa0cbdfbd8e28bd1fd4856763e2" Nov 21 20:25:53 crc kubenswrapper[4727]: I1121 20:25:53.888840 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-ec7d-account-create-lzk75" Nov 21 20:25:53 crc kubenswrapper[4727]: I1121 20:25:53.914992 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9","Type":"ContainerStarted","Data":"9237dac257fd4e84599e61d55a85523edc75e3d272e39f834062f890ff3fe9cc"} Nov 21 20:25:53 crc kubenswrapper[4727]: I1121 20:25:53.938754 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-8738-account-create-8br5t" event={"ID":"d0343619-7950-468b-8f82-146e68397de5","Type":"ContainerDied","Data":"685644b76facb3ee8fb576b192dc8ccf4db7d8459097c08dd1eeef1942defef3"} Nov 21 20:25:53 crc kubenswrapper[4727]: I1121 20:25:53.938808 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="685644b76facb3ee8fb576b192dc8ccf4db7d8459097c08dd1eeef1942defef3" Nov 21 20:25:53 crc kubenswrapper[4727]: I1121 20:25:53.938884 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-8738-account-create-8br5t" Nov 21 20:25:53 crc kubenswrapper[4727]: I1121 20:25:53.942187 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-cm9v5" Nov 21 20:25:53 crc kubenswrapper[4727]: I1121 20:25:53.942192 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-cm9v5" event={"ID":"74333548-19eb-4237-8b46-2fb41cc613ae","Type":"ContainerDied","Data":"efc45f8dc2bfb9ed94c43bd606dd3f5006072a5dc6fb1c6d7f09674b612d2c25"} Nov 21 20:25:53 crc kubenswrapper[4727]: I1121 20:25:53.942233 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="efc45f8dc2bfb9ed94c43bd606dd3f5006072a5dc6fb1c6d7f09674b612d2c25" Nov 21 20:25:53 crc kubenswrapper[4727]: I1121 20:25:53.944898 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-fbkmd" event={"ID":"a3aca9be-2c44-48aa-b561-ad796cee0014","Type":"ContainerDied","Data":"162aeacfd2cae51fb0429a9f5bdb2d57205353f5f43e57541e957d345dd8d92a"} Nov 21 20:25:53 crc kubenswrapper[4727]: I1121 20:25:53.944929 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="162aeacfd2cae51fb0429a9f5bdb2d57205353f5f43e57541e957d345dd8d92a" Nov 21 20:25:53 crc kubenswrapper[4727]: I1121 20:25:53.945012 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-fbkmd" Nov 21 20:25:53 crc kubenswrapper[4727]: I1121 20:25:53.988951 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=12.921371768 podStartE2EDuration="56.988931831s" podCreationTimestamp="2025-11-21 20:24:57 +0000 UTC" firstStartedPulling="2025-11-21 20:25:09.403005601 +0000 UTC m=+1114.589190645" lastFinishedPulling="2025-11-21 20:25:53.470565664 +0000 UTC m=+1158.656750708" observedRunningTime="2025-11-21 20:25:53.967483913 +0000 UTC m=+1159.153699237" watchObservedRunningTime="2025-11-21 20:25:53.988931831 +0000 UTC m=+1159.175116875" Nov 21 20:25:54 crc kubenswrapper[4727]: I1121 20:25:54.302696 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Nov 21 20:25:55 crc kubenswrapper[4727]: I1121 20:25:55.079883 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/29faf340-95f4-4bd3-bd87-f2e971a0e494-etc-swift\") pod \"swift-storage-0\" (UID: \"29faf340-95f4-4bd3-bd87-f2e971a0e494\") " pod="openstack/swift-storage-0" Nov 21 20:25:55 crc kubenswrapper[4727]: E1121 20:25:55.080129 4727 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 21 20:25:55 crc kubenswrapper[4727]: E1121 20:25:55.080583 4727 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 21 20:25:55 crc kubenswrapper[4727]: E1121 20:25:55.080632 4727 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/29faf340-95f4-4bd3-bd87-f2e971a0e494-etc-swift podName:29faf340-95f4-4bd3-bd87-f2e971a0e494 nodeName:}" failed. No retries permitted until 2025-11-21 20:26:11.080617142 +0000 UTC m=+1176.266802186 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/29faf340-95f4-4bd3-bd87-f2e971a0e494-etc-swift") pod "swift-storage-0" (UID: "29faf340-95f4-4bd3-bd87-f2e971a0e494") : configmap "swift-ring-files" not found Nov 21 20:25:57 crc kubenswrapper[4727]: I1121 20:25:57.920715 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 21 20:25:57 crc kubenswrapper[4727]: I1121 20:25:57.982209 4727 generic.go:334] "Generic (PLEG): container finished" podID="7fdf0962-de4d-4f58-87d3-a6458e4ff980" containerID="1dcba278f2a6bdc24a7a739c9adfdd59cbdf5803ac695e1fccac73fad26f31fa" exitCode=0 Nov 21 20:25:57 crc kubenswrapper[4727]: I1121 20:25:57.982303 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7fdf0962-de4d-4f58-87d3-a6458e4ff980","Type":"ContainerDied","Data":"1dcba278f2a6bdc24a7a739c9adfdd59cbdf5803ac695e1fccac73fad26f31fa"} Nov 21 20:25:57 crc kubenswrapper[4727]: I1121 20:25:57.989472 4727 generic.go:334] "Generic (PLEG): container finished" podID="377d8548-a458-47c0-bd02-9904c8110d40" containerID="510940682d0d6a54421c5fcafc52c7deab4e7c172ef1e5d0680ab402b9779eb2" exitCode=0 Nov 21 20:25:57 crc kubenswrapper[4727]: I1121 20:25:57.989589 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"377d8548-a458-47c0-bd02-9904c8110d40","Type":"ContainerDied","Data":"510940682d0d6a54421c5fcafc52c7deab4e7c172ef1e5d0680ab402b9779eb2"} Nov 21 20:25:57 crc kubenswrapper[4727]: I1121 20:25:57.996669 4727 generic.go:334] "Generic (PLEG): container finished" podID="bf989668-3fb7-491d-abf0-e2991e327690" containerID="919036f94f3a96783cd975b0acc08b60fe5a835a07293243e5212935a00c525d" exitCode=0 Nov 21 20:25:57 crc kubenswrapper[4727]: I1121 20:25:57.996840 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-vrqcf" 
event={"ID":"bf989668-3fb7-491d-abf0-e2991e327690","Type":"ContainerDied","Data":"919036f94f3a96783cd975b0acc08b60fe5a835a07293243e5212935a00c525d"} Nov 21 20:25:58 crc kubenswrapper[4727]: I1121 20:25:58.746730 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Nov 21 20:25:58 crc kubenswrapper[4727]: E1121 20:25:58.747621 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0343619-7950-468b-8f82-146e68397de5" containerName="mariadb-account-create" Nov 21 20:25:58 crc kubenswrapper[4727]: I1121 20:25:58.747639 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0343619-7950-468b-8f82-146e68397de5" containerName="mariadb-account-create" Nov 21 20:25:58 crc kubenswrapper[4727]: E1121 20:25:58.747651 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="810aff49-a935-4a69-a3ad-2c26ab66ead0" containerName="mariadb-account-create" Nov 21 20:25:58 crc kubenswrapper[4727]: I1121 20:25:58.747656 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="810aff49-a935-4a69-a3ad-2c26ab66ead0" containerName="mariadb-account-create" Nov 21 20:25:58 crc kubenswrapper[4727]: E1121 20:25:58.747672 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12ed8782-7884-43ef-98dc-7889dc0c4429" containerName="mariadb-account-create" Nov 21 20:25:58 crc kubenswrapper[4727]: I1121 20:25:58.747677 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="12ed8782-7884-43ef-98dc-7889dc0c4429" containerName="mariadb-account-create" Nov 21 20:25:58 crc kubenswrapper[4727]: E1121 20:25:58.747690 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3aca9be-2c44-48aa-b561-ad796cee0014" containerName="mariadb-database-create" Nov 21 20:25:58 crc kubenswrapper[4727]: I1121 20:25:58.747696 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3aca9be-2c44-48aa-b561-ad796cee0014" containerName="mariadb-database-create" Nov 21 20:25:58 crc kubenswrapper[4727]: E1121 
20:25:58.747708 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74333548-19eb-4237-8b46-2fb41cc613ae" containerName="mariadb-database-create" Nov 21 20:25:58 crc kubenswrapper[4727]: I1121 20:25:58.747716 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="74333548-19eb-4237-8b46-2fb41cc613ae" containerName="mariadb-database-create" Nov 21 20:25:58 crc kubenswrapper[4727]: E1121 20:25:58.747732 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44911d7b-3990-408a-9143-c4735c2e2b0b" containerName="mariadb-database-create" Nov 21 20:25:58 crc kubenswrapper[4727]: I1121 20:25:58.747739 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="44911d7b-3990-408a-9143-c4735c2e2b0b" containerName="mariadb-database-create" Nov 21 20:25:58 crc kubenswrapper[4727]: I1121 20:25:58.747911 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3aca9be-2c44-48aa-b561-ad796cee0014" containerName="mariadb-database-create" Nov 21 20:25:58 crc kubenswrapper[4727]: I1121 20:25:58.747928 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="12ed8782-7884-43ef-98dc-7889dc0c4429" containerName="mariadb-account-create" Nov 21 20:25:58 crc kubenswrapper[4727]: I1121 20:25:58.747936 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="74333548-19eb-4237-8b46-2fb41cc613ae" containerName="mariadb-database-create" Nov 21 20:25:58 crc kubenswrapper[4727]: I1121 20:25:58.747944 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0343619-7950-468b-8f82-146e68397de5" containerName="mariadb-account-create" Nov 21 20:25:58 crc kubenswrapper[4727]: I1121 20:25:58.750108 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="810aff49-a935-4a69-a3ad-2c26ab66ead0" containerName="mariadb-account-create" Nov 21 20:25:58 crc kubenswrapper[4727]: I1121 20:25:58.750187 4727 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="44911d7b-3990-408a-9143-c4735c2e2b0b" containerName="mariadb-database-create"
Nov 21 20:25:58 crc kubenswrapper[4727]: I1121 20:25:58.751067 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0"
Nov 21 20:25:58 crc kubenswrapper[4727]: I1121 20:25:58.753215 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data"
Nov 21 20:25:58 crc kubenswrapper[4727]: I1121 20:25:58.770004 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"]
Nov 21 20:25:58 crc kubenswrapper[4727]: I1121 20:25:58.871110 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1eb35a6-ee1c-4bd1-bccb-a0fa44677308-config-data\") pod \"mysqld-exporter-0\" (UID: \"b1eb35a6-ee1c-4bd1-bccb-a0fa44677308\") " pod="openstack/mysqld-exporter-0"
Nov 21 20:25:58 crc kubenswrapper[4727]: I1121 20:25:58.871931 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1eb35a6-ee1c-4bd1-bccb-a0fa44677308-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"b1eb35a6-ee1c-4bd1-bccb-a0fa44677308\") " pod="openstack/mysqld-exporter-0"
Nov 21 20:25:58 crc kubenswrapper[4727]: I1121 20:25:58.872069 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tp9s\" (UniqueName: \"kubernetes.io/projected/b1eb35a6-ee1c-4bd1-bccb-a0fa44677308-kube-api-access-7tp9s\") pod \"mysqld-exporter-0\" (UID: \"b1eb35a6-ee1c-4bd1-bccb-a0fa44677308\") " pod="openstack/mysqld-exporter-0"
Nov 21 20:25:58 crc kubenswrapper[4727]: I1121 20:25:58.977685 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1eb35a6-ee1c-4bd1-bccb-a0fa44677308-config-data\") pod \"mysqld-exporter-0\" (UID: \"b1eb35a6-ee1c-4bd1-bccb-a0fa44677308\") " pod="openstack/mysqld-exporter-0"
Nov 21 20:25:58 crc kubenswrapper[4727]: I1121 20:25:58.977997 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1eb35a6-ee1c-4bd1-bccb-a0fa44677308-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"b1eb35a6-ee1c-4bd1-bccb-a0fa44677308\") " pod="openstack/mysqld-exporter-0"
Nov 21 20:25:58 crc kubenswrapper[4727]: I1121 20:25:58.978034 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tp9s\" (UniqueName: \"kubernetes.io/projected/b1eb35a6-ee1c-4bd1-bccb-a0fa44677308-kube-api-access-7tp9s\") pod \"mysqld-exporter-0\" (UID: \"b1eb35a6-ee1c-4bd1-bccb-a0fa44677308\") " pod="openstack/mysqld-exporter-0"
Nov 21 20:25:58 crc kubenswrapper[4727]: I1121 20:25:58.989434 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1eb35a6-ee1c-4bd1-bccb-a0fa44677308-config-data\") pod \"mysqld-exporter-0\" (UID: \"b1eb35a6-ee1c-4bd1-bccb-a0fa44677308\") " pod="openstack/mysqld-exporter-0"
Nov 21 20:25:58 crc kubenswrapper[4727]: I1121 20:25:58.989523 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1eb35a6-ee1c-4bd1-bccb-a0fa44677308-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"b1eb35a6-ee1c-4bd1-bccb-a0fa44677308\") " pod="openstack/mysqld-exporter-0"
Nov 21 20:25:58 crc kubenswrapper[4727]: I1121 20:25:58.995685 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tp9s\" (UniqueName: \"kubernetes.io/projected/b1eb35a6-ee1c-4bd1-bccb-a0fa44677308-kube-api-access-7tp9s\") pod \"mysqld-exporter-0\" (UID: \"b1eb35a6-ee1c-4bd1-bccb-a0fa44677308\") " pod="openstack/mysqld-exporter-0"
Nov 21 20:25:59 crc kubenswrapper[4727]: I1121 20:25:59.009704 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"377d8548-a458-47c0-bd02-9904c8110d40","Type":"ContainerStarted","Data":"7ae51f97094b081bca7a8688e0e987d5c172f8e4db19ef08a00c7feb59d2fad1"}
Nov 21 20:25:59 crc kubenswrapper[4727]: I1121 20:25:59.010064 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Nov 21 20:25:59 crc kubenswrapper[4727]: I1121 20:25:59.013372 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7fdf0962-de4d-4f58-87d3-a6458e4ff980","Type":"ContainerStarted","Data":"e32dc94e4f21e6b749466043c0d6ea4cf39dc9f8ed58122cfa0c27c84d7f3234"}
Nov 21 20:25:59 crc kubenswrapper[4727]: I1121 20:25:59.013607 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Nov 21 20:25:59 crc kubenswrapper[4727]: I1121 20:25:59.068726 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0"
Nov 21 20:25:59 crc kubenswrapper[4727]: I1121 20:25:59.077807 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=54.461894844 podStartE2EDuration="1m8.077787015s" podCreationTimestamp="2025-11-21 20:24:51 +0000 UTC" firstStartedPulling="2025-11-21 20:25:08.176007584 +0000 UTC m=+1113.362192628" lastFinishedPulling="2025-11-21 20:25:21.791899755 +0000 UTC m=+1126.978084799" observedRunningTime="2025-11-21 20:25:59.047458237 +0000 UTC m=+1164.233643271" watchObservedRunningTime="2025-11-21 20:25:59.077787015 +0000 UTC m=+1164.263972059"
Nov 21 20:25:59 crc kubenswrapper[4727]: I1121 20:25:59.303329 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0"
Nov 21 20:25:59 crc kubenswrapper[4727]: I1121 20:25:59.307007 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0"
Nov 21 20:25:59 crc kubenswrapper[4727]: I1121 20:25:59.354237 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=56.202177534 podStartE2EDuration="1m8.354215699s" podCreationTimestamp="2025-11-21 20:24:51 +0000 UTC" firstStartedPulling="2025-11-21 20:25:09.035309587 +0000 UTC m=+1114.221494631" lastFinishedPulling="2025-11-21 20:25:21.187347752 +0000 UTC m=+1126.373532796" observedRunningTime="2025-11-21 20:25:59.085334071 +0000 UTC m=+1164.271519115" watchObservedRunningTime="2025-11-21 20:25:59.354215699 +0000 UTC m=+1164.540400753"
Nov 21 20:25:59 crc kubenswrapper[4727]: I1121 20:25:59.672783 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"]
Nov 21 20:26:00 crc kubenswrapper[4727]: I1121 20:26:00.023377 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0"
Nov 21 20:26:01 crc kubenswrapper[4727]: I1121 20:26:01.942596 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-w77gs"
Nov 21 20:26:01 crc kubenswrapper[4727]: I1121 20:26:01.952991 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-w77gs"
Nov 21 20:26:02 crc kubenswrapper[4727]: I1121 20:26:02.234444 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-k8fk5-config-vg7c2"]
Nov 21 20:26:02 crc kubenswrapper[4727]: I1121 20:26:02.236539 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-k8fk5-config-vg7c2"
Nov 21 20:26:02 crc kubenswrapper[4727]: I1121 20:26:02.246393 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Nov 21 20:26:02 crc kubenswrapper[4727]: I1121 20:26:02.252769 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-k8fk5-config-vg7c2"]
Nov 21 20:26:02 crc kubenswrapper[4727]: I1121 20:26:02.371397 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/40f9cd99-3a9b-4287-9857-199b11d60281-var-run\") pod \"ovn-controller-k8fk5-config-vg7c2\" (UID: \"40f9cd99-3a9b-4287-9857-199b11d60281\") " pod="openstack/ovn-controller-k8fk5-config-vg7c2"
Nov 21 20:26:02 crc kubenswrapper[4727]: I1121 20:26:02.371452 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/40f9cd99-3a9b-4287-9857-199b11d60281-additional-scripts\") pod \"ovn-controller-k8fk5-config-vg7c2\" (UID: \"40f9cd99-3a9b-4287-9857-199b11d60281\") " pod="openstack/ovn-controller-k8fk5-config-vg7c2"
Nov 21 20:26:02 crc kubenswrapper[4727]: I1121 20:26:02.371498 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/40f9cd99-3a9b-4287-9857-199b11d60281-var-run-ovn\") pod \"ovn-controller-k8fk5-config-vg7c2\" (UID: \"40f9cd99-3a9b-4287-9857-199b11d60281\") " pod="openstack/ovn-controller-k8fk5-config-vg7c2"
Nov 21 20:26:02 crc kubenswrapper[4727]: I1121 20:26:02.371576 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/40f9cd99-3a9b-4287-9857-199b11d60281-scripts\") pod \"ovn-controller-k8fk5-config-vg7c2\" (UID: \"40f9cd99-3a9b-4287-9857-199b11d60281\") " pod="openstack/ovn-controller-k8fk5-config-vg7c2"
Nov 21 20:26:02 crc kubenswrapper[4727]: I1121 20:26:02.372978 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcr66\" (UniqueName: \"kubernetes.io/projected/40f9cd99-3a9b-4287-9857-199b11d60281-kube-api-access-jcr66\") pod \"ovn-controller-k8fk5-config-vg7c2\" (UID: \"40f9cd99-3a9b-4287-9857-199b11d60281\") " pod="openstack/ovn-controller-k8fk5-config-vg7c2"
Nov 21 20:26:02 crc kubenswrapper[4727]: I1121 20:26:02.373135 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/40f9cd99-3a9b-4287-9857-199b11d60281-var-log-ovn\") pod \"ovn-controller-k8fk5-config-vg7c2\" (UID: \"40f9cd99-3a9b-4287-9857-199b11d60281\") " pod="openstack/ovn-controller-k8fk5-config-vg7c2"
Nov 21 20:26:02 crc kubenswrapper[4727]: I1121 20:26:02.480357 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/40f9cd99-3a9b-4287-9857-199b11d60281-var-run-ovn\") pod \"ovn-controller-k8fk5-config-vg7c2\" (UID: \"40f9cd99-3a9b-4287-9857-199b11d60281\") " pod="openstack/ovn-controller-k8fk5-config-vg7c2"
Nov 21 20:26:02 crc kubenswrapper[4727]: I1121 20:26:02.480618 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/40f9cd99-3a9b-4287-9857-199b11d60281-scripts\") pod \"ovn-controller-k8fk5-config-vg7c2\" (UID: \"40f9cd99-3a9b-4287-9857-199b11d60281\") " pod="openstack/ovn-controller-k8fk5-config-vg7c2"
Nov 21 20:26:02 crc kubenswrapper[4727]: I1121 20:26:02.480890 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcr66\" (UniqueName: \"kubernetes.io/projected/40f9cd99-3a9b-4287-9857-199b11d60281-kube-api-access-jcr66\") pod \"ovn-controller-k8fk5-config-vg7c2\" (UID: \"40f9cd99-3a9b-4287-9857-199b11d60281\") " pod="openstack/ovn-controller-k8fk5-config-vg7c2"
Nov 21 20:26:02 crc kubenswrapper[4727]: I1121 20:26:02.481006 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/40f9cd99-3a9b-4287-9857-199b11d60281-var-log-ovn\") pod \"ovn-controller-k8fk5-config-vg7c2\" (UID: \"40f9cd99-3a9b-4287-9857-199b11d60281\") " pod="openstack/ovn-controller-k8fk5-config-vg7c2"
Nov 21 20:26:02 crc kubenswrapper[4727]: I1121 20:26:02.481080 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/40f9cd99-3a9b-4287-9857-199b11d60281-var-run\") pod \"ovn-controller-k8fk5-config-vg7c2\" (UID: \"40f9cd99-3a9b-4287-9857-199b11d60281\") " pod="openstack/ovn-controller-k8fk5-config-vg7c2"
Nov 21 20:26:02 crc kubenswrapper[4727]: I1121 20:26:02.481123 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/40f9cd99-3a9b-4287-9857-199b11d60281-additional-scripts\") pod \"ovn-controller-k8fk5-config-vg7c2\" (UID: \"40f9cd99-3a9b-4287-9857-199b11d60281\") " pod="openstack/ovn-controller-k8fk5-config-vg7c2"
Nov 21 20:26:02 crc kubenswrapper[4727]: I1121 20:26:02.481930 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/40f9cd99-3a9b-4287-9857-199b11d60281-additional-scripts\") pod \"ovn-controller-k8fk5-config-vg7c2\" (UID: \"40f9cd99-3a9b-4287-9857-199b11d60281\") " pod="openstack/ovn-controller-k8fk5-config-vg7c2"
Nov 21 20:26:02 crc kubenswrapper[4727]: I1121 20:26:02.482508 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/40f9cd99-3a9b-4287-9857-199b11d60281-var-log-ovn\") pod \"ovn-controller-k8fk5-config-vg7c2\" (UID: \"40f9cd99-3a9b-4287-9857-199b11d60281\") " pod="openstack/ovn-controller-k8fk5-config-vg7c2"
Nov 21 20:26:02 crc kubenswrapper[4727]: I1121 20:26:02.482563 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/40f9cd99-3a9b-4287-9857-199b11d60281-var-run-ovn\") pod \"ovn-controller-k8fk5-config-vg7c2\" (UID: \"40f9cd99-3a9b-4287-9857-199b11d60281\") " pod="openstack/ovn-controller-k8fk5-config-vg7c2"
Nov 21 20:26:02 crc kubenswrapper[4727]: I1121 20:26:02.482576 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/40f9cd99-3a9b-4287-9857-199b11d60281-var-run\") pod \"ovn-controller-k8fk5-config-vg7c2\" (UID: \"40f9cd99-3a9b-4287-9857-199b11d60281\") " pod="openstack/ovn-controller-k8fk5-config-vg7c2"
Nov 21 20:26:02 crc kubenswrapper[4727]: I1121 20:26:02.487335 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/40f9cd99-3a9b-4287-9857-199b11d60281-scripts\") pod \"ovn-controller-k8fk5-config-vg7c2\" (UID: \"40f9cd99-3a9b-4287-9857-199b11d60281\") " pod="openstack/ovn-controller-k8fk5-config-vg7c2"
Nov 21 20:26:02 crc kubenswrapper[4727]: I1121 20:26:02.516754 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcr66\" (UniqueName: \"kubernetes.io/projected/40f9cd99-3a9b-4287-9857-199b11d60281-kube-api-access-jcr66\") pod \"ovn-controller-k8fk5-config-vg7c2\" (UID: \"40f9cd99-3a9b-4287-9857-199b11d60281\") " pod="openstack/ovn-controller-k8fk5-config-vg7c2"
Nov 21 20:26:02 crc kubenswrapper[4727]: I1121 20:26:02.570910 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-k8fk5-config-vg7c2"
Nov 21 20:26:03 crc kubenswrapper[4727]: I1121 20:26:03.628369 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Nov 21 20:26:03 crc kubenswrapper[4727]: I1121 20:26:03.628882 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9" containerName="prometheus" containerID="cri-o://35535ba58a5198d1fa8adf03965f14564b087d25988ac8bdcdb7845859f4cc36" gracePeriod=600
Nov 21 20:26:03 crc kubenswrapper[4727]: I1121 20:26:03.629343 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9" containerName="thanos-sidecar" containerID="cri-o://9237dac257fd4e84599e61d55a85523edc75e3d272e39f834062f890ff3fe9cc" gracePeriod=600
Nov 21 20:26:03 crc kubenswrapper[4727]: I1121 20:26:03.629396 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9" containerName="config-reloader" containerID="cri-o://4700fc126259d977a4e2bff9636aae21565da7a6c14974ec5ad1f684fbb10b2b" gracePeriod=600
Nov 21 20:26:04 crc kubenswrapper[4727]: I1121 20:26:04.077940 4727 generic.go:334] "Generic (PLEG): container finished" podID="6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9" containerID="9237dac257fd4e84599e61d55a85523edc75e3d272e39f834062f890ff3fe9cc" exitCode=0
Nov 21 20:26:04 crc kubenswrapper[4727]: I1121 20:26:04.077998 4727 generic.go:334] "Generic (PLEG): container finished" podID="6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9" containerID="4700fc126259d977a4e2bff9636aae21565da7a6c14974ec5ad1f684fbb10b2b" exitCode=0
Nov 21 20:26:04 crc kubenswrapper[4727]: I1121 20:26:04.078006 4727 generic.go:334] "Generic (PLEG): container finished" podID="6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9" containerID="35535ba58a5198d1fa8adf03965f14564b087d25988ac8bdcdb7845859f4cc36" exitCode=0
Nov 21 20:26:04 crc kubenswrapper[4727]: I1121 20:26:04.078028 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9","Type":"ContainerDied","Data":"9237dac257fd4e84599e61d55a85523edc75e3d272e39f834062f890ff3fe9cc"}
Nov 21 20:26:04 crc kubenswrapper[4727]: I1121 20:26:04.078052 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9","Type":"ContainerDied","Data":"4700fc126259d977a4e2bff9636aae21565da7a6c14974ec5ad1f684fbb10b2b"}
Nov 21 20:26:04 crc kubenswrapper[4727]: I1121 20:26:04.078061 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9","Type":"ContainerDied","Data":"35535ba58a5198d1fa8adf03965f14564b087d25988ac8bdcdb7845859f4cc36"}
Nov 21 20:26:04 crc kubenswrapper[4727]: I1121 20:26:04.303691 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.139:9090/-/ready\": dial tcp 10.217.0.139:9090: connect: connection refused"
Nov 21 20:26:06 crc kubenswrapper[4727]: I1121 20:26:06.915685 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-k8fk5" podUID="5668e228-8946-468d-94e0-fa77489e46b3" containerName="ovn-controller" probeResult="failure" output=<
Nov 21 20:26:06 crc kubenswrapper[4727]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Nov 21 20:26:06 crc kubenswrapper[4727]: >
Nov 21 20:26:08 crc kubenswrapper[4727]: I1121 20:26:08.748264 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-vrqcf"
Nov 21 20:26:08 crc kubenswrapper[4727]: I1121 20:26:08.915776 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bf989668-3fb7-491d-abf0-e2991e327690-swiftconf\") pod \"bf989668-3fb7-491d-abf0-e2991e327690\" (UID: \"bf989668-3fb7-491d-abf0-e2991e327690\") "
Nov 21 20:26:08 crc kubenswrapper[4727]: I1121 20:26:08.916144 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bf989668-3fb7-491d-abf0-e2991e327690-dispersionconf\") pod \"bf989668-3fb7-491d-abf0-e2991e327690\" (UID: \"bf989668-3fb7-491d-abf0-e2991e327690\") "
Nov 21 20:26:08 crc kubenswrapper[4727]: I1121 20:26:08.916203 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bf989668-3fb7-491d-abf0-e2991e327690-scripts\") pod \"bf989668-3fb7-491d-abf0-e2991e327690\" (UID: \"bf989668-3fb7-491d-abf0-e2991e327690\") "
Nov 21 20:26:08 crc kubenswrapper[4727]: I1121 20:26:08.916301 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7c26\" (UniqueName: \"kubernetes.io/projected/bf989668-3fb7-491d-abf0-e2991e327690-kube-api-access-v7c26\") pod \"bf989668-3fb7-491d-abf0-e2991e327690\" (UID: \"bf989668-3fb7-491d-abf0-e2991e327690\") "
Nov 21 20:26:08 crc kubenswrapper[4727]: I1121 20:26:08.916515 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf989668-3fb7-491d-abf0-e2991e327690-combined-ca-bundle\") pod \"bf989668-3fb7-491d-abf0-e2991e327690\" (UID: \"bf989668-3fb7-491d-abf0-e2991e327690\") "
Nov 21 20:26:08 crc kubenswrapper[4727]: I1121 20:26:08.916637 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bf989668-3fb7-491d-abf0-e2991e327690-ring-data-devices\") pod \"bf989668-3fb7-491d-abf0-e2991e327690\" (UID: \"bf989668-3fb7-491d-abf0-e2991e327690\") "
Nov 21 20:26:08 crc kubenswrapper[4727]: I1121 20:26:08.916709 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bf989668-3fb7-491d-abf0-e2991e327690-etc-swift\") pod \"bf989668-3fb7-491d-abf0-e2991e327690\" (UID: \"bf989668-3fb7-491d-abf0-e2991e327690\") "
Nov 21 20:26:08 crc kubenswrapper[4727]: I1121 20:26:08.917877 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf989668-3fb7-491d-abf0-e2991e327690-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "bf989668-3fb7-491d-abf0-e2991e327690" (UID: "bf989668-3fb7-491d-abf0-e2991e327690"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 21 20:26:08 crc kubenswrapper[4727]: I1121 20:26:08.926306 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf989668-3fb7-491d-abf0-e2991e327690-kube-api-access-v7c26" (OuterVolumeSpecName: "kube-api-access-v7c26") pod "bf989668-3fb7-491d-abf0-e2991e327690" (UID: "bf989668-3fb7-491d-abf0-e2991e327690"). InnerVolumeSpecName "kube-api-access-v7c26". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 20:26:08 crc kubenswrapper[4727]: I1121 20:26:08.926902 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf989668-3fb7-491d-abf0-e2991e327690-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "bf989668-3fb7-491d-abf0-e2991e327690" (UID: "bf989668-3fb7-491d-abf0-e2991e327690"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 21 20:26:08 crc kubenswrapper[4727]: I1121 20:26:08.946873 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf989668-3fb7-491d-abf0-e2991e327690-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "bf989668-3fb7-491d-abf0-e2991e327690" (UID: "bf989668-3fb7-491d-abf0-e2991e327690"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 20:26:08 crc kubenswrapper[4727]: I1121 20:26:08.970340 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf989668-3fb7-491d-abf0-e2991e327690-scripts" (OuterVolumeSpecName: "scripts") pod "bf989668-3fb7-491d-abf0-e2991e327690" (UID: "bf989668-3fb7-491d-abf0-e2991e327690"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 21 20:26:08 crc kubenswrapper[4727]: I1121 20:26:08.972267 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf989668-3fb7-491d-abf0-e2991e327690-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bf989668-3fb7-491d-abf0-e2991e327690" (UID: "bf989668-3fb7-491d-abf0-e2991e327690"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 20:26:08 crc kubenswrapper[4727]: I1121 20:26:08.988106 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf989668-3fb7-491d-abf0-e2991e327690-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "bf989668-3fb7-491d-abf0-e2991e327690" (UID: "bf989668-3fb7-491d-abf0-e2991e327690"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 20:26:08 crc kubenswrapper[4727]: I1121 20:26:08.991637 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.022747 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9-thanos-prometheus-http-client-file\") pod \"6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9\" (UID: \"6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9\") "
Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.022801 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9-config\") pod \"6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9\" (UID: \"6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9\") "
Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.022882 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9-tls-assets\") pod \"6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9\" (UID: \"6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9\") "
Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.022911 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9-prometheus-metric-storage-rulefiles-0\") pod \"6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9\" (UID: \"6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9\") "
Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.022937 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mddm9\" (UniqueName: \"kubernetes.io/projected/6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9-kube-api-access-mddm9\") pod \"6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9\" (UID: \"6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9\") "
Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.023050 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9-web-config\") pod \"6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9\" (UID: \"6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9\") "
Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.023075 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9-config-out\") pod \"6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9\" (UID: \"6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9\") "
Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.023201 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-28a56d6d-2191-4030-b4ae-b2b366390f3e\") pod \"6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9\" (UID: \"6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9\") "
Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.023595 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf989668-3fb7-491d-abf0-e2991e327690-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.023606 4727 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bf989668-3fb7-491d-abf0-e2991e327690-ring-data-devices\") on node \"crc\" DevicePath \"\""
Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.023615 4727 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bf989668-3fb7-491d-abf0-e2991e327690-etc-swift\") on node \"crc\" DevicePath \"\""
Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.023624 4727 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bf989668-3fb7-491d-abf0-e2991e327690-swiftconf\") on node \"crc\" DevicePath \"\""
Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.023631 4727 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bf989668-3fb7-491d-abf0-e2991e327690-dispersionconf\") on node \"crc\" DevicePath \"\""
Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.023640 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bf989668-3fb7-491d-abf0-e2991e327690-scripts\") on node \"crc\" DevicePath \"\""
Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.023648 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7c26\" (UniqueName: \"kubernetes.io/projected/bf989668-3fb7-491d-abf0-e2991e327690-kube-api-access-v7c26\") on node \"crc\" DevicePath \"\""
Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.027949 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9" (UID: "6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.032060 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9" (UID: "6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.036901 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9" (UID: "6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.037623 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9-kube-api-access-mddm9" (OuterVolumeSpecName: "kube-api-access-mddm9") pod "6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9" (UID: "6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9"). InnerVolumeSpecName "kube-api-access-mddm9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.037836 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9-config-out" (OuterVolumeSpecName: "config-out") pod "6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9" (UID: "6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.051122 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9-config" (OuterVolumeSpecName: "config") pod "6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9" (UID: "6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.052535 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-28a56d6d-2191-4030-b4ae-b2b366390f3e" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9" (UID: "6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9"). InnerVolumeSpecName "pvc-28a56d6d-2191-4030-b4ae-b2b366390f3e". PluginName "kubernetes.io/csi", VolumeGidValue ""
Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.092867 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9-web-config" (OuterVolumeSpecName: "web-config") pod "6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9" (UID: "6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.127040 4727 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-28a56d6d-2191-4030-b4ae-b2b366390f3e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-28a56d6d-2191-4030-b4ae-b2b366390f3e\") on node \"crc\" "
Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.127285 4727 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\""
Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.127360 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9-config\") on node \"crc\" DevicePath \"\""
Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.127420 4727 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9-tls-assets\") on node \"crc\" DevicePath \"\""
Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.127493 4727 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\""
Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.127565 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mddm9\" (UniqueName: \"kubernetes.io/projected/6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9-kube-api-access-mddm9\") on node \"crc\" DevicePath \"\""
Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.127631 4727 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9-web-config\") on node \"crc\" DevicePath \"\""
Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.127697 4727 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9-config-out\") on node \"crc\" DevicePath \"\""
Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.178396 4727 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.178578 4727 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-28a56d6d-2191-4030-b4ae-b2b366390f3e" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-28a56d6d-2191-4030-b4ae-b2b366390f3e") on node "crc"
Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.178775 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-vrqcf"
Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.178968 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-vrqcf" event={"ID":"bf989668-3fb7-491d-abf0-e2991e327690","Type":"ContainerDied","Data":"8af7e10c566f667abb81b849180834b5392de01d4a9cb932cda761b099ec2571"}
Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.179001 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8af7e10c566f667abb81b849180834b5392de01d4a9cb932cda761b099ec2571"
Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.198391 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.198481 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9","Type":"ContainerDied","Data":"b90d3e3a3d6e2a2ffac39a1a06436dbdeb46b9ad9f2cefb6cbdab9dd0657a43e"}
Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.198551 4727 scope.go:117] "RemoveContainer" containerID="9237dac257fd4e84599e61d55a85523edc75e3d272e39f834062f890ff3fe9cc"
Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.213751 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"b1eb35a6-ee1c-4bd1-bccb-a0fa44677308","Type":"ContainerStarted","Data":"b114006573f1571332cadfd5c445188444200c10a050b3de8d1b626f8bc7e37b"}
Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.228518 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-k8fk5-config-vg7c2"]
Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.229834 4727 reconciler_common.go:293] "Volume detached for volume \"pvc-28a56d6d-2191-4030-b4ae-b2b366390f3e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-28a56d6d-2191-4030-b4ae-b2b366390f3e\") on node \"crc\" DevicePath \"\""
Nov 21 20:26:09 crc kubenswrapper[4727]: W1121 20:26:09.288264 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod40f9cd99_3a9b_4287_9857_199b11d60281.slice/crio-e74ba9b4aa88fd01c80777893070939c1cb4ceb914dbd5fa2fe5b60c2dff6b4e WatchSource:0}: Error finding container e74ba9b4aa88fd01c80777893070939c1cb4ceb914dbd5fa2fe5b60c2dff6b4e: Status 404 returned error can't find the container with id e74ba9b4aa88fd01c80777893070939c1cb4ceb914dbd5fa2fe5b60c2dff6b4e
Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.300034 4727 kubelet.go:2437] "SyncLoop DELETE" source="api"
pods=["openstack/prometheus-metric-storage-0"] Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.328670 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.340199 4727 scope.go:117] "RemoveContainer" containerID="4700fc126259d977a4e2bff9636aae21565da7a6c14974ec5ad1f684fbb10b2b" Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.358240 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 21 20:26:09 crc kubenswrapper[4727]: E1121 20:26:09.361421 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf989668-3fb7-491d-abf0-e2991e327690" containerName="swift-ring-rebalance" Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.361445 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf989668-3fb7-491d-abf0-e2991e327690" containerName="swift-ring-rebalance" Nov 21 20:26:09 crc kubenswrapper[4727]: E1121 20:26:09.361464 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9" containerName="thanos-sidecar" Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.361471 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9" containerName="thanos-sidecar" Nov 21 20:26:09 crc kubenswrapper[4727]: E1121 20:26:09.361486 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9" containerName="config-reloader" Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.361494 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9" containerName="config-reloader" Nov 21 20:26:09 crc kubenswrapper[4727]: E1121 20:26:09.361502 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9" containerName="prometheus" Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 
20:26:09.361508 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9" containerName="prometheus" Nov 21 20:26:09 crc kubenswrapper[4727]: E1121 20:26:09.361534 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9" containerName="init-config-reloader" Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.361540 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9" containerName="init-config-reloader" Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.361762 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9" containerName="thanos-sidecar" Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.361781 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf989668-3fb7-491d-abf0-e2991e327690" containerName="swift-ring-rebalance" Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.361797 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9" containerName="prometheus" Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.361805 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9" containerName="config-reloader" Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.366396 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.387751 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.388026 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.388159 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.388266 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.388392 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.388426 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-9k7cn" Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.389562 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.395602 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.420144 4727 scope.go:117] "RemoveContainer" containerID="35535ba58a5198d1fa8adf03965f14564b087d25988ac8bdcdb7845859f4cc36" Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.450991 4727 scope.go:117] "RemoveContainer" containerID="a60035ecfc2b7d39e6d8ab8abf474eb842e9c0917ab85f7855e713d1e2f4b603" Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.518706 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9" path="/var/lib/kubelet/pods/6bd2bd10-1a46-43ba-9a88-a97a0be6a6a9/volumes" Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.545214 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ff5d12dc-e65b-41f0-b29b-5c4eea0fada2-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"ff5d12dc-e65b-41f0-b29b-5c4eea0fada2\") " pod="openstack/prometheus-metric-storage-0" Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.545559 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/ff5d12dc-e65b-41f0-b29b-5c4eea0fada2-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"ff5d12dc-e65b-41f0-b29b-5c4eea0fada2\") " pod="openstack/prometheus-metric-storage-0" Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.545601 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ff5d12dc-e65b-41f0-b29b-5c4eea0fada2-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"ff5d12dc-e65b-41f0-b29b-5c4eea0fada2\") " pod="openstack/prometheus-metric-storage-0" Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.545639 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ff5d12dc-e65b-41f0-b29b-5c4eea0fada2-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"ff5d12dc-e65b-41f0-b29b-5c4eea0fada2\") " pod="openstack/prometheus-metric-storage-0" Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.545689 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-wnzkl\" (UniqueName: \"kubernetes.io/projected/ff5d12dc-e65b-41f0-b29b-5c4eea0fada2-kube-api-access-wnzkl\") pod \"prometheus-metric-storage-0\" (UID: \"ff5d12dc-e65b-41f0-b29b-5c4eea0fada2\") " pod="openstack/prometheus-metric-storage-0" Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.545716 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ff5d12dc-e65b-41f0-b29b-5c4eea0fada2-config\") pod \"prometheus-metric-storage-0\" (UID: \"ff5d12dc-e65b-41f0-b29b-5c4eea0fada2\") " pod="openstack/prometheus-metric-storage-0" Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.545754 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-28a56d6d-2191-4030-b4ae-b2b366390f3e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-28a56d6d-2191-4030-b4ae-b2b366390f3e\") pod \"prometheus-metric-storage-0\" (UID: \"ff5d12dc-e65b-41f0-b29b-5c4eea0fada2\") " pod="openstack/prometheus-metric-storage-0" Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.545802 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/ff5d12dc-e65b-41f0-b29b-5c4eea0fada2-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"ff5d12dc-e65b-41f0-b29b-5c4eea0fada2\") " pod="openstack/prometheus-metric-storage-0" Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.545889 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/ff5d12dc-e65b-41f0-b29b-5c4eea0fada2-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"ff5d12dc-e65b-41f0-b29b-5c4eea0fada2\") " pod="openstack/prometheus-metric-storage-0" Nov 21 
20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.545913 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/ff5d12dc-e65b-41f0-b29b-5c4eea0fada2-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"ff5d12dc-e65b-41f0-b29b-5c4eea0fada2\") " pod="openstack/prometheus-metric-storage-0" Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.545937 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff5d12dc-e65b-41f0-b29b-5c4eea0fada2-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"ff5d12dc-e65b-41f0-b29b-5c4eea0fada2\") " pod="openstack/prometheus-metric-storage-0" Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.647625 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ff5d12dc-e65b-41f0-b29b-5c4eea0fada2-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"ff5d12dc-e65b-41f0-b29b-5c4eea0fada2\") " pod="openstack/prometheus-metric-storage-0" Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.647708 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/ff5d12dc-e65b-41f0-b29b-5c4eea0fada2-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"ff5d12dc-e65b-41f0-b29b-5c4eea0fada2\") " pod="openstack/prometheus-metric-storage-0" Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.647758 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/ff5d12dc-e65b-41f0-b29b-5c4eea0fada2-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"ff5d12dc-e65b-41f0-b29b-5c4eea0fada2\") " pod="openstack/prometheus-metric-storage-0" Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.647812 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ff5d12dc-e65b-41f0-b29b-5c4eea0fada2-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"ff5d12dc-e65b-41f0-b29b-5c4eea0fada2\") " pod="openstack/prometheus-metric-storage-0" Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.647866 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnzkl\" (UniqueName: \"kubernetes.io/projected/ff5d12dc-e65b-41f0-b29b-5c4eea0fada2-kube-api-access-wnzkl\") pod \"prometheus-metric-storage-0\" (UID: \"ff5d12dc-e65b-41f0-b29b-5c4eea0fada2\") " pod="openstack/prometheus-metric-storage-0" Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.647897 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ff5d12dc-e65b-41f0-b29b-5c4eea0fada2-config\") pod \"prometheus-metric-storage-0\" (UID: \"ff5d12dc-e65b-41f0-b29b-5c4eea0fada2\") " pod="openstack/prometheus-metric-storage-0" Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.647974 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-28a56d6d-2191-4030-b4ae-b2b366390f3e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-28a56d6d-2191-4030-b4ae-b2b366390f3e\") pod \"prometheus-metric-storage-0\" (UID: \"ff5d12dc-e65b-41f0-b29b-5c4eea0fada2\") " pod="openstack/prometheus-metric-storage-0" Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.648013 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/ff5d12dc-e65b-41f0-b29b-5c4eea0fada2-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"ff5d12dc-e65b-41f0-b29b-5c4eea0fada2\") " pod="openstack/prometheus-metric-storage-0" Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.648053 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/ff5d12dc-e65b-41f0-b29b-5c4eea0fada2-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"ff5d12dc-e65b-41f0-b29b-5c4eea0fada2\") " pod="openstack/prometheus-metric-storage-0" Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.648078 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/ff5d12dc-e65b-41f0-b29b-5c4eea0fada2-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"ff5d12dc-e65b-41f0-b29b-5c4eea0fada2\") " pod="openstack/prometheus-metric-storage-0" Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.648116 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff5d12dc-e65b-41f0-b29b-5c4eea0fada2-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"ff5d12dc-e65b-41f0-b29b-5c4eea0fada2\") " pod="openstack/prometheus-metric-storage-0" Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.651659 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/ff5d12dc-e65b-41f0-b29b-5c4eea0fada2-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"ff5d12dc-e65b-41f0-b29b-5c4eea0fada2\") " pod="openstack/prometheus-metric-storage-0" Nov 21 20:26:09 crc 
kubenswrapper[4727]: I1121 20:26:09.654652 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ff5d12dc-e65b-41f0-b29b-5c4eea0fada2-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"ff5d12dc-e65b-41f0-b29b-5c4eea0fada2\") " pod="openstack/prometheus-metric-storage-0" Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.656482 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/ff5d12dc-e65b-41f0-b29b-5c4eea0fada2-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"ff5d12dc-e65b-41f0-b29b-5c4eea0fada2\") " pod="openstack/prometheus-metric-storage-0" Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.659817 4727 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.662107 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-28a56d6d-2191-4030-b4ae-b2b366390f3e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-28a56d6d-2191-4030-b4ae-b2b366390f3e\") pod \"prometheus-metric-storage-0\" (UID: \"ff5d12dc-e65b-41f0-b29b-5c4eea0fada2\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/9c78e1afd9d89ab6f94fe88434c0105238785281c928164a17724110dcc72275/globalmount\"" pod="openstack/prometheus-metric-storage-0" Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.661402 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/ff5d12dc-e65b-41f0-b29b-5c4eea0fada2-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"ff5d12dc-e65b-41f0-b29b-5c4eea0fada2\") " pod="openstack/prometheus-metric-storage-0" Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.662782 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/ff5d12dc-e65b-41f0-b29b-5c4eea0fada2-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"ff5d12dc-e65b-41f0-b29b-5c4eea0fada2\") " pod="openstack/prometheus-metric-storage-0" Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.662818 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ff5d12dc-e65b-41f0-b29b-5c4eea0fada2-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"ff5d12dc-e65b-41f0-b29b-5c4eea0fada2\") " pod="openstack/prometheus-metric-storage-0" Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.662794 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"web-config\" (UniqueName: \"kubernetes.io/secret/ff5d12dc-e65b-41f0-b29b-5c4eea0fada2-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"ff5d12dc-e65b-41f0-b29b-5c4eea0fada2\") " pod="openstack/prometheus-metric-storage-0" Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.666908 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ff5d12dc-e65b-41f0-b29b-5c4eea0fada2-config\") pod \"prometheus-metric-storage-0\" (UID: \"ff5d12dc-e65b-41f0-b29b-5c4eea0fada2\") " pod="openstack/prometheus-metric-storage-0" Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.667073 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff5d12dc-e65b-41f0-b29b-5c4eea0fada2-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"ff5d12dc-e65b-41f0-b29b-5c4eea0fada2\") " pod="openstack/prometheus-metric-storage-0" Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.671668 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnzkl\" (UniqueName: \"kubernetes.io/projected/ff5d12dc-e65b-41f0-b29b-5c4eea0fada2-kube-api-access-wnzkl\") pod \"prometheus-metric-storage-0\" (UID: \"ff5d12dc-e65b-41f0-b29b-5c4eea0fada2\") " pod="openstack/prometheus-metric-storage-0" Nov 21 20:26:09 crc kubenswrapper[4727]: I1121 20:26:09.708878 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-28a56d6d-2191-4030-b4ae-b2b366390f3e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-28a56d6d-2191-4030-b4ae-b2b366390f3e\") pod \"prometheus-metric-storage-0\" (UID: \"ff5d12dc-e65b-41f0-b29b-5c4eea0fada2\") " pod="openstack/prometheus-metric-storage-0" Nov 21 20:26:10 crc kubenswrapper[4727]: I1121 20:26:10.004190 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 21 20:26:10 crc kubenswrapper[4727]: I1121 20:26:10.226710 4727 generic.go:334] "Generic (PLEG): container finished" podID="40f9cd99-3a9b-4287-9857-199b11d60281" containerID="fcd4b4ef52041fa2b8beb81fe4ae14fd4298a2d480a2569b33222fa2be49d19f" exitCode=0 Nov 21 20:26:10 crc kubenswrapper[4727]: I1121 20:26:10.226776 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-k8fk5-config-vg7c2" event={"ID":"40f9cd99-3a9b-4287-9857-199b11d60281","Type":"ContainerDied","Data":"fcd4b4ef52041fa2b8beb81fe4ae14fd4298a2d480a2569b33222fa2be49d19f"} Nov 21 20:26:10 crc kubenswrapper[4727]: I1121 20:26:10.226801 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-k8fk5-config-vg7c2" event={"ID":"40f9cd99-3a9b-4287-9857-199b11d60281","Type":"ContainerStarted","Data":"e74ba9b4aa88fd01c80777893070939c1cb4ceb914dbd5fa2fe5b60c2dff6b4e"} Nov 21 20:26:10 crc kubenswrapper[4727]: I1121 20:26:10.228875 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-rpv9s" event={"ID":"e0672730-e181-488f-8472-a20c75dcb285","Type":"ContainerStarted","Data":"5b39c31d65eca4802f74d79f0758849ca806f2ae4dd64e00ff1bec14fdeedf2e"} Nov 21 20:26:10 crc kubenswrapper[4727]: I1121 20:26:10.275191 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-rpv9s" podStartSLOduration=4.047475142 podStartE2EDuration="19.275167119s" podCreationTimestamp="2025-11-21 20:25:51 +0000 UTC" firstStartedPulling="2025-11-21 20:25:53.432932846 +0000 UTC m=+1158.619117900" lastFinishedPulling="2025-11-21 20:26:08.660624813 +0000 UTC m=+1173.846809877" observedRunningTime="2025-11-21 20:26:10.271646432 +0000 UTC m=+1175.457831476" watchObservedRunningTime="2025-11-21 20:26:10.275167119 +0000 UTC m=+1175.461352163" Nov 21 20:26:11 crc kubenswrapper[4727]: I1121 20:26:11.177756 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/29faf340-95f4-4bd3-bd87-f2e971a0e494-etc-swift\") pod \"swift-storage-0\" (UID: \"29faf340-95f4-4bd3-bd87-f2e971a0e494\") " pod="openstack/swift-storage-0" Nov 21 20:26:11 crc kubenswrapper[4727]: I1121 20:26:11.200664 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/29faf340-95f4-4bd3-bd87-f2e971a0e494-etc-swift\") pod \"swift-storage-0\" (UID: \"29faf340-95f4-4bd3-bd87-f2e971a0e494\") " pod="openstack/swift-storage-0" Nov 21 20:26:11 crc kubenswrapper[4727]: I1121 20:26:11.241869 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"b1eb35a6-ee1c-4bd1-bccb-a0fa44677308","Type":"ContainerStarted","Data":"031674f12ea3df1cf447a01260fd7d4f6d5168022609689228fc1042f1223603"} Nov 21 20:26:11 crc kubenswrapper[4727]: I1121 20:26:11.270110 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=11.396201519 podStartE2EDuration="13.270089013s" podCreationTimestamp="2025-11-21 20:25:58 +0000 UTC" firstStartedPulling="2025-11-21 20:26:08.559337405 +0000 UTC m=+1173.745522439" lastFinishedPulling="2025-11-21 20:26:10.433224889 +0000 UTC m=+1175.619409933" observedRunningTime="2025-11-21 20:26:11.262110644 +0000 UTC m=+1176.448295708" watchObservedRunningTime="2025-11-21 20:26:11.270089013 +0000 UTC m=+1176.456274067" Nov 21 20:26:11 crc kubenswrapper[4727]: I1121 20:26:11.408466 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 21 20:26:11 crc kubenswrapper[4727]: I1121 20:26:11.486549 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Nov 21 20:26:11 crc kubenswrapper[4727]: I1121 20:26:11.649651 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-k8fk5-config-vg7c2" Nov 21 20:26:11 crc kubenswrapper[4727]: I1121 20:26:11.794103 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/40f9cd99-3a9b-4287-9857-199b11d60281-scripts\") pod \"40f9cd99-3a9b-4287-9857-199b11d60281\" (UID: \"40f9cd99-3a9b-4287-9857-199b11d60281\") " Nov 21 20:26:11 crc kubenswrapper[4727]: I1121 20:26:11.794170 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/40f9cd99-3a9b-4287-9857-199b11d60281-var-log-ovn\") pod \"40f9cd99-3a9b-4287-9857-199b11d60281\" (UID: \"40f9cd99-3a9b-4287-9857-199b11d60281\") " Nov 21 20:26:11 crc kubenswrapper[4727]: I1121 20:26:11.794206 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/40f9cd99-3a9b-4287-9857-199b11d60281-additional-scripts\") pod \"40f9cd99-3a9b-4287-9857-199b11d60281\" (UID: \"40f9cd99-3a9b-4287-9857-199b11d60281\") " Nov 21 20:26:11 crc kubenswrapper[4727]: I1121 20:26:11.794264 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jcr66\" (UniqueName: \"kubernetes.io/projected/40f9cd99-3a9b-4287-9857-199b11d60281-kube-api-access-jcr66\") pod \"40f9cd99-3a9b-4287-9857-199b11d60281\" (UID: \"40f9cd99-3a9b-4287-9857-199b11d60281\") " Nov 21 20:26:11 crc kubenswrapper[4727]: I1121 20:26:11.794363 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/40f9cd99-3a9b-4287-9857-199b11d60281-var-run-ovn\") pod \"40f9cd99-3a9b-4287-9857-199b11d60281\" (UID: \"40f9cd99-3a9b-4287-9857-199b11d60281\") " Nov 21 20:26:11 crc kubenswrapper[4727]: I1121 20:26:11.794450 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" 
(UniqueName: \"kubernetes.io/host-path/40f9cd99-3a9b-4287-9857-199b11d60281-var-run\") pod \"40f9cd99-3a9b-4287-9857-199b11d60281\" (UID: \"40f9cd99-3a9b-4287-9857-199b11d60281\") " Nov 21 20:26:11 crc kubenswrapper[4727]: I1121 20:26:11.794351 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40f9cd99-3a9b-4287-9857-199b11d60281-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "40f9cd99-3a9b-4287-9857-199b11d60281" (UID: "40f9cd99-3a9b-4287-9857-199b11d60281"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 20:26:11 crc kubenswrapper[4727]: I1121 20:26:11.794934 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40f9cd99-3a9b-4287-9857-199b11d60281-var-run" (OuterVolumeSpecName: "var-run") pod "40f9cd99-3a9b-4287-9857-199b11d60281" (UID: "40f9cd99-3a9b-4287-9857-199b11d60281"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 20:26:11 crc kubenswrapper[4727]: I1121 20:26:11.795001 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40f9cd99-3a9b-4287-9857-199b11d60281-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "40f9cd99-3a9b-4287-9857-199b11d60281" (UID: "40f9cd99-3a9b-4287-9857-199b11d60281"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:26:11 crc kubenswrapper[4727]: I1121 20:26:11.794969 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40f9cd99-3a9b-4287-9857-199b11d60281-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "40f9cd99-3a9b-4287-9857-199b11d60281" (UID: "40f9cd99-3a9b-4287-9857-199b11d60281"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 20:26:11 crc kubenswrapper[4727]: I1121 20:26:11.795624 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40f9cd99-3a9b-4287-9857-199b11d60281-scripts" (OuterVolumeSpecName: "scripts") pod "40f9cd99-3a9b-4287-9857-199b11d60281" (UID: "40f9cd99-3a9b-4287-9857-199b11d60281"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:26:11 crc kubenswrapper[4727]: I1121 20:26:11.799552 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40f9cd99-3a9b-4287-9857-199b11d60281-kube-api-access-jcr66" (OuterVolumeSpecName: "kube-api-access-jcr66") pod "40f9cd99-3a9b-4287-9857-199b11d60281" (UID: "40f9cd99-3a9b-4287-9857-199b11d60281"). InnerVolumeSpecName "kube-api-access-jcr66". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:26:11 crc kubenswrapper[4727]: I1121 20:26:11.896288 4727 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/40f9cd99-3a9b-4287-9857-199b11d60281-var-run\") on node \"crc\" DevicePath \"\"" Nov 21 20:26:11 crc kubenswrapper[4727]: I1121 20:26:11.896321 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/40f9cd99-3a9b-4287-9857-199b11d60281-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 20:26:11 crc kubenswrapper[4727]: I1121 20:26:11.896332 4727 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/40f9cd99-3a9b-4287-9857-199b11d60281-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 21 20:26:11 crc kubenswrapper[4727]: I1121 20:26:11.896343 4727 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/40f9cd99-3a9b-4287-9857-199b11d60281-additional-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 
20:26:11 crc kubenswrapper[4727]: I1121 20:26:11.896355 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jcr66\" (UniqueName: \"kubernetes.io/projected/40f9cd99-3a9b-4287-9857-199b11d60281-kube-api-access-jcr66\") on node \"crc\" DevicePath \"\"" Nov 21 20:26:11 crc kubenswrapper[4727]: I1121 20:26:11.896366 4727 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/40f9cd99-3a9b-4287-9857-199b11d60281-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 21 20:26:12 crc kubenswrapper[4727]: I1121 20:26:12.046190 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-k8fk5" Nov 21 20:26:12 crc kubenswrapper[4727]: I1121 20:26:12.101396 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 21 20:26:12 crc kubenswrapper[4727]: I1121 20:26:12.251184 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ff5d12dc-e65b-41f0-b29b-5c4eea0fada2","Type":"ContainerStarted","Data":"f7b7478d69e0242bd443a2b513ebcb05abd957da742b56c7cffe6dc6aa83d149"} Nov 21 20:26:12 crc kubenswrapper[4727]: I1121 20:26:12.252224 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"29faf340-95f4-4bd3-bd87-f2e971a0e494","Type":"ContainerStarted","Data":"ece081b40f8a159067cb17d30ab8e169be052f0e81c162a698e8ecebb6403bd8"} Nov 21 20:26:12 crc kubenswrapper[4727]: I1121 20:26:12.254197 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-k8fk5-config-vg7c2" event={"ID":"40f9cd99-3a9b-4287-9857-199b11d60281","Type":"ContainerDied","Data":"e74ba9b4aa88fd01c80777893070939c1cb4ceb914dbd5fa2fe5b60c2dff6b4e"} Nov 21 20:26:12 crc kubenswrapper[4727]: I1121 20:26:12.254223 4727 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="e74ba9b4aa88fd01c80777893070939c1cb4ceb914dbd5fa2fe5b60c2dff6b4e" Nov 21 20:26:12 crc kubenswrapper[4727]: I1121 20:26:12.254225 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-k8fk5-config-vg7c2" Nov 21 20:26:12 crc kubenswrapper[4727]: I1121 20:26:12.751703 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-k8fk5-config-vg7c2"] Nov 21 20:26:12 crc kubenswrapper[4727]: I1121 20:26:12.759674 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-k8fk5-config-vg7c2"] Nov 21 20:26:12 crc kubenswrapper[4727]: I1121 20:26:12.999167 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 21 20:26:13 crc kubenswrapper[4727]: I1121 20:26:13.094150 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:26:13 crc kubenswrapper[4727]: I1121 20:26:13.335641 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 20:26:13 crc kubenswrapper[4727]: I1121 20:26:13.335996 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 20:26:13 crc kubenswrapper[4727]: I1121 20:26:13.440025 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-xsqp9"] Nov 21 20:26:13 crc kubenswrapper[4727]: E1121 20:26:13.440537 4727 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="40f9cd99-3a9b-4287-9857-199b11d60281" containerName="ovn-config" Nov 21 20:26:13 crc kubenswrapper[4727]: I1121 20:26:13.440561 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="40f9cd99-3a9b-4287-9857-199b11d60281" containerName="ovn-config" Nov 21 20:26:13 crc kubenswrapper[4727]: I1121 20:26:13.440826 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="40f9cd99-3a9b-4287-9857-199b11d60281" containerName="ovn-config" Nov 21 20:26:13 crc kubenswrapper[4727]: I1121 20:26:13.450946 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-xsqp9" Nov 21 20:26:13 crc kubenswrapper[4727]: I1121 20:26:13.454273 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-xsqp9"] Nov 21 20:26:13 crc kubenswrapper[4727]: I1121 20:26:13.539658 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d07fdc4d-df34-4241-8682-04203343eb9c-operator-scripts\") pod \"heat-db-create-xsqp9\" (UID: \"d07fdc4d-df34-4241-8682-04203343eb9c\") " pod="openstack/heat-db-create-xsqp9" Nov 21 20:26:13 crc kubenswrapper[4727]: I1121 20:26:13.539842 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8rf7\" (UniqueName: \"kubernetes.io/projected/d07fdc4d-df34-4241-8682-04203343eb9c-kube-api-access-r8rf7\") pod \"heat-db-create-xsqp9\" (UID: \"d07fdc4d-df34-4241-8682-04203343eb9c\") " pod="openstack/heat-db-create-xsqp9" Nov 21 20:26:13 crc kubenswrapper[4727]: I1121 20:26:13.567263 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40f9cd99-3a9b-4287-9857-199b11d60281" path="/var/lib/kubelet/pods/40f9cd99-3a9b-4287-9857-199b11d60281/volumes" Nov 21 20:26:13 crc kubenswrapper[4727]: I1121 20:26:13.580036 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-cwqhb"] 
Nov 21 20:26:13 crc kubenswrapper[4727]: I1121 20:26:13.612289 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-cwqhb" Nov 21 20:26:13 crc kubenswrapper[4727]: I1121 20:26:13.616697 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-cwqhb"] Nov 21 20:26:13 crc kubenswrapper[4727]: I1121 20:26:13.642405 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8rf7\" (UniqueName: \"kubernetes.io/projected/d07fdc4d-df34-4241-8682-04203343eb9c-kube-api-access-r8rf7\") pod \"heat-db-create-xsqp9\" (UID: \"d07fdc4d-df34-4241-8682-04203343eb9c\") " pod="openstack/heat-db-create-xsqp9" Nov 21 20:26:13 crc kubenswrapper[4727]: I1121 20:26:13.642548 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwmcg\" (UniqueName: \"kubernetes.io/projected/53ad39e2-f848-4b8c-8493-b9a268c6ee5e-kube-api-access-bwmcg\") pod \"cinder-db-create-cwqhb\" (UID: \"53ad39e2-f848-4b8c-8493-b9a268c6ee5e\") " pod="openstack/cinder-db-create-cwqhb" Nov 21 20:26:13 crc kubenswrapper[4727]: I1121 20:26:13.642631 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53ad39e2-f848-4b8c-8493-b9a268c6ee5e-operator-scripts\") pod \"cinder-db-create-cwqhb\" (UID: \"53ad39e2-f848-4b8c-8493-b9a268c6ee5e\") " pod="openstack/cinder-db-create-cwqhb" Nov 21 20:26:13 crc kubenswrapper[4727]: I1121 20:26:13.642672 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d07fdc4d-df34-4241-8682-04203343eb9c-operator-scripts\") pod \"heat-db-create-xsqp9\" (UID: \"d07fdc4d-df34-4241-8682-04203343eb9c\") " pod="openstack/heat-db-create-xsqp9" Nov 21 20:26:13 crc kubenswrapper[4727]: I1121 20:26:13.643787 4727 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d07fdc4d-df34-4241-8682-04203343eb9c-operator-scripts\") pod \"heat-db-create-xsqp9\" (UID: \"d07fdc4d-df34-4241-8682-04203343eb9c\") " pod="openstack/heat-db-create-xsqp9" Nov 21 20:26:13 crc kubenswrapper[4727]: I1121 20:26:13.657087 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-d6mq9"] Nov 21 20:26:13 crc kubenswrapper[4727]: I1121 20:26:13.658826 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-d6mq9" Nov 21 20:26:13 crc kubenswrapper[4727]: I1121 20:26:13.702504 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8rf7\" (UniqueName: \"kubernetes.io/projected/d07fdc4d-df34-4241-8682-04203343eb9c-kube-api-access-r8rf7\") pod \"heat-db-create-xsqp9\" (UID: \"d07fdc4d-df34-4241-8682-04203343eb9c\") " pod="openstack/heat-db-create-xsqp9" Nov 21 20:26:13 crc kubenswrapper[4727]: I1121 20:26:13.707058 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-c2d3-account-create-dc5n4"] Nov 21 20:26:13 crc kubenswrapper[4727]: I1121 20:26:13.708725 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-c2d3-account-create-dc5n4" Nov 21 20:26:13 crc kubenswrapper[4727]: I1121 20:26:13.715703 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Nov 21 20:26:13 crc kubenswrapper[4727]: I1121 20:26:13.730230 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-c2d3-account-create-dc5n4"] Nov 21 20:26:13 crc kubenswrapper[4727]: I1121 20:26:13.744102 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53ad39e2-f848-4b8c-8493-b9a268c6ee5e-operator-scripts\") pod \"cinder-db-create-cwqhb\" (UID: \"53ad39e2-f848-4b8c-8493-b9a268c6ee5e\") " pod="openstack/cinder-db-create-cwqhb" Nov 21 20:26:13 crc kubenswrapper[4727]: I1121 20:26:13.744221 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9e530a2-cd2a-484f-87b8-4e4ef966b4ef-operator-scripts\") pod \"cinder-c2d3-account-create-dc5n4\" (UID: \"a9e530a2-cd2a-484f-87b8-4e4ef966b4ef\") " pod="openstack/cinder-c2d3-account-create-dc5n4" Nov 21 20:26:13 crc kubenswrapper[4727]: I1121 20:26:13.744266 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ts9zf\" (UniqueName: \"kubernetes.io/projected/a9e530a2-cd2a-484f-87b8-4e4ef966b4ef-kube-api-access-ts9zf\") pod \"cinder-c2d3-account-create-dc5n4\" (UID: \"a9e530a2-cd2a-484f-87b8-4e4ef966b4ef\") " pod="openstack/cinder-c2d3-account-create-dc5n4" Nov 21 20:26:13 crc kubenswrapper[4727]: I1121 20:26:13.744294 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/05ba3e0c-281c-4d28-87ca-076a68130e0d-operator-scripts\") pod \"barbican-db-create-d6mq9\" (UID: \"05ba3e0c-281c-4d28-87ca-076a68130e0d\") " 
pod="openstack/barbican-db-create-d6mq9" Nov 21 20:26:13 crc kubenswrapper[4727]: I1121 20:26:13.744354 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9p7jr\" (UniqueName: \"kubernetes.io/projected/05ba3e0c-281c-4d28-87ca-076a68130e0d-kube-api-access-9p7jr\") pod \"barbican-db-create-d6mq9\" (UID: \"05ba3e0c-281c-4d28-87ca-076a68130e0d\") " pod="openstack/barbican-db-create-d6mq9" Nov 21 20:26:13 crc kubenswrapper[4727]: I1121 20:26:13.744402 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwmcg\" (UniqueName: \"kubernetes.io/projected/53ad39e2-f848-4b8c-8493-b9a268c6ee5e-kube-api-access-bwmcg\") pod \"cinder-db-create-cwqhb\" (UID: \"53ad39e2-f848-4b8c-8493-b9a268c6ee5e\") " pod="openstack/cinder-db-create-cwqhb" Nov 21 20:26:13 crc kubenswrapper[4727]: I1121 20:26:13.745008 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53ad39e2-f848-4b8c-8493-b9a268c6ee5e-operator-scripts\") pod \"cinder-db-create-cwqhb\" (UID: \"53ad39e2-f848-4b8c-8493-b9a268c6ee5e\") " pod="openstack/cinder-db-create-cwqhb" Nov 21 20:26:13 crc kubenswrapper[4727]: I1121 20:26:13.752799 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-d6mq9"] Nov 21 20:26:13 crc kubenswrapper[4727]: I1121 20:26:13.779830 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-xsqp9" Nov 21 20:26:13 crc kubenswrapper[4727]: I1121 20:26:13.802587 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwmcg\" (UniqueName: \"kubernetes.io/projected/53ad39e2-f848-4b8c-8493-b9a268c6ee5e-kube-api-access-bwmcg\") pod \"cinder-db-create-cwqhb\" (UID: \"53ad39e2-f848-4b8c-8493-b9a268c6ee5e\") " pod="openstack/cinder-db-create-cwqhb" Nov 21 20:26:13 crc kubenswrapper[4727]: I1121 20:26:13.826444 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-8d3c-account-create-zgpsn"] Nov 21 20:26:13 crc kubenswrapper[4727]: I1121 20:26:13.827901 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8d3c-account-create-zgpsn" Nov 21 20:26:13 crc kubenswrapper[4727]: I1121 20:26:13.839716 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Nov 21 20:26:13 crc kubenswrapper[4727]: I1121 20:26:13.846257 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ts9zf\" (UniqueName: \"kubernetes.io/projected/a9e530a2-cd2a-484f-87b8-4e4ef966b4ef-kube-api-access-ts9zf\") pod \"cinder-c2d3-account-create-dc5n4\" (UID: \"a9e530a2-cd2a-484f-87b8-4e4ef966b4ef\") " pod="openstack/cinder-c2d3-account-create-dc5n4" Nov 21 20:26:13 crc kubenswrapper[4727]: I1121 20:26:13.846468 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/05ba3e0c-281c-4d28-87ca-076a68130e0d-operator-scripts\") pod \"barbican-db-create-d6mq9\" (UID: \"05ba3e0c-281c-4d28-87ca-076a68130e0d\") " pod="openstack/barbican-db-create-d6mq9" Nov 21 20:26:13 crc kubenswrapper[4727]: I1121 20:26:13.846571 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9p7jr\" (UniqueName: 
\"kubernetes.io/projected/05ba3e0c-281c-4d28-87ca-076a68130e0d-kube-api-access-9p7jr\") pod \"barbican-db-create-d6mq9\" (UID: \"05ba3e0c-281c-4d28-87ca-076a68130e0d\") " pod="openstack/barbican-db-create-d6mq9" Nov 21 20:26:13 crc kubenswrapper[4727]: I1121 20:26:13.846793 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9e530a2-cd2a-484f-87b8-4e4ef966b4ef-operator-scripts\") pod \"cinder-c2d3-account-create-dc5n4\" (UID: \"a9e530a2-cd2a-484f-87b8-4e4ef966b4ef\") " pod="openstack/cinder-c2d3-account-create-dc5n4" Nov 21 20:26:13 crc kubenswrapper[4727]: I1121 20:26:13.850083 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-8d3c-account-create-zgpsn"] Nov 21 20:26:13 crc kubenswrapper[4727]: I1121 20:26:13.860307 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/05ba3e0c-281c-4d28-87ca-076a68130e0d-operator-scripts\") pod \"barbican-db-create-d6mq9\" (UID: \"05ba3e0c-281c-4d28-87ca-076a68130e0d\") " pod="openstack/barbican-db-create-d6mq9" Nov 21 20:26:13 crc kubenswrapper[4727]: I1121 20:26:13.875280 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9e530a2-cd2a-484f-87b8-4e4ef966b4ef-operator-scripts\") pod \"cinder-c2d3-account-create-dc5n4\" (UID: \"a9e530a2-cd2a-484f-87b8-4e4ef966b4ef\") " pod="openstack/cinder-c2d3-account-create-dc5n4" Nov 21 20:26:13 crc kubenswrapper[4727]: I1121 20:26:13.907265 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-kwcbg"] Nov 21 20:26:13 crc kubenswrapper[4727]: I1121 20:26:13.928082 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-kwcbg" Nov 21 20:26:13 crc kubenswrapper[4727]: I1121 20:26:13.937624 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-cwqhb" Nov 21 20:26:13 crc kubenswrapper[4727]: I1121 20:26:13.948438 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xzbt\" (UniqueName: \"kubernetes.io/projected/9f1bf400-e74d-4100-b45b-3586af918b21-kube-api-access-4xzbt\") pod \"barbican-8d3c-account-create-zgpsn\" (UID: \"9f1bf400-e74d-4100-b45b-3586af918b21\") " pod="openstack/barbican-8d3c-account-create-zgpsn" Nov 21 20:26:13 crc kubenswrapper[4727]: I1121 20:26:13.948645 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f1bf400-e74d-4100-b45b-3586af918b21-operator-scripts\") pod \"barbican-8d3c-account-create-zgpsn\" (UID: \"9f1bf400-e74d-4100-b45b-3586af918b21\") " pod="openstack/barbican-8d3c-account-create-zgpsn" Nov 21 20:26:13 crc kubenswrapper[4727]: I1121 20:26:13.975909 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-6fn75"] Nov 21 20:26:13 crc kubenswrapper[4727]: I1121 20:26:13.977186 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-6fn75" Nov 21 20:26:13 crc kubenswrapper[4727]: I1121 20:26:13.987311 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 21 20:26:13 crc kubenswrapper[4727]: I1121 20:26:13.987474 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 21 20:26:13 crc kubenswrapper[4727]: I1121 20:26:13.987571 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-xgkf5" Nov 21 20:26:13 crc kubenswrapper[4727]: I1121 20:26:13.987705 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 21 20:26:13 crc kubenswrapper[4727]: I1121 20:26:13.997342 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-kwcbg"] Nov 21 20:26:14 crc kubenswrapper[4727]: I1121 20:26:14.007805 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-3bfd-account-create-zvmnh"] Nov 21 20:26:14 crc kubenswrapper[4727]: I1121 20:26:14.009067 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-3bfd-account-create-zvmnh" Nov 21 20:26:14 crc kubenswrapper[4727]: I1121 20:26:14.012641 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Nov 21 20:26:14 crc kubenswrapper[4727]: I1121 20:26:14.021490 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-6fn75"] Nov 21 20:26:14 crc kubenswrapper[4727]: I1121 20:26:14.049908 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6h98\" (UniqueName: \"kubernetes.io/projected/69b163ff-e145-4bac-b538-2db537ee665c-kube-api-access-h6h98\") pod \"heat-3bfd-account-create-zvmnh\" (UID: \"69b163ff-e145-4bac-b538-2db537ee665c\") " pod="openstack/heat-3bfd-account-create-zvmnh" Nov 21 20:26:14 crc kubenswrapper[4727]: I1121 20:26:14.050014 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f1bf400-e74d-4100-b45b-3586af918b21-operator-scripts\") pod \"barbican-8d3c-account-create-zgpsn\" (UID: \"9f1bf400-e74d-4100-b45b-3586af918b21\") " pod="openstack/barbican-8d3c-account-create-zgpsn" Nov 21 20:26:14 crc kubenswrapper[4727]: I1121 20:26:14.050057 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69b163ff-e145-4bac-b538-2db537ee665c-operator-scripts\") pod \"heat-3bfd-account-create-zvmnh\" (UID: \"69b163ff-e145-4bac-b538-2db537ee665c\") " pod="openstack/heat-3bfd-account-create-zvmnh" Nov 21 20:26:14 crc kubenswrapper[4727]: I1121 20:26:14.051026 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs7h8\" (UniqueName: \"kubernetes.io/projected/7566d068-bef7-4b58-8460-1e259bd2dd94-kube-api-access-rs7h8\") pod \"neutron-db-create-kwcbg\" (UID: 
\"7566d068-bef7-4b58-8460-1e259bd2dd94\") " pod="openstack/neutron-db-create-kwcbg" Nov 21 20:26:14 crc kubenswrapper[4727]: I1121 20:26:14.051076 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1cdc530-12b5-4bc8-9ced-8aa3f2f6b5b7-config-data\") pod \"keystone-db-sync-6fn75\" (UID: \"f1cdc530-12b5-4bc8-9ced-8aa3f2f6b5b7\") " pod="openstack/keystone-db-sync-6fn75" Nov 21 20:26:14 crc kubenswrapper[4727]: I1121 20:26:14.051099 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5fzg\" (UniqueName: \"kubernetes.io/projected/f1cdc530-12b5-4bc8-9ced-8aa3f2f6b5b7-kube-api-access-h5fzg\") pod \"keystone-db-sync-6fn75\" (UID: \"f1cdc530-12b5-4bc8-9ced-8aa3f2f6b5b7\") " pod="openstack/keystone-db-sync-6fn75" Nov 21 20:26:14 crc kubenswrapper[4727]: I1121 20:26:14.051121 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1cdc530-12b5-4bc8-9ced-8aa3f2f6b5b7-combined-ca-bundle\") pod \"keystone-db-sync-6fn75\" (UID: \"f1cdc530-12b5-4bc8-9ced-8aa3f2f6b5b7\") " pod="openstack/keystone-db-sync-6fn75" Nov 21 20:26:14 crc kubenswrapper[4727]: I1121 20:26:14.051180 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xzbt\" (UniqueName: \"kubernetes.io/projected/9f1bf400-e74d-4100-b45b-3586af918b21-kube-api-access-4xzbt\") pod \"barbican-8d3c-account-create-zgpsn\" (UID: \"9f1bf400-e74d-4100-b45b-3586af918b21\") " pod="openstack/barbican-8d3c-account-create-zgpsn" Nov 21 20:26:14 crc kubenswrapper[4727]: I1121 20:26:14.051195 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7566d068-bef7-4b58-8460-1e259bd2dd94-operator-scripts\") pod 
\"neutron-db-create-kwcbg\" (UID: \"7566d068-bef7-4b58-8460-1e259bd2dd94\") " pod="openstack/neutron-db-create-kwcbg" Nov 21 20:26:14 crc kubenswrapper[4727]: I1121 20:26:14.050140 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-3bfd-account-create-zvmnh"] Nov 21 20:26:14 crc kubenswrapper[4727]: I1121 20:26:14.050945 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f1bf400-e74d-4100-b45b-3586af918b21-operator-scripts\") pod \"barbican-8d3c-account-create-zgpsn\" (UID: \"9f1bf400-e74d-4100-b45b-3586af918b21\") " pod="openstack/barbican-8d3c-account-create-zgpsn" Nov 21 20:26:14 crc kubenswrapper[4727]: I1121 20:26:14.077307 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-0c83-account-create-6dpfx"] Nov 21 20:26:14 crc kubenswrapper[4727]: I1121 20:26:14.078856 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-0c83-account-create-6dpfx" Nov 21 20:26:14 crc kubenswrapper[4727]: I1121 20:26:14.080258 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-0c83-account-create-6dpfx"] Nov 21 20:26:14 crc kubenswrapper[4727]: I1121 20:26:14.083827 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Nov 21 20:26:14 crc kubenswrapper[4727]: I1121 20:26:14.153032 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntzmz\" (UniqueName: \"kubernetes.io/projected/bd484d31-a5ee-44bc-8d30-fdc1819e0956-kube-api-access-ntzmz\") pod \"neutron-0c83-account-create-6dpfx\" (UID: \"bd484d31-a5ee-44bc-8d30-fdc1819e0956\") " pod="openstack/neutron-0c83-account-create-6dpfx" Nov 21 20:26:14 crc kubenswrapper[4727]: I1121 20:26:14.153110 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/69b163ff-e145-4bac-b538-2db537ee665c-operator-scripts\") pod \"heat-3bfd-account-create-zvmnh\" (UID: \"69b163ff-e145-4bac-b538-2db537ee665c\") " pod="openstack/heat-3bfd-account-create-zvmnh" Nov 21 20:26:14 crc kubenswrapper[4727]: I1121 20:26:14.153166 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rs7h8\" (UniqueName: \"kubernetes.io/projected/7566d068-bef7-4b58-8460-1e259bd2dd94-kube-api-access-rs7h8\") pod \"neutron-db-create-kwcbg\" (UID: \"7566d068-bef7-4b58-8460-1e259bd2dd94\") " pod="openstack/neutron-db-create-kwcbg" Nov 21 20:26:14 crc kubenswrapper[4727]: I1121 20:26:14.153221 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1cdc530-12b5-4bc8-9ced-8aa3f2f6b5b7-config-data\") pod \"keystone-db-sync-6fn75\" (UID: \"f1cdc530-12b5-4bc8-9ced-8aa3f2f6b5b7\") " pod="openstack/keystone-db-sync-6fn75" Nov 21 20:26:14 crc kubenswrapper[4727]: I1121 20:26:14.153252 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5fzg\" (UniqueName: \"kubernetes.io/projected/f1cdc530-12b5-4bc8-9ced-8aa3f2f6b5b7-kube-api-access-h5fzg\") pod \"keystone-db-sync-6fn75\" (UID: \"f1cdc530-12b5-4bc8-9ced-8aa3f2f6b5b7\") " pod="openstack/keystone-db-sync-6fn75" Nov 21 20:26:14 crc kubenswrapper[4727]: I1121 20:26:14.153284 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1cdc530-12b5-4bc8-9ced-8aa3f2f6b5b7-combined-ca-bundle\") pod \"keystone-db-sync-6fn75\" (UID: \"f1cdc530-12b5-4bc8-9ced-8aa3f2f6b5b7\") " pod="openstack/keystone-db-sync-6fn75" Nov 21 20:26:14 crc kubenswrapper[4727]: I1121 20:26:14.153372 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/7566d068-bef7-4b58-8460-1e259bd2dd94-operator-scripts\") pod \"neutron-db-create-kwcbg\" (UID: \"7566d068-bef7-4b58-8460-1e259bd2dd94\") " pod="openstack/neutron-db-create-kwcbg" Nov 21 20:26:14 crc kubenswrapper[4727]: I1121 20:26:14.153437 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6h98\" (UniqueName: \"kubernetes.io/projected/69b163ff-e145-4bac-b538-2db537ee665c-kube-api-access-h6h98\") pod \"heat-3bfd-account-create-zvmnh\" (UID: \"69b163ff-e145-4bac-b538-2db537ee665c\") " pod="openstack/heat-3bfd-account-create-zvmnh" Nov 21 20:26:14 crc kubenswrapper[4727]: I1121 20:26:14.153464 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd484d31-a5ee-44bc-8d30-fdc1819e0956-operator-scripts\") pod \"neutron-0c83-account-create-6dpfx\" (UID: \"bd484d31-a5ee-44bc-8d30-fdc1819e0956\") " pod="openstack/neutron-0c83-account-create-6dpfx" Nov 21 20:26:14 crc kubenswrapper[4727]: I1121 20:26:14.154427 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69b163ff-e145-4bac-b538-2db537ee665c-operator-scripts\") pod \"heat-3bfd-account-create-zvmnh\" (UID: \"69b163ff-e145-4bac-b538-2db537ee665c\") " pod="openstack/heat-3bfd-account-create-zvmnh" Nov 21 20:26:14 crc kubenswrapper[4727]: I1121 20:26:14.158175 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7566d068-bef7-4b58-8460-1e259bd2dd94-operator-scripts\") pod \"neutron-db-create-kwcbg\" (UID: \"7566d068-bef7-4b58-8460-1e259bd2dd94\") " pod="openstack/neutron-db-create-kwcbg" Nov 21 20:26:14 crc kubenswrapper[4727]: I1121 20:26:14.188904 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rs7h8\" (UniqueName: 
\"kubernetes.io/projected/7566d068-bef7-4b58-8460-1e259bd2dd94-kube-api-access-rs7h8\") pod \"neutron-db-create-kwcbg\" (UID: \"7566d068-bef7-4b58-8460-1e259bd2dd94\") " pod="openstack/neutron-db-create-kwcbg" Nov 21 20:26:14 crc kubenswrapper[4727]: I1121 20:26:14.189432 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9p7jr\" (UniqueName: \"kubernetes.io/projected/05ba3e0c-281c-4d28-87ca-076a68130e0d-kube-api-access-9p7jr\") pod \"barbican-db-create-d6mq9\" (UID: \"05ba3e0c-281c-4d28-87ca-076a68130e0d\") " pod="openstack/barbican-db-create-d6mq9" Nov 21 20:26:14 crc kubenswrapper[4727]: I1121 20:26:14.199691 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6h98\" (UniqueName: \"kubernetes.io/projected/69b163ff-e145-4bac-b538-2db537ee665c-kube-api-access-h6h98\") pod \"heat-3bfd-account-create-zvmnh\" (UID: \"69b163ff-e145-4bac-b538-2db537ee665c\") " pod="openstack/heat-3bfd-account-create-zvmnh" Nov 21 20:26:14 crc kubenswrapper[4727]: I1121 20:26:14.199693 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ts9zf\" (UniqueName: \"kubernetes.io/projected/a9e530a2-cd2a-484f-87b8-4e4ef966b4ef-kube-api-access-ts9zf\") pod \"cinder-c2d3-account-create-dc5n4\" (UID: \"a9e530a2-cd2a-484f-87b8-4e4ef966b4ef\") " pod="openstack/cinder-c2d3-account-create-dc5n4" Nov 21 20:26:14 crc kubenswrapper[4727]: I1121 20:26:14.202318 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xzbt\" (UniqueName: \"kubernetes.io/projected/9f1bf400-e74d-4100-b45b-3586af918b21-kube-api-access-4xzbt\") pod \"barbican-8d3c-account-create-zgpsn\" (UID: \"9f1bf400-e74d-4100-b45b-3586af918b21\") " pod="openstack/barbican-8d3c-account-create-zgpsn" Nov 21 20:26:14 crc kubenswrapper[4727]: I1121 20:26:14.226114 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f1cdc530-12b5-4bc8-9ced-8aa3f2f6b5b7-combined-ca-bundle\") pod \"keystone-db-sync-6fn75\" (UID: \"f1cdc530-12b5-4bc8-9ced-8aa3f2f6b5b7\") " pod="openstack/keystone-db-sync-6fn75" Nov 21 20:26:14 crc kubenswrapper[4727]: I1121 20:26:14.226281 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1cdc530-12b5-4bc8-9ced-8aa3f2f6b5b7-config-data\") pod \"keystone-db-sync-6fn75\" (UID: \"f1cdc530-12b5-4bc8-9ced-8aa3f2f6b5b7\") " pod="openstack/keystone-db-sync-6fn75" Nov 21 20:26:14 crc kubenswrapper[4727]: I1121 20:26:14.227222 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8d3c-account-create-zgpsn" Nov 21 20:26:14 crc kubenswrapper[4727]: I1121 20:26:14.263112 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd484d31-a5ee-44bc-8d30-fdc1819e0956-operator-scripts\") pod \"neutron-0c83-account-create-6dpfx\" (UID: \"bd484d31-a5ee-44bc-8d30-fdc1819e0956\") " pod="openstack/neutron-0c83-account-create-6dpfx" Nov 21 20:26:14 crc kubenswrapper[4727]: I1121 20:26:14.263220 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntzmz\" (UniqueName: \"kubernetes.io/projected/bd484d31-a5ee-44bc-8d30-fdc1819e0956-kube-api-access-ntzmz\") pod \"neutron-0c83-account-create-6dpfx\" (UID: \"bd484d31-a5ee-44bc-8d30-fdc1819e0956\") " pod="openstack/neutron-0c83-account-create-6dpfx" Nov 21 20:26:14 crc kubenswrapper[4727]: I1121 20:26:14.266571 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-kwcbg" Nov 21 20:26:14 crc kubenswrapper[4727]: I1121 20:26:14.286401 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-d6mq9" Nov 21 20:26:14 crc kubenswrapper[4727]: I1121 20:26:14.320322 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5fzg\" (UniqueName: \"kubernetes.io/projected/f1cdc530-12b5-4bc8-9ced-8aa3f2f6b5b7-kube-api-access-h5fzg\") pod \"keystone-db-sync-6fn75\" (UID: \"f1cdc530-12b5-4bc8-9ced-8aa3f2f6b5b7\") " pod="openstack/keystone-db-sync-6fn75" Nov 21 20:26:14 crc kubenswrapper[4727]: I1121 20:26:14.329012 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntzmz\" (UniqueName: \"kubernetes.io/projected/bd484d31-a5ee-44bc-8d30-fdc1819e0956-kube-api-access-ntzmz\") pod \"neutron-0c83-account-create-6dpfx\" (UID: \"bd484d31-a5ee-44bc-8d30-fdc1819e0956\") " pod="openstack/neutron-0c83-account-create-6dpfx" Nov 21 20:26:14 crc kubenswrapper[4727]: I1121 20:26:14.329715 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd484d31-a5ee-44bc-8d30-fdc1819e0956-operator-scripts\") pod \"neutron-0c83-account-create-6dpfx\" (UID: \"bd484d31-a5ee-44bc-8d30-fdc1819e0956\") " pod="openstack/neutron-0c83-account-create-6dpfx" Nov 21 20:26:14 crc kubenswrapper[4727]: I1121 20:26:14.329729 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-6fn75" Nov 21 20:26:14 crc kubenswrapper[4727]: I1121 20:26:14.352494 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-3bfd-account-create-zvmnh" Nov 21 20:26:14 crc kubenswrapper[4727]: I1121 20:26:14.401799 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-c2d3-account-create-dc5n4" Nov 21 20:26:14 crc kubenswrapper[4727]: I1121 20:26:14.408047 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-0c83-account-create-6dpfx" Nov 21 20:26:14 crc kubenswrapper[4727]: I1121 20:26:14.553139 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-xsqp9"] Nov 21 20:26:14 crc kubenswrapper[4727]: I1121 20:26:14.829687 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-cwqhb"] Nov 21 20:26:15 crc kubenswrapper[4727]: I1121 20:26:15.083342 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-8d3c-account-create-zgpsn"] Nov 21 20:26:15 crc kubenswrapper[4727]: W1121 20:26:15.191612 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod53ad39e2_f848_4b8c_8493_b9a268c6ee5e.slice/crio-dac91b58791c6de4afc35e9e99e87c8ac76d8bec0ad0f68a7cb57c2e86789335 WatchSource:0}: Error finding container dac91b58791c6de4afc35e9e99e87c8ac76d8bec0ad0f68a7cb57c2e86789335: Status 404 returned error can't find the container with id dac91b58791c6de4afc35e9e99e87c8ac76d8bec0ad0f68a7cb57c2e86789335 Nov 21 20:26:15 crc kubenswrapper[4727]: I1121 20:26:15.361739 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8d3c-account-create-zgpsn" event={"ID":"9f1bf400-e74d-4100-b45b-3586af918b21","Type":"ContainerStarted","Data":"53afe800cdc05a704f2ffdbf12f710cbcad375126176a7209472878cbec31114"} Nov 21 20:26:15 crc kubenswrapper[4727]: I1121 20:26:15.387315 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-xsqp9" event={"ID":"d07fdc4d-df34-4241-8682-04203343eb9c","Type":"ContainerStarted","Data":"fb0263af4b49b90211f931a7febbb633fe08ddd811775f03a662f9989c05a2e1"} Nov 21 20:26:15 crc kubenswrapper[4727]: I1121 20:26:15.399193 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"ff5d12dc-e65b-41f0-b29b-5c4eea0fada2","Type":"ContainerStarted","Data":"7ecd50091a7f64ba760e1a638400d812251034cd39c4a85cc524f69589ca467b"} Nov 21 20:26:15 crc kubenswrapper[4727]: I1121 20:26:15.404881 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-cwqhb" event={"ID":"53ad39e2-f848-4b8c-8493-b9a268c6ee5e","Type":"ContainerStarted","Data":"dac91b58791c6de4afc35e9e99e87c8ac76d8bec0ad0f68a7cb57c2e86789335"} Nov 21 20:26:15 crc kubenswrapper[4727]: I1121 20:26:15.623396 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-d6mq9"] Nov 21 20:26:15 crc kubenswrapper[4727]: I1121 20:26:15.781224 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-kwcbg"] Nov 21 20:26:16 crc kubenswrapper[4727]: I1121 20:26:16.228090 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-6fn75"] Nov 21 20:26:16 crc kubenswrapper[4727]: I1121 20:26:16.275429 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-c2d3-account-create-dc5n4"] Nov 21 20:26:16 crc kubenswrapper[4727]: W1121 20:26:16.328440 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69b163ff_e145_4bac_b538_2db537ee665c.slice/crio-fc1f28bc3a494fbfa4c3b0a76e4e800053f74fbc433199fa249a7480aece860f WatchSource:0}: Error finding container fc1f28bc3a494fbfa4c3b0a76e4e800053f74fbc433199fa249a7480aece860f: Status 404 returned error can't find the container with id fc1f28bc3a494fbfa4c3b0a76e4e800053f74fbc433199fa249a7480aece860f Nov 21 20:26:16 crc kubenswrapper[4727]: I1121 20:26:16.342522 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-3bfd-account-create-zvmnh"] Nov 21 20:26:16 crc kubenswrapper[4727]: I1121 20:26:16.420250 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-3bfd-account-create-zvmnh" 
event={"ID":"69b163ff-e145-4bac-b538-2db537ee665c","Type":"ContainerStarted","Data":"fc1f28bc3a494fbfa4c3b0a76e4e800053f74fbc433199fa249a7480aece860f"} Nov 21 20:26:16 crc kubenswrapper[4727]: I1121 20:26:16.422091 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-6fn75" event={"ID":"f1cdc530-12b5-4bc8-9ced-8aa3f2f6b5b7","Type":"ContainerStarted","Data":"88014f48306e5593aa63bc16dc09590403f6b1f98ebecf435bd432183cb1ad3d"} Nov 21 20:26:16 crc kubenswrapper[4727]: I1121 20:26:16.423908 4727 generic.go:334] "Generic (PLEG): container finished" podID="9f1bf400-e74d-4100-b45b-3586af918b21" containerID="0d60ffd4baadea167764a24b3ff98ce7c1827fa5bd2ffcc0e9fa20fbffaa6a57" exitCode=0 Nov 21 20:26:16 crc kubenswrapper[4727]: I1121 20:26:16.424005 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8d3c-account-create-zgpsn" event={"ID":"9f1bf400-e74d-4100-b45b-3586af918b21","Type":"ContainerDied","Data":"0d60ffd4baadea167764a24b3ff98ce7c1827fa5bd2ffcc0e9fa20fbffaa6a57"} Nov 21 20:26:16 crc kubenswrapper[4727]: I1121 20:26:16.428286 4727 generic.go:334] "Generic (PLEG): container finished" podID="53ad39e2-f848-4b8c-8493-b9a268c6ee5e" containerID="9d7b2a8ca81fc996f4901b82d529ac48b5ee85a84aac5d2abdb470d24e62b061" exitCode=0 Nov 21 20:26:16 crc kubenswrapper[4727]: I1121 20:26:16.428341 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-cwqhb" event={"ID":"53ad39e2-f848-4b8c-8493-b9a268c6ee5e","Type":"ContainerDied","Data":"9d7b2a8ca81fc996f4901b82d529ac48b5ee85a84aac5d2abdb470d24e62b061"} Nov 21 20:26:16 crc kubenswrapper[4727]: I1121 20:26:16.429263 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-kwcbg" event={"ID":"7566d068-bef7-4b58-8460-1e259bd2dd94","Type":"ContainerStarted","Data":"bc53eef494d5e3236d106bcd5657d6ecdd10254c0146da99f4685ed05a923dfa"} Nov 21 20:26:16 crc kubenswrapper[4727]: I1121 20:26:16.430136 4727 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/cinder-c2d3-account-create-dc5n4" event={"ID":"a9e530a2-cd2a-484f-87b8-4e4ef966b4ef","Type":"ContainerStarted","Data":"d60ba3aed745c421575bdef6a8ba768d96c97975ae3c482d10b93a25dfba3e09"} Nov 21 20:26:16 crc kubenswrapper[4727]: I1121 20:26:16.434357 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-xsqp9" event={"ID":"d07fdc4d-df34-4241-8682-04203343eb9c","Type":"ContainerStarted","Data":"8bab53af9863844650046c425bae3f1c68b825ac097ea985ecb8cb85f507526b"} Nov 21 20:26:16 crc kubenswrapper[4727]: I1121 20:26:16.443686 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-d6mq9" event={"ID":"05ba3e0c-281c-4d28-87ca-076a68130e0d","Type":"ContainerStarted","Data":"483f4ea09f739b43c292ede1be6d8c62c7e79a1356be14891c9f19fd59a45d54"} Nov 21 20:26:16 crc kubenswrapper[4727]: I1121 20:26:16.446063 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"29faf340-95f4-4bd3-bd87-f2e971a0e494","Type":"ContainerStarted","Data":"6cae8753e34ae627dda0fed96d2c47a8f33b7d39c79781ce54a9d54a9241a392"} Nov 21 20:26:16 crc kubenswrapper[4727]: I1121 20:26:16.471992 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-0c83-account-create-6dpfx"] Nov 21 20:26:16 crc kubenswrapper[4727]: I1121 20:26:16.495235 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-create-xsqp9" podStartSLOduration=3.495214266 podStartE2EDuration="3.495214266s" podCreationTimestamp="2025-11-21 20:26:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:26:16.471366293 +0000 UTC m=+1181.657551337" watchObservedRunningTime="2025-11-21 20:26:16.495214266 +0000 UTC m=+1181.681399300" Nov 21 20:26:17 crc kubenswrapper[4727]: I1121 20:26:17.461551 4727 generic.go:334] "Generic (PLEG): container 
finished" podID="d07fdc4d-df34-4241-8682-04203343eb9c" containerID="8bab53af9863844650046c425bae3f1c68b825ac097ea985ecb8cb85f507526b" exitCode=0 Nov 21 20:26:17 crc kubenswrapper[4727]: I1121 20:26:17.461668 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-xsqp9" event={"ID":"d07fdc4d-df34-4241-8682-04203343eb9c","Type":"ContainerDied","Data":"8bab53af9863844650046c425bae3f1c68b825ac097ea985ecb8cb85f507526b"} Nov 21 20:26:17 crc kubenswrapper[4727]: I1121 20:26:17.465252 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0c83-account-create-6dpfx" event={"ID":"bd484d31-a5ee-44bc-8d30-fdc1819e0956","Type":"ContainerStarted","Data":"edf7f97ab1e8a2cddfd5ac7628a6e2e9efe4426905d83dc27a199b00faf65bbf"} Nov 21 20:26:17 crc kubenswrapper[4727]: I1121 20:26:17.465361 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0c83-account-create-6dpfx" event={"ID":"bd484d31-a5ee-44bc-8d30-fdc1819e0956","Type":"ContainerStarted","Data":"9a0a703953610d79149df38c534ee95e889dfae46f65284abda8d4120a9d6d4a"} Nov 21 20:26:17 crc kubenswrapper[4727]: I1121 20:26:17.467999 4727 generic.go:334] "Generic (PLEG): container finished" podID="05ba3e0c-281c-4d28-87ca-076a68130e0d" containerID="b9851a880932ebbd7635cec268176d895b5a617d4a2a5fb811671b4c5a686eac" exitCode=0 Nov 21 20:26:17 crc kubenswrapper[4727]: I1121 20:26:17.468091 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-d6mq9" event={"ID":"05ba3e0c-281c-4d28-87ca-076a68130e0d","Type":"ContainerDied","Data":"b9851a880932ebbd7635cec268176d895b5a617d4a2a5fb811671b4c5a686eac"} Nov 21 20:26:17 crc kubenswrapper[4727]: I1121 20:26:17.472768 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"29faf340-95f4-4bd3-bd87-f2e971a0e494","Type":"ContainerStarted","Data":"1d7144f104f8d90b4a7b0616cd221b84f1188e1d8a22b7c8b09c69bf57356b3a"} Nov 21 20:26:17 crc kubenswrapper[4727]: 
I1121 20:26:17.472831 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"29faf340-95f4-4bd3-bd87-f2e971a0e494","Type":"ContainerStarted","Data":"f3359aa1f7459373c415a47fa8bc996ec032261a0b337c224ac113308ee07ea6"} Nov 21 20:26:17 crc kubenswrapper[4727]: I1121 20:26:17.472842 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"29faf340-95f4-4bd3-bd87-f2e971a0e494","Type":"ContainerStarted","Data":"0ebef5d9d71e5dc4e573f740691cd337210c066ad1deefd5b401bc7b3aa3de0f"} Nov 21 20:26:17 crc kubenswrapper[4727]: I1121 20:26:17.474764 4727 generic.go:334] "Generic (PLEG): container finished" podID="7566d068-bef7-4b58-8460-1e259bd2dd94" containerID="ae120e2c0262958c2447f538e8531453f8eed10f0b30166d9de72ca2c124f55c" exitCode=0 Nov 21 20:26:17 crc kubenswrapper[4727]: I1121 20:26:17.474817 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-kwcbg" event={"ID":"7566d068-bef7-4b58-8460-1e259bd2dd94","Type":"ContainerDied","Data":"ae120e2c0262958c2447f538e8531453f8eed10f0b30166d9de72ca2c124f55c"} Nov 21 20:26:17 crc kubenswrapper[4727]: I1121 20:26:17.477663 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-3bfd-account-create-zvmnh" event={"ID":"69b163ff-e145-4bac-b538-2db537ee665c","Type":"ContainerStarted","Data":"775d52e0a511a5ef23fd1aa36b47eecd02b260c0cabcde4c0e6718efb156784d"} Nov 21 20:26:17 crc kubenswrapper[4727]: I1121 20:26:17.480154 4727 generic.go:334] "Generic (PLEG): container finished" podID="a9e530a2-cd2a-484f-87b8-4e4ef966b4ef" containerID="25deea902644530b5a73b1611f26e6eb80405b74f609796d0f90194c68ab45db" exitCode=0 Nov 21 20:26:17 crc kubenswrapper[4727]: I1121 20:26:17.480277 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c2d3-account-create-dc5n4" 
event={"ID":"a9e530a2-cd2a-484f-87b8-4e4ef966b4ef","Type":"ContainerDied","Data":"25deea902644530b5a73b1611f26e6eb80405b74f609796d0f90194c68ab45db"} Nov 21 20:26:17 crc kubenswrapper[4727]: I1121 20:26:17.510714 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-0c83-account-create-6dpfx" podStartSLOduration=4.510692429 podStartE2EDuration="4.510692429s" podCreationTimestamp="2025-11-21 20:26:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:26:17.499195274 +0000 UTC m=+1182.685380318" watchObservedRunningTime="2025-11-21 20:26:17.510692429 +0000 UTC m=+1182.696877473" Nov 21 20:26:17 crc kubenswrapper[4727]: I1121 20:26:17.593766 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-3bfd-account-create-zvmnh" podStartSLOduration=4.593747994 podStartE2EDuration="4.593747994s" podCreationTimestamp="2025-11-21 20:26:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:26:17.592293038 +0000 UTC m=+1182.778478072" watchObservedRunningTime="2025-11-21 20:26:17.593747994 +0000 UTC m=+1182.779933038" Nov 21 20:26:18 crc kubenswrapper[4727]: I1121 20:26:18.490194 4727 generic.go:334] "Generic (PLEG): container finished" podID="bd484d31-a5ee-44bc-8d30-fdc1819e0956" containerID="edf7f97ab1e8a2cddfd5ac7628a6e2e9efe4426905d83dc27a199b00faf65bbf" exitCode=0 Nov 21 20:26:18 crc kubenswrapper[4727]: I1121 20:26:18.490306 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0c83-account-create-6dpfx" event={"ID":"bd484d31-a5ee-44bc-8d30-fdc1819e0956","Type":"ContainerDied","Data":"edf7f97ab1e8a2cddfd5ac7628a6e2e9efe4426905d83dc27a199b00faf65bbf"} Nov 21 20:26:18 crc kubenswrapper[4727]: I1121 20:26:18.494733 4727 generic.go:334] "Generic (PLEG): container finished" 
podID="69b163ff-e145-4bac-b538-2db537ee665c" containerID="775d52e0a511a5ef23fd1aa36b47eecd02b260c0cabcde4c0e6718efb156784d" exitCode=0 Nov 21 20:26:18 crc kubenswrapper[4727]: I1121 20:26:18.494826 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-3bfd-account-create-zvmnh" event={"ID":"69b163ff-e145-4bac-b538-2db537ee665c","Type":"ContainerDied","Data":"775d52e0a511a5ef23fd1aa36b47eecd02b260c0cabcde4c0e6718efb156784d"} Nov 21 20:26:19 crc kubenswrapper[4727]: I1121 20:26:19.250345 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8d3c-account-create-zgpsn" Nov 21 20:26:19 crc kubenswrapper[4727]: I1121 20:26:19.260650 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-cwqhb" Nov 21 20:26:19 crc kubenswrapper[4727]: I1121 20:26:19.280579 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-kwcbg" Nov 21 20:26:19 crc kubenswrapper[4727]: I1121 20:26:19.305859 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-xsqp9" Nov 21 20:26:19 crc kubenswrapper[4727]: I1121 20:26:19.312530 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-d6mq9" Nov 21 20:26:19 crc kubenswrapper[4727]: I1121 20:26:19.332625 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-c2d3-account-create-dc5n4" Nov 21 20:26:19 crc kubenswrapper[4727]: I1121 20:26:19.380342 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53ad39e2-f848-4b8c-8493-b9a268c6ee5e-operator-scripts\") pod \"53ad39e2-f848-4b8c-8493-b9a268c6ee5e\" (UID: \"53ad39e2-f848-4b8c-8493-b9a268c6ee5e\") " Nov 21 20:26:19 crc kubenswrapper[4727]: I1121 20:26:19.380401 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f1bf400-e74d-4100-b45b-3586af918b21-operator-scripts\") pod \"9f1bf400-e74d-4100-b45b-3586af918b21\" (UID: \"9f1bf400-e74d-4100-b45b-3586af918b21\") " Nov 21 20:26:19 crc kubenswrapper[4727]: I1121 20:26:19.380433 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/05ba3e0c-281c-4d28-87ca-076a68130e0d-operator-scripts\") pod \"05ba3e0c-281c-4d28-87ca-076a68130e0d\" (UID: \"05ba3e0c-281c-4d28-87ca-076a68130e0d\") " Nov 21 20:26:19 crc kubenswrapper[4727]: I1121 20:26:19.380468 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rs7h8\" (UniqueName: \"kubernetes.io/projected/7566d068-bef7-4b58-8460-1e259bd2dd94-kube-api-access-rs7h8\") pod \"7566d068-bef7-4b58-8460-1e259bd2dd94\" (UID: \"7566d068-bef7-4b58-8460-1e259bd2dd94\") " Nov 21 20:26:19 crc kubenswrapper[4727]: I1121 20:26:19.380509 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7566d068-bef7-4b58-8460-1e259bd2dd94-operator-scripts\") pod \"7566d068-bef7-4b58-8460-1e259bd2dd94\" (UID: \"7566d068-bef7-4b58-8460-1e259bd2dd94\") " Nov 21 20:26:19 crc kubenswrapper[4727]: I1121 20:26:19.380594 4727 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-9p7jr\" (UniqueName: \"kubernetes.io/projected/05ba3e0c-281c-4d28-87ca-076a68130e0d-kube-api-access-9p7jr\") pod \"05ba3e0c-281c-4d28-87ca-076a68130e0d\" (UID: \"05ba3e0c-281c-4d28-87ca-076a68130e0d\") " Nov 21 20:26:19 crc kubenswrapper[4727]: I1121 20:26:19.380613 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xzbt\" (UniqueName: \"kubernetes.io/projected/9f1bf400-e74d-4100-b45b-3586af918b21-kube-api-access-4xzbt\") pod \"9f1bf400-e74d-4100-b45b-3586af918b21\" (UID: \"9f1bf400-e74d-4100-b45b-3586af918b21\") " Nov 21 20:26:19 crc kubenswrapper[4727]: I1121 20:26:19.380844 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwmcg\" (UniqueName: \"kubernetes.io/projected/53ad39e2-f848-4b8c-8493-b9a268c6ee5e-kube-api-access-bwmcg\") pod \"53ad39e2-f848-4b8c-8493-b9a268c6ee5e\" (UID: \"53ad39e2-f848-4b8c-8493-b9a268c6ee5e\") " Nov 21 20:26:19 crc kubenswrapper[4727]: I1121 20:26:19.380866 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d07fdc4d-df34-4241-8682-04203343eb9c-operator-scripts\") pod \"d07fdc4d-df34-4241-8682-04203343eb9c\" (UID: \"d07fdc4d-df34-4241-8682-04203343eb9c\") " Nov 21 20:26:19 crc kubenswrapper[4727]: I1121 20:26:19.380943 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8rf7\" (UniqueName: \"kubernetes.io/projected/d07fdc4d-df34-4241-8682-04203343eb9c-kube-api-access-r8rf7\") pod \"d07fdc4d-df34-4241-8682-04203343eb9c\" (UID: \"d07fdc4d-df34-4241-8682-04203343eb9c\") " Nov 21 20:26:19 crc kubenswrapper[4727]: I1121 20:26:19.383136 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7566d068-bef7-4b58-8460-1e259bd2dd94-operator-scripts" (OuterVolumeSpecName: "operator-scripts") 
pod "7566d068-bef7-4b58-8460-1e259bd2dd94" (UID: "7566d068-bef7-4b58-8460-1e259bd2dd94"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:26:19 crc kubenswrapper[4727]: I1121 20:26:19.385414 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f1bf400-e74d-4100-b45b-3586af918b21-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9f1bf400-e74d-4100-b45b-3586af918b21" (UID: "9f1bf400-e74d-4100-b45b-3586af918b21"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:26:19 crc kubenswrapper[4727]: I1121 20:26:19.385795 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53ad39e2-f848-4b8c-8493-b9a268c6ee5e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "53ad39e2-f848-4b8c-8493-b9a268c6ee5e" (UID: "53ad39e2-f848-4b8c-8493-b9a268c6ee5e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:26:19 crc kubenswrapper[4727]: I1121 20:26:19.388361 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d07fdc4d-df34-4241-8682-04203343eb9c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d07fdc4d-df34-4241-8682-04203343eb9c" (UID: "d07fdc4d-df34-4241-8682-04203343eb9c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:26:19 crc kubenswrapper[4727]: I1121 20:26:19.390384 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05ba3e0c-281c-4d28-87ca-076a68130e0d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "05ba3e0c-281c-4d28-87ca-076a68130e0d" (UID: "05ba3e0c-281c-4d28-87ca-076a68130e0d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:26:19 crc kubenswrapper[4727]: I1121 20:26:19.391119 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f1bf400-e74d-4100-b45b-3586af918b21-kube-api-access-4xzbt" (OuterVolumeSpecName: "kube-api-access-4xzbt") pod "9f1bf400-e74d-4100-b45b-3586af918b21" (UID: "9f1bf400-e74d-4100-b45b-3586af918b21"). InnerVolumeSpecName "kube-api-access-4xzbt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:26:19 crc kubenswrapper[4727]: I1121 20:26:19.393280 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53ad39e2-f848-4b8c-8493-b9a268c6ee5e-kube-api-access-bwmcg" (OuterVolumeSpecName: "kube-api-access-bwmcg") pod "53ad39e2-f848-4b8c-8493-b9a268c6ee5e" (UID: "53ad39e2-f848-4b8c-8493-b9a268c6ee5e"). InnerVolumeSpecName "kube-api-access-bwmcg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:26:19 crc kubenswrapper[4727]: I1121 20:26:19.398110 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7566d068-bef7-4b58-8460-1e259bd2dd94-kube-api-access-rs7h8" (OuterVolumeSpecName: "kube-api-access-rs7h8") pod "7566d068-bef7-4b58-8460-1e259bd2dd94" (UID: "7566d068-bef7-4b58-8460-1e259bd2dd94"). InnerVolumeSpecName "kube-api-access-rs7h8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:26:19 crc kubenswrapper[4727]: I1121 20:26:19.409096 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d07fdc4d-df34-4241-8682-04203343eb9c-kube-api-access-r8rf7" (OuterVolumeSpecName: "kube-api-access-r8rf7") pod "d07fdc4d-df34-4241-8682-04203343eb9c" (UID: "d07fdc4d-df34-4241-8682-04203343eb9c"). InnerVolumeSpecName "kube-api-access-r8rf7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:26:19 crc kubenswrapper[4727]: I1121 20:26:19.410608 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05ba3e0c-281c-4d28-87ca-076a68130e0d-kube-api-access-9p7jr" (OuterVolumeSpecName: "kube-api-access-9p7jr") pod "05ba3e0c-281c-4d28-87ca-076a68130e0d" (UID: "05ba3e0c-281c-4d28-87ca-076a68130e0d"). InnerVolumeSpecName "kube-api-access-9p7jr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:26:19 crc kubenswrapper[4727]: I1121 20:26:19.484400 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ts9zf\" (UniqueName: \"kubernetes.io/projected/a9e530a2-cd2a-484f-87b8-4e4ef966b4ef-kube-api-access-ts9zf\") pod \"a9e530a2-cd2a-484f-87b8-4e4ef966b4ef\" (UID: \"a9e530a2-cd2a-484f-87b8-4e4ef966b4ef\") " Nov 21 20:26:19 crc kubenswrapper[4727]: I1121 20:26:19.484760 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9e530a2-cd2a-484f-87b8-4e4ef966b4ef-operator-scripts\") pod \"a9e530a2-cd2a-484f-87b8-4e4ef966b4ef\" (UID: \"a9e530a2-cd2a-484f-87b8-4e4ef966b4ef\") " Nov 21 20:26:19 crc kubenswrapper[4727]: I1121 20:26:19.485981 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53ad39e2-f848-4b8c-8493-b9a268c6ee5e-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 20:26:19 crc kubenswrapper[4727]: I1121 20:26:19.486004 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f1bf400-e74d-4100-b45b-3586af918b21-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 20:26:19 crc kubenswrapper[4727]: I1121 20:26:19.486015 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/05ba3e0c-281c-4d28-87ca-076a68130e0d-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 20:26:19 crc kubenswrapper[4727]: I1121 20:26:19.486025 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rs7h8\" (UniqueName: \"kubernetes.io/projected/7566d068-bef7-4b58-8460-1e259bd2dd94-kube-api-access-rs7h8\") on node \"crc\" DevicePath \"\"" Nov 21 20:26:19 crc kubenswrapper[4727]: I1121 20:26:19.486037 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7566d068-bef7-4b58-8460-1e259bd2dd94-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 20:26:19 crc kubenswrapper[4727]: I1121 20:26:19.486051 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xzbt\" (UniqueName: \"kubernetes.io/projected/9f1bf400-e74d-4100-b45b-3586af918b21-kube-api-access-4xzbt\") on node \"crc\" DevicePath \"\"" Nov 21 20:26:19 crc kubenswrapper[4727]: I1121 20:26:19.486059 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9p7jr\" (UniqueName: \"kubernetes.io/projected/05ba3e0c-281c-4d28-87ca-076a68130e0d-kube-api-access-9p7jr\") on node \"crc\" DevicePath \"\"" Nov 21 20:26:19 crc kubenswrapper[4727]: I1121 20:26:19.486068 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwmcg\" (UniqueName: \"kubernetes.io/projected/53ad39e2-f848-4b8c-8493-b9a268c6ee5e-kube-api-access-bwmcg\") on node \"crc\" DevicePath \"\"" Nov 21 20:26:19 crc kubenswrapper[4727]: I1121 20:26:19.486078 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d07fdc4d-df34-4241-8682-04203343eb9c-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 20:26:19 crc kubenswrapper[4727]: I1121 20:26:19.486086 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8rf7\" (UniqueName: 
\"kubernetes.io/projected/d07fdc4d-df34-4241-8682-04203343eb9c-kube-api-access-r8rf7\") on node \"crc\" DevicePath \"\"" Nov 21 20:26:19 crc kubenswrapper[4727]: I1121 20:26:19.486652 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9e530a2-cd2a-484f-87b8-4e4ef966b4ef-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a9e530a2-cd2a-484f-87b8-4e4ef966b4ef" (UID: "a9e530a2-cd2a-484f-87b8-4e4ef966b4ef"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:26:19 crc kubenswrapper[4727]: I1121 20:26:19.490109 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9e530a2-cd2a-484f-87b8-4e4ef966b4ef-kube-api-access-ts9zf" (OuterVolumeSpecName: "kube-api-access-ts9zf") pod "a9e530a2-cd2a-484f-87b8-4e4ef966b4ef" (UID: "a9e530a2-cd2a-484f-87b8-4e4ef966b4ef"). InnerVolumeSpecName "kube-api-access-ts9zf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:26:19 crc kubenswrapper[4727]: I1121 20:26:19.562568 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-c2d3-account-create-dc5n4" Nov 21 20:26:19 crc kubenswrapper[4727]: I1121 20:26:19.576728 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-xsqp9" Nov 21 20:26:19 crc kubenswrapper[4727]: I1121 20:26:19.583662 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8d3c-account-create-zgpsn" Nov 21 20:26:19 crc kubenswrapper[4727]: I1121 20:26:19.589197 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-d6mq9" Nov 21 20:26:19 crc kubenswrapper[4727]: I1121 20:26:19.602380 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c2d3-account-create-dc5n4" event={"ID":"a9e530a2-cd2a-484f-87b8-4e4ef966b4ef","Type":"ContainerDied","Data":"d60ba3aed745c421575bdef6a8ba768d96c97975ae3c482d10b93a25dfba3e09"} Nov 21 20:26:19 crc kubenswrapper[4727]: I1121 20:26:19.603333 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d60ba3aed745c421575bdef6a8ba768d96c97975ae3c482d10b93a25dfba3e09" Nov 21 20:26:19 crc kubenswrapper[4727]: I1121 20:26:19.603365 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-xsqp9" event={"ID":"d07fdc4d-df34-4241-8682-04203343eb9c","Type":"ContainerDied","Data":"fb0263af4b49b90211f931a7febbb633fe08ddd811775f03a662f9989c05a2e1"} Nov 21 20:26:19 crc kubenswrapper[4727]: I1121 20:26:19.603379 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb0263af4b49b90211f931a7febbb633fe08ddd811775f03a662f9989c05a2e1" Nov 21 20:26:19 crc kubenswrapper[4727]: I1121 20:26:19.603390 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8d3c-account-create-zgpsn" event={"ID":"9f1bf400-e74d-4100-b45b-3586af918b21","Type":"ContainerDied","Data":"53afe800cdc05a704f2ffdbf12f710cbcad375126176a7209472878cbec31114"} Nov 21 20:26:19 crc kubenswrapper[4727]: I1121 20:26:19.603401 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53afe800cdc05a704f2ffdbf12f710cbcad375126176a7209472878cbec31114" Nov 21 20:26:19 crc kubenswrapper[4727]: I1121 20:26:19.603409 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-d6mq9" event={"ID":"05ba3e0c-281c-4d28-87ca-076a68130e0d","Type":"ContainerDied","Data":"483f4ea09f739b43c292ede1be6d8c62c7e79a1356be14891c9f19fd59a45d54"} Nov 21 20:26:19 crc 
kubenswrapper[4727]: I1121 20:26:19.604716 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="483f4ea09f739b43c292ede1be6d8c62c7e79a1356be14891c9f19fd59a45d54" Nov 21 20:26:19 crc kubenswrapper[4727]: I1121 20:26:19.608934 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ts9zf\" (UniqueName: \"kubernetes.io/projected/a9e530a2-cd2a-484f-87b8-4e4ef966b4ef-kube-api-access-ts9zf\") on node \"crc\" DevicePath \"\"" Nov 21 20:26:19 crc kubenswrapper[4727]: I1121 20:26:19.608981 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9e530a2-cd2a-484f-87b8-4e4ef966b4ef-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 20:26:19 crc kubenswrapper[4727]: I1121 20:26:19.621228 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-cwqhb" event={"ID":"53ad39e2-f848-4b8c-8493-b9a268c6ee5e","Type":"ContainerDied","Data":"dac91b58791c6de4afc35e9e99e87c8ac76d8bec0ad0f68a7cb57c2e86789335"} Nov 21 20:26:19 crc kubenswrapper[4727]: I1121 20:26:19.621293 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dac91b58791c6de4afc35e9e99e87c8ac76d8bec0ad0f68a7cb57c2e86789335" Nov 21 20:26:19 crc kubenswrapper[4727]: I1121 20:26:19.621423 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-cwqhb" Nov 21 20:26:19 crc kubenswrapper[4727]: I1121 20:26:19.627332 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-kwcbg" Nov 21 20:26:19 crc kubenswrapper[4727]: I1121 20:26:19.627476 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-kwcbg" event={"ID":"7566d068-bef7-4b58-8460-1e259bd2dd94","Type":"ContainerDied","Data":"bc53eef494d5e3236d106bcd5657d6ecdd10254c0146da99f4685ed05a923dfa"} Nov 21 20:26:19 crc kubenswrapper[4727]: I1121 20:26:19.627574 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc53eef494d5e3236d106bcd5657d6ecdd10254c0146da99f4685ed05a923dfa" Nov 21 20:26:20 crc kubenswrapper[4727]: I1121 20:26:20.641132 4727 generic.go:334] "Generic (PLEG): container finished" podID="ff5d12dc-e65b-41f0-b29b-5c4eea0fada2" containerID="7ecd50091a7f64ba760e1a638400d812251034cd39c4a85cc524f69589ca467b" exitCode=0 Nov 21 20:26:20 crc kubenswrapper[4727]: I1121 20:26:20.641215 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ff5d12dc-e65b-41f0-b29b-5c4eea0fada2","Type":"ContainerDied","Data":"7ecd50091a7f64ba760e1a638400d812251034cd39c4a85cc524f69589ca467b"} Nov 21 20:26:22 crc kubenswrapper[4727]: I1121 20:26:22.669477 4727 generic.go:334] "Generic (PLEG): container finished" podID="e0672730-e181-488f-8472-a20c75dcb285" containerID="5b39c31d65eca4802f74d79f0758849ca806f2ae4dd64e00ff1bec14fdeedf2e" exitCode=0 Nov 21 20:26:22 crc kubenswrapper[4727]: I1121 20:26:22.670085 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-rpv9s" event={"ID":"e0672730-e181-488f-8472-a20c75dcb285","Type":"ContainerDied","Data":"5b39c31d65eca4802f74d79f0758849ca806f2ae4dd64e00ff1bec14fdeedf2e"} Nov 21 20:26:23 crc kubenswrapper[4727]: I1121 20:26:23.187941 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-3bfd-account-create-zvmnh" Nov 21 20:26:23 crc kubenswrapper[4727]: I1121 20:26:23.199501 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-0c83-account-create-6dpfx" Nov 21 20:26:23 crc kubenswrapper[4727]: I1121 20:26:23.289942 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6h98\" (UniqueName: \"kubernetes.io/projected/69b163ff-e145-4bac-b538-2db537ee665c-kube-api-access-h6h98\") pod \"69b163ff-e145-4bac-b538-2db537ee665c\" (UID: \"69b163ff-e145-4bac-b538-2db537ee665c\") " Nov 21 20:26:23 crc kubenswrapper[4727]: I1121 20:26:23.290540 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd484d31-a5ee-44bc-8d30-fdc1819e0956-operator-scripts\") pod \"bd484d31-a5ee-44bc-8d30-fdc1819e0956\" (UID: \"bd484d31-a5ee-44bc-8d30-fdc1819e0956\") " Nov 21 20:26:23 crc kubenswrapper[4727]: I1121 20:26:23.290804 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69b163ff-e145-4bac-b538-2db537ee665c-operator-scripts\") pod \"69b163ff-e145-4bac-b538-2db537ee665c\" (UID: \"69b163ff-e145-4bac-b538-2db537ee665c\") " Nov 21 20:26:23 crc kubenswrapper[4727]: I1121 20:26:23.290902 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntzmz\" (UniqueName: \"kubernetes.io/projected/bd484d31-a5ee-44bc-8d30-fdc1819e0956-kube-api-access-ntzmz\") pod \"bd484d31-a5ee-44bc-8d30-fdc1819e0956\" (UID: \"bd484d31-a5ee-44bc-8d30-fdc1819e0956\") " Nov 21 20:26:23 crc kubenswrapper[4727]: I1121 20:26:23.292992 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69b163ff-e145-4bac-b538-2db537ee665c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"69b163ff-e145-4bac-b538-2db537ee665c" (UID: "69b163ff-e145-4bac-b538-2db537ee665c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:26:23 crc kubenswrapper[4727]: I1121 20:26:23.293280 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd484d31-a5ee-44bc-8d30-fdc1819e0956-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bd484d31-a5ee-44bc-8d30-fdc1819e0956" (UID: "bd484d31-a5ee-44bc-8d30-fdc1819e0956"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:26:23 crc kubenswrapper[4727]: I1121 20:26:23.296537 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd484d31-a5ee-44bc-8d30-fdc1819e0956-kube-api-access-ntzmz" (OuterVolumeSpecName: "kube-api-access-ntzmz") pod "bd484d31-a5ee-44bc-8d30-fdc1819e0956" (UID: "bd484d31-a5ee-44bc-8d30-fdc1819e0956"). InnerVolumeSpecName "kube-api-access-ntzmz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:26:23 crc kubenswrapper[4727]: I1121 20:26:23.296930 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69b163ff-e145-4bac-b538-2db537ee665c-kube-api-access-h6h98" (OuterVolumeSpecName: "kube-api-access-h6h98") pod "69b163ff-e145-4bac-b538-2db537ee665c" (UID: "69b163ff-e145-4bac-b538-2db537ee665c"). InnerVolumeSpecName "kube-api-access-h6h98". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:26:23 crc kubenswrapper[4727]: I1121 20:26:23.393363 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6h98\" (UniqueName: \"kubernetes.io/projected/69b163ff-e145-4bac-b538-2db537ee665c-kube-api-access-h6h98\") on node \"crc\" DevicePath \"\"" Nov 21 20:26:23 crc kubenswrapper[4727]: I1121 20:26:23.393405 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd484d31-a5ee-44bc-8d30-fdc1819e0956-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 20:26:23 crc kubenswrapper[4727]: I1121 20:26:23.393416 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69b163ff-e145-4bac-b538-2db537ee665c-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 20:26:23 crc kubenswrapper[4727]: I1121 20:26:23.393425 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntzmz\" (UniqueName: \"kubernetes.io/projected/bd484d31-a5ee-44bc-8d30-fdc1819e0956-kube-api-access-ntzmz\") on node \"crc\" DevicePath \"\"" Nov 21 20:26:23 crc kubenswrapper[4727]: I1121 20:26:23.709453 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-3bfd-account-create-zvmnh" event={"ID":"69b163ff-e145-4bac-b538-2db537ee665c","Type":"ContainerDied","Data":"fc1f28bc3a494fbfa4c3b0a76e4e800053f74fbc433199fa249a7480aece860f"} Nov 21 20:26:23 crc kubenswrapper[4727]: I1121 20:26:23.709846 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc1f28bc3a494fbfa4c3b0a76e4e800053f74fbc433199fa249a7480aece860f" Nov 21 20:26:23 crc kubenswrapper[4727]: I1121 20:26:23.709489 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-3bfd-account-create-zvmnh" Nov 21 20:26:23 crc kubenswrapper[4727]: I1121 20:26:23.712474 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-0c83-account-create-6dpfx" Nov 21 20:26:23 crc kubenswrapper[4727]: I1121 20:26:23.713216 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0c83-account-create-6dpfx" event={"ID":"bd484d31-a5ee-44bc-8d30-fdc1819e0956","Type":"ContainerDied","Data":"9a0a703953610d79149df38c534ee95e889dfae46f65284abda8d4120a9d6d4a"} Nov 21 20:26:23 crc kubenswrapper[4727]: I1121 20:26:23.713260 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a0a703953610d79149df38c534ee95e889dfae46f65284abda8d4120a9d6d4a" Nov 21 20:26:24 crc kubenswrapper[4727]: I1121 20:26:24.249371 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-rpv9s" Nov 21 20:26:24 crc kubenswrapper[4727]: I1121 20:26:24.322890 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0672730-e181-488f-8472-a20c75dcb285-combined-ca-bundle\") pod \"e0672730-e181-488f-8472-a20c75dcb285\" (UID: \"e0672730-e181-488f-8472-a20c75dcb285\") " Nov 21 20:26:24 crc kubenswrapper[4727]: I1121 20:26:24.322949 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2f7vr\" (UniqueName: \"kubernetes.io/projected/e0672730-e181-488f-8472-a20c75dcb285-kube-api-access-2f7vr\") pod \"e0672730-e181-488f-8472-a20c75dcb285\" (UID: \"e0672730-e181-488f-8472-a20c75dcb285\") " Nov 21 20:26:24 crc kubenswrapper[4727]: I1121 20:26:24.323039 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0672730-e181-488f-8472-a20c75dcb285-config-data\") pod 
\"e0672730-e181-488f-8472-a20c75dcb285\" (UID: \"e0672730-e181-488f-8472-a20c75dcb285\") " Nov 21 20:26:24 crc kubenswrapper[4727]: I1121 20:26:24.323123 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e0672730-e181-488f-8472-a20c75dcb285-db-sync-config-data\") pod \"e0672730-e181-488f-8472-a20c75dcb285\" (UID: \"e0672730-e181-488f-8472-a20c75dcb285\") " Nov 21 20:26:24 crc kubenswrapper[4727]: I1121 20:26:24.352012 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0672730-e181-488f-8472-a20c75dcb285-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "e0672730-e181-488f-8472-a20c75dcb285" (UID: "e0672730-e181-488f-8472-a20c75dcb285"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:26:24 crc kubenswrapper[4727]: I1121 20:26:24.364699 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0672730-e181-488f-8472-a20c75dcb285-kube-api-access-2f7vr" (OuterVolumeSpecName: "kube-api-access-2f7vr") pod "e0672730-e181-488f-8472-a20c75dcb285" (UID: "e0672730-e181-488f-8472-a20c75dcb285"). InnerVolumeSpecName "kube-api-access-2f7vr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:26:24 crc kubenswrapper[4727]: I1121 20:26:24.425047 4727 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e0672730-e181-488f-8472-a20c75dcb285-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 20:26:24 crc kubenswrapper[4727]: I1121 20:26:24.425079 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2f7vr\" (UniqueName: \"kubernetes.io/projected/e0672730-e181-488f-8472-a20c75dcb285-kube-api-access-2f7vr\") on node \"crc\" DevicePath \"\"" Nov 21 20:26:24 crc kubenswrapper[4727]: I1121 20:26:24.433343 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0672730-e181-488f-8472-a20c75dcb285-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e0672730-e181-488f-8472-a20c75dcb285" (UID: "e0672730-e181-488f-8472-a20c75dcb285"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:26:24 crc kubenswrapper[4727]: I1121 20:26:24.486146 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0672730-e181-488f-8472-a20c75dcb285-config-data" (OuterVolumeSpecName: "config-data") pod "e0672730-e181-488f-8472-a20c75dcb285" (UID: "e0672730-e181-488f-8472-a20c75dcb285"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:26:24 crc kubenswrapper[4727]: I1121 20:26:24.532591 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0672730-e181-488f-8472-a20c75dcb285-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 20:26:24 crc kubenswrapper[4727]: I1121 20:26:24.532621 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0672730-e181-488f-8472-a20c75dcb285-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:26:24 crc kubenswrapper[4727]: I1121 20:26:24.722012 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ff5d12dc-e65b-41f0-b29b-5c4eea0fada2","Type":"ContainerStarted","Data":"39cb89b924a76a52523ecbca5b470a8255db0dbf4f051b2b1e6e6c4712ef13cc"} Nov 21 20:26:24 crc kubenswrapper[4727]: I1121 20:26:24.724835 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"29faf340-95f4-4bd3-bd87-f2e971a0e494","Type":"ContainerStarted","Data":"ef6513c150bdc302a4e302c27ce2d702bbbb50171d7326ba01c288109c133c68"} Nov 21 20:26:24 crc kubenswrapper[4727]: I1121 20:26:24.724865 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"29faf340-95f4-4bd3-bd87-f2e971a0e494","Type":"ContainerStarted","Data":"514aa3dc1b6d9ef21b6007ee9b03ceaab991b4d2c7903dc2759c3c4c3f7f9828"} Nov 21 20:26:24 crc kubenswrapper[4727]: I1121 20:26:24.724878 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"29faf340-95f4-4bd3-bd87-f2e971a0e494","Type":"ContainerStarted","Data":"89cadf90a9c2c2bb2b8b861e5b40f940d966a75fa388d362d7b0fd079ffdd374"} Nov 21 20:26:24 crc kubenswrapper[4727]: I1121 20:26:24.726624 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-6fn75" 
event={"ID":"f1cdc530-12b5-4bc8-9ced-8aa3f2f6b5b7","Type":"ContainerStarted","Data":"32fede56cc063c09059cc4dc687d122db2760c62019b2b140477fb00117876a6"} Nov 21 20:26:24 crc kubenswrapper[4727]: I1121 20:26:24.730914 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-rpv9s" event={"ID":"e0672730-e181-488f-8472-a20c75dcb285","Type":"ContainerDied","Data":"43635039285fa40bbed51545a6aac04365ccf5445cb08aa55fee7660bf4d089f"} Nov 21 20:26:24 crc kubenswrapper[4727]: I1121 20:26:24.730943 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43635039285fa40bbed51545a6aac04365ccf5445cb08aa55fee7660bf4d089f" Nov 21 20:26:24 crc kubenswrapper[4727]: I1121 20:26:24.731083 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-rpv9s" Nov 21 20:26:24 crc kubenswrapper[4727]: I1121 20:26:24.753838 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-6fn75" podStartSLOduration=4.44190915 podStartE2EDuration="11.753820178s" podCreationTimestamp="2025-11-21 20:26:13 +0000 UTC" firstStartedPulling="2025-11-21 20:26:16.242443262 +0000 UTC m=+1181.428628306" lastFinishedPulling="2025-11-21 20:26:23.55435429 +0000 UTC m=+1188.740539334" observedRunningTime="2025-11-21 20:26:24.74905719 +0000 UTC m=+1189.935242234" watchObservedRunningTime="2025-11-21 20:26:24.753820178 +0000 UTC m=+1189.940005212" Nov 21 20:26:25 crc kubenswrapper[4727]: I1121 20:26:25.050859 4727 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 21 20:26:25 crc kubenswrapper[4727]: I1121 20:26:25.277922 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-gx5d7"] Nov 21 20:26:25 crc kubenswrapper[4727]: E1121 20:26:25.278661 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0672730-e181-488f-8472-a20c75dcb285" 
containerName="glance-db-sync" Nov 21 20:26:25 crc kubenswrapper[4727]: I1121 20:26:25.278683 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0672730-e181-488f-8472-a20c75dcb285" containerName="glance-db-sync" Nov 21 20:26:25 crc kubenswrapper[4727]: E1121 20:26:25.278698 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7566d068-bef7-4b58-8460-1e259bd2dd94" containerName="mariadb-database-create" Nov 21 20:26:25 crc kubenswrapper[4727]: I1121 20:26:25.278705 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="7566d068-bef7-4b58-8460-1e259bd2dd94" containerName="mariadb-database-create" Nov 21 20:26:25 crc kubenswrapper[4727]: E1121 20:26:25.278722 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f1bf400-e74d-4100-b45b-3586af918b21" containerName="mariadb-account-create" Nov 21 20:26:25 crc kubenswrapper[4727]: I1121 20:26:25.278729 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f1bf400-e74d-4100-b45b-3586af918b21" containerName="mariadb-account-create" Nov 21 20:26:25 crc kubenswrapper[4727]: E1121 20:26:25.278745 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69b163ff-e145-4bac-b538-2db537ee665c" containerName="mariadb-account-create" Nov 21 20:26:25 crc kubenswrapper[4727]: I1121 20:26:25.278750 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="69b163ff-e145-4bac-b538-2db537ee665c" containerName="mariadb-account-create" Nov 21 20:26:25 crc kubenswrapper[4727]: E1121 20:26:25.278759 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53ad39e2-f848-4b8c-8493-b9a268c6ee5e" containerName="mariadb-database-create" Nov 21 20:26:25 crc kubenswrapper[4727]: I1121 20:26:25.278765 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="53ad39e2-f848-4b8c-8493-b9a268c6ee5e" containerName="mariadb-database-create" Nov 21 20:26:25 crc kubenswrapper[4727]: E1121 20:26:25.278784 4727 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d07fdc4d-df34-4241-8682-04203343eb9c" containerName="mariadb-database-create" Nov 21 20:26:25 crc kubenswrapper[4727]: I1121 20:26:25.278791 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="d07fdc4d-df34-4241-8682-04203343eb9c" containerName="mariadb-database-create" Nov 21 20:26:25 crc kubenswrapper[4727]: E1121 20:26:25.278805 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd484d31-a5ee-44bc-8d30-fdc1819e0956" containerName="mariadb-account-create" Nov 21 20:26:25 crc kubenswrapper[4727]: I1121 20:26:25.278811 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd484d31-a5ee-44bc-8d30-fdc1819e0956" containerName="mariadb-account-create" Nov 21 20:26:25 crc kubenswrapper[4727]: E1121 20:26:25.278822 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05ba3e0c-281c-4d28-87ca-076a68130e0d" containerName="mariadb-database-create" Nov 21 20:26:25 crc kubenswrapper[4727]: I1121 20:26:25.278828 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="05ba3e0c-281c-4d28-87ca-076a68130e0d" containerName="mariadb-database-create" Nov 21 20:26:25 crc kubenswrapper[4727]: E1121 20:26:25.278844 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9e530a2-cd2a-484f-87b8-4e4ef966b4ef" containerName="mariadb-account-create" Nov 21 20:26:25 crc kubenswrapper[4727]: I1121 20:26:25.278849 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9e530a2-cd2a-484f-87b8-4e4ef966b4ef" containerName="mariadb-account-create" Nov 21 20:26:25 crc kubenswrapper[4727]: I1121 20:26:25.279047 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd484d31-a5ee-44bc-8d30-fdc1819e0956" containerName="mariadb-account-create" Nov 21 20:26:25 crc kubenswrapper[4727]: I1121 20:26:25.279061 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="05ba3e0c-281c-4d28-87ca-076a68130e0d" containerName="mariadb-database-create" Nov 21 20:26:25 crc kubenswrapper[4727]: I1121 
20:26:25.279071 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="7566d068-bef7-4b58-8460-1e259bd2dd94" containerName="mariadb-database-create" Nov 21 20:26:25 crc kubenswrapper[4727]: I1121 20:26:25.279081 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9e530a2-cd2a-484f-87b8-4e4ef966b4ef" containerName="mariadb-account-create" Nov 21 20:26:25 crc kubenswrapper[4727]: I1121 20:26:25.279092 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f1bf400-e74d-4100-b45b-3586af918b21" containerName="mariadb-account-create" Nov 21 20:26:25 crc kubenswrapper[4727]: I1121 20:26:25.279108 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="d07fdc4d-df34-4241-8682-04203343eb9c" containerName="mariadb-database-create" Nov 21 20:26:25 crc kubenswrapper[4727]: I1121 20:26:25.279119 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="69b163ff-e145-4bac-b538-2db537ee665c" containerName="mariadb-account-create" Nov 21 20:26:25 crc kubenswrapper[4727]: I1121 20:26:25.279133 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="53ad39e2-f848-4b8c-8493-b9a268c6ee5e" containerName="mariadb-database-create" Nov 21 20:26:25 crc kubenswrapper[4727]: I1121 20:26:25.279144 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0672730-e181-488f-8472-a20c75dcb285" containerName="glance-db-sync" Nov 21 20:26:25 crc kubenswrapper[4727]: I1121 20:26:25.280172 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-gx5d7" Nov 21 20:26:25 crc kubenswrapper[4727]: I1121 20:26:25.307635 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-gx5d7"] Nov 21 20:26:25 crc kubenswrapper[4727]: I1121 20:26:25.349186 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftjsl\" (UniqueName: \"kubernetes.io/projected/b7e68562-155f-4f92-b444-bf5c59e99024-kube-api-access-ftjsl\") pod \"dnsmasq-dns-74dc88fc-gx5d7\" (UID: \"b7e68562-155f-4f92-b444-bf5c59e99024\") " pod="openstack/dnsmasq-dns-74dc88fc-gx5d7" Nov 21 20:26:25 crc kubenswrapper[4727]: I1121 20:26:25.349271 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b7e68562-155f-4f92-b444-bf5c59e99024-ovsdbserver-nb\") pod \"dnsmasq-dns-74dc88fc-gx5d7\" (UID: \"b7e68562-155f-4f92-b444-bf5c59e99024\") " pod="openstack/dnsmasq-dns-74dc88fc-gx5d7" Nov 21 20:26:25 crc kubenswrapper[4727]: I1121 20:26:25.349308 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b7e68562-155f-4f92-b444-bf5c59e99024-ovsdbserver-sb\") pod \"dnsmasq-dns-74dc88fc-gx5d7\" (UID: \"b7e68562-155f-4f92-b444-bf5c59e99024\") " pod="openstack/dnsmasq-dns-74dc88fc-gx5d7" Nov 21 20:26:25 crc kubenswrapper[4727]: I1121 20:26:25.349333 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7e68562-155f-4f92-b444-bf5c59e99024-dns-svc\") pod \"dnsmasq-dns-74dc88fc-gx5d7\" (UID: \"b7e68562-155f-4f92-b444-bf5c59e99024\") " pod="openstack/dnsmasq-dns-74dc88fc-gx5d7" Nov 21 20:26:25 crc kubenswrapper[4727]: I1121 20:26:25.349397 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7e68562-155f-4f92-b444-bf5c59e99024-config\") pod \"dnsmasq-dns-74dc88fc-gx5d7\" (UID: \"b7e68562-155f-4f92-b444-bf5c59e99024\") " pod="openstack/dnsmasq-dns-74dc88fc-gx5d7" Nov 21 20:26:25 crc kubenswrapper[4727]: I1121 20:26:25.453357 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b7e68562-155f-4f92-b444-bf5c59e99024-ovsdbserver-sb\") pod \"dnsmasq-dns-74dc88fc-gx5d7\" (UID: \"b7e68562-155f-4f92-b444-bf5c59e99024\") " pod="openstack/dnsmasq-dns-74dc88fc-gx5d7" Nov 21 20:26:25 crc kubenswrapper[4727]: I1121 20:26:25.453459 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7e68562-155f-4f92-b444-bf5c59e99024-dns-svc\") pod \"dnsmasq-dns-74dc88fc-gx5d7\" (UID: \"b7e68562-155f-4f92-b444-bf5c59e99024\") " pod="openstack/dnsmasq-dns-74dc88fc-gx5d7" Nov 21 20:26:25 crc kubenswrapper[4727]: I1121 20:26:25.453569 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7e68562-155f-4f92-b444-bf5c59e99024-config\") pod \"dnsmasq-dns-74dc88fc-gx5d7\" (UID: \"b7e68562-155f-4f92-b444-bf5c59e99024\") " pod="openstack/dnsmasq-dns-74dc88fc-gx5d7" Nov 21 20:26:25 crc kubenswrapper[4727]: I1121 20:26:25.453682 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftjsl\" (UniqueName: \"kubernetes.io/projected/b7e68562-155f-4f92-b444-bf5c59e99024-kube-api-access-ftjsl\") pod \"dnsmasq-dns-74dc88fc-gx5d7\" (UID: \"b7e68562-155f-4f92-b444-bf5c59e99024\") " pod="openstack/dnsmasq-dns-74dc88fc-gx5d7" Nov 21 20:26:25 crc kubenswrapper[4727]: I1121 20:26:25.453740 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/b7e68562-155f-4f92-b444-bf5c59e99024-ovsdbserver-nb\") pod \"dnsmasq-dns-74dc88fc-gx5d7\" (UID: \"b7e68562-155f-4f92-b444-bf5c59e99024\") " pod="openstack/dnsmasq-dns-74dc88fc-gx5d7" Nov 21 20:26:25 crc kubenswrapper[4727]: I1121 20:26:25.455056 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b7e68562-155f-4f92-b444-bf5c59e99024-ovsdbserver-sb\") pod \"dnsmasq-dns-74dc88fc-gx5d7\" (UID: \"b7e68562-155f-4f92-b444-bf5c59e99024\") " pod="openstack/dnsmasq-dns-74dc88fc-gx5d7" Nov 21 20:26:25 crc kubenswrapper[4727]: I1121 20:26:25.455157 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7e68562-155f-4f92-b444-bf5c59e99024-dns-svc\") pod \"dnsmasq-dns-74dc88fc-gx5d7\" (UID: \"b7e68562-155f-4f92-b444-bf5c59e99024\") " pod="openstack/dnsmasq-dns-74dc88fc-gx5d7" Nov 21 20:26:25 crc kubenswrapper[4727]: I1121 20:26:25.455288 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b7e68562-155f-4f92-b444-bf5c59e99024-ovsdbserver-nb\") pod \"dnsmasq-dns-74dc88fc-gx5d7\" (UID: \"b7e68562-155f-4f92-b444-bf5c59e99024\") " pod="openstack/dnsmasq-dns-74dc88fc-gx5d7" Nov 21 20:26:25 crc kubenswrapper[4727]: I1121 20:26:25.455617 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7e68562-155f-4f92-b444-bf5c59e99024-config\") pod \"dnsmasq-dns-74dc88fc-gx5d7\" (UID: \"b7e68562-155f-4f92-b444-bf5c59e99024\") " pod="openstack/dnsmasq-dns-74dc88fc-gx5d7" Nov 21 20:26:25 crc kubenswrapper[4727]: I1121 20:26:25.487605 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftjsl\" (UniqueName: \"kubernetes.io/projected/b7e68562-155f-4f92-b444-bf5c59e99024-kube-api-access-ftjsl\") pod \"dnsmasq-dns-74dc88fc-gx5d7\" (UID: 
\"b7e68562-155f-4f92-b444-bf5c59e99024\") " pod="openstack/dnsmasq-dns-74dc88fc-gx5d7" Nov 21 20:26:25 crc kubenswrapper[4727]: I1121 20:26:25.611551 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-gx5d7" Nov 21 20:26:25 crc kubenswrapper[4727]: I1121 20:26:25.770063 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"29faf340-95f4-4bd3-bd87-f2e971a0e494","Type":"ContainerStarted","Data":"9645887fa06c4ed8128752cd78bf383514b067f334d290acb03d0495d2cc4f56"} Nov 21 20:26:26 crc kubenswrapper[4727]: I1121 20:26:26.191146 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-gx5d7"] Nov 21 20:26:26 crc kubenswrapper[4727]: I1121 20:26:26.786751 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-gx5d7" event={"ID":"b7e68562-155f-4f92-b444-bf5c59e99024","Type":"ContainerStarted","Data":"698f486e0af8e0c5534defdfefe5937435b71c3bfe66c3ffff637acb941ab011"} Nov 21 20:26:27 crc kubenswrapper[4727]: I1121 20:26:27.818427 4727 generic.go:334] "Generic (PLEG): container finished" podID="b7e68562-155f-4f92-b444-bf5c59e99024" containerID="a0e891faefa4052093484717919ae2477863ffef4dadc9a5f5528f1b176d7336" exitCode=0 Nov 21 20:26:27 crc kubenswrapper[4727]: I1121 20:26:27.819220 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-gx5d7" event={"ID":"b7e68562-155f-4f92-b444-bf5c59e99024","Type":"ContainerDied","Data":"a0e891faefa4052093484717919ae2477863ffef4dadc9a5f5528f1b176d7336"} Nov 21 20:26:27 crc kubenswrapper[4727]: I1121 20:26:27.855587 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ff5d12dc-e65b-41f0-b29b-5c4eea0fada2","Type":"ContainerStarted","Data":"481e6806e1a33e178a49ba950e71b44f69a02c6bea96cae410fa6763b6ce4115"} Nov 21 20:26:27 crc kubenswrapper[4727]: I1121 20:26:27.855973 4727 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ff5d12dc-e65b-41f0-b29b-5c4eea0fada2","Type":"ContainerStarted","Data":"d1de0d426cd6b7a8dc116620dfbd42a2f5caac96960218d0bd8b588650b82ba5"} Nov 21 20:26:27 crc kubenswrapper[4727]: I1121 20:26:27.880672 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"29faf340-95f4-4bd3-bd87-f2e971a0e494","Type":"ContainerStarted","Data":"1274f1950ba87084656860998b25fc0a34a89a078d65cfc001e9e65bdd3c2cb0"} Nov 21 20:26:27 crc kubenswrapper[4727]: I1121 20:26:27.880745 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"29faf340-95f4-4bd3-bd87-f2e971a0e494","Type":"ContainerStarted","Data":"b7e759f990fde016b0fbaf0322532756c9cca0bfa221d1a640c243e8f838e44c"} Nov 21 20:26:27 crc kubenswrapper[4727]: I1121 20:26:27.880758 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"29faf340-95f4-4bd3-bd87-f2e971a0e494","Type":"ContainerStarted","Data":"0a7dee7793da1835d1815771261522d3439b85860d455c4edaf453de3deb24e1"} Nov 21 20:26:27 crc kubenswrapper[4727]: I1121 20:26:27.920479 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=18.920459459 podStartE2EDuration="18.920459459s" podCreationTimestamp="2025-11-21 20:26:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:26:27.904347418 +0000 UTC m=+1193.090532462" watchObservedRunningTime="2025-11-21 20:26:27.920459459 +0000 UTC m=+1193.106644503" Nov 21 20:26:28 crc kubenswrapper[4727]: I1121 20:26:28.896930 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-gx5d7" 
event={"ID":"b7e68562-155f-4f92-b444-bf5c59e99024","Type":"ContainerStarted","Data":"3dd102538622b59fccc3c3341d9b72fe2b4b98cdcccd4115279fc2475cefd719"} Nov 21 20:26:28 crc kubenswrapper[4727]: I1121 20:26:28.897368 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74dc88fc-gx5d7" Nov 21 20:26:28 crc kubenswrapper[4727]: I1121 20:26:28.904415 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"29faf340-95f4-4bd3-bd87-f2e971a0e494","Type":"ContainerStarted","Data":"9042bfe3551fc9f785b5bbc478fe7809912ca47a3ee2fa6493ee5eb4e14bb239"} Nov 21 20:26:28 crc kubenswrapper[4727]: I1121 20:26:28.904467 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"29faf340-95f4-4bd3-bd87-f2e971a0e494","Type":"ContainerStarted","Data":"a921351c545bbb60474008b46b66d8dee0a588139604b4b10b5335488d0d6d91"} Nov 21 20:26:28 crc kubenswrapper[4727]: I1121 20:26:28.904482 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"29faf340-95f4-4bd3-bd87-f2e971a0e494","Type":"ContainerStarted","Data":"babc23f88ee7740f9042a5463cc574b694f2cbcb710ef16daec46bc3259c6492"} Nov 21 20:26:28 crc kubenswrapper[4727]: I1121 20:26:28.904493 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"29faf340-95f4-4bd3-bd87-f2e971a0e494","Type":"ContainerStarted","Data":"6a61cf25da10aad6735ad2bb401468f0d523679815c64f92093d62acc3fac187"} Nov 21 20:26:28 crc kubenswrapper[4727]: I1121 20:26:28.906460 4727 generic.go:334] "Generic (PLEG): container finished" podID="f1cdc530-12b5-4bc8-9ced-8aa3f2f6b5b7" containerID="32fede56cc063c09059cc4dc687d122db2760c62019b2b140477fb00117876a6" exitCode=0 Nov 21 20:26:28 crc kubenswrapper[4727]: I1121 20:26:28.906535 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-6fn75" 
event={"ID":"f1cdc530-12b5-4bc8-9ced-8aa3f2f6b5b7","Type":"ContainerDied","Data":"32fede56cc063c09059cc4dc687d122db2760c62019b2b140477fb00117876a6"} Nov 21 20:26:28 crc kubenswrapper[4727]: I1121 20:26:28.939127 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74dc88fc-gx5d7" podStartSLOduration=3.939104611 podStartE2EDuration="3.939104611s" podCreationTimestamp="2025-11-21 20:26:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:26:28.920157771 +0000 UTC m=+1194.106342815" watchObservedRunningTime="2025-11-21 20:26:28.939104611 +0000 UTC m=+1194.125289655" Nov 21 20:26:28 crc kubenswrapper[4727]: I1121 20:26:28.979061 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=36.504262202 podStartE2EDuration="50.978943472s" podCreationTimestamp="2025-11-21 20:25:38 +0000 UTC" firstStartedPulling="2025-11-21 20:26:12.133367683 +0000 UTC m=+1177.319552727" lastFinishedPulling="2025-11-21 20:26:26.608048953 +0000 UTC m=+1191.794233997" observedRunningTime="2025-11-21 20:26:28.978195183 +0000 UTC m=+1194.164380227" watchObservedRunningTime="2025-11-21 20:26:28.978943472 +0000 UTC m=+1194.165128516" Nov 21 20:26:29 crc kubenswrapper[4727]: I1121 20:26:29.388609 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-gx5d7"] Nov 21 20:26:29 crc kubenswrapper[4727]: I1121 20:26:29.439275 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-gbmn9"] Nov 21 20:26:29 crc kubenswrapper[4727]: I1121 20:26:29.442818 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-gbmn9" Nov 21 20:26:29 crc kubenswrapper[4727]: I1121 20:26:29.448529 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Nov 21 20:26:29 crc kubenswrapper[4727]: I1121 20:26:29.457577 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-gbmn9"] Nov 21 20:26:29 crc kubenswrapper[4727]: I1121 20:26:29.563081 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5938ab71-ffe2-416d-bc16-4f927dbab94b-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-gbmn9\" (UID: \"5938ab71-ffe2-416d-bc16-4f927dbab94b\") " pod="openstack/dnsmasq-dns-5f59b8f679-gbmn9" Nov 21 20:26:29 crc kubenswrapper[4727]: I1121 20:26:29.563151 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfqhw\" (UniqueName: \"kubernetes.io/projected/5938ab71-ffe2-416d-bc16-4f927dbab94b-kube-api-access-mfqhw\") pod \"dnsmasq-dns-5f59b8f679-gbmn9\" (UID: \"5938ab71-ffe2-416d-bc16-4f927dbab94b\") " pod="openstack/dnsmasq-dns-5f59b8f679-gbmn9" Nov 21 20:26:29 crc kubenswrapper[4727]: I1121 20:26:29.563403 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5938ab71-ffe2-416d-bc16-4f927dbab94b-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-gbmn9\" (UID: \"5938ab71-ffe2-416d-bc16-4f927dbab94b\") " pod="openstack/dnsmasq-dns-5f59b8f679-gbmn9" Nov 21 20:26:29 crc kubenswrapper[4727]: I1121 20:26:29.563724 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5938ab71-ffe2-416d-bc16-4f927dbab94b-config\") pod \"dnsmasq-dns-5f59b8f679-gbmn9\" (UID: \"5938ab71-ffe2-416d-bc16-4f927dbab94b\") " 
pod="openstack/dnsmasq-dns-5f59b8f679-gbmn9" Nov 21 20:26:29 crc kubenswrapper[4727]: I1121 20:26:29.564049 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5938ab71-ffe2-416d-bc16-4f927dbab94b-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-gbmn9\" (UID: \"5938ab71-ffe2-416d-bc16-4f927dbab94b\") " pod="openstack/dnsmasq-dns-5f59b8f679-gbmn9" Nov 21 20:26:29 crc kubenswrapper[4727]: I1121 20:26:29.564077 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5938ab71-ffe2-416d-bc16-4f927dbab94b-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-gbmn9\" (UID: \"5938ab71-ffe2-416d-bc16-4f927dbab94b\") " pod="openstack/dnsmasq-dns-5f59b8f679-gbmn9" Nov 21 20:26:29 crc kubenswrapper[4727]: I1121 20:26:29.665468 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5938ab71-ffe2-416d-bc16-4f927dbab94b-config\") pod \"dnsmasq-dns-5f59b8f679-gbmn9\" (UID: \"5938ab71-ffe2-416d-bc16-4f927dbab94b\") " pod="openstack/dnsmasq-dns-5f59b8f679-gbmn9" Nov 21 20:26:29 crc kubenswrapper[4727]: I1121 20:26:29.665581 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5938ab71-ffe2-416d-bc16-4f927dbab94b-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-gbmn9\" (UID: \"5938ab71-ffe2-416d-bc16-4f927dbab94b\") " pod="openstack/dnsmasq-dns-5f59b8f679-gbmn9" Nov 21 20:26:29 crc kubenswrapper[4727]: I1121 20:26:29.665601 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5938ab71-ffe2-416d-bc16-4f927dbab94b-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-gbmn9\" (UID: \"5938ab71-ffe2-416d-bc16-4f927dbab94b\") " 
pod="openstack/dnsmasq-dns-5f59b8f679-gbmn9" Nov 21 20:26:29 crc kubenswrapper[4727]: I1121 20:26:29.666467 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5938ab71-ffe2-416d-bc16-4f927dbab94b-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-gbmn9\" (UID: \"5938ab71-ffe2-416d-bc16-4f927dbab94b\") " pod="openstack/dnsmasq-dns-5f59b8f679-gbmn9" Nov 21 20:26:29 crc kubenswrapper[4727]: I1121 20:26:29.666509 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5938ab71-ffe2-416d-bc16-4f927dbab94b-config\") pod \"dnsmasq-dns-5f59b8f679-gbmn9\" (UID: \"5938ab71-ffe2-416d-bc16-4f927dbab94b\") " pod="openstack/dnsmasq-dns-5f59b8f679-gbmn9" Nov 21 20:26:29 crc kubenswrapper[4727]: I1121 20:26:29.666570 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5938ab71-ffe2-416d-bc16-4f927dbab94b-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-gbmn9\" (UID: \"5938ab71-ffe2-416d-bc16-4f927dbab94b\") " pod="openstack/dnsmasq-dns-5f59b8f679-gbmn9" Nov 21 20:26:29 crc kubenswrapper[4727]: I1121 20:26:29.666901 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5938ab71-ffe2-416d-bc16-4f927dbab94b-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-gbmn9\" (UID: \"5938ab71-ffe2-416d-bc16-4f927dbab94b\") " pod="openstack/dnsmasq-dns-5f59b8f679-gbmn9" Nov 21 20:26:29 crc kubenswrapper[4727]: I1121 20:26:29.667270 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5938ab71-ffe2-416d-bc16-4f927dbab94b-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-gbmn9\" (UID: \"5938ab71-ffe2-416d-bc16-4f927dbab94b\") " pod="openstack/dnsmasq-dns-5f59b8f679-gbmn9" Nov 21 20:26:29 crc kubenswrapper[4727]: I1121 
20:26:29.667342 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfqhw\" (UniqueName: \"kubernetes.io/projected/5938ab71-ffe2-416d-bc16-4f927dbab94b-kube-api-access-mfqhw\") pod \"dnsmasq-dns-5f59b8f679-gbmn9\" (UID: \"5938ab71-ffe2-416d-bc16-4f927dbab94b\") " pod="openstack/dnsmasq-dns-5f59b8f679-gbmn9" Nov 21 20:26:29 crc kubenswrapper[4727]: I1121 20:26:29.667646 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5938ab71-ffe2-416d-bc16-4f927dbab94b-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-gbmn9\" (UID: \"5938ab71-ffe2-416d-bc16-4f927dbab94b\") " pod="openstack/dnsmasq-dns-5f59b8f679-gbmn9" Nov 21 20:26:29 crc kubenswrapper[4727]: I1121 20:26:29.668313 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5938ab71-ffe2-416d-bc16-4f927dbab94b-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-gbmn9\" (UID: \"5938ab71-ffe2-416d-bc16-4f927dbab94b\") " pod="openstack/dnsmasq-dns-5f59b8f679-gbmn9" Nov 21 20:26:29 crc kubenswrapper[4727]: I1121 20:26:29.687539 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfqhw\" (UniqueName: \"kubernetes.io/projected/5938ab71-ffe2-416d-bc16-4f927dbab94b-kube-api-access-mfqhw\") pod \"dnsmasq-dns-5f59b8f679-gbmn9\" (UID: \"5938ab71-ffe2-416d-bc16-4f927dbab94b\") " pod="openstack/dnsmasq-dns-5f59b8f679-gbmn9" Nov 21 20:26:29 crc kubenswrapper[4727]: I1121 20:26:29.769598 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-gbmn9" Nov 21 20:26:30 crc kubenswrapper[4727]: I1121 20:26:30.004336 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Nov 21 20:26:30 crc kubenswrapper[4727]: I1121 20:26:30.279052 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-gbmn9"] Nov 21 20:26:30 crc kubenswrapper[4727]: I1121 20:26:30.414247 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-6fn75" Nov 21 20:26:30 crc kubenswrapper[4727]: I1121 20:26:30.485593 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1cdc530-12b5-4bc8-9ced-8aa3f2f6b5b7-config-data\") pod \"f1cdc530-12b5-4bc8-9ced-8aa3f2f6b5b7\" (UID: \"f1cdc530-12b5-4bc8-9ced-8aa3f2f6b5b7\") " Nov 21 20:26:30 crc kubenswrapper[4727]: I1121 20:26:30.486035 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1cdc530-12b5-4bc8-9ced-8aa3f2f6b5b7-combined-ca-bundle\") pod \"f1cdc530-12b5-4bc8-9ced-8aa3f2f6b5b7\" (UID: \"f1cdc530-12b5-4bc8-9ced-8aa3f2f6b5b7\") " Nov 21 20:26:30 crc kubenswrapper[4727]: I1121 20:26:30.486117 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5fzg\" (UniqueName: \"kubernetes.io/projected/f1cdc530-12b5-4bc8-9ced-8aa3f2f6b5b7-kube-api-access-h5fzg\") pod \"f1cdc530-12b5-4bc8-9ced-8aa3f2f6b5b7\" (UID: \"f1cdc530-12b5-4bc8-9ced-8aa3f2f6b5b7\") " Nov 21 20:26:30 crc kubenswrapper[4727]: I1121 20:26:30.493219 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1cdc530-12b5-4bc8-9ced-8aa3f2f6b5b7-kube-api-access-h5fzg" (OuterVolumeSpecName: "kube-api-access-h5fzg") pod "f1cdc530-12b5-4bc8-9ced-8aa3f2f6b5b7" (UID: 
"f1cdc530-12b5-4bc8-9ced-8aa3f2f6b5b7"). InnerVolumeSpecName "kube-api-access-h5fzg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:26:30 crc kubenswrapper[4727]: I1121 20:26:30.526101 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1cdc530-12b5-4bc8-9ced-8aa3f2f6b5b7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f1cdc530-12b5-4bc8-9ced-8aa3f2f6b5b7" (UID: "f1cdc530-12b5-4bc8-9ced-8aa3f2f6b5b7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:26:30 crc kubenswrapper[4727]: I1121 20:26:30.548977 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1cdc530-12b5-4bc8-9ced-8aa3f2f6b5b7-config-data" (OuterVolumeSpecName: "config-data") pod "f1cdc530-12b5-4bc8-9ced-8aa3f2f6b5b7" (UID: "f1cdc530-12b5-4bc8-9ced-8aa3f2f6b5b7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:26:30 crc kubenswrapper[4727]: I1121 20:26:30.588905 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1cdc530-12b5-4bc8-9ced-8aa3f2f6b5b7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:26:30 crc kubenswrapper[4727]: I1121 20:26:30.588945 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5fzg\" (UniqueName: \"kubernetes.io/projected/f1cdc530-12b5-4bc8-9ced-8aa3f2f6b5b7-kube-api-access-h5fzg\") on node \"crc\" DevicePath \"\"" Nov 21 20:26:30 crc kubenswrapper[4727]: I1121 20:26:30.588968 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1cdc530-12b5-4bc8-9ced-8aa3f2f6b5b7-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 20:26:30 crc kubenswrapper[4727]: I1121 20:26:30.941899 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-6fn75" 
event={"ID":"f1cdc530-12b5-4bc8-9ced-8aa3f2f6b5b7","Type":"ContainerDied","Data":"88014f48306e5593aa63bc16dc09590403f6b1f98ebecf435bd432183cb1ad3d"} Nov 21 20:26:30 crc kubenswrapper[4727]: I1121 20:26:30.941971 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88014f48306e5593aa63bc16dc09590403f6b1f98ebecf435bd432183cb1ad3d" Nov 21 20:26:30 crc kubenswrapper[4727]: I1121 20:26:30.942056 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-6fn75" Nov 21 20:26:30 crc kubenswrapper[4727]: I1121 20:26:30.944722 4727 generic.go:334] "Generic (PLEG): container finished" podID="5938ab71-ffe2-416d-bc16-4f927dbab94b" containerID="58a8a9915d9a7c7124a5a8a40854e96c886d8a68f0000227ba1427a554d28d99" exitCode=0 Nov 21 20:26:30 crc kubenswrapper[4727]: I1121 20:26:30.944933 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-gbmn9" event={"ID":"5938ab71-ffe2-416d-bc16-4f927dbab94b","Type":"ContainerDied","Data":"58a8a9915d9a7c7124a5a8a40854e96c886d8a68f0000227ba1427a554d28d99"} Nov 21 20:26:30 crc kubenswrapper[4727]: I1121 20:26:30.945015 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-gbmn9" event={"ID":"5938ab71-ffe2-416d-bc16-4f927dbab94b","Type":"ContainerStarted","Data":"cf6b98b353655fb3aa1d88c40be2fe4c9117787ed4c2951bca4fb4df74fa2d45"} Nov 21 20:26:30 crc kubenswrapper[4727]: I1121 20:26:30.945077 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74dc88fc-gx5d7" podUID="b7e68562-155f-4f92-b444-bf5c59e99024" containerName="dnsmasq-dns" containerID="cri-o://3dd102538622b59fccc3c3341d9b72fe2b4b98cdcccd4115279fc2475cefd719" gracePeriod=10 Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.246041 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-gbmn9"] Nov 21 20:26:31 crc kubenswrapper[4727]: 
I1121 20:26:31.284769 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-2f2mk"] Nov 21 20:26:31 crc kubenswrapper[4727]: E1121 20:26:31.285570 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1cdc530-12b5-4bc8-9ced-8aa3f2f6b5b7" containerName="keystone-db-sync" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.285586 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1cdc530-12b5-4bc8-9ced-8aa3f2f6b5b7" containerName="keystone-db-sync" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.285851 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1cdc530-12b5-4bc8-9ced-8aa3f2f6b5b7" containerName="keystone-db-sync" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.286884 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-2f2mk" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.292469 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.292508 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-xgkf5" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.292841 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.293597 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.293857 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.301150 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-2f2mk"] Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.313649 4727 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-bbf5cc879-s25kw"] Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.329558 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-s25kw"] Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.329689 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-s25kw" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.364795 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-kzpzx"] Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.372145 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-kzpzx" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.385641 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.386608 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-c96nh" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.420311 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0429f9d4-473a-40f1-8b52-ff45821ccdd6-combined-ca-bundle\") pod \"keystone-bootstrap-2f2mk\" (UID: \"0429f9d4-473a-40f1-8b52-ff45821ccdd6\") " pod="openstack/keystone-bootstrap-2f2mk" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.420365 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/655afa8b-f9c9-44d7-bc05-7b77181a409a-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-s25kw\" (UID: \"655afa8b-f9c9-44d7-bc05-7b77181a409a\") " pod="openstack/dnsmasq-dns-bbf5cc879-s25kw" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.420410 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0429f9d4-473a-40f1-8b52-ff45821ccdd6-credential-keys\") pod \"keystone-bootstrap-2f2mk\" (UID: \"0429f9d4-473a-40f1-8b52-ff45821ccdd6\") " pod="openstack/keystone-bootstrap-2f2mk" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.420438 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0429f9d4-473a-40f1-8b52-ff45821ccdd6-config-data\") pod \"keystone-bootstrap-2f2mk\" (UID: \"0429f9d4-473a-40f1-8b52-ff45821ccdd6\") " pod="openstack/keystone-bootstrap-2f2mk" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.420453 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/655afa8b-f9c9-44d7-bc05-7b77181a409a-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-s25kw\" (UID: \"655afa8b-f9c9-44d7-bc05-7b77181a409a\") " pod="openstack/dnsmasq-dns-bbf5cc879-s25kw" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.420471 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/655afa8b-f9c9-44d7-bc05-7b77181a409a-config\") pod \"dnsmasq-dns-bbf5cc879-s25kw\" (UID: \"655afa8b-f9c9-44d7-bc05-7b77181a409a\") " pod="openstack/dnsmasq-dns-bbf5cc879-s25kw" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.420498 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0429f9d4-473a-40f1-8b52-ff45821ccdd6-scripts\") pod \"keystone-bootstrap-2f2mk\" (UID: \"0429f9d4-473a-40f1-8b52-ff45821ccdd6\") " pod="openstack/keystone-bootstrap-2f2mk" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.420540 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" 
(UniqueName: \"kubernetes.io/secret/0429f9d4-473a-40f1-8b52-ff45821ccdd6-fernet-keys\") pod \"keystone-bootstrap-2f2mk\" (UID: \"0429f9d4-473a-40f1-8b52-ff45821ccdd6\") " pod="openstack/keystone-bootstrap-2f2mk" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.420568 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssg7x\" (UniqueName: \"kubernetes.io/projected/655afa8b-f9c9-44d7-bc05-7b77181a409a-kube-api-access-ssg7x\") pod \"dnsmasq-dns-bbf5cc879-s25kw\" (UID: \"655afa8b-f9c9-44d7-bc05-7b77181a409a\") " pod="openstack/dnsmasq-dns-bbf5cc879-s25kw" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.420597 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-898m2\" (UniqueName: \"kubernetes.io/projected/0429f9d4-473a-40f1-8b52-ff45821ccdd6-kube-api-access-898m2\") pod \"keystone-bootstrap-2f2mk\" (UID: \"0429f9d4-473a-40f1-8b52-ff45821ccdd6\") " pod="openstack/keystone-bootstrap-2f2mk" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.420636 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/655afa8b-f9c9-44d7-bc05-7b77181a409a-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-s25kw\" (UID: \"655afa8b-f9c9-44d7-bc05-7b77181a409a\") " pod="openstack/dnsmasq-dns-bbf5cc879-s25kw" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.420681 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/655afa8b-f9c9-44d7-bc05-7b77181a409a-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-s25kw\" (UID: \"655afa8b-f9c9-44d7-bc05-7b77181a409a\") " pod="openstack/dnsmasq-dns-bbf5cc879-s25kw" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.422424 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/heat-db-sync-kzpzx"] Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.441551 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-2t297"] Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.446038 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-2t297" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.451548 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-drn42" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.451628 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.451815 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.489940 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-2t297"] Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.521877 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0429f9d4-473a-40f1-8b52-ff45821ccdd6-credential-keys\") pod \"keystone-bootstrap-2f2mk\" (UID: \"0429f9d4-473a-40f1-8b52-ff45821ccdd6\") " pod="openstack/keystone-bootstrap-2f2mk" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.521944 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1b0e3c92-f23c-4257-856c-3bb4496913e2-config\") pod \"neutron-db-sync-2t297\" (UID: \"1b0e3c92-f23c-4257-856c-3bb4496913e2\") " pod="openstack/neutron-db-sync-2t297" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.521975 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/0429f9d4-473a-40f1-8b52-ff45821ccdd6-config-data\") pod \"keystone-bootstrap-2f2mk\" (UID: \"0429f9d4-473a-40f1-8b52-ff45821ccdd6\") " pod="openstack/keystone-bootstrap-2f2mk" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.521990 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/655afa8b-f9c9-44d7-bc05-7b77181a409a-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-s25kw\" (UID: \"655afa8b-f9c9-44d7-bc05-7b77181a409a\") " pod="openstack/dnsmasq-dns-bbf5cc879-s25kw" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.522012 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b0e3c92-f23c-4257-856c-3bb4496913e2-combined-ca-bundle\") pod \"neutron-db-sync-2t297\" (UID: \"1b0e3c92-f23c-4257-856c-3bb4496913e2\") " pod="openstack/neutron-db-sync-2t297" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.522033 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/655afa8b-f9c9-44d7-bc05-7b77181a409a-config\") pod \"dnsmasq-dns-bbf5cc879-s25kw\" (UID: \"655afa8b-f9c9-44d7-bc05-7b77181a409a\") " pod="openstack/dnsmasq-dns-bbf5cc879-s25kw" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.522071 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0429f9d4-473a-40f1-8b52-ff45821ccdd6-scripts\") pod \"keystone-bootstrap-2f2mk\" (UID: \"0429f9d4-473a-40f1-8b52-ff45821ccdd6\") " pod="openstack/keystone-bootstrap-2f2mk" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.522112 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0429f9d4-473a-40f1-8b52-ff45821ccdd6-fernet-keys\") pod 
\"keystone-bootstrap-2f2mk\" (UID: \"0429f9d4-473a-40f1-8b52-ff45821ccdd6\") " pod="openstack/keystone-bootstrap-2f2mk" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.522135 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba5291ec-9ad1-4ce3-8794-6b6ca611b277-config-data\") pod \"heat-db-sync-kzpzx\" (UID: \"ba5291ec-9ad1-4ce3-8794-6b6ca611b277\") " pod="openstack/heat-db-sync-kzpzx" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.522153 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-254k6\" (UniqueName: \"kubernetes.io/projected/ba5291ec-9ad1-4ce3-8794-6b6ca611b277-kube-api-access-254k6\") pod \"heat-db-sync-kzpzx\" (UID: \"ba5291ec-9ad1-4ce3-8794-6b6ca611b277\") " pod="openstack/heat-db-sync-kzpzx" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.522174 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssg7x\" (UniqueName: \"kubernetes.io/projected/655afa8b-f9c9-44d7-bc05-7b77181a409a-kube-api-access-ssg7x\") pod \"dnsmasq-dns-bbf5cc879-s25kw\" (UID: \"655afa8b-f9c9-44d7-bc05-7b77181a409a\") " pod="openstack/dnsmasq-dns-bbf5cc879-s25kw" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.522196 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-898m2\" (UniqueName: \"kubernetes.io/projected/0429f9d4-473a-40f1-8b52-ff45821ccdd6-kube-api-access-898m2\") pod \"keystone-bootstrap-2f2mk\" (UID: \"0429f9d4-473a-40f1-8b52-ff45821ccdd6\") " pod="openstack/keystone-bootstrap-2f2mk" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.522231 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/655afa8b-f9c9-44d7-bc05-7b77181a409a-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-s25kw\" (UID: 
\"655afa8b-f9c9-44d7-bc05-7b77181a409a\") " pod="openstack/dnsmasq-dns-bbf5cc879-s25kw" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.522289 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/655afa8b-f9c9-44d7-bc05-7b77181a409a-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-s25kw\" (UID: \"655afa8b-f9c9-44d7-bc05-7b77181a409a\") " pod="openstack/dnsmasq-dns-bbf5cc879-s25kw" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.522314 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0429f9d4-473a-40f1-8b52-ff45821ccdd6-combined-ca-bundle\") pod \"keystone-bootstrap-2f2mk\" (UID: \"0429f9d4-473a-40f1-8b52-ff45821ccdd6\") " pod="openstack/keystone-bootstrap-2f2mk" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.522334 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba5291ec-9ad1-4ce3-8794-6b6ca611b277-combined-ca-bundle\") pod \"heat-db-sync-kzpzx\" (UID: \"ba5291ec-9ad1-4ce3-8794-6b6ca611b277\") " pod="openstack/heat-db-sync-kzpzx" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.522352 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-229rf\" (UniqueName: \"kubernetes.io/projected/1b0e3c92-f23c-4257-856c-3bb4496913e2-kube-api-access-229rf\") pod \"neutron-db-sync-2t297\" (UID: \"1b0e3c92-f23c-4257-856c-3bb4496913e2\") " pod="openstack/neutron-db-sync-2t297" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.522375 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/655afa8b-f9c9-44d7-bc05-7b77181a409a-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-s25kw\" (UID: \"655afa8b-f9c9-44d7-bc05-7b77181a409a\") 
" pod="openstack/dnsmasq-dns-bbf5cc879-s25kw" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.523544 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/655afa8b-f9c9-44d7-bc05-7b77181a409a-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-s25kw\" (UID: \"655afa8b-f9c9-44d7-bc05-7b77181a409a\") " pod="openstack/dnsmasq-dns-bbf5cc879-s25kw" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.523648 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/655afa8b-f9c9-44d7-bc05-7b77181a409a-config\") pod \"dnsmasq-dns-bbf5cc879-s25kw\" (UID: \"655afa8b-f9c9-44d7-bc05-7b77181a409a\") " pod="openstack/dnsmasq-dns-bbf5cc879-s25kw" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.524427 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/655afa8b-f9c9-44d7-bc05-7b77181a409a-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-s25kw\" (UID: \"655afa8b-f9c9-44d7-bc05-7b77181a409a\") " pod="openstack/dnsmasq-dns-bbf5cc879-s25kw" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.524443 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/655afa8b-f9c9-44d7-bc05-7b77181a409a-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-s25kw\" (UID: \"655afa8b-f9c9-44d7-bc05-7b77181a409a\") " pod="openstack/dnsmasq-dns-bbf5cc879-s25kw" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.524928 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/655afa8b-f9c9-44d7-bc05-7b77181a409a-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-s25kw\" (UID: \"655afa8b-f9c9-44d7-bc05-7b77181a409a\") " pod="openstack/dnsmasq-dns-bbf5cc879-s25kw" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.529976 4727 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0429f9d4-473a-40f1-8b52-ff45821ccdd6-fernet-keys\") pod \"keystone-bootstrap-2f2mk\" (UID: \"0429f9d4-473a-40f1-8b52-ff45821ccdd6\") " pod="openstack/keystone-bootstrap-2f2mk" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.530578 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0429f9d4-473a-40f1-8b52-ff45821ccdd6-combined-ca-bundle\") pod \"keystone-bootstrap-2f2mk\" (UID: \"0429f9d4-473a-40f1-8b52-ff45821ccdd6\") " pod="openstack/keystone-bootstrap-2f2mk" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.541549 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0429f9d4-473a-40f1-8b52-ff45821ccdd6-scripts\") pod \"keystone-bootstrap-2f2mk\" (UID: \"0429f9d4-473a-40f1-8b52-ff45821ccdd6\") " pod="openstack/keystone-bootstrap-2f2mk" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.541970 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0429f9d4-473a-40f1-8b52-ff45821ccdd6-credential-keys\") pod \"keystone-bootstrap-2f2mk\" (UID: \"0429f9d4-473a-40f1-8b52-ff45821ccdd6\") " pod="openstack/keystone-bootstrap-2f2mk" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.564583 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-898m2\" (UniqueName: \"kubernetes.io/projected/0429f9d4-473a-40f1-8b52-ff45821ccdd6-kube-api-access-898m2\") pod \"keystone-bootstrap-2f2mk\" (UID: \"0429f9d4-473a-40f1-8b52-ff45821ccdd6\") " pod="openstack/keystone-bootstrap-2f2mk" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.570667 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/0429f9d4-473a-40f1-8b52-ff45821ccdd6-config-data\") pod \"keystone-bootstrap-2f2mk\" (UID: \"0429f9d4-473a-40f1-8b52-ff45821ccdd6\") " pod="openstack/keystone-bootstrap-2f2mk" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.575746 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssg7x\" (UniqueName: \"kubernetes.io/projected/655afa8b-f9c9-44d7-bc05-7b77181a409a-kube-api-access-ssg7x\") pod \"dnsmasq-dns-bbf5cc879-s25kw\" (UID: \"655afa8b-f9c9-44d7-bc05-7b77181a409a\") " pod="openstack/dnsmasq-dns-bbf5cc879-s25kw" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.622259 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-2f2mk" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.623776 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1b0e3c92-f23c-4257-856c-3bb4496913e2-config\") pod \"neutron-db-sync-2t297\" (UID: \"1b0e3c92-f23c-4257-856c-3bb4496913e2\") " pod="openstack/neutron-db-sync-2t297" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.623812 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b0e3c92-f23c-4257-856c-3bb4496913e2-combined-ca-bundle\") pod \"neutron-db-sync-2t297\" (UID: \"1b0e3c92-f23c-4257-856c-3bb4496913e2\") " pod="openstack/neutron-db-sync-2t297" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.623898 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba5291ec-9ad1-4ce3-8794-6b6ca611b277-config-data\") pod \"heat-db-sync-kzpzx\" (UID: \"ba5291ec-9ad1-4ce3-8794-6b6ca611b277\") " pod="openstack/heat-db-sync-kzpzx" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.623923 4727 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-254k6\" (UniqueName: \"kubernetes.io/projected/ba5291ec-9ad1-4ce3-8794-6b6ca611b277-kube-api-access-254k6\") pod \"heat-db-sync-kzpzx\" (UID: \"ba5291ec-9ad1-4ce3-8794-6b6ca611b277\") " pod="openstack/heat-db-sync-kzpzx" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.624074 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba5291ec-9ad1-4ce3-8794-6b6ca611b277-combined-ca-bundle\") pod \"heat-db-sync-kzpzx\" (UID: \"ba5291ec-9ad1-4ce3-8794-6b6ca611b277\") " pod="openstack/heat-db-sync-kzpzx" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.624094 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-229rf\" (UniqueName: \"kubernetes.io/projected/1b0e3c92-f23c-4257-856c-3bb4496913e2-kube-api-access-229rf\") pod \"neutron-db-sync-2t297\" (UID: \"1b0e3c92-f23c-4257-856c-3bb4496913e2\") " pod="openstack/neutron-db-sync-2t297" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.634334 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-kjphx"] Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.636020 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-kjphx" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.654681 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/1b0e3c92-f23c-4257-856c-3bb4496913e2-config\") pod \"neutron-db-sync-2t297\" (UID: \"1b0e3c92-f23c-4257-856c-3bb4496913e2\") " pod="openstack/neutron-db-sync-2t297" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.658636 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b0e3c92-f23c-4257-856c-3bb4496913e2-combined-ca-bundle\") pod \"neutron-db-sync-2t297\" (UID: \"1b0e3c92-f23c-4257-856c-3bb4496913e2\") " pod="openstack/neutron-db-sync-2t297" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.659713 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba5291ec-9ad1-4ce3-8794-6b6ca611b277-config-data\") pod \"heat-db-sync-kzpzx\" (UID: \"ba5291ec-9ad1-4ce3-8794-6b6ca611b277\") " pod="openstack/heat-db-sync-kzpzx" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.667984 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba5291ec-9ad1-4ce3-8794-6b6ca611b277-combined-ca-bundle\") pod \"heat-db-sync-kzpzx\" (UID: \"ba5291ec-9ad1-4ce3-8794-6b6ca611b277\") " pod="openstack/heat-db-sync-kzpzx" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.680621 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.681043 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-tvpnn" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.681150 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 
21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.695711 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-254k6\" (UniqueName: \"kubernetes.io/projected/ba5291ec-9ad1-4ce3-8794-6b6ca611b277-kube-api-access-254k6\") pod \"heat-db-sync-kzpzx\" (UID: \"ba5291ec-9ad1-4ce3-8794-6b6ca611b277\") " pod="openstack/heat-db-sync-kzpzx" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.702619 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-229rf\" (UniqueName: \"kubernetes.io/projected/1b0e3c92-f23c-4257-856c-3bb4496913e2-kube-api-access-229rf\") pod \"neutron-db-sync-2t297\" (UID: \"1b0e3c92-f23c-4257-856c-3bb4496913e2\") " pod="openstack/neutron-db-sync-2t297" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.711418 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-kjphx"] Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.721720 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-s25kw" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.726185 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-kzpzx" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.727384 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbnkl\" (UniqueName: \"kubernetes.io/projected/f819a7d6-b6ab-409c-aaf2-e5044d9317d5-kube-api-access-fbnkl\") pod \"cinder-db-sync-kjphx\" (UID: \"f819a7d6-b6ab-409c-aaf2-e5044d9317d5\") " pod="openstack/cinder-db-sync-kjphx" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.727412 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f819a7d6-b6ab-409c-aaf2-e5044d9317d5-combined-ca-bundle\") pod \"cinder-db-sync-kjphx\" (UID: \"f819a7d6-b6ab-409c-aaf2-e5044d9317d5\") " pod="openstack/cinder-db-sync-kjphx" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.727466 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f819a7d6-b6ab-409c-aaf2-e5044d9317d5-db-sync-config-data\") pod \"cinder-db-sync-kjphx\" (UID: \"f819a7d6-b6ab-409c-aaf2-e5044d9317d5\") " pod="openstack/cinder-db-sync-kjphx" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.727490 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f819a7d6-b6ab-409c-aaf2-e5044d9317d5-etc-machine-id\") pod \"cinder-db-sync-kjphx\" (UID: \"f819a7d6-b6ab-409c-aaf2-e5044d9317d5\") " pod="openstack/cinder-db-sync-kjphx" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.727521 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f819a7d6-b6ab-409c-aaf2-e5044d9317d5-config-data\") pod \"cinder-db-sync-kjphx\" (UID: \"f819a7d6-b6ab-409c-aaf2-e5044d9317d5\") " 
pod="openstack/cinder-db-sync-kjphx" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.727568 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f819a7d6-b6ab-409c-aaf2-e5044d9317d5-scripts\") pod \"cinder-db-sync-kjphx\" (UID: \"f819a7d6-b6ab-409c-aaf2-e5044d9317d5\") " pod="openstack/cinder-db-sync-kjphx" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.814019 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-nvhzz"] Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.815552 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-nvhzz" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.822332 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-2t297" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.823420 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-q6dpd" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.825579 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.843341 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f819a7d6-b6ab-409c-aaf2-e5044d9317d5-config-data\") pod \"cinder-db-sync-kjphx\" (UID: \"f819a7d6-b6ab-409c-aaf2-e5044d9317d5\") " pod="openstack/cinder-db-sync-kjphx" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.843419 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f819a7d6-b6ab-409c-aaf2-e5044d9317d5-scripts\") pod \"cinder-db-sync-kjphx\" (UID: \"f819a7d6-b6ab-409c-aaf2-e5044d9317d5\") " pod="openstack/cinder-db-sync-kjphx" Nov 21 
20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.843533 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbnkl\" (UniqueName: \"kubernetes.io/projected/f819a7d6-b6ab-409c-aaf2-e5044d9317d5-kube-api-access-fbnkl\") pod \"cinder-db-sync-kjphx\" (UID: \"f819a7d6-b6ab-409c-aaf2-e5044d9317d5\") " pod="openstack/cinder-db-sync-kjphx" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.843559 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f819a7d6-b6ab-409c-aaf2-e5044d9317d5-combined-ca-bundle\") pod \"cinder-db-sync-kjphx\" (UID: \"f819a7d6-b6ab-409c-aaf2-e5044d9317d5\") " pod="openstack/cinder-db-sync-kjphx" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.843630 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f819a7d6-b6ab-409c-aaf2-e5044d9317d5-db-sync-config-data\") pod \"cinder-db-sync-kjphx\" (UID: \"f819a7d6-b6ab-409c-aaf2-e5044d9317d5\") " pod="openstack/cinder-db-sync-kjphx" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.843667 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f819a7d6-b6ab-409c-aaf2-e5044d9317d5-etc-machine-id\") pod \"cinder-db-sync-kjphx\" (UID: \"f819a7d6-b6ab-409c-aaf2-e5044d9317d5\") " pod="openstack/cinder-db-sync-kjphx" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.843766 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f819a7d6-b6ab-409c-aaf2-e5044d9317d5-etc-machine-id\") pod \"cinder-db-sync-kjphx\" (UID: \"f819a7d6-b6ab-409c-aaf2-e5044d9317d5\") " pod="openstack/cinder-db-sync-kjphx" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.860464 4727 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f819a7d6-b6ab-409c-aaf2-e5044d9317d5-scripts\") pod \"cinder-db-sync-kjphx\" (UID: \"f819a7d6-b6ab-409c-aaf2-e5044d9317d5\") " pod="openstack/cinder-db-sync-kjphx" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.860685 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f819a7d6-b6ab-409c-aaf2-e5044d9317d5-db-sync-config-data\") pod \"cinder-db-sync-kjphx\" (UID: \"f819a7d6-b6ab-409c-aaf2-e5044d9317d5\") " pod="openstack/cinder-db-sync-kjphx" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.879219 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f819a7d6-b6ab-409c-aaf2-e5044d9317d5-config-data\") pod \"cinder-db-sync-kjphx\" (UID: \"f819a7d6-b6ab-409c-aaf2-e5044d9317d5\") " pod="openstack/cinder-db-sync-kjphx" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.890497 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f819a7d6-b6ab-409c-aaf2-e5044d9317d5-combined-ca-bundle\") pod \"cinder-db-sync-kjphx\" (UID: \"f819a7d6-b6ab-409c-aaf2-e5044d9317d5\") " pod="openstack/cinder-db-sync-kjphx" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.896614 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbnkl\" (UniqueName: \"kubernetes.io/projected/f819a7d6-b6ab-409c-aaf2-e5044d9317d5-kube-api-access-fbnkl\") pod \"cinder-db-sync-kjphx\" (UID: \"f819a7d6-b6ab-409c-aaf2-e5044d9317d5\") " pod="openstack/cinder-db-sync-kjphx" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.948229 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3219ae94-1940-49e8-851c-102a14d22e75-db-sync-config-data\") pod 
\"barbican-db-sync-nvhzz\" (UID: \"3219ae94-1940-49e8-851c-102a14d22e75\") " pod="openstack/barbican-db-sync-nvhzz" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.948357 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3219ae94-1940-49e8-851c-102a14d22e75-combined-ca-bundle\") pod \"barbican-db-sync-nvhzz\" (UID: \"3219ae94-1940-49e8-851c-102a14d22e75\") " pod="openstack/barbican-db-sync-nvhzz" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.948385 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rm6kr\" (UniqueName: \"kubernetes.io/projected/3219ae94-1940-49e8-851c-102a14d22e75-kube-api-access-rm6kr\") pod \"barbican-db-sync-nvhzz\" (UID: \"3219ae94-1940-49e8-851c-102a14d22e75\") " pod="openstack/barbican-db-sync-nvhzz" Nov 21 20:26:31 crc kubenswrapper[4727]: I1121 20:26:31.948556 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-nvhzz"] Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.024138 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-s25kw"] Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.051561 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3219ae94-1940-49e8-851c-102a14d22e75-db-sync-config-data\") pod \"barbican-db-sync-nvhzz\" (UID: \"3219ae94-1940-49e8-851c-102a14d22e75\") " pod="openstack/barbican-db-sync-nvhzz" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.051681 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3219ae94-1940-49e8-851c-102a14d22e75-combined-ca-bundle\") pod \"barbican-db-sync-nvhzz\" (UID: \"3219ae94-1940-49e8-851c-102a14d22e75\") " 
pod="openstack/barbican-db-sync-nvhzz" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.051712 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rm6kr\" (UniqueName: \"kubernetes.io/projected/3219ae94-1940-49e8-851c-102a14d22e75-kube-api-access-rm6kr\") pod \"barbican-db-sync-nvhzz\" (UID: \"3219ae94-1940-49e8-851c-102a14d22e75\") " pod="openstack/barbican-db-sync-nvhzz" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.086809 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3219ae94-1940-49e8-851c-102a14d22e75-combined-ca-bundle\") pod \"barbican-db-sync-nvhzz\" (UID: \"3219ae94-1940-49e8-851c-102a14d22e75\") " pod="openstack/barbican-db-sync-nvhzz" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.120764 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-kjphx" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.141464 4727 generic.go:334] "Generic (PLEG): container finished" podID="b7e68562-155f-4f92-b444-bf5c59e99024" containerID="3dd102538622b59fccc3c3341d9b72fe2b4b98cdcccd4115279fc2475cefd719" exitCode=0 Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.146989 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-gx5d7" event={"ID":"b7e68562-155f-4f92-b444-bf5c59e99024","Type":"ContainerDied","Data":"3dd102538622b59fccc3c3341d9b72fe2b4b98cdcccd4115279fc2475cefd719"} Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.147604 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-gx5d7" event={"ID":"b7e68562-155f-4f92-b444-bf5c59e99024","Type":"ContainerDied","Data":"698f486e0af8e0c5534defdfefe5937435b71c3bfe66c3ffff637acb941ab011"} Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.147623 4727 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="698f486e0af8e0c5534defdfefe5937435b71c3bfe66c3ffff637acb941ab011" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.147566 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3219ae94-1940-49e8-851c-102a14d22e75-db-sync-config-data\") pod \"barbican-db-sync-nvhzz\" (UID: \"3219ae94-1940-49e8-851c-102a14d22e75\") " pod="openstack/barbican-db-sync-nvhzz" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.179596 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-xl6pb"] Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.182519 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-xl6pb" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.196795 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rm6kr\" (UniqueName: \"kubernetes.io/projected/3219ae94-1940-49e8-851c-102a14d22e75-kube-api-access-rm6kr\") pod \"barbican-db-sync-nvhzz\" (UID: \"3219ae94-1940-49e8-851c-102a14d22e75\") " pod="openstack/barbican-db-sync-nvhzz" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.228996 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-xl6pb"] Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.257829 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f634df90-2dae-4e5b-b938-38d5d75f9b00-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-xl6pb\" (UID: \"f634df90-2dae-4e5b-b938-38d5d75f9b00\") " pod="openstack/dnsmasq-dns-56df8fb6b7-xl6pb" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.257876 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/f634df90-2dae-4e5b-b938-38d5d75f9b00-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-xl6pb\" (UID: \"f634df90-2dae-4e5b-b938-38d5d75f9b00\") " pod="openstack/dnsmasq-dns-56df8fb6b7-xl6pb" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.257899 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f634df90-2dae-4e5b-b938-38d5d75f9b00-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-xl6pb\" (UID: \"f634df90-2dae-4e5b-b938-38d5d75f9b00\") " pod="openstack/dnsmasq-dns-56df8fb6b7-xl6pb" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.257948 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f634df90-2dae-4e5b-b938-38d5d75f9b00-config\") pod \"dnsmasq-dns-56df8fb6b7-xl6pb\" (UID: \"f634df90-2dae-4e5b-b938-38d5d75f9b00\") " pod="openstack/dnsmasq-dns-56df8fb6b7-xl6pb" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.257987 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f634df90-2dae-4e5b-b938-38d5d75f9b00-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-xl6pb\" (UID: \"f634df90-2dae-4e5b-b938-38d5d75f9b00\") " pod="openstack/dnsmasq-dns-56df8fb6b7-xl6pb" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.258064 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lnld\" (UniqueName: \"kubernetes.io/projected/f634df90-2dae-4e5b-b938-38d5d75f9b00-kube-api-access-6lnld\") pod \"dnsmasq-dns-56df8fb6b7-xl6pb\" (UID: \"f634df90-2dae-4e5b-b938-38d5d75f9b00\") " pod="openstack/dnsmasq-dns-56df8fb6b7-xl6pb" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.301454 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-5x7sl"] Nov 21 
20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.302885 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-5x7sl" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.316349 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-rgmh5" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.316649 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.319232 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.319426 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-5x7sl"] Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.392122 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f634df90-2dae-4e5b-b938-38d5d75f9b00-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-xl6pb\" (UID: \"f634df90-2dae-4e5b-b938-38d5d75f9b00\") " pod="openstack/dnsmasq-dns-56df8fb6b7-xl6pb" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.392178 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f634df90-2dae-4e5b-b938-38d5d75f9b00-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-xl6pb\" (UID: \"f634df90-2dae-4e5b-b938-38d5d75f9b00\") " pod="openstack/dnsmasq-dns-56df8fb6b7-xl6pb" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.392219 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f634df90-2dae-4e5b-b938-38d5d75f9b00-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-xl6pb\" (UID: \"f634df90-2dae-4e5b-b938-38d5d75f9b00\") " 
pod="openstack/dnsmasq-dns-56df8fb6b7-xl6pb" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.392250 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f634df90-2dae-4e5b-b938-38d5d75f9b00-config\") pod \"dnsmasq-dns-56df8fb6b7-xl6pb\" (UID: \"f634df90-2dae-4e5b-b938-38d5d75f9b00\") " pod="openstack/dnsmasq-dns-56df8fb6b7-xl6pb" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.392282 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f634df90-2dae-4e5b-b938-38d5d75f9b00-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-xl6pb\" (UID: \"f634df90-2dae-4e5b-b938-38d5d75f9b00\") " pod="openstack/dnsmasq-dns-56df8fb6b7-xl6pb" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.392378 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lnld\" (UniqueName: \"kubernetes.io/projected/f634df90-2dae-4e5b-b938-38d5d75f9b00-kube-api-access-6lnld\") pod \"dnsmasq-dns-56df8fb6b7-xl6pb\" (UID: \"f634df90-2dae-4e5b-b938-38d5d75f9b00\") " pod="openstack/dnsmasq-dns-56df8fb6b7-xl6pb" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.542207 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f8801f4-f168-4ae7-b364-cd95a72b3a66-combined-ca-bundle\") pod \"placement-db-sync-5x7sl\" (UID: \"6f8801f4-f168-4ae7-b364-cd95a72b3a66\") " pod="openstack/placement-db-sync-5x7sl" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.542705 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5gxx\" (UniqueName: \"kubernetes.io/projected/6f8801f4-f168-4ae7-b364-cd95a72b3a66-kube-api-access-d5gxx\") pod \"placement-db-sync-5x7sl\" (UID: \"6f8801f4-f168-4ae7-b364-cd95a72b3a66\") " 
pod="openstack/placement-db-sync-5x7sl" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.543091 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f8801f4-f168-4ae7-b364-cd95a72b3a66-logs\") pod \"placement-db-sync-5x7sl\" (UID: \"6f8801f4-f168-4ae7-b364-cd95a72b3a66\") " pod="openstack/placement-db-sync-5x7sl" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.543328 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f8801f4-f168-4ae7-b364-cd95a72b3a66-scripts\") pod \"placement-db-sync-5x7sl\" (UID: \"6f8801f4-f168-4ae7-b364-cd95a72b3a66\") " pod="openstack/placement-db-sync-5x7sl" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.543416 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f8801f4-f168-4ae7-b364-cd95a72b3a66-config-data\") pod \"placement-db-sync-5x7sl\" (UID: \"6f8801f4-f168-4ae7-b364-cd95a72b3a66\") " pod="openstack/placement-db-sync-5x7sl" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.559583 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f634df90-2dae-4e5b-b938-38d5d75f9b00-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-xl6pb\" (UID: \"f634df90-2dae-4e5b-b938-38d5d75f9b00\") " pod="openstack/dnsmasq-dns-56df8fb6b7-xl6pb" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.559785 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f634df90-2dae-4e5b-b938-38d5d75f9b00-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-xl6pb\" (UID: \"f634df90-2dae-4e5b-b938-38d5d75f9b00\") " pod="openstack/dnsmasq-dns-56df8fb6b7-xl6pb" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 
20:26:32.560108 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f634df90-2dae-4e5b-b938-38d5d75f9b00-config\") pod \"dnsmasq-dns-56df8fb6b7-xl6pb\" (UID: \"f634df90-2dae-4e5b-b938-38d5d75f9b00\") " pod="openstack/dnsmasq-dns-56df8fb6b7-xl6pb" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.561073 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f634df90-2dae-4e5b-b938-38d5d75f9b00-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-xl6pb\" (UID: \"f634df90-2dae-4e5b-b938-38d5d75f9b00\") " pod="openstack/dnsmasq-dns-56df8fb6b7-xl6pb" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.561391 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f634df90-2dae-4e5b-b938-38d5d75f9b00-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-xl6pb\" (UID: \"f634df90-2dae-4e5b-b938-38d5d75f9b00\") " pod="openstack/dnsmasq-dns-56df8fb6b7-xl6pb" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.565982 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lnld\" (UniqueName: \"kubernetes.io/projected/f634df90-2dae-4e5b-b938-38d5d75f9b00-kube-api-access-6lnld\") pod \"dnsmasq-dns-56df8fb6b7-xl6pb\" (UID: \"f634df90-2dae-4e5b-b938-38d5d75f9b00\") " pod="openstack/dnsmasq-dns-56df8fb6b7-xl6pb" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.587511 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.609000 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.620466 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-24s9s" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.620621 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.620891 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.621062 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.624141 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.626460 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.637019 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.639936 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.641210 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.647470 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.647746 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f8801f4-f168-4ae7-b364-cd95a72b3a66-scripts\") pod \"placement-db-sync-5x7sl\" (UID: \"6f8801f4-f168-4ae7-b364-cd95a72b3a66\") " pod="openstack/placement-db-sync-5x7sl" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.647822 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f8801f4-f168-4ae7-b364-cd95a72b3a66-config-data\") pod \"placement-db-sync-5x7sl\" (UID: \"6f8801f4-f168-4ae7-b364-cd95a72b3a66\") " pod="openstack/placement-db-sync-5x7sl" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.647868 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f8801f4-f168-4ae7-b364-cd95a72b3a66-combined-ca-bundle\") pod \"placement-db-sync-5x7sl\" (UID: \"6f8801f4-f168-4ae7-b364-cd95a72b3a66\") " pod="openstack/placement-db-sync-5x7sl" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.647906 4727 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-d5gxx\" (UniqueName: \"kubernetes.io/projected/6f8801f4-f168-4ae7-b364-cd95a72b3a66-kube-api-access-d5gxx\") pod \"placement-db-sync-5x7sl\" (UID: \"6f8801f4-f168-4ae7-b364-cd95a72b3a66\") " pod="openstack/placement-db-sync-5x7sl" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.648061 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f8801f4-f168-4ae7-b364-cd95a72b3a66-logs\") pod \"placement-db-sync-5x7sl\" (UID: \"6f8801f4-f168-4ae7-b364-cd95a72b3a66\") " pod="openstack/placement-db-sync-5x7sl" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.649847 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f8801f4-f168-4ae7-b364-cd95a72b3a66-logs\") pod \"placement-db-sync-5x7sl\" (UID: \"6f8801f4-f168-4ae7-b364-cd95a72b3a66\") " pod="openstack/placement-db-sync-5x7sl" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.652250 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f8801f4-f168-4ae7-b364-cd95a72b3a66-scripts\") pod \"placement-db-sync-5x7sl\" (UID: \"6f8801f4-f168-4ae7-b364-cd95a72b3a66\") " pod="openstack/placement-db-sync-5x7sl" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.656727 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f8801f4-f168-4ae7-b364-cd95a72b3a66-config-data\") pod \"placement-db-sync-5x7sl\" (UID: \"6f8801f4-f168-4ae7-b364-cd95a72b3a66\") " pod="openstack/placement-db-sync-5x7sl" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.658452 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-nvhzz" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.682804 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f8801f4-f168-4ae7-b364-cd95a72b3a66-combined-ca-bundle\") pod \"placement-db-sync-5x7sl\" (UID: \"6f8801f4-f168-4ae7-b364-cd95a72b3a66\") " pod="openstack/placement-db-sync-5x7sl" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.687191 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-gx5d7" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.723617 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5gxx\" (UniqueName: \"kubernetes.io/projected/6f8801f4-f168-4ae7-b364-cd95a72b3a66-kube-api-access-d5gxx\") pod \"placement-db-sync-5x7sl\" (UID: \"6f8801f4-f168-4ae7-b364-cd95a72b3a66\") " pod="openstack/placement-db-sync-5x7sl" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.750203 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f246d4ba-7300-45cd-8206-e9e948fdcf52-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f246d4ba-7300-45cd-8206-e9e948fdcf52\") " pod="openstack/glance-default-external-api-0" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.750313 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"f246d4ba-7300-45cd-8206-e9e948fdcf52\") " pod="openstack/glance-default-external-api-0" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.750370 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/f246d4ba-7300-45cd-8206-e9e948fdcf52-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f246d4ba-7300-45cd-8206-e9e948fdcf52\") " pod="openstack/glance-default-external-api-0" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.750394 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c4ff4259-d3c2-43db-9ff7-e420bf11de4e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c4ff4259-d3c2-43db-9ff7-e420bf11de4e\") " pod="openstack/glance-default-internal-api-0" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.750433 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f246d4ba-7300-45cd-8206-e9e948fdcf52-config-data\") pod \"glance-default-external-api-0\" (UID: \"f246d4ba-7300-45cd-8206-e9e948fdcf52\") " pod="openstack/glance-default-external-api-0" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.750452 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4ff4259-d3c2-43db-9ff7-e420bf11de4e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c4ff4259-d3c2-43db-9ff7-e420bf11de4e\") " pod="openstack/glance-default-internal-api-0" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.750482 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f246d4ba-7300-45cd-8206-e9e948fdcf52-logs\") pod \"glance-default-external-api-0\" (UID: \"f246d4ba-7300-45cd-8206-e9e948fdcf52\") " pod="openstack/glance-default-external-api-0" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.750502 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/c4ff4259-d3c2-43db-9ff7-e420bf11de4e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c4ff4259-d3c2-43db-9ff7-e420bf11de4e\") " pod="openstack/glance-default-internal-api-0" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.750530 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"c4ff4259-d3c2-43db-9ff7-e420bf11de4e\") " pod="openstack/glance-default-internal-api-0" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.750551 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4ff4259-d3c2-43db-9ff7-e420bf11de4e-logs\") pod \"glance-default-internal-api-0\" (UID: \"c4ff4259-d3c2-43db-9ff7-e420bf11de4e\") " pod="openstack/glance-default-internal-api-0" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.750581 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f246d4ba-7300-45cd-8206-e9e948fdcf52-scripts\") pod \"glance-default-external-api-0\" (UID: \"f246d4ba-7300-45cd-8206-e9e948fdcf52\") " pod="openstack/glance-default-external-api-0" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.750604 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4ff4259-d3c2-43db-9ff7-e420bf11de4e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c4ff4259-d3c2-43db-9ff7-e420bf11de4e\") " pod="openstack/glance-default-internal-api-0" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.750632 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-8xn4l\" (UniqueName: \"kubernetes.io/projected/c4ff4259-d3c2-43db-9ff7-e420bf11de4e-kube-api-access-8xn4l\") pod \"glance-default-internal-api-0\" (UID: \"c4ff4259-d3c2-43db-9ff7-e420bf11de4e\") " pod="openstack/glance-default-internal-api-0" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.750674 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4ff4259-d3c2-43db-9ff7-e420bf11de4e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c4ff4259-d3c2-43db-9ff7-e420bf11de4e\") " pod="openstack/glance-default-internal-api-0" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.750696 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f246d4ba-7300-45cd-8206-e9e948fdcf52-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f246d4ba-7300-45cd-8206-e9e948fdcf52\") " pod="openstack/glance-default-external-api-0" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.750716 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9zlt\" (UniqueName: \"kubernetes.io/projected/f246d4ba-7300-45cd-8206-e9e948fdcf52-kube-api-access-p9zlt\") pod \"glance-default-external-api-0\" (UID: \"f246d4ba-7300-45cd-8206-e9e948fdcf52\") " pod="openstack/glance-default-external-api-0" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.853818 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7e68562-155f-4f92-b444-bf5c59e99024-config\") pod \"b7e68562-155f-4f92-b444-bf5c59e99024\" (UID: \"b7e68562-155f-4f92-b444-bf5c59e99024\") " Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.854772 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/b7e68562-155f-4f92-b444-bf5c59e99024-ovsdbserver-nb\") pod \"b7e68562-155f-4f92-b444-bf5c59e99024\" (UID: \"b7e68562-155f-4f92-b444-bf5c59e99024\") " Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.854814 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftjsl\" (UniqueName: \"kubernetes.io/projected/b7e68562-155f-4f92-b444-bf5c59e99024-kube-api-access-ftjsl\") pod \"b7e68562-155f-4f92-b444-bf5c59e99024\" (UID: \"b7e68562-155f-4f92-b444-bf5c59e99024\") " Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.854878 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7e68562-155f-4f92-b444-bf5c59e99024-dns-svc\") pod \"b7e68562-155f-4f92-b444-bf5c59e99024\" (UID: \"b7e68562-155f-4f92-b444-bf5c59e99024\") " Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.854931 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b7e68562-155f-4f92-b444-bf5c59e99024-ovsdbserver-sb\") pod \"b7e68562-155f-4f92-b444-bf5c59e99024\" (UID: \"b7e68562-155f-4f92-b444-bf5c59e99024\") " Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.855181 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"c4ff4259-d3c2-43db-9ff7-e420bf11de4e\") " pod="openstack/glance-default-internal-api-0" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.855208 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4ff4259-d3c2-43db-9ff7-e420bf11de4e-logs\") pod \"glance-default-internal-api-0\" (UID: \"c4ff4259-d3c2-43db-9ff7-e420bf11de4e\") " pod="openstack/glance-default-internal-api-0" Nov 21 
20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.855240 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f246d4ba-7300-45cd-8206-e9e948fdcf52-scripts\") pod \"glance-default-external-api-0\" (UID: \"f246d4ba-7300-45cd-8206-e9e948fdcf52\") " pod="openstack/glance-default-external-api-0" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.855268 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4ff4259-d3c2-43db-9ff7-e420bf11de4e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c4ff4259-d3c2-43db-9ff7-e420bf11de4e\") " pod="openstack/glance-default-internal-api-0" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.855310 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xn4l\" (UniqueName: \"kubernetes.io/projected/c4ff4259-d3c2-43db-9ff7-e420bf11de4e-kube-api-access-8xn4l\") pod \"glance-default-internal-api-0\" (UID: \"c4ff4259-d3c2-43db-9ff7-e420bf11de4e\") " pod="openstack/glance-default-internal-api-0" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.855348 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4ff4259-d3c2-43db-9ff7-e420bf11de4e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c4ff4259-d3c2-43db-9ff7-e420bf11de4e\") " pod="openstack/glance-default-internal-api-0" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.855370 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f246d4ba-7300-45cd-8206-e9e948fdcf52-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f246d4ba-7300-45cd-8206-e9e948fdcf52\") " pod="openstack/glance-default-external-api-0" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.855390 4727 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9zlt\" (UniqueName: \"kubernetes.io/projected/f246d4ba-7300-45cd-8206-e9e948fdcf52-kube-api-access-p9zlt\") pod \"glance-default-external-api-0\" (UID: \"f246d4ba-7300-45cd-8206-e9e948fdcf52\") " pod="openstack/glance-default-external-api-0" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.855503 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f246d4ba-7300-45cd-8206-e9e948fdcf52-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f246d4ba-7300-45cd-8206-e9e948fdcf52\") " pod="openstack/glance-default-external-api-0" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.855533 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"f246d4ba-7300-45cd-8206-e9e948fdcf52\") " pod="openstack/glance-default-external-api-0" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.855590 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f246d4ba-7300-45cd-8206-e9e948fdcf52-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f246d4ba-7300-45cd-8206-e9e948fdcf52\") " pod="openstack/glance-default-external-api-0" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.855606 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c4ff4259-d3c2-43db-9ff7-e420bf11de4e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c4ff4259-d3c2-43db-9ff7-e420bf11de4e\") " pod="openstack/glance-default-internal-api-0" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.855656 4727 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f246d4ba-7300-45cd-8206-e9e948fdcf52-config-data\") pod \"glance-default-external-api-0\" (UID: \"f246d4ba-7300-45cd-8206-e9e948fdcf52\") " pod="openstack/glance-default-external-api-0" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.855676 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4ff4259-d3c2-43db-9ff7-e420bf11de4e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c4ff4259-d3c2-43db-9ff7-e420bf11de4e\") " pod="openstack/glance-default-internal-api-0" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.855709 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f246d4ba-7300-45cd-8206-e9e948fdcf52-logs\") pod \"glance-default-external-api-0\" (UID: \"f246d4ba-7300-45cd-8206-e9e948fdcf52\") " pod="openstack/glance-default-external-api-0" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.855725 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4ff4259-d3c2-43db-9ff7-e420bf11de4e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c4ff4259-d3c2-43db-9ff7-e420bf11de4e\") " pod="openstack/glance-default-internal-api-0" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.868236 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"f246d4ba-7300-45cd-8206-e9e948fdcf52\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-external-api-0" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.871995 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"c4ff4259-d3c2-43db-9ff7-e420bf11de4e\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.873589 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4ff4259-d3c2-43db-9ff7-e420bf11de4e-logs\") pod \"glance-default-internal-api-0\" (UID: \"c4ff4259-d3c2-43db-9ff7-e420bf11de4e\") " pod="openstack/glance-default-internal-api-0" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.893017 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f246d4ba-7300-45cd-8206-e9e948fdcf52-logs\") pod \"glance-default-external-api-0\" (UID: \"f246d4ba-7300-45cd-8206-e9e948fdcf52\") " pod="openstack/glance-default-external-api-0" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.893265 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7e68562-155f-4f92-b444-bf5c59e99024-kube-api-access-ftjsl" (OuterVolumeSpecName: "kube-api-access-ftjsl") pod "b7e68562-155f-4f92-b444-bf5c59e99024" (UID: "b7e68562-155f-4f92-b444-bf5c59e99024"). InnerVolumeSpecName "kube-api-access-ftjsl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.914609 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4ff4259-d3c2-43db-9ff7-e420bf11de4e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c4ff4259-d3c2-43db-9ff7-e420bf11de4e\") " pod="openstack/glance-default-internal-api-0" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.921606 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4ff4259-d3c2-43db-9ff7-e420bf11de4e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c4ff4259-d3c2-43db-9ff7-e420bf11de4e\") " pod="openstack/glance-default-internal-api-0" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.922052 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f246d4ba-7300-45cd-8206-e9e948fdcf52-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f246d4ba-7300-45cd-8206-e9e948fdcf52\") " pod="openstack/glance-default-external-api-0" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.923510 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4ff4259-d3c2-43db-9ff7-e420bf11de4e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c4ff4259-d3c2-43db-9ff7-e420bf11de4e\") " pod="openstack/glance-default-internal-api-0" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.924691 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f246d4ba-7300-45cd-8206-e9e948fdcf52-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f246d4ba-7300-45cd-8206-e9e948fdcf52\") " pod="openstack/glance-default-external-api-0" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 
20:26:32.925769 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c4ff4259-d3c2-43db-9ff7-e420bf11de4e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c4ff4259-d3c2-43db-9ff7-e420bf11de4e\") " pod="openstack/glance-default-internal-api-0" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.931199 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 21 20:26:32 crc kubenswrapper[4727]: E1121 20:26:32.931875 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7e68562-155f-4f92-b444-bf5c59e99024" containerName="dnsmasq-dns" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.931887 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7e68562-155f-4f92-b444-bf5c59e99024" containerName="dnsmasq-dns" Nov 21 20:26:32 crc kubenswrapper[4727]: E1121 20:26:32.931910 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7e68562-155f-4f92-b444-bf5c59e99024" containerName="init" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.931919 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7e68562-155f-4f92-b444-bf5c59e99024" containerName="init" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.932130 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7e68562-155f-4f92-b444-bf5c59e99024" containerName="dnsmasq-dns" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.934865 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9zlt\" (UniqueName: \"kubernetes.io/projected/f246d4ba-7300-45cd-8206-e9e948fdcf52-kube-api-access-p9zlt\") pod \"glance-default-external-api-0\" (UID: \"f246d4ba-7300-45cd-8206-e9e948fdcf52\") " pod="openstack/glance-default-external-api-0" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.935470 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/c4ff4259-d3c2-43db-9ff7-e420bf11de4e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c4ff4259-d3c2-43db-9ff7-e420bf11de4e\") " pod="openstack/glance-default-internal-api-0" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.944027 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xn4l\" (UniqueName: \"kubernetes.io/projected/c4ff4259-d3c2-43db-9ff7-e420bf11de4e-kube-api-access-8xn4l\") pod \"glance-default-internal-api-0\" (UID: \"c4ff4259-d3c2-43db-9ff7-e420bf11de4e\") " pod="openstack/glance-default-internal-api-0" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.955711 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.955833 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.958642 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftjsl\" (UniqueName: \"kubernetes.io/projected/b7e68562-155f-4f92-b444-bf5c59e99024-kube-api-access-ftjsl\") on node \"crc\" DevicePath \"\"" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.973927 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.974506 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f246d4ba-7300-45cd-8206-e9e948fdcf52-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f246d4ba-7300-45cd-8206-e9e948fdcf52\") " pod="openstack/glance-default-external-api-0" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.974824 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 
20:26:32.975933 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f246d4ba-7300-45cd-8206-e9e948fdcf52-scripts\") pod \"glance-default-external-api-0\" (UID: \"f246d4ba-7300-45cd-8206-e9e948fdcf52\") " pod="openstack/glance-default-external-api-0" Nov 21 20:26:32 crc kubenswrapper[4727]: I1121 20:26:32.995549 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f246d4ba-7300-45cd-8206-e9e948fdcf52-config-data\") pod \"glance-default-external-api-0\" (UID: \"f246d4ba-7300-45cd-8206-e9e948fdcf52\") " pod="openstack/glance-default-external-api-0" Nov 21 20:26:33 crc kubenswrapper[4727]: I1121 20:26:33.029318 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"c4ff4259-d3c2-43db-9ff7-e420bf11de4e\") " pod="openstack/glance-default-internal-api-0" Nov 21 20:26:33 crc kubenswrapper[4727]: I1121 20:26:33.029642 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"f246d4ba-7300-45cd-8206-e9e948fdcf52\") " pod="openstack/glance-default-external-api-0" Nov 21 20:26:33 crc kubenswrapper[4727]: I1121 20:26:33.054179 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7e68562-155f-4f92-b444-bf5c59e99024-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b7e68562-155f-4f92-b444-bf5c59e99024" (UID: "b7e68562-155f-4f92-b444-bf5c59e99024"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:26:33 crc kubenswrapper[4727]: I1121 20:26:33.062828 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab55a565-2af1-48bb-a31e-d0a8c738912c-scripts\") pod \"ceilometer-0\" (UID: \"ab55a565-2af1-48bb-a31e-d0a8c738912c\") " pod="openstack/ceilometer-0" Nov 21 20:26:33 crc kubenswrapper[4727]: I1121 20:26:33.062929 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab55a565-2af1-48bb-a31e-d0a8c738912c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ab55a565-2af1-48bb-a31e-d0a8c738912c\") " pod="openstack/ceilometer-0" Nov 21 20:26:33 crc kubenswrapper[4727]: I1121 20:26:33.062951 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab55a565-2af1-48bb-a31e-d0a8c738912c-run-httpd\") pod \"ceilometer-0\" (UID: \"ab55a565-2af1-48bb-a31e-d0a8c738912c\") " pod="openstack/ceilometer-0" Nov 21 20:26:33 crc kubenswrapper[4727]: I1121 20:26:33.062985 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vncrb\" (UniqueName: \"kubernetes.io/projected/ab55a565-2af1-48bb-a31e-d0a8c738912c-kube-api-access-vncrb\") pod \"ceilometer-0\" (UID: \"ab55a565-2af1-48bb-a31e-d0a8c738912c\") " pod="openstack/ceilometer-0" Nov 21 20:26:33 crc kubenswrapper[4727]: I1121 20:26:33.063028 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ab55a565-2af1-48bb-a31e-d0a8c738912c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ab55a565-2af1-48bb-a31e-d0a8c738912c\") " pod="openstack/ceilometer-0" Nov 21 20:26:33 crc kubenswrapper[4727]: I1121 20:26:33.063078 4727 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab55a565-2af1-48bb-a31e-d0a8c738912c-log-httpd\") pod \"ceilometer-0\" (UID: \"ab55a565-2af1-48bb-a31e-d0a8c738912c\") " pod="openstack/ceilometer-0" Nov 21 20:26:33 crc kubenswrapper[4727]: I1121 20:26:33.063101 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab55a565-2af1-48bb-a31e-d0a8c738912c-config-data\") pod \"ceilometer-0\" (UID: \"ab55a565-2af1-48bb-a31e-d0a8c738912c\") " pod="openstack/ceilometer-0" Nov 21 20:26:33 crc kubenswrapper[4727]: I1121 20:26:33.063176 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b7e68562-155f-4f92-b444-bf5c59e99024-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 21 20:26:33 crc kubenswrapper[4727]: I1121 20:26:33.123091 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7e68562-155f-4f92-b444-bf5c59e99024-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b7e68562-155f-4f92-b444-bf5c59e99024" (UID: "b7e68562-155f-4f92-b444-bf5c59e99024"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:26:33 crc kubenswrapper[4727]: I1121 20:26:33.123292 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7e68562-155f-4f92-b444-bf5c59e99024-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b7e68562-155f-4f92-b444-bf5c59e99024" (UID: "b7e68562-155f-4f92-b444-bf5c59e99024"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:26:33 crc kubenswrapper[4727]: I1121 20:26:33.124689 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-2f2mk"] Nov 21 20:26:33 crc kubenswrapper[4727]: I1121 20:26:33.130519 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-xl6pb" Nov 21 20:26:33 crc kubenswrapper[4727]: W1121 20:26:33.138213 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0429f9d4_473a_40f1_8b52_ff45821ccdd6.slice/crio-c3f032f8ff06bc86fd5f3e8f73398ce0a01ec0fc968c6be824f2dfe58b2b9085 WatchSource:0}: Error finding container c3f032f8ff06bc86fd5f3e8f73398ce0a01ec0fc968c6be824f2dfe58b2b9085: Status 404 returned error can't find the container with id c3f032f8ff06bc86fd5f3e8f73398ce0a01ec0fc968c6be824f2dfe58b2b9085 Nov 21 20:26:33 crc kubenswrapper[4727]: I1121 20:26:33.139434 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7e68562-155f-4f92-b444-bf5c59e99024-config" (OuterVolumeSpecName: "config") pod "b7e68562-155f-4f92-b444-bf5c59e99024" (UID: "b7e68562-155f-4f92-b444-bf5c59e99024"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:26:33 crc kubenswrapper[4727]: I1121 20:26:33.162693 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-5x7sl" Nov 21 20:26:33 crc kubenswrapper[4727]: I1121 20:26:33.165812 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ab55a565-2af1-48bb-a31e-d0a8c738912c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ab55a565-2af1-48bb-a31e-d0a8c738912c\") " pod="openstack/ceilometer-0" Nov 21 20:26:33 crc kubenswrapper[4727]: I1121 20:26:33.172345 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab55a565-2af1-48bb-a31e-d0a8c738912c-log-httpd\") pod \"ceilometer-0\" (UID: \"ab55a565-2af1-48bb-a31e-d0a8c738912c\") " pod="openstack/ceilometer-0" Nov 21 20:26:33 crc kubenswrapper[4727]: I1121 20:26:33.172420 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab55a565-2af1-48bb-a31e-d0a8c738912c-config-data\") pod \"ceilometer-0\" (UID: \"ab55a565-2af1-48bb-a31e-d0a8c738912c\") " pod="openstack/ceilometer-0" Nov 21 20:26:33 crc kubenswrapper[4727]: I1121 20:26:33.172601 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab55a565-2af1-48bb-a31e-d0a8c738912c-scripts\") pod \"ceilometer-0\" (UID: \"ab55a565-2af1-48bb-a31e-d0a8c738912c\") " pod="openstack/ceilometer-0" Nov 21 20:26:33 crc kubenswrapper[4727]: I1121 20:26:33.172819 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab55a565-2af1-48bb-a31e-d0a8c738912c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ab55a565-2af1-48bb-a31e-d0a8c738912c\") " pod="openstack/ceilometer-0" Nov 21 20:26:33 crc kubenswrapper[4727]: I1121 20:26:33.172858 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/ab55a565-2af1-48bb-a31e-d0a8c738912c-log-httpd\") pod \"ceilometer-0\" (UID: \"ab55a565-2af1-48bb-a31e-d0a8c738912c\") " pod="openstack/ceilometer-0" Nov 21 20:26:33 crc kubenswrapper[4727]: I1121 20:26:33.172881 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab55a565-2af1-48bb-a31e-d0a8c738912c-run-httpd\") pod \"ceilometer-0\" (UID: \"ab55a565-2af1-48bb-a31e-d0a8c738912c\") " pod="openstack/ceilometer-0" Nov 21 20:26:33 crc kubenswrapper[4727]: I1121 20:26:33.172408 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-gbmn9" event={"ID":"5938ab71-ffe2-416d-bc16-4f927dbab94b","Type":"ContainerStarted","Data":"c819bc662281ef6fe59dfdc9ed7cf1e4ec541fb1f198a69a871406e8b8f4458f"} Nov 21 20:26:33 crc kubenswrapper[4727]: I1121 20:26:33.172941 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5f59b8f679-gbmn9" podUID="5938ab71-ffe2-416d-bc16-4f927dbab94b" containerName="dnsmasq-dns" containerID="cri-o://c819bc662281ef6fe59dfdc9ed7cf1e4ec541fb1f198a69a871406e8b8f4458f" gracePeriod=10 Nov 21 20:26:33 crc kubenswrapper[4727]: I1121 20:26:33.173347 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f59b8f679-gbmn9" Nov 21 20:26:33 crc kubenswrapper[4727]: I1121 20:26:33.175042 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ab55a565-2af1-48bb-a31e-d0a8c738912c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ab55a565-2af1-48bb-a31e-d0a8c738912c\") " pod="openstack/ceilometer-0" Nov 21 20:26:33 crc kubenswrapper[4727]: I1121 20:26:33.176051 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2f2mk" 
event={"ID":"0429f9d4-473a-40f1-8b52-ff45821ccdd6","Type":"ContainerStarted","Data":"c3f032f8ff06bc86fd5f3e8f73398ce0a01ec0fc968c6be824f2dfe58b2b9085"} Nov 21 20:26:33 crc kubenswrapper[4727]: I1121 20:26:33.176076 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-gx5d7" Nov 21 20:26:33 crc kubenswrapper[4727]: I1121 20:26:33.176817 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab55a565-2af1-48bb-a31e-d0a8c738912c-config-data\") pod \"ceilometer-0\" (UID: \"ab55a565-2af1-48bb-a31e-d0a8c738912c\") " pod="openstack/ceilometer-0" Nov 21 20:26:33 crc kubenswrapper[4727]: I1121 20:26:33.177359 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab55a565-2af1-48bb-a31e-d0a8c738912c-run-httpd\") pod \"ceilometer-0\" (UID: \"ab55a565-2af1-48bb-a31e-d0a8c738912c\") " pod="openstack/ceilometer-0" Nov 21 20:26:33 crc kubenswrapper[4727]: I1121 20:26:33.177899 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vncrb\" (UniqueName: \"kubernetes.io/projected/ab55a565-2af1-48bb-a31e-d0a8c738912c-kube-api-access-vncrb\") pod \"ceilometer-0\" (UID: \"ab55a565-2af1-48bb-a31e-d0a8c738912c\") " pod="openstack/ceilometer-0" Nov 21 20:26:33 crc kubenswrapper[4727]: I1121 20:26:33.178276 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b7e68562-155f-4f92-b444-bf5c59e99024-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 21 20:26:33 crc kubenswrapper[4727]: I1121 20:26:33.181327 4727 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7e68562-155f-4f92-b444-bf5c59e99024-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 20:26:33 crc kubenswrapper[4727]: I1121 20:26:33.181348 4727 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7e68562-155f-4f92-b444-bf5c59e99024-config\") on node \"crc\" DevicePath \"\"" Nov 21 20:26:33 crc kubenswrapper[4727]: I1121 20:26:33.189795 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 21 20:26:33 crc kubenswrapper[4727]: I1121 20:26:33.193771 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab55a565-2af1-48bb-a31e-d0a8c738912c-scripts\") pod \"ceilometer-0\" (UID: \"ab55a565-2af1-48bb-a31e-d0a8c738912c\") " pod="openstack/ceilometer-0" Nov 21 20:26:33 crc kubenswrapper[4727]: I1121 20:26:33.205798 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab55a565-2af1-48bb-a31e-d0a8c738912c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ab55a565-2af1-48bb-a31e-d0a8c738912c\") " pod="openstack/ceilometer-0" Nov 21 20:26:33 crc kubenswrapper[4727]: I1121 20:26:33.208176 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vncrb\" (UniqueName: \"kubernetes.io/projected/ab55a565-2af1-48bb-a31e-d0a8c738912c-kube-api-access-vncrb\") pod \"ceilometer-0\" (UID: \"ab55a565-2af1-48bb-a31e-d0a8c738912c\") " pod="openstack/ceilometer-0" Nov 21 20:26:33 crc kubenswrapper[4727]: I1121 20:26:33.216911 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5f59b8f679-gbmn9" podStartSLOduration=4.2168818439999995 podStartE2EDuration="4.216881844s" podCreationTimestamp="2025-11-21 20:26:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:26:33.195089012 +0000 UTC m=+1198.381274056" watchObservedRunningTime="2025-11-21 20:26:33.216881844 +0000 UTC m=+1198.403066888" Nov 
21 20:26:33 crc kubenswrapper[4727]: I1121 20:26:33.262041 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-gx5d7"] Nov 21 20:26:33 crc kubenswrapper[4727]: I1121 20:26:33.276581 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 21 20:26:33 crc kubenswrapper[4727]: I1121 20:26:33.277978 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 20:26:33 crc kubenswrapper[4727]: I1121 20:26:33.283221 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-gx5d7"] Nov 21 20:26:33 crc kubenswrapper[4727]: I1121 20:26:33.341625 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-kzpzx"] Nov 21 20:26:33 crc kubenswrapper[4727]: I1121 20:26:33.564544 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7e68562-155f-4f92-b444-bf5c59e99024" path="/var/lib/kubelet/pods/b7e68562-155f-4f92-b444-bf5c59e99024/volumes" Nov 21 20:26:34 crc kubenswrapper[4727]: I1121 20:26:33.998530 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-s25kw"] Nov 21 20:26:34 crc kubenswrapper[4727]: I1121 20:26:34.012294 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-2t297"] Nov 21 20:26:34 crc kubenswrapper[4727]: I1121 20:26:34.016735 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-kjphx"] Nov 21 20:26:34 crc kubenswrapper[4727]: W1121 20:26:34.029024 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf819a7d6_b6ab_409c_aaf2_e5044d9317d5.slice/crio-97234f41f23a6fdb1313d8ec481de06897693d092964aa934dfbfc9e81d148e9 WatchSource:0}: Error finding container 97234f41f23a6fdb1313d8ec481de06897693d092964aa934dfbfc9e81d148e9: Status 404 returned error 
can't find the container with id 97234f41f23a6fdb1313d8ec481de06897693d092964aa934dfbfc9e81d148e9 Nov 21 20:26:34 crc kubenswrapper[4727]: I1121 20:26:34.273874 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-nvhzz"] Nov 21 20:26:34 crc kubenswrapper[4727]: I1121 20:26:34.350213 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2f2mk" event={"ID":"0429f9d4-473a-40f1-8b52-ff45821ccdd6","Type":"ContainerStarted","Data":"ced7a2e5adf35a2a529855a6c1e52d4689bf1f1ac766d21277c6c904b49cba63"} Nov 21 20:26:34 crc kubenswrapper[4727]: I1121 20:26:34.354661 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-gbmn9" Nov 21 20:26:34 crc kubenswrapper[4727]: I1121 20:26:34.361755 4727 generic.go:334] "Generic (PLEG): container finished" podID="5938ab71-ffe2-416d-bc16-4f927dbab94b" containerID="c819bc662281ef6fe59dfdc9ed7cf1e4ec541fb1f198a69a871406e8b8f4458f" exitCode=0 Nov 21 20:26:34 crc kubenswrapper[4727]: I1121 20:26:34.361832 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-gbmn9" event={"ID":"5938ab71-ffe2-416d-bc16-4f927dbab94b","Type":"ContainerDied","Data":"c819bc662281ef6fe59dfdc9ed7cf1e4ec541fb1f198a69a871406e8b8f4458f"} Nov 21 20:26:34 crc kubenswrapper[4727]: I1121 20:26:34.361913 4727 scope.go:117] "RemoveContainer" containerID="c819bc662281ef6fe59dfdc9ed7cf1e4ec541fb1f198a69a871406e8b8f4458f" Nov 21 20:26:34 crc kubenswrapper[4727]: I1121 20:26:34.379420 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-s25kw" event={"ID":"655afa8b-f9c9-44d7-bc05-7b77181a409a","Type":"ContainerStarted","Data":"372a3bad63f5a2be926c0e38830a42d08476fca70b8f03bbf3d9935f57f79723"} Nov 21 20:26:34 crc kubenswrapper[4727]: I1121 20:26:34.388069 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-2f2mk" 
podStartSLOduration=3.388029988 podStartE2EDuration="3.388029988s" podCreationTimestamp="2025-11-21 20:26:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:26:34.372826719 +0000 UTC m=+1199.559011763" watchObservedRunningTime="2025-11-21 20:26:34.388029988 +0000 UTC m=+1199.574215032" Nov 21 20:26:34 crc kubenswrapper[4727]: I1121 20:26:34.393991 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-kjphx" event={"ID":"f819a7d6-b6ab-409c-aaf2-e5044d9317d5","Type":"ContainerStarted","Data":"97234f41f23a6fdb1313d8ec481de06897693d092964aa934dfbfc9e81d148e9"} Nov 21 20:26:34 crc kubenswrapper[4727]: I1121 20:26:34.423098 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-kzpzx" event={"ID":"ba5291ec-9ad1-4ce3-8794-6b6ca611b277","Type":"ContainerStarted","Data":"e231612893f8605e49ff827b4acadcf8e12579fb53f2ed061545a19c226c6a53"} Nov 21 20:26:34 crc kubenswrapper[4727]: I1121 20:26:34.436791 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-2t297" event={"ID":"1b0e3c92-f23c-4257-856c-3bb4496913e2","Type":"ContainerStarted","Data":"c9d9343e58b9312a6c956a58c867346ccf0ef08beac27e1e07db4154a111f572"} Nov 21 20:26:34 crc kubenswrapper[4727]: I1121 20:26:34.444564 4727 scope.go:117] "RemoveContainer" containerID="58a8a9915d9a7c7124a5a8a40854e96c886d8a68f0000227ba1427a554d28d99" Nov 21 20:26:34 crc kubenswrapper[4727]: I1121 20:26:34.481022 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5938ab71-ffe2-416d-bc16-4f927dbab94b-ovsdbserver-sb\") pod \"5938ab71-ffe2-416d-bc16-4f927dbab94b\" (UID: \"5938ab71-ffe2-416d-bc16-4f927dbab94b\") " Nov 21 20:26:34 crc kubenswrapper[4727]: I1121 20:26:34.481124 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/5938ab71-ffe2-416d-bc16-4f927dbab94b-dns-svc\") pod \"5938ab71-ffe2-416d-bc16-4f927dbab94b\" (UID: \"5938ab71-ffe2-416d-bc16-4f927dbab94b\") " Nov 21 20:26:34 crc kubenswrapper[4727]: I1121 20:26:34.481162 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfqhw\" (UniqueName: \"kubernetes.io/projected/5938ab71-ffe2-416d-bc16-4f927dbab94b-kube-api-access-mfqhw\") pod \"5938ab71-ffe2-416d-bc16-4f927dbab94b\" (UID: \"5938ab71-ffe2-416d-bc16-4f927dbab94b\") " Nov 21 20:26:34 crc kubenswrapper[4727]: I1121 20:26:34.481262 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5938ab71-ffe2-416d-bc16-4f927dbab94b-config\") pod \"5938ab71-ffe2-416d-bc16-4f927dbab94b\" (UID: \"5938ab71-ffe2-416d-bc16-4f927dbab94b\") " Nov 21 20:26:34 crc kubenswrapper[4727]: I1121 20:26:34.481289 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5938ab71-ffe2-416d-bc16-4f927dbab94b-ovsdbserver-nb\") pod \"5938ab71-ffe2-416d-bc16-4f927dbab94b\" (UID: \"5938ab71-ffe2-416d-bc16-4f927dbab94b\") " Nov 21 20:26:34 crc kubenswrapper[4727]: I1121 20:26:34.481384 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5938ab71-ffe2-416d-bc16-4f927dbab94b-dns-swift-storage-0\") pod \"5938ab71-ffe2-416d-bc16-4f927dbab94b\" (UID: \"5938ab71-ffe2-416d-bc16-4f927dbab94b\") " Nov 21 20:26:34 crc kubenswrapper[4727]: I1121 20:26:34.537172 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5938ab71-ffe2-416d-bc16-4f927dbab94b-kube-api-access-mfqhw" (OuterVolumeSpecName: "kube-api-access-mfqhw") pod "5938ab71-ffe2-416d-bc16-4f927dbab94b" (UID: "5938ab71-ffe2-416d-bc16-4f927dbab94b"). 
InnerVolumeSpecName "kube-api-access-mfqhw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:26:34 crc kubenswrapper[4727]: I1121 20:26:34.592303 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfqhw\" (UniqueName: \"kubernetes.io/projected/5938ab71-ffe2-416d-bc16-4f927dbab94b-kube-api-access-mfqhw\") on node \"crc\" DevicePath \"\"" Nov 21 20:26:34 crc kubenswrapper[4727]: I1121 20:26:34.760067 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-5x7sl"] Nov 21 20:26:34 crc kubenswrapper[4727]: I1121 20:26:34.767526 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-xl6pb"] Nov 21 20:26:34 crc kubenswrapper[4727]: I1121 20:26:34.995229 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 20:26:35 crc kubenswrapper[4727]: I1121 20:26:35.049238 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5938ab71-ffe2-416d-bc16-4f927dbab94b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5938ab71-ffe2-416d-bc16-4f927dbab94b" (UID: "5938ab71-ffe2-416d-bc16-4f927dbab94b"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:26:35 crc kubenswrapper[4727]: I1121 20:26:35.137532 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5938ab71-ffe2-416d-bc16-4f927dbab94b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 21 20:26:35 crc kubenswrapper[4727]: I1121 20:26:35.158148 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 21 20:26:35 crc kubenswrapper[4727]: I1121 20:26:35.200736 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 21 20:26:35 crc kubenswrapper[4727]: I1121 20:26:35.220667 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5938ab71-ffe2-416d-bc16-4f927dbab94b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5938ab71-ffe2-416d-bc16-4f927dbab94b" (UID: "5938ab71-ffe2-416d-bc16-4f927dbab94b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:26:35 crc kubenswrapper[4727]: I1121 20:26:35.246001 4727 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5938ab71-ffe2-416d-bc16-4f927dbab94b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 21 20:26:35 crc kubenswrapper[4727]: I1121 20:26:35.250376 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 21 20:26:35 crc kubenswrapper[4727]: I1121 20:26:35.302613 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5938ab71-ffe2-416d-bc16-4f927dbab94b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5938ab71-ffe2-416d-bc16-4f927dbab94b" (UID: "5938ab71-ffe2-416d-bc16-4f927dbab94b"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:26:35 crc kubenswrapper[4727]: I1121 20:26:35.342910 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5938ab71-ffe2-416d-bc16-4f927dbab94b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5938ab71-ffe2-416d-bc16-4f927dbab94b" (UID: "5938ab71-ffe2-416d-bc16-4f927dbab94b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:26:35 crc kubenswrapper[4727]: I1121 20:26:35.347812 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 21 20:26:35 crc kubenswrapper[4727]: I1121 20:26:35.350255 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5938ab71-ffe2-416d-bc16-4f927dbab94b-config" (OuterVolumeSpecName: "config") pod "5938ab71-ffe2-416d-bc16-4f927dbab94b" (UID: "5938ab71-ffe2-416d-bc16-4f927dbab94b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:26:35 crc kubenswrapper[4727]: I1121 20:26:35.361228 4727 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5938ab71-ffe2-416d-bc16-4f927dbab94b-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 20:26:35 crc kubenswrapper[4727]: I1121 20:26:35.361256 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5938ab71-ffe2-416d-bc16-4f927dbab94b-config\") on node \"crc\" DevicePath \"\"" Nov 21 20:26:35 crc kubenswrapper[4727]: I1121 20:26:35.361265 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5938ab71-ffe2-416d-bc16-4f927dbab94b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 21 20:26:35 crc kubenswrapper[4727]: I1121 20:26:35.478143 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f246d4ba-7300-45cd-8206-e9e948fdcf52","Type":"ContainerStarted","Data":"0261ef827190b88eca80540f7637692e8e4b0dcc0e99a634c1eae2e9aa5f7a55"} Nov 21 20:26:35 crc kubenswrapper[4727]: I1121 20:26:35.486168 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-nvhzz" event={"ID":"3219ae94-1940-49e8-851c-102a14d22e75","Type":"ContainerStarted","Data":"659dac461540d044f7980bcebbdaeed98b16d08459d892258120c2063d40102c"} Nov 21 20:26:35 crc kubenswrapper[4727]: I1121 20:26:35.487288 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab55a565-2af1-48bb-a31e-d0a8c738912c","Type":"ContainerStarted","Data":"3052dd83ceb177cca7d883a2ae31cb04ecfee6a7a3601f83e578cbf434fd1ded"} Nov 21 20:26:35 crc kubenswrapper[4727]: I1121 20:26:35.488273 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-xl6pb" 
event={"ID":"f634df90-2dae-4e5b-b938-38d5d75f9b00","Type":"ContainerStarted","Data":"dca2988eaa9e7b9874f130892761533e5436901e9ef4bc1b70442345163a6459"} Nov 21 20:26:35 crc kubenswrapper[4727]: I1121 20:26:35.541285 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-gbmn9" Nov 21 20:26:35 crc kubenswrapper[4727]: I1121 20:26:35.547784 4727 generic.go:334] "Generic (PLEG): container finished" podID="655afa8b-f9c9-44d7-bc05-7b77181a409a" containerID="5c54e1f65456922b94b501e53e8a2a59a27ce233b24fd75addefafbc1edfd1ee" exitCode=0 Nov 21 20:26:35 crc kubenswrapper[4727]: I1121 20:26:35.551752 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-2t297" event={"ID":"1b0e3c92-f23c-4257-856c-3bb4496913e2","Type":"ContainerStarted","Data":"40579ab4c3190291bb8115f01a56949aa96618d929c80bc83463f5d1fa4c4628"} Nov 21 20:26:35 crc kubenswrapper[4727]: I1121 20:26:35.551924 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-gbmn9" event={"ID":"5938ab71-ffe2-416d-bc16-4f927dbab94b","Type":"ContainerDied","Data":"cf6b98b353655fb3aa1d88c40be2fe4c9117787ed4c2951bca4fb4df74fa2d45"} Nov 21 20:26:35 crc kubenswrapper[4727]: I1121 20:26:35.552061 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-5x7sl" event={"ID":"6f8801f4-f168-4ae7-b364-cd95a72b3a66","Type":"ContainerStarted","Data":"2f89c21de83c586d7260923a3c9651f6f6e0017f84cb1aae50a7808d8e72fa03"} Nov 21 20:26:35 crc kubenswrapper[4727]: I1121 20:26:35.552147 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-s25kw" event={"ID":"655afa8b-f9c9-44d7-bc05-7b77181a409a","Type":"ContainerDied","Data":"5c54e1f65456922b94b501e53e8a2a59a27ce233b24fd75addefafbc1edfd1ee"} Nov 21 20:26:36 crc kubenswrapper[4727]: I1121 20:26:36.080048 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/glance-default-internal-api-0"] Nov 21 20:26:36 crc kubenswrapper[4727]: I1121 20:26:36.095058 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-2t297" podStartSLOduration=5.095037452 podStartE2EDuration="5.095037452s" podCreationTimestamp="2025-11-21 20:26:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:26:35.960513048 +0000 UTC m=+1201.146698092" watchObservedRunningTime="2025-11-21 20:26:36.095037452 +0000 UTC m=+1201.281222496" Nov 21 20:26:36 crc kubenswrapper[4727]: I1121 20:26:36.116638 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-gbmn9"] Nov 21 20:26:36 crc kubenswrapper[4727]: I1121 20:26:36.126533 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-gbmn9"] Nov 21 20:26:36 crc kubenswrapper[4727]: I1121 20:26:36.296221 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-s25kw" Nov 21 20:26:36 crc kubenswrapper[4727]: I1121 20:26:36.355939 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/655afa8b-f9c9-44d7-bc05-7b77181a409a-dns-svc\") pod \"655afa8b-f9c9-44d7-bc05-7b77181a409a\" (UID: \"655afa8b-f9c9-44d7-bc05-7b77181a409a\") " Nov 21 20:26:36 crc kubenswrapper[4727]: I1121 20:26:36.356084 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/655afa8b-f9c9-44d7-bc05-7b77181a409a-ovsdbserver-sb\") pod \"655afa8b-f9c9-44d7-bc05-7b77181a409a\" (UID: \"655afa8b-f9c9-44d7-bc05-7b77181a409a\") " Nov 21 20:26:36 crc kubenswrapper[4727]: I1121 20:26:36.356114 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssg7x\" (UniqueName: \"kubernetes.io/projected/655afa8b-f9c9-44d7-bc05-7b77181a409a-kube-api-access-ssg7x\") pod \"655afa8b-f9c9-44d7-bc05-7b77181a409a\" (UID: \"655afa8b-f9c9-44d7-bc05-7b77181a409a\") " Nov 21 20:26:36 crc kubenswrapper[4727]: I1121 20:26:36.356141 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/655afa8b-f9c9-44d7-bc05-7b77181a409a-config\") pod \"655afa8b-f9c9-44d7-bc05-7b77181a409a\" (UID: \"655afa8b-f9c9-44d7-bc05-7b77181a409a\") " Nov 21 20:26:36 crc kubenswrapper[4727]: I1121 20:26:36.356181 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/655afa8b-f9c9-44d7-bc05-7b77181a409a-dns-swift-storage-0\") pod \"655afa8b-f9c9-44d7-bc05-7b77181a409a\" (UID: \"655afa8b-f9c9-44d7-bc05-7b77181a409a\") " Nov 21 20:26:36 crc kubenswrapper[4727]: I1121 20:26:36.356473 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/655afa8b-f9c9-44d7-bc05-7b77181a409a-ovsdbserver-nb\") pod \"655afa8b-f9c9-44d7-bc05-7b77181a409a\" (UID: \"655afa8b-f9c9-44d7-bc05-7b77181a409a\") " Nov 21 20:26:36 crc kubenswrapper[4727]: I1121 20:26:36.382438 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/655afa8b-f9c9-44d7-bc05-7b77181a409a-kube-api-access-ssg7x" (OuterVolumeSpecName: "kube-api-access-ssg7x") pod "655afa8b-f9c9-44d7-bc05-7b77181a409a" (UID: "655afa8b-f9c9-44d7-bc05-7b77181a409a"). InnerVolumeSpecName "kube-api-access-ssg7x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:26:36 crc kubenswrapper[4727]: I1121 20:26:36.405557 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/655afa8b-f9c9-44d7-bc05-7b77181a409a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "655afa8b-f9c9-44d7-bc05-7b77181a409a" (UID: "655afa8b-f9c9-44d7-bc05-7b77181a409a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:26:36 crc kubenswrapper[4727]: I1121 20:26:36.418675 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/655afa8b-f9c9-44d7-bc05-7b77181a409a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "655afa8b-f9c9-44d7-bc05-7b77181a409a" (UID: "655afa8b-f9c9-44d7-bc05-7b77181a409a"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:26:36 crc kubenswrapper[4727]: I1121 20:26:36.469235 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/655afa8b-f9c9-44d7-bc05-7b77181a409a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 21 20:26:36 crc kubenswrapper[4727]: I1121 20:26:36.469285 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ssg7x\" (UniqueName: \"kubernetes.io/projected/655afa8b-f9c9-44d7-bc05-7b77181a409a-kube-api-access-ssg7x\") on node \"crc\" DevicePath \"\"" Nov 21 20:26:36 crc kubenswrapper[4727]: I1121 20:26:36.469298 4727 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/655afa8b-f9c9-44d7-bc05-7b77181a409a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 21 20:26:36 crc kubenswrapper[4727]: I1121 20:26:36.483609 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/655afa8b-f9c9-44d7-bc05-7b77181a409a-config" (OuterVolumeSpecName: "config") pod "655afa8b-f9c9-44d7-bc05-7b77181a409a" (UID: "655afa8b-f9c9-44d7-bc05-7b77181a409a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:26:36 crc kubenswrapper[4727]: I1121 20:26:36.509731 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/655afa8b-f9c9-44d7-bc05-7b77181a409a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "655afa8b-f9c9-44d7-bc05-7b77181a409a" (UID: "655afa8b-f9c9-44d7-bc05-7b77181a409a"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:26:36 crc kubenswrapper[4727]: I1121 20:26:36.515671 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/655afa8b-f9c9-44d7-bc05-7b77181a409a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "655afa8b-f9c9-44d7-bc05-7b77181a409a" (UID: "655afa8b-f9c9-44d7-bc05-7b77181a409a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:26:36 crc kubenswrapper[4727]: I1121 20:26:36.575180 4727 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/655afa8b-f9c9-44d7-bc05-7b77181a409a-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 20:26:36 crc kubenswrapper[4727]: I1121 20:26:36.575230 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/655afa8b-f9c9-44d7-bc05-7b77181a409a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 21 20:26:36 crc kubenswrapper[4727]: I1121 20:26:36.575244 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/655afa8b-f9c9-44d7-bc05-7b77181a409a-config\") on node \"crc\" DevicePath \"\"" Nov 21 20:26:36 crc kubenswrapper[4727]: I1121 20:26:36.590804 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c4ff4259-d3c2-43db-9ff7-e420bf11de4e","Type":"ContainerStarted","Data":"9dbd99875557b675097013af497dc321c9df4800b85d21fbfedd8dc9ea11a980"} Nov 21 20:26:36 crc kubenswrapper[4727]: I1121 20:26:36.598331 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-s25kw" event={"ID":"655afa8b-f9c9-44d7-bc05-7b77181a409a","Type":"ContainerDied","Data":"372a3bad63f5a2be926c0e38830a42d08476fca70b8f03bbf3d9935f57f79723"} Nov 21 20:26:36 crc kubenswrapper[4727]: I1121 20:26:36.598414 4727 scope.go:117] "RemoveContainer" 
containerID="5c54e1f65456922b94b501e53e8a2a59a27ce233b24fd75addefafbc1edfd1ee" Nov 21 20:26:36 crc kubenswrapper[4727]: I1121 20:26:36.598620 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-s25kw" Nov 21 20:26:36 crc kubenswrapper[4727]: I1121 20:26:36.609639 4727 generic.go:334] "Generic (PLEG): container finished" podID="f634df90-2dae-4e5b-b938-38d5d75f9b00" containerID="754d6bb1c99def91198eecbf6de8115c40ad6ba01d5ce5988ddc264f9b28bad7" exitCode=0 Nov 21 20:26:36 crc kubenswrapper[4727]: I1121 20:26:36.611921 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-xl6pb" event={"ID":"f634df90-2dae-4e5b-b938-38d5d75f9b00","Type":"ContainerDied","Data":"754d6bb1c99def91198eecbf6de8115c40ad6ba01d5ce5988ddc264f9b28bad7"} Nov 21 20:26:36 crc kubenswrapper[4727]: I1121 20:26:36.790847 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-s25kw"] Nov 21 20:26:36 crc kubenswrapper[4727]: I1121 20:26:36.804749 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-s25kw"] Nov 21 20:26:37 crc kubenswrapper[4727]: I1121 20:26:37.558008 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5938ab71-ffe2-416d-bc16-4f927dbab94b" path="/var/lib/kubelet/pods/5938ab71-ffe2-416d-bc16-4f927dbab94b/volumes" Nov 21 20:26:37 crc kubenswrapper[4727]: I1121 20:26:37.559298 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="655afa8b-f9c9-44d7-bc05-7b77181a409a" path="/var/lib/kubelet/pods/655afa8b-f9c9-44d7-bc05-7b77181a409a/volumes" Nov 21 20:26:37 crc kubenswrapper[4727]: I1121 20:26:37.669254 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-xl6pb" event={"ID":"f634df90-2dae-4e5b-b938-38d5d75f9b00","Type":"ContainerStarted","Data":"b51e0f2fdf2ea5f7d1e198af97b55aea16e4bfc27571067c43c09dbaa7074e36"} Nov 21 20:26:37 crc 
kubenswrapper[4727]: I1121 20:26:37.670498 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56df8fb6b7-xl6pb" Nov 21 20:26:37 crc kubenswrapper[4727]: I1121 20:26:37.684932 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f246d4ba-7300-45cd-8206-e9e948fdcf52","Type":"ContainerStarted","Data":"f3c40600736ea862f9bfd2efac9e285dcc60a5353538cfa983a3194647228468"} Nov 21 20:26:37 crc kubenswrapper[4727]: I1121 20:26:37.710889 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56df8fb6b7-xl6pb" podStartSLOduration=6.710866421 podStartE2EDuration="6.710866421s" podCreationTimestamp="2025-11-21 20:26:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:26:37.699703963 +0000 UTC m=+1202.885889007" watchObservedRunningTime="2025-11-21 20:26:37.710866421 +0000 UTC m=+1202.897051465" Nov 21 20:26:38 crc kubenswrapper[4727]: I1121 20:26:38.706548 4727 generic.go:334] "Generic (PLEG): container finished" podID="0429f9d4-473a-40f1-8b52-ff45821ccdd6" containerID="ced7a2e5adf35a2a529855a6c1e52d4689bf1f1ac766d21277c6c904b49cba63" exitCode=0 Nov 21 20:26:38 crc kubenswrapper[4727]: I1121 20:26:38.706656 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2f2mk" event={"ID":"0429f9d4-473a-40f1-8b52-ff45821ccdd6","Type":"ContainerDied","Data":"ced7a2e5adf35a2a529855a6c1e52d4689bf1f1ac766d21277c6c904b49cba63"} Nov 21 20:26:38 crc kubenswrapper[4727]: I1121 20:26:38.711739 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c4ff4259-d3c2-43db-9ff7-e420bf11de4e","Type":"ContainerStarted","Data":"b3db77698f15f870651506b220728bb1891ead118900b35a863691dc0b044c3c"} Nov 21 20:26:38 crc kubenswrapper[4727]: I1121 20:26:38.716723 4727 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f246d4ba-7300-45cd-8206-e9e948fdcf52","Type":"ContainerStarted","Data":"f8ce0e95eeba5022062ed1893497b44caa2b6864721ad68f8278550254c13648"} Nov 21 20:26:38 crc kubenswrapper[4727]: I1121 20:26:38.716925 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="f246d4ba-7300-45cd-8206-e9e948fdcf52" containerName="glance-log" containerID="cri-o://f3c40600736ea862f9bfd2efac9e285dcc60a5353538cfa983a3194647228468" gracePeriod=30 Nov 21 20:26:38 crc kubenswrapper[4727]: I1121 20:26:38.717019 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="f246d4ba-7300-45cd-8206-e9e948fdcf52" containerName="glance-httpd" containerID="cri-o://f8ce0e95eeba5022062ed1893497b44caa2b6864721ad68f8278550254c13648" gracePeriod=30 Nov 21 20:26:38 crc kubenswrapper[4727]: I1121 20:26:38.757922 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.757886559 podStartE2EDuration="7.757886559s" podCreationTimestamp="2025-11-21 20:26:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:26:38.746953387 +0000 UTC m=+1203.933138431" watchObservedRunningTime="2025-11-21 20:26:38.757886559 +0000 UTC m=+1203.944071603" Nov 21 20:26:39 crc kubenswrapper[4727]: I1121 20:26:39.438155 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 21 20:26:39 crc kubenswrapper[4727]: I1121 20:26:39.487285 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f246d4ba-7300-45cd-8206-e9e948fdcf52-combined-ca-bundle\") pod \"f246d4ba-7300-45cd-8206-e9e948fdcf52\" (UID: \"f246d4ba-7300-45cd-8206-e9e948fdcf52\") " Nov 21 20:26:39 crc kubenswrapper[4727]: I1121 20:26:39.487456 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f246d4ba-7300-45cd-8206-e9e948fdcf52-logs\") pod \"f246d4ba-7300-45cd-8206-e9e948fdcf52\" (UID: \"f246d4ba-7300-45cd-8206-e9e948fdcf52\") " Nov 21 20:26:39 crc kubenswrapper[4727]: I1121 20:26:39.487828 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f246d4ba-7300-45cd-8206-e9e948fdcf52-httpd-run\") pod \"f246d4ba-7300-45cd-8206-e9e948fdcf52\" (UID: \"f246d4ba-7300-45cd-8206-e9e948fdcf52\") " Nov 21 20:26:39 crc kubenswrapper[4727]: I1121 20:26:39.487922 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9zlt\" (UniqueName: \"kubernetes.io/projected/f246d4ba-7300-45cd-8206-e9e948fdcf52-kube-api-access-p9zlt\") pod \"f246d4ba-7300-45cd-8206-e9e948fdcf52\" (UID: \"f246d4ba-7300-45cd-8206-e9e948fdcf52\") " Nov 21 20:26:39 crc kubenswrapper[4727]: I1121 20:26:39.488030 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f246d4ba-7300-45cd-8206-e9e948fdcf52-scripts\") pod \"f246d4ba-7300-45cd-8206-e9e948fdcf52\" (UID: \"f246d4ba-7300-45cd-8206-e9e948fdcf52\") " Nov 21 20:26:39 crc kubenswrapper[4727]: I1121 20:26:39.488132 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage12-crc\") pod \"f246d4ba-7300-45cd-8206-e9e948fdcf52\" (UID: \"f246d4ba-7300-45cd-8206-e9e948fdcf52\") " Nov 21 20:26:39 crc kubenswrapper[4727]: I1121 20:26:39.488164 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f246d4ba-7300-45cd-8206-e9e948fdcf52-config-data\") pod \"f246d4ba-7300-45cd-8206-e9e948fdcf52\" (UID: \"f246d4ba-7300-45cd-8206-e9e948fdcf52\") " Nov 21 20:26:39 crc kubenswrapper[4727]: I1121 20:26:39.488233 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f246d4ba-7300-45cd-8206-e9e948fdcf52-public-tls-certs\") pod \"f246d4ba-7300-45cd-8206-e9e948fdcf52\" (UID: \"f246d4ba-7300-45cd-8206-e9e948fdcf52\") " Nov 21 20:26:39 crc kubenswrapper[4727]: I1121 20:26:39.493077 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f246d4ba-7300-45cd-8206-e9e948fdcf52-logs" (OuterVolumeSpecName: "logs") pod "f246d4ba-7300-45cd-8206-e9e948fdcf52" (UID: "f246d4ba-7300-45cd-8206-e9e948fdcf52"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:26:39 crc kubenswrapper[4727]: I1121 20:26:39.493518 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f246d4ba-7300-45cd-8206-e9e948fdcf52-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "f246d4ba-7300-45cd-8206-e9e948fdcf52" (UID: "f246d4ba-7300-45cd-8206-e9e948fdcf52"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:26:39 crc kubenswrapper[4727]: I1121 20:26:39.507355 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f246d4ba-7300-45cd-8206-e9e948fdcf52-scripts" (OuterVolumeSpecName: "scripts") pod "f246d4ba-7300-45cd-8206-e9e948fdcf52" (UID: "f246d4ba-7300-45cd-8206-e9e948fdcf52"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:26:39 crc kubenswrapper[4727]: I1121 20:26:39.508344 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f246d4ba-7300-45cd-8206-e9e948fdcf52-kube-api-access-p9zlt" (OuterVolumeSpecName: "kube-api-access-p9zlt") pod "f246d4ba-7300-45cd-8206-e9e948fdcf52" (UID: "f246d4ba-7300-45cd-8206-e9e948fdcf52"). InnerVolumeSpecName "kube-api-access-p9zlt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:26:39 crc kubenswrapper[4727]: I1121 20:26:39.536177 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "f246d4ba-7300-45cd-8206-e9e948fdcf52" (UID: "f246d4ba-7300-45cd-8206-e9e948fdcf52"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 21 20:26:39 crc kubenswrapper[4727]: I1121 20:26:39.570136 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f246d4ba-7300-45cd-8206-e9e948fdcf52-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f246d4ba-7300-45cd-8206-e9e948fdcf52" (UID: "f246d4ba-7300-45cd-8206-e9e948fdcf52"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:26:39 crc kubenswrapper[4727]: I1121 20:26:39.588605 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f246d4ba-7300-45cd-8206-e9e948fdcf52-config-data" (OuterVolumeSpecName: "config-data") pod "f246d4ba-7300-45cd-8206-e9e948fdcf52" (UID: "f246d4ba-7300-45cd-8206-e9e948fdcf52"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:26:39 crc kubenswrapper[4727]: I1121 20:26:39.606890 4727 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f246d4ba-7300-45cd-8206-e9e948fdcf52-logs\") on node \"crc\" DevicePath \"\"" Nov 21 20:26:39 crc kubenswrapper[4727]: I1121 20:26:39.618658 4727 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f246d4ba-7300-45cd-8206-e9e948fdcf52-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 21 20:26:39 crc kubenswrapper[4727]: I1121 20:26:39.618686 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9zlt\" (UniqueName: \"kubernetes.io/projected/f246d4ba-7300-45cd-8206-e9e948fdcf52-kube-api-access-p9zlt\") on node \"crc\" DevicePath \"\"" Nov 21 20:26:39 crc kubenswrapper[4727]: I1121 20:26:39.618699 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f246d4ba-7300-45cd-8206-e9e948fdcf52-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 20:26:39 crc kubenswrapper[4727]: I1121 20:26:39.618729 4727 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Nov 21 20:26:39 crc kubenswrapper[4727]: I1121 20:26:39.615744 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f246d4ba-7300-45cd-8206-e9e948fdcf52-public-tls-certs" 
(OuterVolumeSpecName: "public-tls-certs") pod "f246d4ba-7300-45cd-8206-e9e948fdcf52" (UID: "f246d4ba-7300-45cd-8206-e9e948fdcf52"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:26:39 crc kubenswrapper[4727]: I1121 20:26:39.618742 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f246d4ba-7300-45cd-8206-e9e948fdcf52-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:26:39 crc kubenswrapper[4727]: I1121 20:26:39.649659 4727 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Nov 21 20:26:39 crc kubenswrapper[4727]: I1121 20:26:39.721039 4727 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Nov 21 20:26:39 crc kubenswrapper[4727]: I1121 20:26:39.721072 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f246d4ba-7300-45cd-8206-e9e948fdcf52-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 20:26:39 crc kubenswrapper[4727]: I1121 20:26:39.721082 4727 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f246d4ba-7300-45cd-8206-e9e948fdcf52-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 20:26:39 crc kubenswrapper[4727]: I1121 20:26:39.757363 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c4ff4259-d3c2-43db-9ff7-e420bf11de4e","Type":"ContainerStarted","Data":"3af5249f76e41a9396339f9461acde2f7d3cedbc126d949b8c62e3e1594f2da7"} Nov 21 20:26:39 crc kubenswrapper[4727]: I1121 20:26:39.757470 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" 
podUID="c4ff4259-d3c2-43db-9ff7-e420bf11de4e" containerName="glance-log" containerID="cri-o://b3db77698f15f870651506b220728bb1891ead118900b35a863691dc0b044c3c" gracePeriod=30 Nov 21 20:26:39 crc kubenswrapper[4727]: I1121 20:26:39.757852 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="c4ff4259-d3c2-43db-9ff7-e420bf11de4e" containerName="glance-httpd" containerID="cri-o://3af5249f76e41a9396339f9461acde2f7d3cedbc126d949b8c62e3e1594f2da7" gracePeriod=30 Nov 21 20:26:39 crc kubenswrapper[4727]: I1121 20:26:39.763987 4727 generic.go:334] "Generic (PLEG): container finished" podID="f246d4ba-7300-45cd-8206-e9e948fdcf52" containerID="f8ce0e95eeba5022062ed1893497b44caa2b6864721ad68f8278550254c13648" exitCode=143 Nov 21 20:26:39 crc kubenswrapper[4727]: I1121 20:26:39.764023 4727 generic.go:334] "Generic (PLEG): container finished" podID="f246d4ba-7300-45cd-8206-e9e948fdcf52" containerID="f3c40600736ea862f9bfd2efac9e285dcc60a5353538cfa983a3194647228468" exitCode=143 Nov 21 20:26:39 crc kubenswrapper[4727]: I1121 20:26:39.764045 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f246d4ba-7300-45cd-8206-e9e948fdcf52","Type":"ContainerDied","Data":"f8ce0e95eeba5022062ed1893497b44caa2b6864721ad68f8278550254c13648"} Nov 21 20:26:39 crc kubenswrapper[4727]: I1121 20:26:39.764114 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f246d4ba-7300-45cd-8206-e9e948fdcf52","Type":"ContainerDied","Data":"f3c40600736ea862f9bfd2efac9e285dcc60a5353538cfa983a3194647228468"} Nov 21 20:26:39 crc kubenswrapper[4727]: I1121 20:26:39.764072 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 21 20:26:39 crc kubenswrapper[4727]: I1121 20:26:39.764140 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f246d4ba-7300-45cd-8206-e9e948fdcf52","Type":"ContainerDied","Data":"0261ef827190b88eca80540f7637692e8e4b0dcc0e99a634c1eae2e9aa5f7a55"} Nov 21 20:26:39 crc kubenswrapper[4727]: I1121 20:26:39.764156 4727 scope.go:117] "RemoveContainer" containerID="f8ce0e95eeba5022062ed1893497b44caa2b6864721ad68f8278550254c13648" Nov 21 20:26:39 crc kubenswrapper[4727]: I1121 20:26:39.811206 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=8.811177262 podStartE2EDuration="8.811177262s" podCreationTimestamp="2025-11-21 20:26:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:26:39.778577712 +0000 UTC m=+1204.964762766" watchObservedRunningTime="2025-11-21 20:26:39.811177262 +0000 UTC m=+1204.997362306" Nov 21 20:26:39 crc kubenswrapper[4727]: I1121 20:26:39.844140 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 21 20:26:39 crc kubenswrapper[4727]: I1121 20:26:39.877050 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 21 20:26:39 crc kubenswrapper[4727]: I1121 20:26:39.890192 4727 scope.go:117] "RemoveContainer" containerID="f3c40600736ea862f9bfd2efac9e285dcc60a5353538cfa983a3194647228468" Nov 21 20:26:39 crc kubenswrapper[4727]: I1121 20:26:39.900122 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 21 20:26:39 crc kubenswrapper[4727]: E1121 20:26:39.912344 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5938ab71-ffe2-416d-bc16-4f927dbab94b" containerName="init" 
Nov 21 20:26:39 crc kubenswrapper[4727]: I1121 20:26:39.912402 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="5938ab71-ffe2-416d-bc16-4f927dbab94b" containerName="init" Nov 21 20:26:39 crc kubenswrapper[4727]: E1121 20:26:39.912453 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f246d4ba-7300-45cd-8206-e9e948fdcf52" containerName="glance-log" Nov 21 20:26:39 crc kubenswrapper[4727]: I1121 20:26:39.912462 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="f246d4ba-7300-45cd-8206-e9e948fdcf52" containerName="glance-log" Nov 21 20:26:39 crc kubenswrapper[4727]: E1121 20:26:39.912508 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5938ab71-ffe2-416d-bc16-4f927dbab94b" containerName="dnsmasq-dns" Nov 21 20:26:39 crc kubenswrapper[4727]: I1121 20:26:39.912535 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="5938ab71-ffe2-416d-bc16-4f927dbab94b" containerName="dnsmasq-dns" Nov 21 20:26:39 crc kubenswrapper[4727]: E1121 20:26:39.912569 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f246d4ba-7300-45cd-8206-e9e948fdcf52" containerName="glance-httpd" Nov 21 20:26:39 crc kubenswrapper[4727]: I1121 20:26:39.912605 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="f246d4ba-7300-45cd-8206-e9e948fdcf52" containerName="glance-httpd" Nov 21 20:26:39 crc kubenswrapper[4727]: E1121 20:26:39.912639 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="655afa8b-f9c9-44d7-bc05-7b77181a409a" containerName="init" Nov 21 20:26:39 crc kubenswrapper[4727]: I1121 20:26:39.912649 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="655afa8b-f9c9-44d7-bc05-7b77181a409a" containerName="init" Nov 21 20:26:39 crc kubenswrapper[4727]: I1121 20:26:39.913520 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="655afa8b-f9c9-44d7-bc05-7b77181a409a" containerName="init" Nov 21 20:26:39 crc kubenswrapper[4727]: I1121 20:26:39.913610 4727 
memory_manager.go:354] "RemoveStaleState removing state" podUID="f246d4ba-7300-45cd-8206-e9e948fdcf52" containerName="glance-httpd" Nov 21 20:26:39 crc kubenswrapper[4727]: I1121 20:26:39.913664 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="5938ab71-ffe2-416d-bc16-4f927dbab94b" containerName="dnsmasq-dns" Nov 21 20:26:39 crc kubenswrapper[4727]: I1121 20:26:39.913702 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="f246d4ba-7300-45cd-8206-e9e948fdcf52" containerName="glance-log" Nov 21 20:26:39 crc kubenswrapper[4727]: I1121 20:26:39.918836 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 21 20:26:39 crc kubenswrapper[4727]: I1121 20:26:39.928786 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 21 20:26:39 crc kubenswrapper[4727]: I1121 20:26:39.934142 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 21 20:26:39 crc kubenswrapper[4727]: I1121 20:26:39.960515 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 21 20:26:40 crc kubenswrapper[4727]: I1121 20:26:40.005189 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Nov 21 20:26:40 crc kubenswrapper[4727]: I1121 20:26:40.019107 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Nov 21 20:26:40 crc kubenswrapper[4727]: I1121 20:26:40.056922 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7909040-500b-4a0d-878f-9a4c2d8b6a9b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f7909040-500b-4a0d-878f-9a4c2d8b6a9b\") " 
pod="openstack/glance-default-external-api-0" Nov 21 20:26:40 crc kubenswrapper[4727]: I1121 20:26:40.057017 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdbd6\" (UniqueName: \"kubernetes.io/projected/f7909040-500b-4a0d-878f-9a4c2d8b6a9b-kube-api-access-jdbd6\") pod \"glance-default-external-api-0\" (UID: \"f7909040-500b-4a0d-878f-9a4c2d8b6a9b\") " pod="openstack/glance-default-external-api-0" Nov 21 20:26:40 crc kubenswrapper[4727]: I1121 20:26:40.057042 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f7909040-500b-4a0d-878f-9a4c2d8b6a9b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f7909040-500b-4a0d-878f-9a4c2d8b6a9b\") " pod="openstack/glance-default-external-api-0" Nov 21 20:26:40 crc kubenswrapper[4727]: I1121 20:26:40.057167 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7909040-500b-4a0d-878f-9a4c2d8b6a9b-config-data\") pod \"glance-default-external-api-0\" (UID: \"f7909040-500b-4a0d-878f-9a4c2d8b6a9b\") " pod="openstack/glance-default-external-api-0" Nov 21 20:26:40 crc kubenswrapper[4727]: I1121 20:26:40.057188 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7909040-500b-4a0d-878f-9a4c2d8b6a9b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f7909040-500b-4a0d-878f-9a4c2d8b6a9b\") " pod="openstack/glance-default-external-api-0" Nov 21 20:26:40 crc kubenswrapper[4727]: I1121 20:26:40.058031 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7909040-500b-4a0d-878f-9a4c2d8b6a9b-scripts\") pod \"glance-default-external-api-0\" (UID: 
\"f7909040-500b-4a0d-878f-9a4c2d8b6a9b\") " pod="openstack/glance-default-external-api-0" Nov 21 20:26:40 crc kubenswrapper[4727]: I1121 20:26:40.058367 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"f7909040-500b-4a0d-878f-9a4c2d8b6a9b\") " pod="openstack/glance-default-external-api-0" Nov 21 20:26:40 crc kubenswrapper[4727]: I1121 20:26:40.058551 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7909040-500b-4a0d-878f-9a4c2d8b6a9b-logs\") pod \"glance-default-external-api-0\" (UID: \"f7909040-500b-4a0d-878f-9a4c2d8b6a9b\") " pod="openstack/glance-default-external-api-0" Nov 21 20:26:40 crc kubenswrapper[4727]: I1121 20:26:40.165539 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7909040-500b-4a0d-878f-9a4c2d8b6a9b-logs\") pod \"glance-default-external-api-0\" (UID: \"f7909040-500b-4a0d-878f-9a4c2d8b6a9b\") " pod="openstack/glance-default-external-api-0" Nov 21 20:26:40 crc kubenswrapper[4727]: I1121 20:26:40.165637 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7909040-500b-4a0d-878f-9a4c2d8b6a9b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f7909040-500b-4a0d-878f-9a4c2d8b6a9b\") " pod="openstack/glance-default-external-api-0" Nov 21 20:26:40 crc kubenswrapper[4727]: I1121 20:26:40.165673 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdbd6\" (UniqueName: \"kubernetes.io/projected/f7909040-500b-4a0d-878f-9a4c2d8b6a9b-kube-api-access-jdbd6\") pod \"glance-default-external-api-0\" (UID: \"f7909040-500b-4a0d-878f-9a4c2d8b6a9b\") " 
pod="openstack/glance-default-external-api-0" Nov 21 20:26:40 crc kubenswrapper[4727]: I1121 20:26:40.165695 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f7909040-500b-4a0d-878f-9a4c2d8b6a9b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f7909040-500b-4a0d-878f-9a4c2d8b6a9b\") " pod="openstack/glance-default-external-api-0" Nov 21 20:26:40 crc kubenswrapper[4727]: I1121 20:26:40.165779 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7909040-500b-4a0d-878f-9a4c2d8b6a9b-config-data\") pod \"glance-default-external-api-0\" (UID: \"f7909040-500b-4a0d-878f-9a4c2d8b6a9b\") " pod="openstack/glance-default-external-api-0" Nov 21 20:26:40 crc kubenswrapper[4727]: I1121 20:26:40.165801 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7909040-500b-4a0d-878f-9a4c2d8b6a9b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f7909040-500b-4a0d-878f-9a4c2d8b6a9b\") " pod="openstack/glance-default-external-api-0" Nov 21 20:26:40 crc kubenswrapper[4727]: I1121 20:26:40.165847 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7909040-500b-4a0d-878f-9a4c2d8b6a9b-scripts\") pod \"glance-default-external-api-0\" (UID: \"f7909040-500b-4a0d-878f-9a4c2d8b6a9b\") " pod="openstack/glance-default-external-api-0" Nov 21 20:26:40 crc kubenswrapper[4727]: I1121 20:26:40.165873 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"f7909040-500b-4a0d-878f-9a4c2d8b6a9b\") " pod="openstack/glance-default-external-api-0" Nov 21 20:26:40 crc kubenswrapper[4727]: I1121 
20:26:40.166278 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"f7909040-500b-4a0d-878f-9a4c2d8b6a9b\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-external-api-0" Nov 21 20:26:40 crc kubenswrapper[4727]: I1121 20:26:40.167601 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f7909040-500b-4a0d-878f-9a4c2d8b6a9b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f7909040-500b-4a0d-878f-9a4c2d8b6a9b\") " pod="openstack/glance-default-external-api-0" Nov 21 20:26:40 crc kubenswrapper[4727]: I1121 20:26:40.167839 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7909040-500b-4a0d-878f-9a4c2d8b6a9b-logs\") pod \"glance-default-external-api-0\" (UID: \"f7909040-500b-4a0d-878f-9a4c2d8b6a9b\") " pod="openstack/glance-default-external-api-0" Nov 21 20:26:40 crc kubenswrapper[4727]: I1121 20:26:40.195331 4727 scope.go:117] "RemoveContainer" containerID="f8ce0e95eeba5022062ed1893497b44caa2b6864721ad68f8278550254c13648" Nov 21 20:26:40 crc kubenswrapper[4727]: I1121 20:26:40.201194 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7909040-500b-4a0d-878f-9a4c2d8b6a9b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f7909040-500b-4a0d-878f-9a4c2d8b6a9b\") " pod="openstack/glance-default-external-api-0" Nov 21 20:26:40 crc kubenswrapper[4727]: I1121 20:26:40.201901 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7909040-500b-4a0d-878f-9a4c2d8b6a9b-config-data\") pod \"glance-default-external-api-0\" (UID: \"f7909040-500b-4a0d-878f-9a4c2d8b6a9b\") " 
pod="openstack/glance-default-external-api-0" Nov 21 20:26:40 crc kubenswrapper[4727]: I1121 20:26:40.202072 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7909040-500b-4a0d-878f-9a4c2d8b6a9b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f7909040-500b-4a0d-878f-9a4c2d8b6a9b\") " pod="openstack/glance-default-external-api-0" Nov 21 20:26:40 crc kubenswrapper[4727]: I1121 20:26:40.206144 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"f7909040-500b-4a0d-878f-9a4c2d8b6a9b\") " pod="openstack/glance-default-external-api-0" Nov 21 20:26:40 crc kubenswrapper[4727]: I1121 20:26:40.215890 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7909040-500b-4a0d-878f-9a4c2d8b6a9b-scripts\") pod \"glance-default-external-api-0\" (UID: \"f7909040-500b-4a0d-878f-9a4c2d8b6a9b\") " pod="openstack/glance-default-external-api-0" Nov 21 20:26:40 crc kubenswrapper[4727]: E1121 20:26:40.216087 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8ce0e95eeba5022062ed1893497b44caa2b6864721ad68f8278550254c13648\": container with ID starting with f8ce0e95eeba5022062ed1893497b44caa2b6864721ad68f8278550254c13648 not found: ID does not exist" containerID="f8ce0e95eeba5022062ed1893497b44caa2b6864721ad68f8278550254c13648" Nov 21 20:26:40 crc kubenswrapper[4727]: I1121 20:26:40.216288 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8ce0e95eeba5022062ed1893497b44caa2b6864721ad68f8278550254c13648"} err="failed to get container status \"f8ce0e95eeba5022062ed1893497b44caa2b6864721ad68f8278550254c13648\": rpc error: code = NotFound desc = could 
not find container \"f8ce0e95eeba5022062ed1893497b44caa2b6864721ad68f8278550254c13648\": container with ID starting with f8ce0e95eeba5022062ed1893497b44caa2b6864721ad68f8278550254c13648 not found: ID does not exist" Nov 21 20:26:40 crc kubenswrapper[4727]: I1121 20:26:40.216611 4727 scope.go:117] "RemoveContainer" containerID="f3c40600736ea862f9bfd2efac9e285dcc60a5353538cfa983a3194647228468" Nov 21 20:26:40 crc kubenswrapper[4727]: E1121 20:26:40.223841 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3c40600736ea862f9bfd2efac9e285dcc60a5353538cfa983a3194647228468\": container with ID starting with f3c40600736ea862f9bfd2efac9e285dcc60a5353538cfa983a3194647228468 not found: ID does not exist" containerID="f3c40600736ea862f9bfd2efac9e285dcc60a5353538cfa983a3194647228468" Nov 21 20:26:40 crc kubenswrapper[4727]: I1121 20:26:40.224176 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3c40600736ea862f9bfd2efac9e285dcc60a5353538cfa983a3194647228468"} err="failed to get container status \"f3c40600736ea862f9bfd2efac9e285dcc60a5353538cfa983a3194647228468\": rpc error: code = NotFound desc = could not find container \"f3c40600736ea862f9bfd2efac9e285dcc60a5353538cfa983a3194647228468\": container with ID starting with f3c40600736ea862f9bfd2efac9e285dcc60a5353538cfa983a3194647228468 not found: ID does not exist" Nov 21 20:26:40 crc kubenswrapper[4727]: I1121 20:26:40.224345 4727 scope.go:117] "RemoveContainer" containerID="f8ce0e95eeba5022062ed1893497b44caa2b6864721ad68f8278550254c13648" Nov 21 20:26:40 crc kubenswrapper[4727]: I1121 20:26:40.224574 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdbd6\" (UniqueName: \"kubernetes.io/projected/f7909040-500b-4a0d-878f-9a4c2d8b6a9b-kube-api-access-jdbd6\") pod \"glance-default-external-api-0\" (UID: \"f7909040-500b-4a0d-878f-9a4c2d8b6a9b\") " 
pod="openstack/glance-default-external-api-0" Nov 21 20:26:40 crc kubenswrapper[4727]: I1121 20:26:40.323408 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8ce0e95eeba5022062ed1893497b44caa2b6864721ad68f8278550254c13648"} err="failed to get container status \"f8ce0e95eeba5022062ed1893497b44caa2b6864721ad68f8278550254c13648\": rpc error: code = NotFound desc = could not find container \"f8ce0e95eeba5022062ed1893497b44caa2b6864721ad68f8278550254c13648\": container with ID starting with f8ce0e95eeba5022062ed1893497b44caa2b6864721ad68f8278550254c13648 not found: ID does not exist" Nov 21 20:26:40 crc kubenswrapper[4727]: I1121 20:26:40.323744 4727 scope.go:117] "RemoveContainer" containerID="f3c40600736ea862f9bfd2efac9e285dcc60a5353538cfa983a3194647228468" Nov 21 20:26:40 crc kubenswrapper[4727]: I1121 20:26:40.327773 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3c40600736ea862f9bfd2efac9e285dcc60a5353538cfa983a3194647228468"} err="failed to get container status \"f3c40600736ea862f9bfd2efac9e285dcc60a5353538cfa983a3194647228468\": rpc error: code = NotFound desc = could not find container \"f3c40600736ea862f9bfd2efac9e285dcc60a5353538cfa983a3194647228468\": container with ID starting with f3c40600736ea862f9bfd2efac9e285dcc60a5353538cfa983a3194647228468 not found: ID does not exist" Nov 21 20:26:40 crc kubenswrapper[4727]: I1121 20:26:40.379608 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 21 20:26:40 crc kubenswrapper[4727]: I1121 20:26:40.403367 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-2f2mk" Nov 21 20:26:40 crc kubenswrapper[4727]: I1121 20:26:40.581360 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0429f9d4-473a-40f1-8b52-ff45821ccdd6-scripts\") pod \"0429f9d4-473a-40f1-8b52-ff45821ccdd6\" (UID: \"0429f9d4-473a-40f1-8b52-ff45821ccdd6\") " Nov 21 20:26:40 crc kubenswrapper[4727]: I1121 20:26:40.581518 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0429f9d4-473a-40f1-8b52-ff45821ccdd6-fernet-keys\") pod \"0429f9d4-473a-40f1-8b52-ff45821ccdd6\" (UID: \"0429f9d4-473a-40f1-8b52-ff45821ccdd6\") " Nov 21 20:26:40 crc kubenswrapper[4727]: I1121 20:26:40.581603 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0429f9d4-473a-40f1-8b52-ff45821ccdd6-credential-keys\") pod \"0429f9d4-473a-40f1-8b52-ff45821ccdd6\" (UID: \"0429f9d4-473a-40f1-8b52-ff45821ccdd6\") " Nov 21 20:26:40 crc kubenswrapper[4727]: I1121 20:26:40.581651 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-898m2\" (UniqueName: \"kubernetes.io/projected/0429f9d4-473a-40f1-8b52-ff45821ccdd6-kube-api-access-898m2\") pod \"0429f9d4-473a-40f1-8b52-ff45821ccdd6\" (UID: \"0429f9d4-473a-40f1-8b52-ff45821ccdd6\") " Nov 21 20:26:40 crc kubenswrapper[4727]: I1121 20:26:40.581700 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0429f9d4-473a-40f1-8b52-ff45821ccdd6-combined-ca-bundle\") pod \"0429f9d4-473a-40f1-8b52-ff45821ccdd6\" (UID: \"0429f9d4-473a-40f1-8b52-ff45821ccdd6\") " Nov 21 20:26:40 crc kubenswrapper[4727]: I1121 20:26:40.581729 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/0429f9d4-473a-40f1-8b52-ff45821ccdd6-config-data\") pod \"0429f9d4-473a-40f1-8b52-ff45821ccdd6\" (UID: \"0429f9d4-473a-40f1-8b52-ff45821ccdd6\") " Nov 21 20:26:40 crc kubenswrapper[4727]: I1121 20:26:40.595325 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0429f9d4-473a-40f1-8b52-ff45821ccdd6-scripts" (OuterVolumeSpecName: "scripts") pod "0429f9d4-473a-40f1-8b52-ff45821ccdd6" (UID: "0429f9d4-473a-40f1-8b52-ff45821ccdd6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:26:40 crc kubenswrapper[4727]: I1121 20:26:40.595430 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0429f9d4-473a-40f1-8b52-ff45821ccdd6-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "0429f9d4-473a-40f1-8b52-ff45821ccdd6" (UID: "0429f9d4-473a-40f1-8b52-ff45821ccdd6"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:26:40 crc kubenswrapper[4727]: I1121 20:26:40.595978 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0429f9d4-473a-40f1-8b52-ff45821ccdd6-kube-api-access-898m2" (OuterVolumeSpecName: "kube-api-access-898m2") pod "0429f9d4-473a-40f1-8b52-ff45821ccdd6" (UID: "0429f9d4-473a-40f1-8b52-ff45821ccdd6"). InnerVolumeSpecName "kube-api-access-898m2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:26:40 crc kubenswrapper[4727]: I1121 20:26:40.597482 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0429f9d4-473a-40f1-8b52-ff45821ccdd6-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "0429f9d4-473a-40f1-8b52-ff45821ccdd6" (UID: "0429f9d4-473a-40f1-8b52-ff45821ccdd6"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:26:40 crc kubenswrapper[4727]: I1121 20:26:40.633247 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0429f9d4-473a-40f1-8b52-ff45821ccdd6-config-data" (OuterVolumeSpecName: "config-data") pod "0429f9d4-473a-40f1-8b52-ff45821ccdd6" (UID: "0429f9d4-473a-40f1-8b52-ff45821ccdd6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:26:40 crc kubenswrapper[4727]: I1121 20:26:40.641074 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0429f9d4-473a-40f1-8b52-ff45821ccdd6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0429f9d4-473a-40f1-8b52-ff45821ccdd6" (UID: "0429f9d4-473a-40f1-8b52-ff45821ccdd6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:26:40 crc kubenswrapper[4727]: I1121 20:26:40.685194 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0429f9d4-473a-40f1-8b52-ff45821ccdd6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:26:40 crc kubenswrapper[4727]: I1121 20:26:40.685239 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0429f9d4-473a-40f1-8b52-ff45821ccdd6-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 20:26:40 crc kubenswrapper[4727]: I1121 20:26:40.685249 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0429f9d4-473a-40f1-8b52-ff45821ccdd6-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 20:26:40 crc kubenswrapper[4727]: I1121 20:26:40.685260 4727 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0429f9d4-473a-40f1-8b52-ff45821ccdd6-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 21 20:26:40 crc 
kubenswrapper[4727]: I1121 20:26:40.685270 4727 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0429f9d4-473a-40f1-8b52-ff45821ccdd6-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 21 20:26:40 crc kubenswrapper[4727]: I1121 20:26:40.685279 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-898m2\" (UniqueName: \"kubernetes.io/projected/0429f9d4-473a-40f1-8b52-ff45821ccdd6-kube-api-access-898m2\") on node \"crc\" DevicePath \"\"" Nov 21 20:26:40 crc kubenswrapper[4727]: I1121 20:26:40.806639 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-2f2mk"] Nov 21 20:26:40 crc kubenswrapper[4727]: I1121 20:26:40.821410 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-2f2mk" Nov 21 20:26:40 crc kubenswrapper[4727]: I1121 20:26:40.822298 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2f2mk" event={"ID":"0429f9d4-473a-40f1-8b52-ff45821ccdd6","Type":"ContainerDied","Data":"c3f032f8ff06bc86fd5f3e8f73398ce0a01ec0fc968c6be824f2dfe58b2b9085"} Nov 21 20:26:40 crc kubenswrapper[4727]: I1121 20:26:40.822383 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3f032f8ff06bc86fd5f3e8f73398ce0a01ec0fc968c6be824f2dfe58b2b9085" Nov 21 20:26:40 crc kubenswrapper[4727]: I1121 20:26:40.822409 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-2f2mk"] Nov 21 20:26:40 crc kubenswrapper[4727]: I1121 20:26:40.857712 4727 generic.go:334] "Generic (PLEG): container finished" podID="c4ff4259-d3c2-43db-9ff7-e420bf11de4e" containerID="3af5249f76e41a9396339f9461acde2f7d3cedbc126d949b8c62e3e1594f2da7" exitCode=0 Nov 21 20:26:40 crc kubenswrapper[4727]: I1121 20:26:40.857774 4727 generic.go:334] "Generic (PLEG): container finished" podID="c4ff4259-d3c2-43db-9ff7-e420bf11de4e" 
containerID="b3db77698f15f870651506b220728bb1891ead118900b35a863691dc0b044c3c" exitCode=143 Nov 21 20:26:40 crc kubenswrapper[4727]: I1121 20:26:40.857950 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c4ff4259-d3c2-43db-9ff7-e420bf11de4e","Type":"ContainerDied","Data":"3af5249f76e41a9396339f9461acde2f7d3cedbc126d949b8c62e3e1594f2da7"} Nov 21 20:26:40 crc kubenswrapper[4727]: I1121 20:26:40.858021 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c4ff4259-d3c2-43db-9ff7-e420bf11de4e","Type":"ContainerDied","Data":"b3db77698f15f870651506b220728bb1891ead118900b35a863691dc0b044c3c"} Nov 21 20:26:40 crc kubenswrapper[4727]: I1121 20:26:40.866084 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Nov 21 20:26:40 crc kubenswrapper[4727]: I1121 20:26:40.956430 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-5z7h7"] Nov 21 20:26:40 crc kubenswrapper[4727]: E1121 20:26:40.957063 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0429f9d4-473a-40f1-8b52-ff45821ccdd6" containerName="keystone-bootstrap" Nov 21 20:26:40 crc kubenswrapper[4727]: I1121 20:26:40.957080 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="0429f9d4-473a-40f1-8b52-ff45821ccdd6" containerName="keystone-bootstrap" Nov 21 20:26:40 crc kubenswrapper[4727]: I1121 20:26:40.957398 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="0429f9d4-473a-40f1-8b52-ff45821ccdd6" containerName="keystone-bootstrap" Nov 21 20:26:40 crc kubenswrapper[4727]: I1121 20:26:40.958281 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-5z7h7" Nov 21 20:26:40 crc kubenswrapper[4727]: I1121 20:26:40.962770 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 21 20:26:40 crc kubenswrapper[4727]: I1121 20:26:40.963119 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-xgkf5" Nov 21 20:26:40 crc kubenswrapper[4727]: I1121 20:26:40.963139 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 21 20:26:40 crc kubenswrapper[4727]: I1121 20:26:40.967704 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 21 20:26:40 crc kubenswrapper[4727]: I1121 20:26:40.967848 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 21 20:26:40 crc kubenswrapper[4727]: I1121 20:26:40.993128 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-5z7h7"] Nov 21 20:26:41 crc kubenswrapper[4727]: I1121 20:26:41.105831 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d06d2665-106c-4478-8621-a196d0267ed5-scripts\") pod \"keystone-bootstrap-5z7h7\" (UID: \"d06d2665-106c-4478-8621-a196d0267ed5\") " pod="openstack/keystone-bootstrap-5z7h7" Nov 21 20:26:41 crc kubenswrapper[4727]: I1121 20:26:41.105939 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d06d2665-106c-4478-8621-a196d0267ed5-fernet-keys\") pod \"keystone-bootstrap-5z7h7\" (UID: \"d06d2665-106c-4478-8621-a196d0267ed5\") " pod="openstack/keystone-bootstrap-5z7h7" Nov 21 20:26:41 crc kubenswrapper[4727]: I1121 20:26:41.106410 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-bdmtl\" (UniqueName: \"kubernetes.io/projected/d06d2665-106c-4478-8621-a196d0267ed5-kube-api-access-bdmtl\") pod \"keystone-bootstrap-5z7h7\" (UID: \"d06d2665-106c-4478-8621-a196d0267ed5\") " pod="openstack/keystone-bootstrap-5z7h7" Nov 21 20:26:41 crc kubenswrapper[4727]: I1121 20:26:41.106534 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d06d2665-106c-4478-8621-a196d0267ed5-config-data\") pod \"keystone-bootstrap-5z7h7\" (UID: \"d06d2665-106c-4478-8621-a196d0267ed5\") " pod="openstack/keystone-bootstrap-5z7h7" Nov 21 20:26:41 crc kubenswrapper[4727]: I1121 20:26:41.106652 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d06d2665-106c-4478-8621-a196d0267ed5-credential-keys\") pod \"keystone-bootstrap-5z7h7\" (UID: \"d06d2665-106c-4478-8621-a196d0267ed5\") " pod="openstack/keystone-bootstrap-5z7h7" Nov 21 20:26:41 crc kubenswrapper[4727]: I1121 20:26:41.106787 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d06d2665-106c-4478-8621-a196d0267ed5-combined-ca-bundle\") pod \"keystone-bootstrap-5z7h7\" (UID: \"d06d2665-106c-4478-8621-a196d0267ed5\") " pod="openstack/keystone-bootstrap-5z7h7" Nov 21 20:26:41 crc kubenswrapper[4727]: I1121 20:26:41.210441 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d06d2665-106c-4478-8621-a196d0267ed5-credential-keys\") pod \"keystone-bootstrap-5z7h7\" (UID: \"d06d2665-106c-4478-8621-a196d0267ed5\") " pod="openstack/keystone-bootstrap-5z7h7" Nov 21 20:26:41 crc kubenswrapper[4727]: I1121 20:26:41.210494 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/d06d2665-106c-4478-8621-a196d0267ed5-combined-ca-bundle\") pod \"keystone-bootstrap-5z7h7\" (UID: \"d06d2665-106c-4478-8621-a196d0267ed5\") " pod="openstack/keystone-bootstrap-5z7h7" Nov 21 20:26:41 crc kubenswrapper[4727]: I1121 20:26:41.210563 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d06d2665-106c-4478-8621-a196d0267ed5-scripts\") pod \"keystone-bootstrap-5z7h7\" (UID: \"d06d2665-106c-4478-8621-a196d0267ed5\") " pod="openstack/keystone-bootstrap-5z7h7" Nov 21 20:26:41 crc kubenswrapper[4727]: I1121 20:26:41.210610 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d06d2665-106c-4478-8621-a196d0267ed5-fernet-keys\") pod \"keystone-bootstrap-5z7h7\" (UID: \"d06d2665-106c-4478-8621-a196d0267ed5\") " pod="openstack/keystone-bootstrap-5z7h7" Nov 21 20:26:41 crc kubenswrapper[4727]: I1121 20:26:41.210730 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdmtl\" (UniqueName: \"kubernetes.io/projected/d06d2665-106c-4478-8621-a196d0267ed5-kube-api-access-bdmtl\") pod \"keystone-bootstrap-5z7h7\" (UID: \"d06d2665-106c-4478-8621-a196d0267ed5\") " pod="openstack/keystone-bootstrap-5z7h7" Nov 21 20:26:41 crc kubenswrapper[4727]: I1121 20:26:41.210760 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d06d2665-106c-4478-8621-a196d0267ed5-config-data\") pod \"keystone-bootstrap-5z7h7\" (UID: \"d06d2665-106c-4478-8621-a196d0267ed5\") " pod="openstack/keystone-bootstrap-5z7h7" Nov 21 20:26:41 crc kubenswrapper[4727]: I1121 20:26:41.218502 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d06d2665-106c-4478-8621-a196d0267ed5-combined-ca-bundle\") pod 
\"keystone-bootstrap-5z7h7\" (UID: \"d06d2665-106c-4478-8621-a196d0267ed5\") " pod="openstack/keystone-bootstrap-5z7h7" Nov 21 20:26:41 crc kubenswrapper[4727]: I1121 20:26:41.224054 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d06d2665-106c-4478-8621-a196d0267ed5-credential-keys\") pod \"keystone-bootstrap-5z7h7\" (UID: \"d06d2665-106c-4478-8621-a196d0267ed5\") " pod="openstack/keystone-bootstrap-5z7h7" Nov 21 20:26:41 crc kubenswrapper[4727]: I1121 20:26:41.229665 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d06d2665-106c-4478-8621-a196d0267ed5-scripts\") pod \"keystone-bootstrap-5z7h7\" (UID: \"d06d2665-106c-4478-8621-a196d0267ed5\") " pod="openstack/keystone-bootstrap-5z7h7" Nov 21 20:26:41 crc kubenswrapper[4727]: I1121 20:26:41.230554 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d06d2665-106c-4478-8621-a196d0267ed5-config-data\") pod \"keystone-bootstrap-5z7h7\" (UID: \"d06d2665-106c-4478-8621-a196d0267ed5\") " pod="openstack/keystone-bootstrap-5z7h7" Nov 21 20:26:41 crc kubenswrapper[4727]: I1121 20:26:41.230574 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d06d2665-106c-4478-8621-a196d0267ed5-fernet-keys\") pod \"keystone-bootstrap-5z7h7\" (UID: \"d06d2665-106c-4478-8621-a196d0267ed5\") " pod="openstack/keystone-bootstrap-5z7h7" Nov 21 20:26:41 crc kubenswrapper[4727]: I1121 20:26:41.231993 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdmtl\" (UniqueName: \"kubernetes.io/projected/d06d2665-106c-4478-8621-a196d0267ed5-kube-api-access-bdmtl\") pod \"keystone-bootstrap-5z7h7\" (UID: \"d06d2665-106c-4478-8621-a196d0267ed5\") " pod="openstack/keystone-bootstrap-5z7h7" Nov 21 20:26:41 crc 
kubenswrapper[4727]: I1121 20:26:41.290874 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-5z7h7" Nov 21 20:26:41 crc kubenswrapper[4727]: I1121 20:26:41.327655 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 21 20:26:41 crc kubenswrapper[4727]: I1121 20:26:41.525558 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0429f9d4-473a-40f1-8b52-ff45821ccdd6" path="/var/lib/kubelet/pods/0429f9d4-473a-40f1-8b52-ff45821ccdd6/volumes" Nov 21 20:26:41 crc kubenswrapper[4727]: I1121 20:26:41.526382 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f246d4ba-7300-45cd-8206-e9e948fdcf52" path="/var/lib/kubelet/pods/f246d4ba-7300-45cd-8206-e9e948fdcf52/volumes" Nov 21 20:26:43 crc kubenswrapper[4727]: I1121 20:26:43.136089 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56df8fb6b7-xl6pb" Nov 21 20:26:43 crc kubenswrapper[4727]: I1121 20:26:43.297262 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-kg8vm"] Nov 21 20:26:43 crc kubenswrapper[4727]: I1121 20:26:43.297498 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-kg8vm" podUID="03533ce4-f69e-4a18-8b64-754b2ed7f789" containerName="dnsmasq-dns" containerID="cri-o://fae0c0ac521e31710d16630121b77ae9ac4f430333fabdb8ac71af8ca90d13f3" gracePeriod=10 Nov 21 20:26:43 crc kubenswrapper[4727]: I1121 20:26:43.339731 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 20:26:43 crc kubenswrapper[4727]: I1121 20:26:43.339789 4727 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 20:26:43 crc kubenswrapper[4727]: I1121 20:26:43.339843 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" Nov 21 20:26:43 crc kubenswrapper[4727]: I1121 20:26:43.340775 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"12b90de2a7321048685d69f0637a8522048d88e44715706aab3817c22993e4d5"} pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 20:26:43 crc kubenswrapper[4727]: I1121 20:26:43.340838 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" containerID="cri-o://12b90de2a7321048685d69f0637a8522048d88e44715706aab3817c22993e4d5" gracePeriod=600 Nov 21 20:26:43 crc kubenswrapper[4727]: I1121 20:26:43.903432 4727 generic.go:334] "Generic (PLEG): container finished" podID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerID="12b90de2a7321048685d69f0637a8522048d88e44715706aab3817c22993e4d5" exitCode=0 Nov 21 20:26:43 crc kubenswrapper[4727]: I1121 20:26:43.903524 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" event={"ID":"b58aef8f-f223-47d8-a2e6-4a80aeeeec42","Type":"ContainerDied","Data":"12b90de2a7321048685d69f0637a8522048d88e44715706aab3817c22993e4d5"} Nov 21 20:26:43 crc kubenswrapper[4727]: I1121 20:26:43.903881 4727 
scope.go:117] "RemoveContainer" containerID="99a25828b83906fc3ad93f0b2554a2773e360925428b620333e0ead4ac93025d" Nov 21 20:26:43 crc kubenswrapper[4727]: I1121 20:26:43.906396 4727 generic.go:334] "Generic (PLEG): container finished" podID="03533ce4-f69e-4a18-8b64-754b2ed7f789" containerID="fae0c0ac521e31710d16630121b77ae9ac4f430333fabdb8ac71af8ca90d13f3" exitCode=0 Nov 21 20:26:43 crc kubenswrapper[4727]: I1121 20:26:43.906442 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-kg8vm" event={"ID":"03533ce4-f69e-4a18-8b64-754b2ed7f789","Type":"ContainerDied","Data":"fae0c0ac521e31710d16630121b77ae9ac4f430333fabdb8ac71af8ca90d13f3"} Nov 21 20:26:45 crc kubenswrapper[4727]: I1121 20:26:45.712733 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 21 20:26:45 crc kubenswrapper[4727]: I1121 20:26:45.875989 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4ff4259-d3c2-43db-9ff7-e420bf11de4e-internal-tls-certs\") pod \"c4ff4259-d3c2-43db-9ff7-e420bf11de4e\" (UID: \"c4ff4259-d3c2-43db-9ff7-e420bf11de4e\") " Nov 21 20:26:45 crc kubenswrapper[4727]: I1121 20:26:45.876037 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4ff4259-d3c2-43db-9ff7-e420bf11de4e-scripts\") pod \"c4ff4259-d3c2-43db-9ff7-e420bf11de4e\" (UID: \"c4ff4259-d3c2-43db-9ff7-e420bf11de4e\") " Nov 21 20:26:45 crc kubenswrapper[4727]: I1121 20:26:45.876103 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4ff4259-d3c2-43db-9ff7-e420bf11de4e-combined-ca-bundle\") pod \"c4ff4259-d3c2-43db-9ff7-e420bf11de4e\" (UID: \"c4ff4259-d3c2-43db-9ff7-e420bf11de4e\") " Nov 21 20:26:45 crc kubenswrapper[4727]: I1121 20:26:45.876137 4727 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c4ff4259-d3c2-43db-9ff7-e420bf11de4e-httpd-run\") pod \"c4ff4259-d3c2-43db-9ff7-e420bf11de4e\" (UID: \"c4ff4259-d3c2-43db-9ff7-e420bf11de4e\") " Nov 21 20:26:45 crc kubenswrapper[4727]: I1121 20:26:45.876310 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4ff4259-d3c2-43db-9ff7-e420bf11de4e-config-data\") pod \"c4ff4259-d3c2-43db-9ff7-e420bf11de4e\" (UID: \"c4ff4259-d3c2-43db-9ff7-e420bf11de4e\") " Nov 21 20:26:45 crc kubenswrapper[4727]: I1121 20:26:45.876334 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xn4l\" (UniqueName: \"kubernetes.io/projected/c4ff4259-d3c2-43db-9ff7-e420bf11de4e-kube-api-access-8xn4l\") pod \"c4ff4259-d3c2-43db-9ff7-e420bf11de4e\" (UID: \"c4ff4259-d3c2-43db-9ff7-e420bf11de4e\") " Nov 21 20:26:45 crc kubenswrapper[4727]: I1121 20:26:45.876357 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"c4ff4259-d3c2-43db-9ff7-e420bf11de4e\" (UID: \"c4ff4259-d3c2-43db-9ff7-e420bf11de4e\") " Nov 21 20:26:45 crc kubenswrapper[4727]: I1121 20:26:45.876482 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4ff4259-d3c2-43db-9ff7-e420bf11de4e-logs\") pod \"c4ff4259-d3c2-43db-9ff7-e420bf11de4e\" (UID: \"c4ff4259-d3c2-43db-9ff7-e420bf11de4e\") " Nov 21 20:26:45 crc kubenswrapper[4727]: I1121 20:26:45.877222 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4ff4259-d3c2-43db-9ff7-e420bf11de4e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c4ff4259-d3c2-43db-9ff7-e420bf11de4e" (UID: "c4ff4259-d3c2-43db-9ff7-e420bf11de4e"). 
InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:26:45 crc kubenswrapper[4727]: I1121 20:26:45.877508 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4ff4259-d3c2-43db-9ff7-e420bf11de4e-logs" (OuterVolumeSpecName: "logs") pod "c4ff4259-d3c2-43db-9ff7-e420bf11de4e" (UID: "c4ff4259-d3c2-43db-9ff7-e420bf11de4e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:26:45 crc kubenswrapper[4727]: I1121 20:26:45.886251 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "c4ff4259-d3c2-43db-9ff7-e420bf11de4e" (UID: "c4ff4259-d3c2-43db-9ff7-e420bf11de4e"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 21 20:26:45 crc kubenswrapper[4727]: I1121 20:26:45.886983 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4ff4259-d3c2-43db-9ff7-e420bf11de4e-scripts" (OuterVolumeSpecName: "scripts") pod "c4ff4259-d3c2-43db-9ff7-e420bf11de4e" (UID: "c4ff4259-d3c2-43db-9ff7-e420bf11de4e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:26:45 crc kubenswrapper[4727]: I1121 20:26:45.896018 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4ff4259-d3c2-43db-9ff7-e420bf11de4e-kube-api-access-8xn4l" (OuterVolumeSpecName: "kube-api-access-8xn4l") pod "c4ff4259-d3c2-43db-9ff7-e420bf11de4e" (UID: "c4ff4259-d3c2-43db-9ff7-e420bf11de4e"). InnerVolumeSpecName "kube-api-access-8xn4l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:26:45 crc kubenswrapper[4727]: I1121 20:26:45.921862 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4ff4259-d3c2-43db-9ff7-e420bf11de4e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c4ff4259-d3c2-43db-9ff7-e420bf11de4e" (UID: "c4ff4259-d3c2-43db-9ff7-e420bf11de4e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:26:45 crc kubenswrapper[4727]: I1121 20:26:45.933595 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4ff4259-d3c2-43db-9ff7-e420bf11de4e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c4ff4259-d3c2-43db-9ff7-e420bf11de4e" (UID: "c4ff4259-d3c2-43db-9ff7-e420bf11de4e"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:26:45 crc kubenswrapper[4727]: I1121 20:26:45.951719 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f7909040-500b-4a0d-878f-9a4c2d8b6a9b","Type":"ContainerStarted","Data":"f51de53b553b7d36d2741250e5bf65e2f891c498849e7fe820f62447e36bc90b"} Nov 21 20:26:45 crc kubenswrapper[4727]: I1121 20:26:45.954400 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c4ff4259-d3c2-43db-9ff7-e420bf11de4e","Type":"ContainerDied","Data":"9dbd99875557b675097013af497dc321c9df4800b85d21fbfedd8dc9ea11a980"} Nov 21 20:26:45 crc kubenswrapper[4727]: I1121 20:26:45.954485 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 21 20:26:45 crc kubenswrapper[4727]: I1121 20:26:45.966153 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4ff4259-d3c2-43db-9ff7-e420bf11de4e-config-data" (OuterVolumeSpecName: "config-data") pod "c4ff4259-d3c2-43db-9ff7-e420bf11de4e" (UID: "c4ff4259-d3c2-43db-9ff7-e420bf11de4e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:26:45 crc kubenswrapper[4727]: I1121 20:26:45.979096 4727 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4ff4259-d3c2-43db-9ff7-e420bf11de4e-logs\") on node \"crc\" DevicePath \"\"" Nov 21 20:26:45 crc kubenswrapper[4727]: I1121 20:26:45.979136 4727 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4ff4259-d3c2-43db-9ff7-e420bf11de4e-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 20:26:45 crc kubenswrapper[4727]: I1121 20:26:45.979147 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4ff4259-d3c2-43db-9ff7-e420bf11de4e-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 20:26:45 crc kubenswrapper[4727]: I1121 20:26:45.979155 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4ff4259-d3c2-43db-9ff7-e420bf11de4e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:26:45 crc kubenswrapper[4727]: I1121 20:26:45.979164 4727 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c4ff4259-d3c2-43db-9ff7-e420bf11de4e-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 21 20:26:45 crc kubenswrapper[4727]: I1121 20:26:45.979174 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c4ff4259-d3c2-43db-9ff7-e420bf11de4e-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 20:26:45 crc kubenswrapper[4727]: I1121 20:26:45.979182 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xn4l\" (UniqueName: \"kubernetes.io/projected/c4ff4259-d3c2-43db-9ff7-e420bf11de4e-kube-api-access-8xn4l\") on node \"crc\" DevicePath \"\"" Nov 21 20:26:45 crc kubenswrapper[4727]: I1121 20:26:45.979215 4727 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Nov 21 20:26:46 crc kubenswrapper[4727]: I1121 20:26:46.008139 4727 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Nov 21 20:26:46 crc kubenswrapper[4727]: I1121 20:26:46.081292 4727 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Nov 21 20:26:46 crc kubenswrapper[4727]: I1121 20:26:46.292863 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 21 20:26:46 crc kubenswrapper[4727]: I1121 20:26:46.308979 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 21 20:26:46 crc kubenswrapper[4727]: I1121 20:26:46.325633 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 21 20:26:46 crc kubenswrapper[4727]: E1121 20:26:46.326126 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4ff4259-d3c2-43db-9ff7-e420bf11de4e" containerName="glance-httpd" Nov 21 20:26:46 crc kubenswrapper[4727]: I1121 20:26:46.326139 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4ff4259-d3c2-43db-9ff7-e420bf11de4e" containerName="glance-httpd" 
Nov 21 20:26:46 crc kubenswrapper[4727]: E1121 20:26:46.326152 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4ff4259-d3c2-43db-9ff7-e420bf11de4e" containerName="glance-log" Nov 21 20:26:46 crc kubenswrapper[4727]: I1121 20:26:46.326158 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4ff4259-d3c2-43db-9ff7-e420bf11de4e" containerName="glance-log" Nov 21 20:26:46 crc kubenswrapper[4727]: I1121 20:26:46.326420 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4ff4259-d3c2-43db-9ff7-e420bf11de4e" containerName="glance-log" Nov 21 20:26:46 crc kubenswrapper[4727]: I1121 20:26:46.326437 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4ff4259-d3c2-43db-9ff7-e420bf11de4e" containerName="glance-httpd" Nov 21 20:26:46 crc kubenswrapper[4727]: I1121 20:26:46.327992 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 21 20:26:46 crc kubenswrapper[4727]: I1121 20:26:46.332599 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 21 20:26:46 crc kubenswrapper[4727]: I1121 20:26:46.332601 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 21 20:26:46 crc kubenswrapper[4727]: I1121 20:26:46.346762 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 21 20:26:46 crc kubenswrapper[4727]: I1121 20:26:46.494658 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/407cfc42-937a-4e61-b257-042675020db0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"407cfc42-937a-4e61-b257-042675020db0\") " pod="openstack/glance-default-internal-api-0" Nov 21 20:26:46 crc kubenswrapper[4727]: I1121 20:26:46.494752 4727 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"407cfc42-937a-4e61-b257-042675020db0\") " pod="openstack/glance-default-internal-api-0" Nov 21 20:26:46 crc kubenswrapper[4727]: I1121 20:26:46.494795 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/407cfc42-937a-4e61-b257-042675020db0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"407cfc42-937a-4e61-b257-042675020db0\") " pod="openstack/glance-default-internal-api-0" Nov 21 20:26:46 crc kubenswrapper[4727]: I1121 20:26:46.494849 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/407cfc42-937a-4e61-b257-042675020db0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"407cfc42-937a-4e61-b257-042675020db0\") " pod="openstack/glance-default-internal-api-0" Nov 21 20:26:46 crc kubenswrapper[4727]: I1121 20:26:46.494984 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bh5gt\" (UniqueName: \"kubernetes.io/projected/407cfc42-937a-4e61-b257-042675020db0-kube-api-access-bh5gt\") pod \"glance-default-internal-api-0\" (UID: \"407cfc42-937a-4e61-b257-042675020db0\") " pod="openstack/glance-default-internal-api-0" Nov 21 20:26:46 crc kubenswrapper[4727]: I1121 20:26:46.495045 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/407cfc42-937a-4e61-b257-042675020db0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"407cfc42-937a-4e61-b257-042675020db0\") " pod="openstack/glance-default-internal-api-0" Nov 21 20:26:46 crc kubenswrapper[4727]: I1121 
20:26:46.495101 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/407cfc42-937a-4e61-b257-042675020db0-logs\") pod \"glance-default-internal-api-0\" (UID: \"407cfc42-937a-4e61-b257-042675020db0\") " pod="openstack/glance-default-internal-api-0" Nov 21 20:26:46 crc kubenswrapper[4727]: I1121 20:26:46.495149 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/407cfc42-937a-4e61-b257-042675020db0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"407cfc42-937a-4e61-b257-042675020db0\") " pod="openstack/glance-default-internal-api-0" Nov 21 20:26:46 crc kubenswrapper[4727]: I1121 20:26:46.596607 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/407cfc42-937a-4e61-b257-042675020db0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"407cfc42-937a-4e61-b257-042675020db0\") " pod="openstack/glance-default-internal-api-0" Nov 21 20:26:46 crc kubenswrapper[4727]: I1121 20:26:46.596695 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/407cfc42-937a-4e61-b257-042675020db0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"407cfc42-937a-4e61-b257-042675020db0\") " pod="openstack/glance-default-internal-api-0" Nov 21 20:26:46 crc kubenswrapper[4727]: I1121 20:26:46.596856 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bh5gt\" (UniqueName: \"kubernetes.io/projected/407cfc42-937a-4e61-b257-042675020db0-kube-api-access-bh5gt\") pod \"glance-default-internal-api-0\" (UID: \"407cfc42-937a-4e61-b257-042675020db0\") " pod="openstack/glance-default-internal-api-0" Nov 21 20:26:46 crc kubenswrapper[4727]: I1121 20:26:46.596877 4727 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/407cfc42-937a-4e61-b257-042675020db0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"407cfc42-937a-4e61-b257-042675020db0\") " pod="openstack/glance-default-internal-api-0" Nov 21 20:26:46 crc kubenswrapper[4727]: I1121 20:26:46.596938 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/407cfc42-937a-4e61-b257-042675020db0-logs\") pod \"glance-default-internal-api-0\" (UID: \"407cfc42-937a-4e61-b257-042675020db0\") " pod="openstack/glance-default-internal-api-0" Nov 21 20:26:46 crc kubenswrapper[4727]: I1121 20:26:46.596998 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/407cfc42-937a-4e61-b257-042675020db0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"407cfc42-937a-4e61-b257-042675020db0\") " pod="openstack/glance-default-internal-api-0" Nov 21 20:26:46 crc kubenswrapper[4727]: I1121 20:26:46.597028 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/407cfc42-937a-4e61-b257-042675020db0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"407cfc42-937a-4e61-b257-042675020db0\") " pod="openstack/glance-default-internal-api-0" Nov 21 20:26:46 crc kubenswrapper[4727]: I1121 20:26:46.597127 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"407cfc42-937a-4e61-b257-042675020db0\") " pod="openstack/glance-default-internal-api-0" Nov 21 20:26:46 crc kubenswrapper[4727]: I1121 20:26:46.598061 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"407cfc42-937a-4e61-b257-042675020db0\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Nov 21 20:26:46 crc kubenswrapper[4727]: I1121 20:26:46.598168 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/407cfc42-937a-4e61-b257-042675020db0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"407cfc42-937a-4e61-b257-042675020db0\") " pod="openstack/glance-default-internal-api-0" Nov 21 20:26:46 crc kubenswrapper[4727]: I1121 20:26:46.599929 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/407cfc42-937a-4e61-b257-042675020db0-logs\") pod \"glance-default-internal-api-0\" (UID: \"407cfc42-937a-4e61-b257-042675020db0\") " pod="openstack/glance-default-internal-api-0" Nov 21 20:26:46 crc kubenswrapper[4727]: I1121 20:26:46.603806 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/407cfc42-937a-4e61-b257-042675020db0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"407cfc42-937a-4e61-b257-042675020db0\") " pod="openstack/glance-default-internal-api-0" Nov 21 20:26:46 crc kubenswrapper[4727]: I1121 20:26:46.604991 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/407cfc42-937a-4e61-b257-042675020db0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"407cfc42-937a-4e61-b257-042675020db0\") " pod="openstack/glance-default-internal-api-0" Nov 21 20:26:46 crc kubenswrapper[4727]: I1121 20:26:46.606582 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/407cfc42-937a-4e61-b257-042675020db0-config-data\") pod 
\"glance-default-internal-api-0\" (UID: \"407cfc42-937a-4e61-b257-042675020db0\") " pod="openstack/glance-default-internal-api-0" Nov 21 20:26:46 crc kubenswrapper[4727]: I1121 20:26:46.613087 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/407cfc42-937a-4e61-b257-042675020db0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"407cfc42-937a-4e61-b257-042675020db0\") " pod="openstack/glance-default-internal-api-0" Nov 21 20:26:46 crc kubenswrapper[4727]: I1121 20:26:46.627442 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bh5gt\" (UniqueName: \"kubernetes.io/projected/407cfc42-937a-4e61-b257-042675020db0-kube-api-access-bh5gt\") pod \"glance-default-internal-api-0\" (UID: \"407cfc42-937a-4e61-b257-042675020db0\") " pod="openstack/glance-default-internal-api-0" Nov 21 20:26:46 crc kubenswrapper[4727]: I1121 20:26:46.672871 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"407cfc42-937a-4e61-b257-042675020db0\") " pod="openstack/glance-default-internal-api-0" Nov 21 20:26:46 crc kubenswrapper[4727]: I1121 20:26:46.965043 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 21 20:26:47 crc kubenswrapper[4727]: I1121 20:26:47.512891 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4ff4259-d3c2-43db-9ff7-e420bf11de4e" path="/var/lib/kubelet/pods/c4ff4259-d3c2-43db-9ff7-e420bf11de4e/volumes" Nov 21 20:26:48 crc kubenswrapper[4727]: I1121 20:26:48.360924 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-kg8vm" podUID="03533ce4-f69e-4a18-8b64-754b2ed7f789" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.148:5353: connect: connection refused" Nov 21 20:26:53 crc kubenswrapper[4727]: I1121 20:26:53.031473 4727 generic.go:334] "Generic (PLEG): container finished" podID="1b0e3c92-f23c-4257-856c-3bb4496913e2" containerID="40579ab4c3190291bb8115f01a56949aa96618d929c80bc83463f5d1fa4c4628" exitCode=0 Nov 21 20:26:53 crc kubenswrapper[4727]: I1121 20:26:53.031546 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-2t297" event={"ID":"1b0e3c92-f23c-4257-856c-3bb4496913e2","Type":"ContainerDied","Data":"40579ab4c3190291bb8115f01a56949aa96618d929c80bc83463f5d1fa4c4628"} Nov 21 20:26:56 crc kubenswrapper[4727]: E1121 20:26:56.084618 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Nov 21 20:26:56 crc kubenswrapper[4727]: E1121 20:26:56.085359 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n596h5c5h69h64bh5dch55chcbhcdh9fh64dhc9h5b5h67h545h567h55h64bh64h5f6hd5h579h5fhf9h7chbfh5cdh5c6h696h596h665h67h54cq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vncrb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(ab55a565-2af1-48bb-a31e-d0a8c738912c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 21 20:26:58 crc kubenswrapper[4727]: I1121 20:26:58.359995 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-kg8vm" podUID="03533ce4-f69e-4a18-8b64-754b2ed7f789" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.148:5353: i/o timeout" Nov 21 20:27:03 crc kubenswrapper[4727]: I1121 20:27:03.361425 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-kg8vm" podUID="03533ce4-f69e-4a18-8b64-754b2ed7f789" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.148:5353: i/o timeout" Nov 21 20:27:03 crc kubenswrapper[4727]: I1121 20:27:03.362314 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-kg8vm" Nov 21 20:27:05 crc kubenswrapper[4727]: E1121 20:27:05.370844 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Nov 21 20:27:05 crc kubenswrapper[4727]: E1121 20:27:05.371328 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rm6kr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-nvhzz_openstack(3219ae94-1940-49e8-851c-102a14d22e75): ErrImagePull: rpc error: code = Canceled desc = copying config: 
context canceled" logger="UnhandledError" Nov 21 20:27:05 crc kubenswrapper[4727]: E1121 20:27:05.372519 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-nvhzz" podUID="3219ae94-1940-49e8-851c-102a14d22e75" Nov 21 20:27:05 crc kubenswrapper[4727]: E1121 20:27:05.603042 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified" Nov 21 20:27:05 crc kubenswrapper[4727]: E1121 20:27:05.603444 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ku
be-api-access-254k6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-kzpzx_openstack(ba5291ec-9ad1-4ce3-8794-6b6ca611b277): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 21 20:27:05 crc kubenswrapper[4727]: E1121 20:27:05.605087 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-kzpzx" podUID="ba5291ec-9ad1-4ce3-8794-6b6ca611b277" Nov 21 20:27:05 crc kubenswrapper[4727]: I1121 20:27:05.631215 4727 scope.go:117] "RemoveContainer" containerID="3af5249f76e41a9396339f9461acde2f7d3cedbc126d949b8c62e3e1594f2da7" Nov 21 20:27:05 crc kubenswrapper[4727]: I1121 20:27:05.730967 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-kg8vm" Nov 21 20:27:05 crc kubenswrapper[4727]: I1121 20:27:05.739042 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-2t297" Nov 21 20:27:05 crc kubenswrapper[4727]: I1121 20:27:05.748978 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9lt8\" (UniqueName: \"kubernetes.io/projected/03533ce4-f69e-4a18-8b64-754b2ed7f789-kube-api-access-c9lt8\") pod \"03533ce4-f69e-4a18-8b64-754b2ed7f789\" (UID: \"03533ce4-f69e-4a18-8b64-754b2ed7f789\") " Nov 21 20:27:05 crc kubenswrapper[4727]: I1121 20:27:05.749030 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03533ce4-f69e-4a18-8b64-754b2ed7f789-config\") pod \"03533ce4-f69e-4a18-8b64-754b2ed7f789\" (UID: \"03533ce4-f69e-4a18-8b64-754b2ed7f789\") " Nov 21 20:27:05 crc kubenswrapper[4727]: I1121 20:27:05.749262 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/03533ce4-f69e-4a18-8b64-754b2ed7f789-ovsdbserver-nb\") pod \"03533ce4-f69e-4a18-8b64-754b2ed7f789\" (UID: \"03533ce4-f69e-4a18-8b64-754b2ed7f789\") " Nov 21 20:27:05 crc kubenswrapper[4727]: I1121 20:27:05.749365 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03533ce4-f69e-4a18-8b64-754b2ed7f789-dns-svc\") pod \"03533ce4-f69e-4a18-8b64-754b2ed7f789\" (UID: \"03533ce4-f69e-4a18-8b64-754b2ed7f789\") " Nov 21 20:27:05 crc kubenswrapper[4727]: I1121 20:27:05.749410 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/03533ce4-f69e-4a18-8b64-754b2ed7f789-ovsdbserver-sb\") pod \"03533ce4-f69e-4a18-8b64-754b2ed7f789\" (UID: \"03533ce4-f69e-4a18-8b64-754b2ed7f789\") " Nov 21 20:27:05 crc kubenswrapper[4727]: I1121 20:27:05.769607 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/03533ce4-f69e-4a18-8b64-754b2ed7f789-kube-api-access-c9lt8" (OuterVolumeSpecName: "kube-api-access-c9lt8") pod "03533ce4-f69e-4a18-8b64-754b2ed7f789" (UID: "03533ce4-f69e-4a18-8b64-754b2ed7f789"). InnerVolumeSpecName "kube-api-access-c9lt8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:27:05 crc kubenswrapper[4727]: I1121 20:27:05.822140 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03533ce4-f69e-4a18-8b64-754b2ed7f789-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "03533ce4-f69e-4a18-8b64-754b2ed7f789" (UID: "03533ce4-f69e-4a18-8b64-754b2ed7f789"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:27:05 crc kubenswrapper[4727]: I1121 20:27:05.849505 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03533ce4-f69e-4a18-8b64-754b2ed7f789-config" (OuterVolumeSpecName: "config") pod "03533ce4-f69e-4a18-8b64-754b2ed7f789" (UID: "03533ce4-f69e-4a18-8b64-754b2ed7f789"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:27:05 crc kubenswrapper[4727]: I1121 20:27:05.852599 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1b0e3c92-f23c-4257-856c-3bb4496913e2-config\") pod \"1b0e3c92-f23c-4257-856c-3bb4496913e2\" (UID: \"1b0e3c92-f23c-4257-856c-3bb4496913e2\") " Nov 21 20:27:05 crc kubenswrapper[4727]: I1121 20:27:05.858770 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b0e3c92-f23c-4257-856c-3bb4496913e2-combined-ca-bundle\") pod \"1b0e3c92-f23c-4257-856c-3bb4496913e2\" (UID: \"1b0e3c92-f23c-4257-856c-3bb4496913e2\") " Nov 21 20:27:05 crc kubenswrapper[4727]: I1121 20:27:05.859094 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-229rf\" (UniqueName: \"kubernetes.io/projected/1b0e3c92-f23c-4257-856c-3bb4496913e2-kube-api-access-229rf\") pod \"1b0e3c92-f23c-4257-856c-3bb4496913e2\" (UID: \"1b0e3c92-f23c-4257-856c-3bb4496913e2\") " Nov 21 20:27:05 crc kubenswrapper[4727]: I1121 20:27:05.865946 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b0e3c92-f23c-4257-856c-3bb4496913e2-kube-api-access-229rf" (OuterVolumeSpecName: "kube-api-access-229rf") pod "1b0e3c92-f23c-4257-856c-3bb4496913e2" (UID: "1b0e3c92-f23c-4257-856c-3bb4496913e2"). InnerVolumeSpecName "kube-api-access-229rf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:27:05 crc kubenswrapper[4727]: I1121 20:27:05.866914 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/03533ce4-f69e-4a18-8b64-754b2ed7f789-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 21 20:27:05 crc kubenswrapper[4727]: I1121 20:27:05.866938 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-229rf\" (UniqueName: \"kubernetes.io/projected/1b0e3c92-f23c-4257-856c-3bb4496913e2-kube-api-access-229rf\") on node \"crc\" DevicePath \"\"" Nov 21 20:27:05 crc kubenswrapper[4727]: I1121 20:27:05.867011 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9lt8\" (UniqueName: \"kubernetes.io/projected/03533ce4-f69e-4a18-8b64-754b2ed7f789-kube-api-access-c9lt8\") on node \"crc\" DevicePath \"\"" Nov 21 20:27:05 crc kubenswrapper[4727]: I1121 20:27:05.867027 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03533ce4-f69e-4a18-8b64-754b2ed7f789-config\") on node \"crc\" DevicePath \"\"" Nov 21 20:27:05 crc kubenswrapper[4727]: I1121 20:27:05.872355 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03533ce4-f69e-4a18-8b64-754b2ed7f789-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "03533ce4-f69e-4a18-8b64-754b2ed7f789" (UID: "03533ce4-f69e-4a18-8b64-754b2ed7f789"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:27:05 crc kubenswrapper[4727]: I1121 20:27:05.887827 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b0e3c92-f23c-4257-856c-3bb4496913e2-config" (OuterVolumeSpecName: "config") pod "1b0e3c92-f23c-4257-856c-3bb4496913e2" (UID: "1b0e3c92-f23c-4257-856c-3bb4496913e2"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:27:05 crc kubenswrapper[4727]: I1121 20:27:05.893816 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03533ce4-f69e-4a18-8b64-754b2ed7f789-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "03533ce4-f69e-4a18-8b64-754b2ed7f789" (UID: "03533ce4-f69e-4a18-8b64-754b2ed7f789"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:27:05 crc kubenswrapper[4727]: I1121 20:27:05.902715 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b0e3c92-f23c-4257-856c-3bb4496913e2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1b0e3c92-f23c-4257-856c-3bb4496913e2" (UID: "1b0e3c92-f23c-4257-856c-3bb4496913e2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:27:05 crc kubenswrapper[4727]: I1121 20:27:05.968539 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b0e3c92-f23c-4257-856c-3bb4496913e2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:27:05 crc kubenswrapper[4727]: I1121 20:27:05.968824 4727 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03533ce4-f69e-4a18-8b64-754b2ed7f789-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 20:27:05 crc kubenswrapper[4727]: I1121 20:27:05.968836 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/03533ce4-f69e-4a18-8b64-754b2ed7f789-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 21 20:27:05 crc kubenswrapper[4727]: I1121 20:27:05.968845 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/1b0e3c92-f23c-4257-856c-3bb4496913e2-config\") on node \"crc\" DevicePath \"\"" Nov 21 
20:27:06 crc kubenswrapper[4727]: I1121 20:27:06.179850 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-2t297" event={"ID":"1b0e3c92-f23c-4257-856c-3bb4496913e2","Type":"ContainerDied","Data":"c9d9343e58b9312a6c956a58c867346ccf0ef08beac27e1e07db4154a111f572"} Nov 21 20:27:06 crc kubenswrapper[4727]: I1121 20:27:06.179909 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9d9343e58b9312a6c956a58c867346ccf0ef08beac27e1e07db4154a111f572" Nov 21 20:27:06 crc kubenswrapper[4727]: I1121 20:27:06.179867 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-2t297" Nov 21 20:27:06 crc kubenswrapper[4727]: I1121 20:27:06.185046 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-kg8vm" event={"ID":"03533ce4-f69e-4a18-8b64-754b2ed7f789","Type":"ContainerDied","Data":"5690b1528f414fa3630e617530a3081e873080b52d4add019205cf8c5ac1cedd"} Nov 21 20:27:06 crc kubenswrapper[4727]: I1121 20:27:06.185394 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-kg8vm" Nov 21 20:27:06 crc kubenswrapper[4727]: E1121 20:27:06.195677 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified\\\"\"" pod="openstack/heat-db-sync-kzpzx" podUID="ba5291ec-9ad1-4ce3-8794-6b6ca611b277" Nov 21 20:27:06 crc kubenswrapper[4727]: E1121 20:27:06.196005 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-nvhzz" podUID="3219ae94-1940-49e8-851c-102a14d22e75" Nov 21 20:27:06 crc kubenswrapper[4727]: I1121 20:27:06.231777 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-kg8vm"] Nov 21 20:27:06 crc kubenswrapper[4727]: I1121 20:27:06.242620 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-kg8vm"] Nov 21 20:27:07 crc kubenswrapper[4727]: I1121 20:27:07.054574 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-tqzmz"] Nov 21 20:27:07 crc kubenswrapper[4727]: E1121 20:27:07.055315 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b0e3c92-f23c-4257-856c-3bb4496913e2" containerName="neutron-db-sync" Nov 21 20:27:07 crc kubenswrapper[4727]: I1121 20:27:07.055355 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b0e3c92-f23c-4257-856c-3bb4496913e2" containerName="neutron-db-sync" Nov 21 20:27:07 crc kubenswrapper[4727]: E1121 20:27:07.055467 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03533ce4-f69e-4a18-8b64-754b2ed7f789" containerName="dnsmasq-dns" Nov 21 20:27:07 crc kubenswrapper[4727]: I1121 20:27:07.055479 
4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="03533ce4-f69e-4a18-8b64-754b2ed7f789" containerName="dnsmasq-dns" Nov 21 20:27:07 crc kubenswrapper[4727]: E1121 20:27:07.055498 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03533ce4-f69e-4a18-8b64-754b2ed7f789" containerName="init" Nov 21 20:27:07 crc kubenswrapper[4727]: I1121 20:27:07.055510 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="03533ce4-f69e-4a18-8b64-754b2ed7f789" containerName="init" Nov 21 20:27:07 crc kubenswrapper[4727]: I1121 20:27:07.055706 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b0e3c92-f23c-4257-856c-3bb4496913e2" containerName="neutron-db-sync" Nov 21 20:27:07 crc kubenswrapper[4727]: I1121 20:27:07.055729 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="03533ce4-f69e-4a18-8b64-754b2ed7f789" containerName="dnsmasq-dns" Nov 21 20:27:07 crc kubenswrapper[4727]: I1121 20:27:07.056828 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-tqzmz" Nov 21 20:27:07 crc kubenswrapper[4727]: I1121 20:27:07.104219 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4914babf-4a62-47e3-a89e-cb3faff1a26b-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-tqzmz\" (UID: \"4914babf-4a62-47e3-a89e-cb3faff1a26b\") " pod="openstack/dnsmasq-dns-6b7b667979-tqzmz" Nov 21 20:27:07 crc kubenswrapper[4727]: I1121 20:27:07.104272 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4914babf-4a62-47e3-a89e-cb3faff1a26b-dns-svc\") pod \"dnsmasq-dns-6b7b667979-tqzmz\" (UID: \"4914babf-4a62-47e3-a89e-cb3faff1a26b\") " pod="openstack/dnsmasq-dns-6b7b667979-tqzmz" Nov 21 20:27:07 crc kubenswrapper[4727]: I1121 20:27:07.104331 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4914babf-4a62-47e3-a89e-cb3faff1a26b-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-tqzmz\" (UID: \"4914babf-4a62-47e3-a89e-cb3faff1a26b\") " pod="openstack/dnsmasq-dns-6b7b667979-tqzmz" Nov 21 20:27:07 crc kubenswrapper[4727]: I1121 20:27:07.104368 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4914babf-4a62-47e3-a89e-cb3faff1a26b-config\") pod \"dnsmasq-dns-6b7b667979-tqzmz\" (UID: \"4914babf-4a62-47e3-a89e-cb3faff1a26b\") " pod="openstack/dnsmasq-dns-6b7b667979-tqzmz" Nov 21 20:27:07 crc kubenswrapper[4727]: I1121 20:27:07.104394 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4914babf-4a62-47e3-a89e-cb3faff1a26b-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-tqzmz\" 
(UID: \"4914babf-4a62-47e3-a89e-cb3faff1a26b\") " pod="openstack/dnsmasq-dns-6b7b667979-tqzmz" Nov 21 20:27:07 crc kubenswrapper[4727]: I1121 20:27:07.104469 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8zvn\" (UniqueName: \"kubernetes.io/projected/4914babf-4a62-47e3-a89e-cb3faff1a26b-kube-api-access-t8zvn\") pod \"dnsmasq-dns-6b7b667979-tqzmz\" (UID: \"4914babf-4a62-47e3-a89e-cb3faff1a26b\") " pod="openstack/dnsmasq-dns-6b7b667979-tqzmz" Nov 21 20:27:07 crc kubenswrapper[4727]: I1121 20:27:07.118155 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-tqzmz"] Nov 21 20:27:07 crc kubenswrapper[4727]: E1121 20:27:07.131106 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Nov 21 20:27:07 crc kubenswrapper[4727]: E1121 20:27:07.131287 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fbnkl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-kjphx_openstack(f819a7d6-b6ab-409c-aaf2-e5044d9317d5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 21 20:27:07 crc kubenswrapper[4727]: E1121 20:27:07.132592 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-kjphx" podUID="f819a7d6-b6ab-409c-aaf2-e5044d9317d5" Nov 21 20:27:07 crc kubenswrapper[4727]: I1121 20:27:07.206145 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8zvn\" (UniqueName: \"kubernetes.io/projected/4914babf-4a62-47e3-a89e-cb3faff1a26b-kube-api-access-t8zvn\") pod \"dnsmasq-dns-6b7b667979-tqzmz\" (UID: \"4914babf-4a62-47e3-a89e-cb3faff1a26b\") " pod="openstack/dnsmasq-dns-6b7b667979-tqzmz" Nov 21 20:27:07 crc kubenswrapper[4727]: I1121 20:27:07.206321 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4914babf-4a62-47e3-a89e-cb3faff1a26b-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-tqzmz\" (UID: \"4914babf-4a62-47e3-a89e-cb3faff1a26b\") " pod="openstack/dnsmasq-dns-6b7b667979-tqzmz" Nov 21 20:27:07 crc kubenswrapper[4727]: I1121 20:27:07.206357 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4914babf-4a62-47e3-a89e-cb3faff1a26b-dns-svc\") pod \"dnsmasq-dns-6b7b667979-tqzmz\" (UID: \"4914babf-4a62-47e3-a89e-cb3faff1a26b\") " pod="openstack/dnsmasq-dns-6b7b667979-tqzmz" Nov 21 20:27:07 crc kubenswrapper[4727]: I1121 20:27:07.206411 4727 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4914babf-4a62-47e3-a89e-cb3faff1a26b-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-tqzmz\" (UID: \"4914babf-4a62-47e3-a89e-cb3faff1a26b\") " pod="openstack/dnsmasq-dns-6b7b667979-tqzmz" Nov 21 20:27:07 crc kubenswrapper[4727]: I1121 20:27:07.206447 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4914babf-4a62-47e3-a89e-cb3faff1a26b-config\") pod \"dnsmasq-dns-6b7b667979-tqzmz\" (UID: \"4914babf-4a62-47e3-a89e-cb3faff1a26b\") " pod="openstack/dnsmasq-dns-6b7b667979-tqzmz" Nov 21 20:27:07 crc kubenswrapper[4727]: I1121 20:27:07.206475 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4914babf-4a62-47e3-a89e-cb3faff1a26b-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-tqzmz\" (UID: \"4914babf-4a62-47e3-a89e-cb3faff1a26b\") " pod="openstack/dnsmasq-dns-6b7b667979-tqzmz" Nov 21 20:27:07 crc kubenswrapper[4727]: I1121 20:27:07.207849 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4914babf-4a62-47e3-a89e-cb3faff1a26b-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-tqzmz\" (UID: \"4914babf-4a62-47e3-a89e-cb3faff1a26b\") " pod="openstack/dnsmasq-dns-6b7b667979-tqzmz" Nov 21 20:27:07 crc kubenswrapper[4727]: I1121 20:27:07.208658 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4914babf-4a62-47e3-a89e-cb3faff1a26b-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-tqzmz\" (UID: \"4914babf-4a62-47e3-a89e-cb3faff1a26b\") " pod="openstack/dnsmasq-dns-6b7b667979-tqzmz" Nov 21 20:27:07 crc kubenswrapper[4727]: I1121 20:27:07.211205 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4914babf-4a62-47e3-a89e-cb3faff1a26b-dns-svc\") pod \"dnsmasq-dns-6b7b667979-tqzmz\" (UID: \"4914babf-4a62-47e3-a89e-cb3faff1a26b\") " pod="openstack/dnsmasq-dns-6b7b667979-tqzmz" Nov 21 20:27:07 crc kubenswrapper[4727]: I1121 20:27:07.213033 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-8476b67874-f2dtk"] Nov 21 20:27:07 crc kubenswrapper[4727]: I1121 20:27:07.213504 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4914babf-4a62-47e3-a89e-cb3faff1a26b-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-tqzmz\" (UID: \"4914babf-4a62-47e3-a89e-cb3faff1a26b\") " pod="openstack/dnsmasq-dns-6b7b667979-tqzmz" Nov 21 20:27:07 crc kubenswrapper[4727]: I1121 20:27:07.214072 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4914babf-4a62-47e3-a89e-cb3faff1a26b-config\") pod \"dnsmasq-dns-6b7b667979-tqzmz\" (UID: \"4914babf-4a62-47e3-a89e-cb3faff1a26b\") " pod="openstack/dnsmasq-dns-6b7b667979-tqzmz" Nov 21 20:27:07 crc kubenswrapper[4727]: I1121 20:27:07.215007 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8476b67874-f2dtk" Nov 21 20:27:07 crc kubenswrapper[4727]: I1121 20:27:07.220256 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 21 20:27:07 crc kubenswrapper[4727]: I1121 20:27:07.225107 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Nov 21 20:27:07 crc kubenswrapper[4727]: I1121 20:27:07.225287 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 21 20:27:07 crc kubenswrapper[4727]: I1121 20:27:07.225455 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-drn42" Nov 21 20:27:07 crc kubenswrapper[4727]: I1121 20:27:07.235779 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8zvn\" (UniqueName: \"kubernetes.io/projected/4914babf-4a62-47e3-a89e-cb3faff1a26b-kube-api-access-t8zvn\") pod \"dnsmasq-dns-6b7b667979-tqzmz\" (UID: \"4914babf-4a62-47e3-a89e-cb3faff1a26b\") " pod="openstack/dnsmasq-dns-6b7b667979-tqzmz" Nov 21 20:27:07 crc kubenswrapper[4727]: I1121 20:27:07.236440 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8476b67874-f2dtk"] Nov 21 20:27:07 crc kubenswrapper[4727]: I1121 20:27:07.277752 4727 scope.go:117] "RemoveContainer" containerID="b3db77698f15f870651506b220728bb1891ead118900b35a863691dc0b044c3c" Nov 21 20:27:07 crc kubenswrapper[4727]: E1121 20:27:07.278074 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-kjphx" podUID="f819a7d6-b6ab-409c-aaf2-e5044d9317d5" Nov 21 20:27:07 crc kubenswrapper[4727]: I1121 20:27:07.284807 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-tqzmz" Nov 21 20:27:07 crc kubenswrapper[4727]: I1121 20:27:07.315194 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4d6995f-166c-410a-adee-3733a25c28df-combined-ca-bundle\") pod \"neutron-8476b67874-f2dtk\" (UID: \"b4d6995f-166c-410a-adee-3733a25c28df\") " pod="openstack/neutron-8476b67874-f2dtk" Nov 21 20:27:07 crc kubenswrapper[4727]: I1121 20:27:07.315318 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4d6995f-166c-410a-adee-3733a25c28df-ovndb-tls-certs\") pod \"neutron-8476b67874-f2dtk\" (UID: \"b4d6995f-166c-410a-adee-3733a25c28df\") " pod="openstack/neutron-8476b67874-f2dtk" Nov 21 20:27:07 crc kubenswrapper[4727]: I1121 20:27:07.315367 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b4d6995f-166c-410a-adee-3733a25c28df-config\") pod \"neutron-8476b67874-f2dtk\" (UID: \"b4d6995f-166c-410a-adee-3733a25c28df\") " pod="openstack/neutron-8476b67874-f2dtk" Nov 21 20:27:07 crc kubenswrapper[4727]: I1121 20:27:07.315446 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kdk8\" (UniqueName: \"kubernetes.io/projected/b4d6995f-166c-410a-adee-3733a25c28df-kube-api-access-4kdk8\") pod \"neutron-8476b67874-f2dtk\" (UID: \"b4d6995f-166c-410a-adee-3733a25c28df\") " pod="openstack/neutron-8476b67874-f2dtk" Nov 21 20:27:07 crc kubenswrapper[4727]: I1121 20:27:07.315505 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b4d6995f-166c-410a-adee-3733a25c28df-httpd-config\") pod \"neutron-8476b67874-f2dtk\" (UID: 
\"b4d6995f-166c-410a-adee-3733a25c28df\") " pod="openstack/neutron-8476b67874-f2dtk" Nov 21 20:27:07 crc kubenswrapper[4727]: I1121 20:27:07.417355 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4d6995f-166c-410a-adee-3733a25c28df-combined-ca-bundle\") pod \"neutron-8476b67874-f2dtk\" (UID: \"b4d6995f-166c-410a-adee-3733a25c28df\") " pod="openstack/neutron-8476b67874-f2dtk" Nov 21 20:27:07 crc kubenswrapper[4727]: I1121 20:27:07.417796 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4d6995f-166c-410a-adee-3733a25c28df-ovndb-tls-certs\") pod \"neutron-8476b67874-f2dtk\" (UID: \"b4d6995f-166c-410a-adee-3733a25c28df\") " pod="openstack/neutron-8476b67874-f2dtk" Nov 21 20:27:07 crc kubenswrapper[4727]: I1121 20:27:07.417843 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b4d6995f-166c-410a-adee-3733a25c28df-config\") pod \"neutron-8476b67874-f2dtk\" (UID: \"b4d6995f-166c-410a-adee-3733a25c28df\") " pod="openstack/neutron-8476b67874-f2dtk" Nov 21 20:27:07 crc kubenswrapper[4727]: I1121 20:27:07.417900 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kdk8\" (UniqueName: \"kubernetes.io/projected/b4d6995f-166c-410a-adee-3733a25c28df-kube-api-access-4kdk8\") pod \"neutron-8476b67874-f2dtk\" (UID: \"b4d6995f-166c-410a-adee-3733a25c28df\") " pod="openstack/neutron-8476b67874-f2dtk" Nov 21 20:27:07 crc kubenswrapper[4727]: I1121 20:27:07.417925 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b4d6995f-166c-410a-adee-3733a25c28df-httpd-config\") pod \"neutron-8476b67874-f2dtk\" (UID: \"b4d6995f-166c-410a-adee-3733a25c28df\") " pod="openstack/neutron-8476b67874-f2dtk" Nov 21 20:27:07 
crc kubenswrapper[4727]: I1121 20:27:07.422611 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4d6995f-166c-410a-adee-3733a25c28df-combined-ca-bundle\") pod \"neutron-8476b67874-f2dtk\" (UID: \"b4d6995f-166c-410a-adee-3733a25c28df\") " pod="openstack/neutron-8476b67874-f2dtk" Nov 21 20:27:07 crc kubenswrapper[4727]: I1121 20:27:07.423492 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b4d6995f-166c-410a-adee-3733a25c28df-config\") pod \"neutron-8476b67874-f2dtk\" (UID: \"b4d6995f-166c-410a-adee-3733a25c28df\") " pod="openstack/neutron-8476b67874-f2dtk" Nov 21 20:27:07 crc kubenswrapper[4727]: I1121 20:27:07.426883 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b4d6995f-166c-410a-adee-3733a25c28df-httpd-config\") pod \"neutron-8476b67874-f2dtk\" (UID: \"b4d6995f-166c-410a-adee-3733a25c28df\") " pod="openstack/neutron-8476b67874-f2dtk" Nov 21 20:27:07 crc kubenswrapper[4727]: I1121 20:27:07.429777 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4d6995f-166c-410a-adee-3733a25c28df-ovndb-tls-certs\") pod \"neutron-8476b67874-f2dtk\" (UID: \"b4d6995f-166c-410a-adee-3733a25c28df\") " pod="openstack/neutron-8476b67874-f2dtk" Nov 21 20:27:07 crc kubenswrapper[4727]: I1121 20:27:07.444617 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kdk8\" (UniqueName: \"kubernetes.io/projected/b4d6995f-166c-410a-adee-3733a25c28df-kube-api-access-4kdk8\") pod \"neutron-8476b67874-f2dtk\" (UID: \"b4d6995f-166c-410a-adee-3733a25c28df\") " pod="openstack/neutron-8476b67874-f2dtk" Nov 21 20:27:07 crc kubenswrapper[4727]: I1121 20:27:07.517437 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="03533ce4-f69e-4a18-8b64-754b2ed7f789" path="/var/lib/kubelet/pods/03533ce4-f69e-4a18-8b64-754b2ed7f789/volumes" Nov 21 20:27:07 crc kubenswrapper[4727]: I1121 20:27:07.601565 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8476b67874-f2dtk" Nov 21 20:27:07 crc kubenswrapper[4727]: I1121 20:27:07.704094 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-5z7h7"] Nov 21 20:27:07 crc kubenswrapper[4727]: I1121 20:27:07.979674 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 21 20:27:08 crc kubenswrapper[4727]: I1121 20:27:08.018299 4727 scope.go:117] "RemoveContainer" containerID="fae0c0ac521e31710d16630121b77ae9ac4f430333fabdb8ac71af8ca90d13f3" Nov 21 20:27:08 crc kubenswrapper[4727]: I1121 20:27:08.262828 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"407cfc42-937a-4e61-b257-042675020db0","Type":"ContainerStarted","Data":"cc140ac5fa13e63ad5f9c64b07ff0df42ff769c15f28d1ff3a3fd2c5a42c2a7b"} Nov 21 20:27:08 crc kubenswrapper[4727]: I1121 20:27:08.263315 4727 scope.go:117] "RemoveContainer" containerID="a515b1751c90c0c2004c2cd74dc7c6abcc533f0a38fc3a43101c0a0aac29e16f" Nov 21 20:27:08 crc kubenswrapper[4727]: I1121 20:27:08.277128 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5z7h7" event={"ID":"d06d2665-106c-4478-8621-a196d0267ed5","Type":"ContainerStarted","Data":"1301fbaf5aa0356e4e40ca1144bc16ced4eea157f732479ff74d099cfc63a68b"} Nov 21 20:27:08 crc kubenswrapper[4727]: I1121 20:27:08.362834 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-kg8vm" podUID="03533ce4-f69e-4a18-8b64-754b2ed7f789" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.148:5353: i/o timeout" Nov 21 20:27:08 crc kubenswrapper[4727]: I1121 20:27:08.757753 4727 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-tqzmz"] Nov 21 20:27:08 crc kubenswrapper[4727]: I1121 20:27:08.828000 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8476b67874-f2dtk"] Nov 21 20:27:08 crc kubenswrapper[4727]: W1121 20:27:08.874374 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4914babf_4a62_47e3_a89e_cb3faff1a26b.slice/crio-1368bf28777697609762bfa841ed5d786356d207fd71575d751cd7c62b4beac8 WatchSource:0}: Error finding container 1368bf28777697609762bfa841ed5d786356d207fd71575d751cd7c62b4beac8: Status 404 returned error can't find the container with id 1368bf28777697609762bfa841ed5d786356d207fd71575d751cd7c62b4beac8 Nov 21 20:27:08 crc kubenswrapper[4727]: W1121 20:27:08.875539 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4d6995f_166c_410a_adee_3733a25c28df.slice/crio-289f70c671ce4d1159ffaba84f9a15426cf2fb0fe16e2278cf9db832c919720a WatchSource:0}: Error finding container 289f70c671ce4d1159ffaba84f9a15426cf2fb0fe16e2278cf9db832c919720a: Status 404 returned error can't find the container with id 289f70c671ce4d1159ffaba84f9a15426cf2fb0fe16e2278cf9db832c919720a Nov 21 20:27:09 crc kubenswrapper[4727]: I1121 20:27:09.311048 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-tqzmz" event={"ID":"4914babf-4a62-47e3-a89e-cb3faff1a26b","Type":"ContainerStarted","Data":"1368bf28777697609762bfa841ed5d786356d207fd71575d751cd7c62b4beac8"} Nov 21 20:27:09 crc kubenswrapper[4727]: I1121 20:27:09.321097 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-5x7sl" event={"ID":"6f8801f4-f168-4ae7-b364-cd95a72b3a66","Type":"ContainerStarted","Data":"c5301f9b0f9fa13f65e0573428c3cbfa151b7954125dddc0cc5e98f4c7b39039"} Nov 21 20:27:09 crc 
kubenswrapper[4727]: I1121 20:27:09.332466 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5z7h7" event={"ID":"d06d2665-106c-4478-8621-a196d0267ed5","Type":"ContainerStarted","Data":"173c40fece6202378235c005769078f5a05c9488e0041f7d20fd1dd997109f34"} Nov 21 20:27:09 crc kubenswrapper[4727]: I1121 20:27:09.342721 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-5x7sl" podStartSLOduration=6.791173299 podStartE2EDuration="37.342679106s" podCreationTimestamp="2025-11-21 20:26:32 +0000 UTC" firstStartedPulling="2025-11-21 20:26:35.079700082 +0000 UTC m=+1200.265885126" lastFinishedPulling="2025-11-21 20:27:05.631205889 +0000 UTC m=+1230.817390933" observedRunningTime="2025-11-21 20:27:09.341860706 +0000 UTC m=+1234.528045750" watchObservedRunningTime="2025-11-21 20:27:09.342679106 +0000 UTC m=+1234.528864150" Nov 21 20:27:09 crc kubenswrapper[4727]: I1121 20:27:09.343344 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab55a565-2af1-48bb-a31e-d0a8c738912c","Type":"ContainerStarted","Data":"484eed23344aaf217be1c9fcd736765c9259cbacd696b8cdc3e0ce520f3d6db2"} Nov 21 20:27:09 crc kubenswrapper[4727]: I1121 20:27:09.345323 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8476b67874-f2dtk" event={"ID":"b4d6995f-166c-410a-adee-3733a25c28df","Type":"ContainerStarted","Data":"289f70c671ce4d1159ffaba84f9a15426cf2fb0fe16e2278cf9db832c919720a"} Nov 21 20:27:09 crc kubenswrapper[4727]: I1121 20:27:09.351021 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" event={"ID":"b58aef8f-f223-47d8-a2e6-4a80aeeeec42","Type":"ContainerStarted","Data":"5c81a93a448fa0edbcf26720ad6bed9f6ecbbf23562d69ff342656c8199e62de"} Nov 21 20:27:10 crc kubenswrapper[4727]: I1121 20:27:10.110222 4727 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/neutron-7d66895777-vztk9"] Nov 21 20:27:10 crc kubenswrapper[4727]: I1121 20:27:10.113567 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7d66895777-vztk9" Nov 21 20:27:10 crc kubenswrapper[4727]: I1121 20:27:10.123720 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Nov 21 20:27:10 crc kubenswrapper[4727]: I1121 20:27:10.124535 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Nov 21 20:27:10 crc kubenswrapper[4727]: I1121 20:27:10.154321 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0a2d9ca1-f8a7-4dbf-be6d-de4ed17dd984-httpd-config\") pod \"neutron-7d66895777-vztk9\" (UID: \"0a2d9ca1-f8a7-4dbf-be6d-de4ed17dd984\") " pod="openstack/neutron-7d66895777-vztk9" Nov 21 20:27:10 crc kubenswrapper[4727]: I1121 20:27:10.154543 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a2d9ca1-f8a7-4dbf-be6d-de4ed17dd984-ovndb-tls-certs\") pod \"neutron-7d66895777-vztk9\" (UID: \"0a2d9ca1-f8a7-4dbf-be6d-de4ed17dd984\") " pod="openstack/neutron-7d66895777-vztk9" Nov 21 20:27:10 crc kubenswrapper[4727]: I1121 20:27:10.154654 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmdkt\" (UniqueName: \"kubernetes.io/projected/0a2d9ca1-f8a7-4dbf-be6d-de4ed17dd984-kube-api-access-gmdkt\") pod \"neutron-7d66895777-vztk9\" (UID: \"0a2d9ca1-f8a7-4dbf-be6d-de4ed17dd984\") " pod="openstack/neutron-7d66895777-vztk9" Nov 21 20:27:10 crc kubenswrapper[4727]: I1121 20:27:10.154745 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/0a2d9ca1-f8a7-4dbf-be6d-de4ed17dd984-public-tls-certs\") pod \"neutron-7d66895777-vztk9\" (UID: \"0a2d9ca1-f8a7-4dbf-be6d-de4ed17dd984\") " pod="openstack/neutron-7d66895777-vztk9" Nov 21 20:27:10 crc kubenswrapper[4727]: I1121 20:27:10.154842 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0a2d9ca1-f8a7-4dbf-be6d-de4ed17dd984-config\") pod \"neutron-7d66895777-vztk9\" (UID: \"0a2d9ca1-f8a7-4dbf-be6d-de4ed17dd984\") " pod="openstack/neutron-7d66895777-vztk9" Nov 21 20:27:10 crc kubenswrapper[4727]: I1121 20:27:10.154941 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a2d9ca1-f8a7-4dbf-be6d-de4ed17dd984-combined-ca-bundle\") pod \"neutron-7d66895777-vztk9\" (UID: \"0a2d9ca1-f8a7-4dbf-be6d-de4ed17dd984\") " pod="openstack/neutron-7d66895777-vztk9" Nov 21 20:27:10 crc kubenswrapper[4727]: I1121 20:27:10.155040 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a2d9ca1-f8a7-4dbf-be6d-de4ed17dd984-internal-tls-certs\") pod \"neutron-7d66895777-vztk9\" (UID: \"0a2d9ca1-f8a7-4dbf-be6d-de4ed17dd984\") " pod="openstack/neutron-7d66895777-vztk9" Nov 21 20:27:10 crc kubenswrapper[4727]: I1121 20:27:10.174331 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7d66895777-vztk9"] Nov 21 20:27:10 crc kubenswrapper[4727]: I1121 20:27:10.256603 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0a2d9ca1-f8a7-4dbf-be6d-de4ed17dd984-config\") pod \"neutron-7d66895777-vztk9\" (UID: \"0a2d9ca1-f8a7-4dbf-be6d-de4ed17dd984\") " pod="openstack/neutron-7d66895777-vztk9" Nov 21 20:27:10 crc kubenswrapper[4727]: I1121 20:27:10.256684 4727 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a2d9ca1-f8a7-4dbf-be6d-de4ed17dd984-combined-ca-bundle\") pod \"neutron-7d66895777-vztk9\" (UID: \"0a2d9ca1-f8a7-4dbf-be6d-de4ed17dd984\") " pod="openstack/neutron-7d66895777-vztk9" Nov 21 20:27:10 crc kubenswrapper[4727]: I1121 20:27:10.256717 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a2d9ca1-f8a7-4dbf-be6d-de4ed17dd984-internal-tls-certs\") pod \"neutron-7d66895777-vztk9\" (UID: \"0a2d9ca1-f8a7-4dbf-be6d-de4ed17dd984\") " pod="openstack/neutron-7d66895777-vztk9" Nov 21 20:27:10 crc kubenswrapper[4727]: I1121 20:27:10.256774 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0a2d9ca1-f8a7-4dbf-be6d-de4ed17dd984-httpd-config\") pod \"neutron-7d66895777-vztk9\" (UID: \"0a2d9ca1-f8a7-4dbf-be6d-de4ed17dd984\") " pod="openstack/neutron-7d66895777-vztk9" Nov 21 20:27:10 crc kubenswrapper[4727]: I1121 20:27:10.256808 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a2d9ca1-f8a7-4dbf-be6d-de4ed17dd984-ovndb-tls-certs\") pod \"neutron-7d66895777-vztk9\" (UID: \"0a2d9ca1-f8a7-4dbf-be6d-de4ed17dd984\") " pod="openstack/neutron-7d66895777-vztk9" Nov 21 20:27:10 crc kubenswrapper[4727]: I1121 20:27:10.256864 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmdkt\" (UniqueName: \"kubernetes.io/projected/0a2d9ca1-f8a7-4dbf-be6d-de4ed17dd984-kube-api-access-gmdkt\") pod \"neutron-7d66895777-vztk9\" (UID: \"0a2d9ca1-f8a7-4dbf-be6d-de4ed17dd984\") " pod="openstack/neutron-7d66895777-vztk9" Nov 21 20:27:10 crc kubenswrapper[4727]: I1121 20:27:10.256890 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a2d9ca1-f8a7-4dbf-be6d-de4ed17dd984-public-tls-certs\") pod \"neutron-7d66895777-vztk9\" (UID: \"0a2d9ca1-f8a7-4dbf-be6d-de4ed17dd984\") " pod="openstack/neutron-7d66895777-vztk9" Nov 21 20:27:10 crc kubenswrapper[4727]: I1121 20:27:10.298171 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a2d9ca1-f8a7-4dbf-be6d-de4ed17dd984-public-tls-certs\") pod \"neutron-7d66895777-vztk9\" (UID: \"0a2d9ca1-f8a7-4dbf-be6d-de4ed17dd984\") " pod="openstack/neutron-7d66895777-vztk9" Nov 21 20:27:10 crc kubenswrapper[4727]: I1121 20:27:10.313581 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0a2d9ca1-f8a7-4dbf-be6d-de4ed17dd984-httpd-config\") pod \"neutron-7d66895777-vztk9\" (UID: \"0a2d9ca1-f8a7-4dbf-be6d-de4ed17dd984\") " pod="openstack/neutron-7d66895777-vztk9" Nov 21 20:27:10 crc kubenswrapper[4727]: I1121 20:27:10.314382 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0a2d9ca1-f8a7-4dbf-be6d-de4ed17dd984-config\") pod \"neutron-7d66895777-vztk9\" (UID: \"0a2d9ca1-f8a7-4dbf-be6d-de4ed17dd984\") " pod="openstack/neutron-7d66895777-vztk9" Nov 21 20:27:10 crc kubenswrapper[4727]: I1121 20:27:10.314764 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a2d9ca1-f8a7-4dbf-be6d-de4ed17dd984-combined-ca-bundle\") pod \"neutron-7d66895777-vztk9\" (UID: \"0a2d9ca1-f8a7-4dbf-be6d-de4ed17dd984\") " pod="openstack/neutron-7d66895777-vztk9" Nov 21 20:27:10 crc kubenswrapper[4727]: I1121 20:27:10.320767 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a2d9ca1-f8a7-4dbf-be6d-de4ed17dd984-ovndb-tls-certs\") pod \"neutron-7d66895777-vztk9\" (UID: 
\"0a2d9ca1-f8a7-4dbf-be6d-de4ed17dd984\") " pod="openstack/neutron-7d66895777-vztk9" Nov 21 20:27:10 crc kubenswrapper[4727]: I1121 20:27:10.341344 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmdkt\" (UniqueName: \"kubernetes.io/projected/0a2d9ca1-f8a7-4dbf-be6d-de4ed17dd984-kube-api-access-gmdkt\") pod \"neutron-7d66895777-vztk9\" (UID: \"0a2d9ca1-f8a7-4dbf-be6d-de4ed17dd984\") " pod="openstack/neutron-7d66895777-vztk9" Nov 21 20:27:10 crc kubenswrapper[4727]: I1121 20:27:10.342179 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a2d9ca1-f8a7-4dbf-be6d-de4ed17dd984-internal-tls-certs\") pod \"neutron-7d66895777-vztk9\" (UID: \"0a2d9ca1-f8a7-4dbf-be6d-de4ed17dd984\") " pod="openstack/neutron-7d66895777-vztk9" Nov 21 20:27:10 crc kubenswrapper[4727]: I1121 20:27:10.442629 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f7909040-500b-4a0d-878f-9a4c2d8b6a9b","Type":"ContainerStarted","Data":"7bfec23f6aa8aebd03ea88d77fdaa02c68e0dfb290cd68fc9d15e404684d0cc4"} Nov 21 20:27:10 crc kubenswrapper[4727]: I1121 20:27:10.485633 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7d66895777-vztk9" Nov 21 20:27:10 crc kubenswrapper[4727]: I1121 20:27:10.491094 4727 generic.go:334] "Generic (PLEG): container finished" podID="4914babf-4a62-47e3-a89e-cb3faff1a26b" containerID="f8d67b027cf2b9481c9e6fd0062ab720daee9692b833c7907c57cb403101d743" exitCode=0 Nov 21 20:27:10 crc kubenswrapper[4727]: I1121 20:27:10.491266 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-tqzmz" event={"ID":"4914babf-4a62-47e3-a89e-cb3faff1a26b","Type":"ContainerDied","Data":"f8d67b027cf2b9481c9e6fd0062ab720daee9692b833c7907c57cb403101d743"} Nov 21 20:27:10 crc kubenswrapper[4727]: I1121 20:27:10.547887 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"407cfc42-937a-4e61-b257-042675020db0","Type":"ContainerStarted","Data":"6d9f218a85ef4f630e3c8e515cb83e57b79860124f0dea59b6d3cbf362fe1094"} Nov 21 20:27:10 crc kubenswrapper[4727]: I1121 20:27:10.562213 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8476b67874-f2dtk" event={"ID":"b4d6995f-166c-410a-adee-3733a25c28df","Type":"ContainerStarted","Data":"364f5a4a47bb2ca8f0327214f490df2d67e76c1d1aa242e30cd8041e08a9bad4"} Nov 21 20:27:10 crc kubenswrapper[4727]: I1121 20:27:10.657609 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-5z7h7" podStartSLOduration=30.657582635 podStartE2EDuration="30.657582635s" podCreationTimestamp="2025-11-21 20:26:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:27:10.628188152 +0000 UTC m=+1235.814373196" watchObservedRunningTime="2025-11-21 20:27:10.657582635 +0000 UTC m=+1235.843767679" Nov 21 20:27:11 crc kubenswrapper[4727]: I1121 20:27:11.312417 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7d66895777-vztk9"] Nov 21 
20:27:11 crc kubenswrapper[4727]: W1121 20:27:11.328197 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a2d9ca1_f8a7_4dbf_be6d_de4ed17dd984.slice/crio-f54c6d475de0b67bd219d098032d07231795d8e5013e075ebf8c2798a6447818 WatchSource:0}: Error finding container f54c6d475de0b67bd219d098032d07231795d8e5013e075ebf8c2798a6447818: Status 404 returned error can't find the container with id f54c6d475de0b67bd219d098032d07231795d8e5013e075ebf8c2798a6447818 Nov 21 20:27:11 crc kubenswrapper[4727]: I1121 20:27:11.584798 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7d66895777-vztk9" event={"ID":"0a2d9ca1-f8a7-4dbf-be6d-de4ed17dd984","Type":"ContainerStarted","Data":"f54c6d475de0b67bd219d098032d07231795d8e5013e075ebf8c2798a6447818"} Nov 21 20:27:15 crc kubenswrapper[4727]: I1121 20:27:15.647196 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8476b67874-f2dtk" event={"ID":"b4d6995f-166c-410a-adee-3733a25c28df","Type":"ContainerStarted","Data":"dfe330a320ec02ed64a189754512202a3865e294e595fac2ad0da8d964dd0603"} Nov 21 20:27:15 crc kubenswrapper[4727]: I1121 20:27:15.647772 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-8476b67874-f2dtk" Nov 21 20:27:15 crc kubenswrapper[4727]: I1121 20:27:15.652396 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7d66895777-vztk9" event={"ID":"0a2d9ca1-f8a7-4dbf-be6d-de4ed17dd984","Type":"ContainerStarted","Data":"96c49d8092ab0d1d7876d8571d9d44fc9299019e2fce5193ba9646f87d0b5338"} Nov 21 20:27:15 crc kubenswrapper[4727]: I1121 20:27:15.652432 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7d66895777-vztk9" event={"ID":"0a2d9ca1-f8a7-4dbf-be6d-de4ed17dd984","Type":"ContainerStarted","Data":"3135fcb276079321c21983495481056f523db115b6a670a8153fb5e1d7a029cf"} Nov 21 20:27:15 crc kubenswrapper[4727]: I1121 
20:27:15.653087 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7d66895777-vztk9" Nov 21 20:27:15 crc kubenswrapper[4727]: I1121 20:27:15.654647 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f7909040-500b-4a0d-878f-9a4c2d8b6a9b","Type":"ContainerStarted","Data":"b07957761d5ecd2b6177d21677c95483a2b9b6b460eef9fa8fa7cdddf2eeecf9"} Nov 21 20:27:15 crc kubenswrapper[4727]: I1121 20:27:15.657869 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-tqzmz" event={"ID":"4914babf-4a62-47e3-a89e-cb3faff1a26b","Type":"ContainerStarted","Data":"6493e474cea267cbeabb363673ad40059d815601c51c951e0d2a448a5eafed40"} Nov 21 20:27:15 crc kubenswrapper[4727]: I1121 20:27:15.658047 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b7b667979-tqzmz" Nov 21 20:27:15 crc kubenswrapper[4727]: I1121 20:27:15.661625 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"407cfc42-937a-4e61-b257-042675020db0","Type":"ContainerStarted","Data":"5e540f8a1538066950326c07069aa0bf770f19fd10b217f424b71c2982dd7bfb"} Nov 21 20:27:15 crc kubenswrapper[4727]: I1121 20:27:15.677772 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-8476b67874-f2dtk" podStartSLOduration=8.677754569 podStartE2EDuration="8.677754569s" podCreationTimestamp="2025-11-21 20:27:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:27:15.666463285 +0000 UTC m=+1240.852648319" watchObservedRunningTime="2025-11-21 20:27:15.677754569 +0000 UTC m=+1240.863939613" Nov 21 20:27:15 crc kubenswrapper[4727]: I1121 20:27:15.691915 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7d66895777-vztk9" 
podStartSLOduration=5.691893532 podStartE2EDuration="5.691893532s" podCreationTimestamp="2025-11-21 20:27:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:27:15.68815312 +0000 UTC m=+1240.874338174" watchObservedRunningTime="2025-11-21 20:27:15.691893532 +0000 UTC m=+1240.878078576" Nov 21 20:27:15 crc kubenswrapper[4727]: I1121 20:27:15.727353 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=29.72733345 podStartE2EDuration="29.72733345s" podCreationTimestamp="2025-11-21 20:26:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:27:15.720840133 +0000 UTC m=+1240.907025307" watchObservedRunningTime="2025-11-21 20:27:15.72733345 +0000 UTC m=+1240.913518494" Nov 21 20:27:15 crc kubenswrapper[4727]: I1121 20:27:15.747036 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b7b667979-tqzmz" podStartSLOduration=8.747019488 podStartE2EDuration="8.747019488s" podCreationTimestamp="2025-11-21 20:27:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:27:15.737137038 +0000 UTC m=+1240.923322082" watchObservedRunningTime="2025-11-21 20:27:15.747019488 +0000 UTC m=+1240.933204532" Nov 21 20:27:15 crc kubenswrapper[4727]: I1121 20:27:15.765470 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=36.765450464 podStartE2EDuration="36.765450464s" podCreationTimestamp="2025-11-21 20:26:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:27:15.760023263 +0000 UTC 
m=+1240.946208307" watchObservedRunningTime="2025-11-21 20:27:15.765450464 +0000 UTC m=+1240.951635508" Nov 21 20:27:16 crc kubenswrapper[4727]: I1121 20:27:16.965559 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 21 20:27:16 crc kubenswrapper[4727]: I1121 20:27:16.965871 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 21 20:27:16 crc kubenswrapper[4727]: I1121 20:27:16.965887 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 21 20:27:16 crc kubenswrapper[4727]: I1121 20:27:16.965897 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 21 20:27:17 crc kubenswrapper[4727]: I1121 20:27:17.100201 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 21 20:27:17 crc kubenswrapper[4727]: I1121 20:27:17.100258 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 21 20:27:18 crc kubenswrapper[4727]: I1121 20:27:18.705262 4727 generic.go:334] "Generic (PLEG): container finished" podID="d06d2665-106c-4478-8621-a196d0267ed5" containerID="173c40fece6202378235c005769078f5a05c9488e0041f7d20fd1dd997109f34" exitCode=0 Nov 21 20:27:18 crc kubenswrapper[4727]: I1121 20:27:18.705384 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5z7h7" event={"ID":"d06d2665-106c-4478-8621-a196d0267ed5","Type":"ContainerDied","Data":"173c40fece6202378235c005769078f5a05c9488e0041f7d20fd1dd997109f34"} Nov 21 20:27:20 crc kubenswrapper[4727]: I1121 20:27:20.381119 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 21 20:27:20 crc kubenswrapper[4727]: I1121 
20:27:20.381788 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 21 20:27:20 crc kubenswrapper[4727]: I1121 20:27:20.418200 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 21 20:27:20 crc kubenswrapper[4727]: I1121 20:27:20.437086 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 21 20:27:20 crc kubenswrapper[4727]: I1121 20:27:20.725703 4727 generic.go:334] "Generic (PLEG): container finished" podID="6f8801f4-f168-4ae7-b364-cd95a72b3a66" containerID="c5301f9b0f9fa13f65e0573428c3cbfa151b7954125dddc0cc5e98f4c7b39039" exitCode=0 Nov 21 20:27:20 crc kubenswrapper[4727]: I1121 20:27:20.727524 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-5x7sl" event={"ID":"6f8801f4-f168-4ae7-b364-cd95a72b3a66","Type":"ContainerDied","Data":"c5301f9b0f9fa13f65e0573428c3cbfa151b7954125dddc0cc5e98f4c7b39039"} Nov 21 20:27:20 crc kubenswrapper[4727]: I1121 20:27:20.727751 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 21 20:27:20 crc kubenswrapper[4727]: I1121 20:27:20.727830 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 21 20:27:22 crc kubenswrapper[4727]: I1121 20:27:22.290110 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b7b667979-tqzmz" Nov 21 20:27:22 crc kubenswrapper[4727]: I1121 20:27:22.365787 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-xl6pb"] Nov 21 20:27:22 crc kubenswrapper[4727]: I1121 20:27:22.366072 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56df8fb6b7-xl6pb" podUID="f634df90-2dae-4e5b-b938-38d5d75f9b00" 
containerName="dnsmasq-dns" containerID="cri-o://b51e0f2fdf2ea5f7d1e198af97b55aea16e4bfc27571067c43c09dbaa7074e36" gracePeriod=10 Nov 21 20:27:22 crc kubenswrapper[4727]: I1121 20:27:22.769550 4727 generic.go:334] "Generic (PLEG): container finished" podID="f634df90-2dae-4e5b-b938-38d5d75f9b00" containerID="b51e0f2fdf2ea5f7d1e198af97b55aea16e4bfc27571067c43c09dbaa7074e36" exitCode=0 Nov 21 20:27:22 crc kubenswrapper[4727]: I1121 20:27:22.769640 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-xl6pb" event={"ID":"f634df90-2dae-4e5b-b938-38d5d75f9b00","Type":"ContainerDied","Data":"b51e0f2fdf2ea5f7d1e198af97b55aea16e4bfc27571067c43c09dbaa7074e36"} Nov 21 20:27:22 crc kubenswrapper[4727]: I1121 20:27:22.770104 4727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 21 20:27:22 crc kubenswrapper[4727]: I1121 20:27:22.770119 4727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 21 20:27:23 crc kubenswrapper[4727]: I1121 20:27:23.136232 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-56df8fb6b7-xl6pb" podUID="f634df90-2dae-4e5b-b938-38d5d75f9b00" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.182:5353: connect: connection refused" Nov 21 20:27:23 crc kubenswrapper[4727]: I1121 20:27:23.631771 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-5x7sl" Nov 21 20:27:23 crc kubenswrapper[4727]: I1121 20:27:23.638320 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-5z7h7" Nov 21 20:27:23 crc kubenswrapper[4727]: I1121 20:27:23.702234 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f8801f4-f168-4ae7-b364-cd95a72b3a66-combined-ca-bundle\") pod \"6f8801f4-f168-4ae7-b364-cd95a72b3a66\" (UID: \"6f8801f4-f168-4ae7-b364-cd95a72b3a66\") " Nov 21 20:27:23 crc kubenswrapper[4727]: I1121 20:27:23.702498 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f8801f4-f168-4ae7-b364-cd95a72b3a66-logs\") pod \"6f8801f4-f168-4ae7-b364-cd95a72b3a66\" (UID: \"6f8801f4-f168-4ae7-b364-cd95a72b3a66\") " Nov 21 20:27:23 crc kubenswrapper[4727]: I1121 20:27:23.702567 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f8801f4-f168-4ae7-b364-cd95a72b3a66-scripts\") pod \"6f8801f4-f168-4ae7-b364-cd95a72b3a66\" (UID: \"6f8801f4-f168-4ae7-b364-cd95a72b3a66\") " Nov 21 20:27:23 crc kubenswrapper[4727]: I1121 20:27:23.702663 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5gxx\" (UniqueName: \"kubernetes.io/projected/6f8801f4-f168-4ae7-b364-cd95a72b3a66-kube-api-access-d5gxx\") pod \"6f8801f4-f168-4ae7-b364-cd95a72b3a66\" (UID: \"6f8801f4-f168-4ae7-b364-cd95a72b3a66\") " Nov 21 20:27:23 crc kubenswrapper[4727]: I1121 20:27:23.702944 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f8801f4-f168-4ae7-b364-cd95a72b3a66-config-data\") pod \"6f8801f4-f168-4ae7-b364-cd95a72b3a66\" (UID: \"6f8801f4-f168-4ae7-b364-cd95a72b3a66\") " Nov 21 20:27:23 crc kubenswrapper[4727]: I1121 20:27:23.721225 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/6f8801f4-f168-4ae7-b364-cd95a72b3a66-logs" (OuterVolumeSpecName: "logs") pod "6f8801f4-f168-4ae7-b364-cd95a72b3a66" (UID: "6f8801f4-f168-4ae7-b364-cd95a72b3a66"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:27:23 crc kubenswrapper[4727]: I1121 20:27:23.727942 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f8801f4-f168-4ae7-b364-cd95a72b3a66-kube-api-access-d5gxx" (OuterVolumeSpecName: "kube-api-access-d5gxx") pod "6f8801f4-f168-4ae7-b364-cd95a72b3a66" (UID: "6f8801f4-f168-4ae7-b364-cd95a72b3a66"). InnerVolumeSpecName "kube-api-access-d5gxx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:27:23 crc kubenswrapper[4727]: I1121 20:27:23.735190 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f8801f4-f168-4ae7-b364-cd95a72b3a66-scripts" (OuterVolumeSpecName: "scripts") pod "6f8801f4-f168-4ae7-b364-cd95a72b3a66" (UID: "6f8801f4-f168-4ae7-b364-cd95a72b3a66"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:27:23 crc kubenswrapper[4727]: I1121 20:27:23.801949 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f8801f4-f168-4ae7-b364-cd95a72b3a66-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6f8801f4-f168-4ae7-b364-cd95a72b3a66" (UID: "6f8801f4-f168-4ae7-b364-cd95a72b3a66"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:27:23 crc kubenswrapper[4727]: I1121 20:27:23.802469 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-5x7sl" Nov 21 20:27:23 crc kubenswrapper[4727]: I1121 20:27:23.802091 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-5x7sl" event={"ID":"6f8801f4-f168-4ae7-b364-cd95a72b3a66","Type":"ContainerDied","Data":"2f89c21de83c586d7260923a3c9651f6f6e0017f84cb1aae50a7808d8e72fa03"} Nov 21 20:27:23 crc kubenswrapper[4727]: I1121 20:27:23.802759 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f89c21de83c586d7260923a3c9651f6f6e0017f84cb1aae50a7808d8e72fa03" Nov 21 20:27:23 crc kubenswrapper[4727]: I1121 20:27:23.805859 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d06d2665-106c-4478-8621-a196d0267ed5-fernet-keys\") pod \"d06d2665-106c-4478-8621-a196d0267ed5\" (UID: \"d06d2665-106c-4478-8621-a196d0267ed5\") " Nov 21 20:27:23 crc kubenswrapper[4727]: I1121 20:27:23.805903 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bdmtl\" (UniqueName: \"kubernetes.io/projected/d06d2665-106c-4478-8621-a196d0267ed5-kube-api-access-bdmtl\") pod \"d06d2665-106c-4478-8621-a196d0267ed5\" (UID: \"d06d2665-106c-4478-8621-a196d0267ed5\") " Nov 21 20:27:23 crc kubenswrapper[4727]: I1121 20:27:23.805938 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d06d2665-106c-4478-8621-a196d0267ed5-credential-keys\") pod \"d06d2665-106c-4478-8621-a196d0267ed5\" (UID: \"d06d2665-106c-4478-8621-a196d0267ed5\") " Nov 21 20:27:23 crc kubenswrapper[4727]: I1121 20:27:23.806051 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d06d2665-106c-4478-8621-a196d0267ed5-combined-ca-bundle\") pod \"d06d2665-106c-4478-8621-a196d0267ed5\" (UID: 
\"d06d2665-106c-4478-8621-a196d0267ed5\") " Nov 21 20:27:23 crc kubenswrapper[4727]: I1121 20:27:23.806215 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d06d2665-106c-4478-8621-a196d0267ed5-config-data\") pod \"d06d2665-106c-4478-8621-a196d0267ed5\" (UID: \"d06d2665-106c-4478-8621-a196d0267ed5\") " Nov 21 20:27:23 crc kubenswrapper[4727]: I1121 20:27:23.806251 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d06d2665-106c-4478-8621-a196d0267ed5-scripts\") pod \"d06d2665-106c-4478-8621-a196d0267ed5\" (UID: \"d06d2665-106c-4478-8621-a196d0267ed5\") " Nov 21 20:27:23 crc kubenswrapper[4727]: I1121 20:27:23.806718 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f8801f4-f168-4ae7-b364-cd95a72b3a66-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:27:23 crc kubenswrapper[4727]: I1121 20:27:23.806733 4727 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f8801f4-f168-4ae7-b364-cd95a72b3a66-logs\") on node \"crc\" DevicePath \"\"" Nov 21 20:27:23 crc kubenswrapper[4727]: I1121 20:27:23.806745 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f8801f4-f168-4ae7-b364-cd95a72b3a66-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 20:27:23 crc kubenswrapper[4727]: I1121 20:27:23.806754 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5gxx\" (UniqueName: \"kubernetes.io/projected/6f8801f4-f168-4ae7-b364-cd95a72b3a66-kube-api-access-d5gxx\") on node \"crc\" DevicePath \"\"" Nov 21 20:27:23 crc kubenswrapper[4727]: I1121 20:27:23.806820 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f8801f4-f168-4ae7-b364-cd95a72b3a66-config-data" 
(OuterVolumeSpecName: "config-data") pod "6f8801f4-f168-4ae7-b364-cd95a72b3a66" (UID: "6f8801f4-f168-4ae7-b364-cd95a72b3a66"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:27:23 crc kubenswrapper[4727]: I1121 20:27:23.811567 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5z7h7" event={"ID":"d06d2665-106c-4478-8621-a196d0267ed5","Type":"ContainerDied","Data":"1301fbaf5aa0356e4e40ca1144bc16ced4eea157f732479ff74d099cfc63a68b"} Nov 21 20:27:23 crc kubenswrapper[4727]: I1121 20:27:23.811671 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1301fbaf5aa0356e4e40ca1144bc16ced4eea157f732479ff74d099cfc63a68b" Nov 21 20:27:23 crc kubenswrapper[4727]: I1121 20:27:23.811748 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-5z7h7" Nov 21 20:27:23 crc kubenswrapper[4727]: I1121 20:27:23.814040 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d06d2665-106c-4478-8621-a196d0267ed5-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "d06d2665-106c-4478-8621-a196d0267ed5" (UID: "d06d2665-106c-4478-8621-a196d0267ed5"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:27:23 crc kubenswrapper[4727]: I1121 20:27:23.817382 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d06d2665-106c-4478-8621-a196d0267ed5-kube-api-access-bdmtl" (OuterVolumeSpecName: "kube-api-access-bdmtl") pod "d06d2665-106c-4478-8621-a196d0267ed5" (UID: "d06d2665-106c-4478-8621-a196d0267ed5"). InnerVolumeSpecName "kube-api-access-bdmtl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:27:23 crc kubenswrapper[4727]: I1121 20:27:23.832275 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d06d2665-106c-4478-8621-a196d0267ed5-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "d06d2665-106c-4478-8621-a196d0267ed5" (UID: "d06d2665-106c-4478-8621-a196d0267ed5"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:27:23 crc kubenswrapper[4727]: I1121 20:27:23.832308 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d06d2665-106c-4478-8621-a196d0267ed5-scripts" (OuterVolumeSpecName: "scripts") pod "d06d2665-106c-4478-8621-a196d0267ed5" (UID: "d06d2665-106c-4478-8621-a196d0267ed5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:27:23 crc kubenswrapper[4727]: I1121 20:27:23.848041 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d06d2665-106c-4478-8621-a196d0267ed5-config-data" (OuterVolumeSpecName: "config-data") pod "d06d2665-106c-4478-8621-a196d0267ed5" (UID: "d06d2665-106c-4478-8621-a196d0267ed5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:27:23 crc kubenswrapper[4727]: I1121 20:27:23.872972 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d06d2665-106c-4478-8621-a196d0267ed5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d06d2665-106c-4478-8621-a196d0267ed5" (UID: "d06d2665-106c-4478-8621-a196d0267ed5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:27:23 crc kubenswrapper[4727]: I1121 20:27:23.910380 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d06d2665-106c-4478-8621-a196d0267ed5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:27:23 crc kubenswrapper[4727]: I1121 20:27:23.910419 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f8801f4-f168-4ae7-b364-cd95a72b3a66-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 20:27:23 crc kubenswrapper[4727]: I1121 20:27:23.910427 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d06d2665-106c-4478-8621-a196d0267ed5-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 20:27:23 crc kubenswrapper[4727]: I1121 20:27:23.910437 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d06d2665-106c-4478-8621-a196d0267ed5-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 20:27:23 crc kubenswrapper[4727]: I1121 20:27:23.910446 4727 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d06d2665-106c-4478-8621-a196d0267ed5-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 21 20:27:23 crc kubenswrapper[4727]: I1121 20:27:23.910455 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bdmtl\" (UniqueName: \"kubernetes.io/projected/d06d2665-106c-4478-8621-a196d0267ed5-kube-api-access-bdmtl\") on node \"crc\" DevicePath \"\"" Nov 21 20:27:23 crc kubenswrapper[4727]: I1121 20:27:23.910463 4727 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d06d2665-106c-4478-8621-a196d0267ed5-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 21 20:27:24 crc kubenswrapper[4727]: I1121 20:27:24.125793 4727 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-xl6pb" Nov 21 20:27:24 crc kubenswrapper[4727]: I1121 20:27:24.218155 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f634df90-2dae-4e5b-b938-38d5d75f9b00-ovsdbserver-sb\") pod \"f634df90-2dae-4e5b-b938-38d5d75f9b00\" (UID: \"f634df90-2dae-4e5b-b938-38d5d75f9b00\") " Nov 21 20:27:24 crc kubenswrapper[4727]: I1121 20:27:24.218215 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f634df90-2dae-4e5b-b938-38d5d75f9b00-ovsdbserver-nb\") pod \"f634df90-2dae-4e5b-b938-38d5d75f9b00\" (UID: \"f634df90-2dae-4e5b-b938-38d5d75f9b00\") " Nov 21 20:27:24 crc kubenswrapper[4727]: I1121 20:27:24.218261 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f634df90-2dae-4e5b-b938-38d5d75f9b00-dns-svc\") pod \"f634df90-2dae-4e5b-b938-38d5d75f9b00\" (UID: \"f634df90-2dae-4e5b-b938-38d5d75f9b00\") " Nov 21 20:27:24 crc kubenswrapper[4727]: I1121 20:27:24.218284 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f634df90-2dae-4e5b-b938-38d5d75f9b00-config\") pod \"f634df90-2dae-4e5b-b938-38d5d75f9b00\" (UID: \"f634df90-2dae-4e5b-b938-38d5d75f9b00\") " Nov 21 20:27:24 crc kubenswrapper[4727]: I1121 20:27:24.218403 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lnld\" (UniqueName: \"kubernetes.io/projected/f634df90-2dae-4e5b-b938-38d5d75f9b00-kube-api-access-6lnld\") pod \"f634df90-2dae-4e5b-b938-38d5d75f9b00\" (UID: \"f634df90-2dae-4e5b-b938-38d5d75f9b00\") " Nov 21 20:27:24 crc kubenswrapper[4727]: I1121 20:27:24.218593 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f634df90-2dae-4e5b-b938-38d5d75f9b00-dns-swift-storage-0\") pod \"f634df90-2dae-4e5b-b938-38d5d75f9b00\" (UID: \"f634df90-2dae-4e5b-b938-38d5d75f9b00\") " Nov 21 20:27:24 crc kubenswrapper[4727]: I1121 20:27:24.236383 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f634df90-2dae-4e5b-b938-38d5d75f9b00-kube-api-access-6lnld" (OuterVolumeSpecName: "kube-api-access-6lnld") pod "f634df90-2dae-4e5b-b938-38d5d75f9b00" (UID: "f634df90-2dae-4e5b-b938-38d5d75f9b00"). InnerVolumeSpecName "kube-api-access-6lnld". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:27:24 crc kubenswrapper[4727]: I1121 20:27:24.320859 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lnld\" (UniqueName: \"kubernetes.io/projected/f634df90-2dae-4e5b-b938-38d5d75f9b00-kube-api-access-6lnld\") on node \"crc\" DevicePath \"\"" Nov 21 20:27:24 crc kubenswrapper[4727]: I1121 20:27:24.550536 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f634df90-2dae-4e5b-b938-38d5d75f9b00-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f634df90-2dae-4e5b-b938-38d5d75f9b00" (UID: "f634df90-2dae-4e5b-b938-38d5d75f9b00"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:27:24 crc kubenswrapper[4727]: I1121 20:27:24.556618 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f634df90-2dae-4e5b-b938-38d5d75f9b00-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f634df90-2dae-4e5b-b938-38d5d75f9b00" (UID: "f634df90-2dae-4e5b-b938-38d5d75f9b00"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:27:24 crc kubenswrapper[4727]: I1121 20:27:24.571340 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f634df90-2dae-4e5b-b938-38d5d75f9b00-config" (OuterVolumeSpecName: "config") pod "f634df90-2dae-4e5b-b938-38d5d75f9b00" (UID: "f634df90-2dae-4e5b-b938-38d5d75f9b00"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:27:24 crc kubenswrapper[4727]: I1121 20:27:24.574355 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f634df90-2dae-4e5b-b938-38d5d75f9b00-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f634df90-2dae-4e5b-b938-38d5d75f9b00" (UID: "f634df90-2dae-4e5b-b938-38d5d75f9b00"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:27:24 crc kubenswrapper[4727]: I1121 20:27:24.588379 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 21 20:27:24 crc kubenswrapper[4727]: I1121 20:27:24.588492 4727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 21 20:27:24 crc kubenswrapper[4727]: I1121 20:27:24.590199 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f634df90-2dae-4e5b-b938-38d5d75f9b00-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f634df90-2dae-4e5b-b938-38d5d75f9b00" (UID: "f634df90-2dae-4e5b-b938-38d5d75f9b00"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:27:24 crc kubenswrapper[4727]: I1121 20:27:24.609402 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 21 20:27:24 crc kubenswrapper[4727]: I1121 20:27:24.626996 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f634df90-2dae-4e5b-b938-38d5d75f9b00-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 21 20:27:24 crc kubenswrapper[4727]: I1121 20:27:24.627210 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f634df90-2dae-4e5b-b938-38d5d75f9b00-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 21 20:27:24 crc kubenswrapper[4727]: I1121 20:27:24.627315 4727 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f634df90-2dae-4e5b-b938-38d5d75f9b00-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 20:27:24 crc kubenswrapper[4727]: I1121 20:27:24.627373 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f634df90-2dae-4e5b-b938-38d5d75f9b00-config\") on node \"crc\" DevicePath \"\"" Nov 21 20:27:24 crc kubenswrapper[4727]: I1121 20:27:24.627430 4727 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f634df90-2dae-4e5b-b938-38d5d75f9b00-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 21 20:27:24 crc kubenswrapper[4727]: I1121 20:27:24.647263 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 21 20:27:24 crc kubenswrapper[4727]: I1121 20:27:24.812693 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-59895c4888-ffr5c"] Nov 21 20:27:24 crc kubenswrapper[4727]: E1121 20:27:24.819253 4727 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="f634df90-2dae-4e5b-b938-38d5d75f9b00" containerName="dnsmasq-dns" Nov 21 20:27:24 crc kubenswrapper[4727]: I1121 20:27:24.819283 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="f634df90-2dae-4e5b-b938-38d5d75f9b00" containerName="dnsmasq-dns" Nov 21 20:27:24 crc kubenswrapper[4727]: E1121 20:27:24.819296 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d06d2665-106c-4478-8621-a196d0267ed5" containerName="keystone-bootstrap" Nov 21 20:27:24 crc kubenswrapper[4727]: I1121 20:27:24.819303 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="d06d2665-106c-4478-8621-a196d0267ed5" containerName="keystone-bootstrap" Nov 21 20:27:24 crc kubenswrapper[4727]: E1121 20:27:24.819321 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f634df90-2dae-4e5b-b938-38d5d75f9b00" containerName="init" Nov 21 20:27:24 crc kubenswrapper[4727]: I1121 20:27:24.819327 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="f634df90-2dae-4e5b-b938-38d5d75f9b00" containerName="init" Nov 21 20:27:24 crc kubenswrapper[4727]: E1121 20:27:24.819350 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f8801f4-f168-4ae7-b364-cd95a72b3a66" containerName="placement-db-sync" Nov 21 20:27:24 crc kubenswrapper[4727]: I1121 20:27:24.819357 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f8801f4-f168-4ae7-b364-cd95a72b3a66" containerName="placement-db-sync" Nov 21 20:27:24 crc kubenswrapper[4727]: I1121 20:27:24.819617 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f8801f4-f168-4ae7-b364-cd95a72b3a66" containerName="placement-db-sync" Nov 21 20:27:24 crc kubenswrapper[4727]: I1121 20:27:24.819637 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="f634df90-2dae-4e5b-b938-38d5d75f9b00" containerName="dnsmasq-dns" Nov 21 20:27:24 crc kubenswrapper[4727]: I1121 20:27:24.819658 4727 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="d06d2665-106c-4478-8621-a196d0267ed5" containerName="keystone-bootstrap" Nov 21 20:27:24 crc kubenswrapper[4727]: I1121 20:27:24.820806 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-59895c4888-ffr5c" Nov 21 20:27:24 crc kubenswrapper[4727]: I1121 20:27:24.830166 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 21 20:27:24 crc kubenswrapper[4727]: I1121 20:27:24.830224 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Nov 21 20:27:24 crc kubenswrapper[4727]: I1121 20:27:24.830569 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 21 20:27:24 crc kubenswrapper[4727]: I1121 20:27:24.830710 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-rgmh5" Nov 21 20:27:24 crc kubenswrapper[4727]: I1121 20:27:24.830828 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Nov 21 20:27:24 crc kubenswrapper[4727]: I1121 20:27:24.832286 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-59895c4888-ffr5c"] Nov 21 20:27:24 crc kubenswrapper[4727]: I1121 20:27:24.869114 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-nvhzz" event={"ID":"3219ae94-1940-49e8-851c-102a14d22e75","Type":"ContainerStarted","Data":"54520a982d3c60ed9d3e47f28be39eea2ecd8ee935c5eea737bd465ccd0236b4"} Nov 21 20:27:24 crc kubenswrapper[4727]: I1121 20:27:24.879156 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab55a565-2af1-48bb-a31e-d0a8c738912c","Type":"ContainerStarted","Data":"3d968bc03dd61485ba8e3844312dbc1b38cdc9d8aa3444b3a7d1a559b854f939"} Nov 21 20:27:24 crc kubenswrapper[4727]: I1121 20:27:24.882034 4727 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/heat-db-sync-kzpzx" event={"ID":"ba5291ec-9ad1-4ce3-8794-6b6ca611b277","Type":"ContainerStarted","Data":"f27e6c54022e519d0083b15f3850f40204973d1d4c1fb611daeda5ce2ba58dc6"} Nov 21 20:27:24 crc kubenswrapper[4727]: I1121 20:27:24.892449 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cc454d9c9-g5hjz"] Nov 21 20:27:24 crc kubenswrapper[4727]: I1121 20:27:24.894090 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cc454d9c9-g5hjz" Nov 21 20:27:24 crc kubenswrapper[4727]: I1121 20:27:24.894768 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-xl6pb" Nov 21 20:27:24 crc kubenswrapper[4727]: I1121 20:27:24.894812 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-xl6pb" event={"ID":"f634df90-2dae-4e5b-b938-38d5d75f9b00","Type":"ContainerDied","Data":"dca2988eaa9e7b9874f130892761533e5436901e9ef4bc1b70442345163a6459"} Nov 21 20:27:24 crc kubenswrapper[4727]: I1121 20:27:24.894840 4727 scope.go:117] "RemoveContainer" containerID="b51e0f2fdf2ea5f7d1e198af97b55aea16e4bfc27571067c43c09dbaa7074e36" Nov 21 20:27:24 crc kubenswrapper[4727]: I1121 20:27:24.900943 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 21 20:27:24 crc kubenswrapper[4727]: I1121 20:27:24.901346 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Nov 21 20:27:24 crc kubenswrapper[4727]: I1121 20:27:24.901672 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 21 20:27:24 crc kubenswrapper[4727]: I1121 20:27:24.902451 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Nov 21 20:27:24 crc kubenswrapper[4727]: I1121 20:27:24.902578 4727 reflector.go:368] Caches populated for *v1.Secret 
from object-"openstack"/"keystone-keystone-dockercfg-xgkf5" Nov 21 20:27:24 crc kubenswrapper[4727]: I1121 20:27:24.902732 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 21 20:27:24 crc kubenswrapper[4727]: I1121 20:27:24.920911 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cc454d9c9-g5hjz"] Nov 21 20:27:24 crc kubenswrapper[4727]: I1121 20:27:24.923944 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-nvhzz" podStartSLOduration=4.569616668 podStartE2EDuration="53.923925268s" podCreationTimestamp="2025-11-21 20:26:31 +0000 UTC" firstStartedPulling="2025-11-21 20:26:34.344175028 +0000 UTC m=+1199.530360072" lastFinishedPulling="2025-11-21 20:27:23.698483618 +0000 UTC m=+1248.884668672" observedRunningTime="2025-11-21 20:27:24.892395904 +0000 UTC m=+1250.078580948" watchObservedRunningTime="2025-11-21 20:27:24.923925268 +0000 UTC m=+1250.110110312" Nov 21 20:27:24 crc kubenswrapper[4727]: I1121 20:27:24.942425 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65240278-1e42-497b-938b-0eca28db9756-combined-ca-bundle\") pod \"placement-59895c4888-ffr5c\" (UID: \"65240278-1e42-497b-938b-0eca28db9756\") " pod="openstack/placement-59895c4888-ffr5c" Nov 21 20:27:24 crc kubenswrapper[4727]: I1121 20:27:24.942481 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/65240278-1e42-497b-938b-0eca28db9756-public-tls-certs\") pod \"placement-59895c4888-ffr5c\" (UID: \"65240278-1e42-497b-938b-0eca28db9756\") " pod="openstack/placement-59895c4888-ffr5c" Nov 21 20:27:24 crc kubenswrapper[4727]: I1121 20:27:24.942506 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"credential-keys\" (UniqueName: \"kubernetes.io/secret/ac8be1ea-fec5-4c56-9602-9c9cdec8e812-credential-keys\") pod \"keystone-cc454d9c9-g5hjz\" (UID: \"ac8be1ea-fec5-4c56-9602-9c9cdec8e812\") " pod="openstack/keystone-cc454d9c9-g5hjz" Nov 21 20:27:24 crc kubenswrapper[4727]: I1121 20:27:24.942525 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac8be1ea-fec5-4c56-9602-9c9cdec8e812-scripts\") pod \"keystone-cc454d9c9-g5hjz\" (UID: \"ac8be1ea-fec5-4c56-9602-9c9cdec8e812\") " pod="openstack/keystone-cc454d9c9-g5hjz" Nov 21 20:27:24 crc kubenswrapper[4727]: I1121 20:27:24.942567 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ac8be1ea-fec5-4c56-9602-9c9cdec8e812-fernet-keys\") pod \"keystone-cc454d9c9-g5hjz\" (UID: \"ac8be1ea-fec5-4c56-9602-9c9cdec8e812\") " pod="openstack/keystone-cc454d9c9-g5hjz" Nov 21 20:27:24 crc kubenswrapper[4727]: I1121 20:27:24.942615 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac8be1ea-fec5-4c56-9602-9c9cdec8e812-public-tls-certs\") pod \"keystone-cc454d9c9-g5hjz\" (UID: \"ac8be1ea-fec5-4c56-9602-9c9cdec8e812\") " pod="openstack/keystone-cc454d9c9-g5hjz" Nov 21 20:27:24 crc kubenswrapper[4727]: I1121 20:27:24.942667 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cfd4\" (UniqueName: \"kubernetes.io/projected/65240278-1e42-497b-938b-0eca28db9756-kube-api-access-8cfd4\") pod \"placement-59895c4888-ffr5c\" (UID: \"65240278-1e42-497b-938b-0eca28db9756\") " pod="openstack/placement-59895c4888-ffr5c" Nov 21 20:27:24 crc kubenswrapper[4727]: I1121 20:27:24.942689 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/ac8be1ea-fec5-4c56-9602-9c9cdec8e812-config-data\") pod \"keystone-cc454d9c9-g5hjz\" (UID: \"ac8be1ea-fec5-4c56-9602-9c9cdec8e812\") " pod="openstack/keystone-cc454d9c9-g5hjz" Nov 21 20:27:24 crc kubenswrapper[4727]: I1121 20:27:24.942729 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65240278-1e42-497b-938b-0eca28db9756-config-data\") pod \"placement-59895c4888-ffr5c\" (UID: \"65240278-1e42-497b-938b-0eca28db9756\") " pod="openstack/placement-59895c4888-ffr5c" Nov 21 20:27:24 crc kubenswrapper[4727]: I1121 20:27:24.942777 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac8be1ea-fec5-4c56-9602-9c9cdec8e812-combined-ca-bundle\") pod \"keystone-cc454d9c9-g5hjz\" (UID: \"ac8be1ea-fec5-4c56-9602-9c9cdec8e812\") " pod="openstack/keystone-cc454d9c9-g5hjz" Nov 21 20:27:24 crc kubenswrapper[4727]: I1121 20:27:24.942812 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mngv\" (UniqueName: \"kubernetes.io/projected/ac8be1ea-fec5-4c56-9602-9c9cdec8e812-kube-api-access-5mngv\") pod \"keystone-cc454d9c9-g5hjz\" (UID: \"ac8be1ea-fec5-4c56-9602-9c9cdec8e812\") " pod="openstack/keystone-cc454d9c9-g5hjz" Nov 21 20:27:24 crc kubenswrapper[4727]: I1121 20:27:24.942857 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65240278-1e42-497b-938b-0eca28db9756-scripts\") pod \"placement-59895c4888-ffr5c\" (UID: \"65240278-1e42-497b-938b-0eca28db9756\") " pod="openstack/placement-59895c4888-ffr5c" Nov 21 20:27:24 crc kubenswrapper[4727]: I1121 20:27:24.942900 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/65240278-1e42-497b-938b-0eca28db9756-internal-tls-certs\") pod \"placement-59895c4888-ffr5c\" (UID: \"65240278-1e42-497b-938b-0eca28db9756\") " pod="openstack/placement-59895c4888-ffr5c" Nov 21 20:27:24 crc kubenswrapper[4727]: I1121 20:27:24.942925 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65240278-1e42-497b-938b-0eca28db9756-logs\") pod \"placement-59895c4888-ffr5c\" (UID: \"65240278-1e42-497b-938b-0eca28db9756\") " pod="openstack/placement-59895c4888-ffr5c" Nov 21 20:27:24 crc kubenswrapper[4727]: I1121 20:27:24.942941 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac8be1ea-fec5-4c56-9602-9c9cdec8e812-internal-tls-certs\") pod \"keystone-cc454d9c9-g5hjz\" (UID: \"ac8be1ea-fec5-4c56-9602-9c9cdec8e812\") " pod="openstack/keystone-cc454d9c9-g5hjz" Nov 21 20:27:24 crc kubenswrapper[4727]: I1121 20:27:24.945441 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-kzpzx" podStartSLOduration=3.666578707 podStartE2EDuration="53.945418279s" podCreationTimestamp="2025-11-21 20:26:31 +0000 UTC" firstStartedPulling="2025-11-21 20:26:33.422546576 +0000 UTC m=+1198.608731620" lastFinishedPulling="2025-11-21 20:27:23.701386148 +0000 UTC m=+1248.887571192" observedRunningTime="2025-11-21 20:27:24.920358822 +0000 UTC m=+1250.106543896" watchObservedRunningTime="2025-11-21 20:27:24.945418279 +0000 UTC m=+1250.131603323" Nov 21 20:27:24 crc kubenswrapper[4727]: I1121 20:27:24.978901 4727 scope.go:117] "RemoveContainer" containerID="754d6bb1c99def91198eecbf6de8115c40ad6ba01d5ce5988ddc264f9b28bad7" Nov 21 20:27:25 crc kubenswrapper[4727]: I1121 20:27:25.013728 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-xl6pb"] Nov 
21 20:27:25 crc kubenswrapper[4727]: I1121 20:27:25.027953 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-xl6pb"] Nov 21 20:27:25 crc kubenswrapper[4727]: I1121 20:27:25.046169 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac8be1ea-fec5-4c56-9602-9c9cdec8e812-combined-ca-bundle\") pod \"keystone-cc454d9c9-g5hjz\" (UID: \"ac8be1ea-fec5-4c56-9602-9c9cdec8e812\") " pod="openstack/keystone-cc454d9c9-g5hjz" Nov 21 20:27:25 crc kubenswrapper[4727]: I1121 20:27:25.046642 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mngv\" (UniqueName: \"kubernetes.io/projected/ac8be1ea-fec5-4c56-9602-9c9cdec8e812-kube-api-access-5mngv\") pod \"keystone-cc454d9c9-g5hjz\" (UID: \"ac8be1ea-fec5-4c56-9602-9c9cdec8e812\") " pod="openstack/keystone-cc454d9c9-g5hjz" Nov 21 20:27:25 crc kubenswrapper[4727]: I1121 20:27:25.046712 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65240278-1e42-497b-938b-0eca28db9756-scripts\") pod \"placement-59895c4888-ffr5c\" (UID: \"65240278-1e42-497b-938b-0eca28db9756\") " pod="openstack/placement-59895c4888-ffr5c" Nov 21 20:27:25 crc kubenswrapper[4727]: I1121 20:27:25.046773 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/65240278-1e42-497b-938b-0eca28db9756-internal-tls-certs\") pod \"placement-59895c4888-ffr5c\" (UID: \"65240278-1e42-497b-938b-0eca28db9756\") " pod="openstack/placement-59895c4888-ffr5c" Nov 21 20:27:25 crc kubenswrapper[4727]: I1121 20:27:25.046800 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65240278-1e42-497b-938b-0eca28db9756-logs\") pod \"placement-59895c4888-ffr5c\" (UID: 
\"65240278-1e42-497b-938b-0eca28db9756\") " pod="openstack/placement-59895c4888-ffr5c" Nov 21 20:27:25 crc kubenswrapper[4727]: I1121 20:27:25.046821 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac8be1ea-fec5-4c56-9602-9c9cdec8e812-internal-tls-certs\") pod \"keystone-cc454d9c9-g5hjz\" (UID: \"ac8be1ea-fec5-4c56-9602-9c9cdec8e812\") " pod="openstack/keystone-cc454d9c9-g5hjz" Nov 21 20:27:25 crc kubenswrapper[4727]: I1121 20:27:25.046872 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65240278-1e42-497b-938b-0eca28db9756-combined-ca-bundle\") pod \"placement-59895c4888-ffr5c\" (UID: \"65240278-1e42-497b-938b-0eca28db9756\") " pod="openstack/placement-59895c4888-ffr5c" Nov 21 20:27:25 crc kubenswrapper[4727]: I1121 20:27:25.046904 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/65240278-1e42-497b-938b-0eca28db9756-public-tls-certs\") pod \"placement-59895c4888-ffr5c\" (UID: \"65240278-1e42-497b-938b-0eca28db9756\") " pod="openstack/placement-59895c4888-ffr5c" Nov 21 20:27:25 crc kubenswrapper[4727]: I1121 20:27:25.046931 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ac8be1ea-fec5-4c56-9602-9c9cdec8e812-credential-keys\") pod \"keystone-cc454d9c9-g5hjz\" (UID: \"ac8be1ea-fec5-4c56-9602-9c9cdec8e812\") " pod="openstack/keystone-cc454d9c9-g5hjz" Nov 21 20:27:25 crc kubenswrapper[4727]: I1121 20:27:25.046981 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac8be1ea-fec5-4c56-9602-9c9cdec8e812-scripts\") pod \"keystone-cc454d9c9-g5hjz\" (UID: \"ac8be1ea-fec5-4c56-9602-9c9cdec8e812\") " pod="openstack/keystone-cc454d9c9-g5hjz" Nov 21 
20:27:25 crc kubenswrapper[4727]: I1121 20:27:25.047016 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ac8be1ea-fec5-4c56-9602-9c9cdec8e812-fernet-keys\") pod \"keystone-cc454d9c9-g5hjz\" (UID: \"ac8be1ea-fec5-4c56-9602-9c9cdec8e812\") " pod="openstack/keystone-cc454d9c9-g5hjz" Nov 21 20:27:25 crc kubenswrapper[4727]: I1121 20:27:25.047082 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac8be1ea-fec5-4c56-9602-9c9cdec8e812-public-tls-certs\") pod \"keystone-cc454d9c9-g5hjz\" (UID: \"ac8be1ea-fec5-4c56-9602-9c9cdec8e812\") " pod="openstack/keystone-cc454d9c9-g5hjz" Nov 21 20:27:25 crc kubenswrapper[4727]: I1121 20:27:25.047176 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cfd4\" (UniqueName: \"kubernetes.io/projected/65240278-1e42-497b-938b-0eca28db9756-kube-api-access-8cfd4\") pod \"placement-59895c4888-ffr5c\" (UID: \"65240278-1e42-497b-938b-0eca28db9756\") " pod="openstack/placement-59895c4888-ffr5c" Nov 21 20:27:25 crc kubenswrapper[4727]: I1121 20:27:25.047218 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac8be1ea-fec5-4c56-9602-9c9cdec8e812-config-data\") pod \"keystone-cc454d9c9-g5hjz\" (UID: \"ac8be1ea-fec5-4c56-9602-9c9cdec8e812\") " pod="openstack/keystone-cc454d9c9-g5hjz" Nov 21 20:27:25 crc kubenswrapper[4727]: I1121 20:27:25.047278 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65240278-1e42-497b-938b-0eca28db9756-config-data\") pod \"placement-59895c4888-ffr5c\" (UID: \"65240278-1e42-497b-938b-0eca28db9756\") " pod="openstack/placement-59895c4888-ffr5c" Nov 21 20:27:25 crc kubenswrapper[4727]: I1121 20:27:25.047553 4727 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65240278-1e42-497b-938b-0eca28db9756-logs\") pod \"placement-59895c4888-ffr5c\" (UID: \"65240278-1e42-497b-938b-0eca28db9756\") " pod="openstack/placement-59895c4888-ffr5c" Nov 21 20:27:25 crc kubenswrapper[4727]: I1121 20:27:25.051929 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac8be1ea-fec5-4c56-9602-9c9cdec8e812-combined-ca-bundle\") pod \"keystone-cc454d9c9-g5hjz\" (UID: \"ac8be1ea-fec5-4c56-9602-9c9cdec8e812\") " pod="openstack/keystone-cc454d9c9-g5hjz" Nov 21 20:27:25 crc kubenswrapper[4727]: I1121 20:27:25.054976 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac8be1ea-fec5-4c56-9602-9c9cdec8e812-public-tls-certs\") pod \"keystone-cc454d9c9-g5hjz\" (UID: \"ac8be1ea-fec5-4c56-9602-9c9cdec8e812\") " pod="openstack/keystone-cc454d9c9-g5hjz" Nov 21 20:27:25 crc kubenswrapper[4727]: I1121 20:27:25.055118 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac8be1ea-fec5-4c56-9602-9c9cdec8e812-internal-tls-certs\") pod \"keystone-cc454d9c9-g5hjz\" (UID: \"ac8be1ea-fec5-4c56-9602-9c9cdec8e812\") " pod="openstack/keystone-cc454d9c9-g5hjz" Nov 21 20:27:25 crc kubenswrapper[4727]: I1121 20:27:25.055483 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac8be1ea-fec5-4c56-9602-9c9cdec8e812-config-data\") pod \"keystone-cc454d9c9-g5hjz\" (UID: \"ac8be1ea-fec5-4c56-9602-9c9cdec8e812\") " pod="openstack/keystone-cc454d9c9-g5hjz" Nov 21 20:27:25 crc kubenswrapper[4727]: I1121 20:27:25.055691 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/65240278-1e42-497b-938b-0eca28db9756-public-tls-certs\") pod \"placement-59895c4888-ffr5c\" (UID: \"65240278-1e42-497b-938b-0eca28db9756\") " pod="openstack/placement-59895c4888-ffr5c" Nov 21 20:27:25 crc kubenswrapper[4727]: I1121 20:27:25.055710 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/65240278-1e42-497b-938b-0eca28db9756-internal-tls-certs\") pod \"placement-59895c4888-ffr5c\" (UID: \"65240278-1e42-497b-938b-0eca28db9756\") " pod="openstack/placement-59895c4888-ffr5c" Nov 21 20:27:25 crc kubenswrapper[4727]: I1121 20:27:25.056602 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ac8be1ea-fec5-4c56-9602-9c9cdec8e812-credential-keys\") pod \"keystone-cc454d9c9-g5hjz\" (UID: \"ac8be1ea-fec5-4c56-9602-9c9cdec8e812\") " pod="openstack/keystone-cc454d9c9-g5hjz" Nov 21 20:27:25 crc kubenswrapper[4727]: I1121 20:27:25.059710 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac8be1ea-fec5-4c56-9602-9c9cdec8e812-scripts\") pod \"keystone-cc454d9c9-g5hjz\" (UID: \"ac8be1ea-fec5-4c56-9602-9c9cdec8e812\") " pod="openstack/keystone-cc454d9c9-g5hjz" Nov 21 20:27:25 crc kubenswrapper[4727]: I1121 20:27:25.059766 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65240278-1e42-497b-938b-0eca28db9756-config-data\") pod \"placement-59895c4888-ffr5c\" (UID: \"65240278-1e42-497b-938b-0eca28db9756\") " pod="openstack/placement-59895c4888-ffr5c" Nov 21 20:27:25 crc kubenswrapper[4727]: I1121 20:27:25.065848 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65240278-1e42-497b-938b-0eca28db9756-scripts\") pod \"placement-59895c4888-ffr5c\" (UID: 
\"65240278-1e42-497b-938b-0eca28db9756\") " pod="openstack/placement-59895c4888-ffr5c" Nov 21 20:27:25 crc kubenswrapper[4727]: I1121 20:27:25.067746 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65240278-1e42-497b-938b-0eca28db9756-combined-ca-bundle\") pod \"placement-59895c4888-ffr5c\" (UID: \"65240278-1e42-497b-938b-0eca28db9756\") " pod="openstack/placement-59895c4888-ffr5c" Nov 21 20:27:25 crc kubenswrapper[4727]: I1121 20:27:25.068904 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cfd4\" (UniqueName: \"kubernetes.io/projected/65240278-1e42-497b-938b-0eca28db9756-kube-api-access-8cfd4\") pod \"placement-59895c4888-ffr5c\" (UID: \"65240278-1e42-497b-938b-0eca28db9756\") " pod="openstack/placement-59895c4888-ffr5c" Nov 21 20:27:25 crc kubenswrapper[4727]: I1121 20:27:25.095387 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ac8be1ea-fec5-4c56-9602-9c9cdec8e812-fernet-keys\") pod \"keystone-cc454d9c9-g5hjz\" (UID: \"ac8be1ea-fec5-4c56-9602-9c9cdec8e812\") " pod="openstack/keystone-cc454d9c9-g5hjz" Nov 21 20:27:25 crc kubenswrapper[4727]: I1121 20:27:25.095907 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mngv\" (UniqueName: \"kubernetes.io/projected/ac8be1ea-fec5-4c56-9602-9c9cdec8e812-kube-api-access-5mngv\") pod \"keystone-cc454d9c9-g5hjz\" (UID: \"ac8be1ea-fec5-4c56-9602-9c9cdec8e812\") " pod="openstack/keystone-cc454d9c9-g5hjz" Nov 21 20:27:25 crc kubenswrapper[4727]: I1121 20:27:25.148710 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-59895c4888-ffr5c" Nov 21 20:27:25 crc kubenswrapper[4727]: I1121 20:27:25.247713 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cc454d9c9-g5hjz" Nov 21 20:27:25 crc kubenswrapper[4727]: I1121 20:27:25.545974 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f634df90-2dae-4e5b-b938-38d5d75f9b00" path="/var/lib/kubelet/pods/f634df90-2dae-4e5b-b938-38d5d75f9b00/volumes" Nov 21 20:27:25 crc kubenswrapper[4727]: I1121 20:27:25.623742 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-59895c4888-ffr5c"] Nov 21 20:27:25 crc kubenswrapper[4727]: I1121 20:27:25.938867 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-59895c4888-ffr5c" event={"ID":"65240278-1e42-497b-938b-0eca28db9756","Type":"ContainerStarted","Data":"f8276b5cb9edb0f7f79198950ebd30ca3d6fd2c8d84a9dfb4065a2c8ff356aad"} Nov 21 20:27:25 crc kubenswrapper[4727]: I1121 20:27:25.942836 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-kjphx" event={"ID":"f819a7d6-b6ab-409c-aaf2-e5044d9317d5","Type":"ContainerStarted","Data":"5cf3b56280d8a5d84f025edc25ea7b41c4359d8994cf362cf2a775c79c9a2942"} Nov 21 20:27:25 crc kubenswrapper[4727]: I1121 20:27:25.978251 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-kjphx" podStartSLOduration=5.382383475 podStartE2EDuration="54.978226059s" podCreationTimestamp="2025-11-21 20:26:31 +0000 UTC" firstStartedPulling="2025-11-21 20:26:34.104861848 +0000 UTC m=+1199.291046892" lastFinishedPulling="2025-11-21 20:27:23.700704442 +0000 UTC m=+1248.886889476" observedRunningTime="2025-11-21 20:27:25.966479444 +0000 UTC m=+1251.152664488" watchObservedRunningTime="2025-11-21 20:27:25.978226059 +0000 UTC m=+1251.164411103" Nov 21 20:27:26 crc kubenswrapper[4727]: I1121 20:27:26.067532 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cc454d9c9-g5hjz"] Nov 21 20:27:26 crc kubenswrapper[4727]: W1121 20:27:26.072435 4727 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac8be1ea_fec5_4c56_9602_9c9cdec8e812.slice/crio-0a6a1af6497ca57ab302600650b71a0a6faf95117c12c53fc9f0189e4152442e WatchSource:0}: Error finding container 0a6a1af6497ca57ab302600650b71a0a6faf95117c12c53fc9f0189e4152442e: Status 404 returned error can't find the container with id 0a6a1af6497ca57ab302600650b71a0a6faf95117c12c53fc9f0189e4152442e Nov 21 20:27:26 crc kubenswrapper[4727]: I1121 20:27:26.960915 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-59895c4888-ffr5c" event={"ID":"65240278-1e42-497b-938b-0eca28db9756","Type":"ContainerStarted","Data":"b034d34ef0145657e4b1fee2a65a38f6e3e16020fd4e451a18ffa587949a2fcf"} Nov 21 20:27:26 crc kubenswrapper[4727]: I1121 20:27:26.961301 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-59895c4888-ffr5c" event={"ID":"65240278-1e42-497b-938b-0eca28db9756","Type":"ContainerStarted","Data":"9a79feea8fd88eac1cba33d105c39123a808b262e9182ab2a6af2378503b7574"} Nov 21 20:27:26 crc kubenswrapper[4727]: I1121 20:27:26.961359 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-59895c4888-ffr5c" Nov 21 20:27:26 crc kubenswrapper[4727]: I1121 20:27:26.961380 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-59895c4888-ffr5c" Nov 21 20:27:26 crc kubenswrapper[4727]: I1121 20:27:26.969818 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cc454d9c9-g5hjz" event={"ID":"ac8be1ea-fec5-4c56-9602-9c9cdec8e812","Type":"ContainerStarted","Data":"cf261c7dbf2bf7c709d35515a7b873cdb5dba1a979eaf8c9570056f29fe2cc6b"} Nov 21 20:27:26 crc kubenswrapper[4727]: I1121 20:27:26.969889 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cc454d9c9-g5hjz" 
event={"ID":"ac8be1ea-fec5-4c56-9602-9c9cdec8e812","Type":"ContainerStarted","Data":"0a6a1af6497ca57ab302600650b71a0a6faf95117c12c53fc9f0189e4152442e"} Nov 21 20:27:26 crc kubenswrapper[4727]: I1121 20:27:26.970738 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-cc454d9c9-g5hjz" Nov 21 20:27:26 crc kubenswrapper[4727]: I1121 20:27:26.994931 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-59895c4888-ffr5c" podStartSLOduration=2.994898198 podStartE2EDuration="2.994898198s" podCreationTimestamp="2025-11-21 20:27:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:27:26.990008669 +0000 UTC m=+1252.176193713" watchObservedRunningTime="2025-11-21 20:27:26.994898198 +0000 UTC m=+1252.181083242" Nov 21 20:27:27 crc kubenswrapper[4727]: I1121 20:27:27.024811 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cc454d9c9-g5hjz" podStartSLOduration=3.024783903 podStartE2EDuration="3.024783903s" podCreationTimestamp="2025-11-21 20:27:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:27:27.018637863 +0000 UTC m=+1252.204822907" watchObservedRunningTime="2025-11-21 20:27:27.024783903 +0000 UTC m=+1252.210968947" Nov 21 20:27:27 crc kubenswrapper[4727]: I1121 20:27:27.281464 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 21 20:27:32 crc kubenswrapper[4727]: I1121 20:27:32.069748 4727 generic.go:334] "Generic (PLEG): container finished" podID="3219ae94-1940-49e8-851c-102a14d22e75" containerID="54520a982d3c60ed9d3e47f28be39eea2ecd8ee935c5eea737bd465ccd0236b4" exitCode=0 Nov 21 20:27:32 crc kubenswrapper[4727]: I1121 20:27:32.069837 4727 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/barbican-db-sync-nvhzz" event={"ID":"3219ae94-1940-49e8-851c-102a14d22e75","Type":"ContainerDied","Data":"54520a982d3c60ed9d3e47f28be39eea2ecd8ee935c5eea737bd465ccd0236b4"} Nov 21 20:27:33 crc kubenswrapper[4727]: I1121 20:27:33.092994 4727 generic.go:334] "Generic (PLEG): container finished" podID="f819a7d6-b6ab-409c-aaf2-e5044d9317d5" containerID="5cf3b56280d8a5d84f025edc25ea7b41c4359d8994cf362cf2a775c79c9a2942" exitCode=0 Nov 21 20:27:33 crc kubenswrapper[4727]: I1121 20:27:33.093076 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-kjphx" event={"ID":"f819a7d6-b6ab-409c-aaf2-e5044d9317d5","Type":"ContainerDied","Data":"5cf3b56280d8a5d84f025edc25ea7b41c4359d8994cf362cf2a775c79c9a2942"} Nov 21 20:27:33 crc kubenswrapper[4727]: I1121 20:27:33.095245 4727 generic.go:334] "Generic (PLEG): container finished" podID="ba5291ec-9ad1-4ce3-8794-6b6ca611b277" containerID="f27e6c54022e519d0083b15f3850f40204973d1d4c1fb611daeda5ce2ba58dc6" exitCode=0 Nov 21 20:27:33 crc kubenswrapper[4727]: I1121 20:27:33.095327 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-kzpzx" event={"ID":"ba5291ec-9ad1-4ce3-8794-6b6ca611b277","Type":"ContainerDied","Data":"f27e6c54022e519d0083b15f3850f40204973d1d4c1fb611daeda5ce2ba58dc6"} Nov 21 20:27:33 crc kubenswrapper[4727]: I1121 20:27:33.823561 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-nvhzz" Nov 21 20:27:33 crc kubenswrapper[4727]: E1121 20:27:33.930280 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="ab55a565-2af1-48bb-a31e-d0a8c738912c" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.026252 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3219ae94-1940-49e8-851c-102a14d22e75-db-sync-config-data\") pod \"3219ae94-1940-49e8-851c-102a14d22e75\" (UID: \"3219ae94-1940-49e8-851c-102a14d22e75\") " Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.026666 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rm6kr\" (UniqueName: \"kubernetes.io/projected/3219ae94-1940-49e8-851c-102a14d22e75-kube-api-access-rm6kr\") pod \"3219ae94-1940-49e8-851c-102a14d22e75\" (UID: \"3219ae94-1940-49e8-851c-102a14d22e75\") " Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.027217 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3219ae94-1940-49e8-851c-102a14d22e75-combined-ca-bundle\") pod \"3219ae94-1940-49e8-851c-102a14d22e75\" (UID: \"3219ae94-1940-49e8-851c-102a14d22e75\") " Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.030301 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3219ae94-1940-49e8-851c-102a14d22e75-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "3219ae94-1940-49e8-851c-102a14d22e75" (UID: "3219ae94-1940-49e8-851c-102a14d22e75"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.030896 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3219ae94-1940-49e8-851c-102a14d22e75-kube-api-access-rm6kr" (OuterVolumeSpecName: "kube-api-access-rm6kr") pod "3219ae94-1940-49e8-851c-102a14d22e75" (UID: "3219ae94-1940-49e8-851c-102a14d22e75"). InnerVolumeSpecName "kube-api-access-rm6kr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.052967 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3219ae94-1940-49e8-851c-102a14d22e75-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3219ae94-1940-49e8-851c-102a14d22e75" (UID: "3219ae94-1940-49e8-851c-102a14d22e75"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.108291 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-nvhzz" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.111093 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-nvhzz" event={"ID":"3219ae94-1940-49e8-851c-102a14d22e75","Type":"ContainerDied","Data":"659dac461540d044f7980bcebbdaeed98b16d08459d892258120c2063d40102c"} Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.111149 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="659dac461540d044f7980bcebbdaeed98b16d08459d892258120c2063d40102c" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.113900 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab55a565-2af1-48bb-a31e-d0a8c738912c","Type":"ContainerStarted","Data":"260ab728f2a4aeafed812935744ae400da8ab9e340d4b06c6f957d022541b9cc"} Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.114200 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ab55a565-2af1-48bb-a31e-d0a8c738912c" containerName="ceilometer-notification-agent" containerID="cri-o://484eed23344aaf217be1c9fcd736765c9259cbacd696b8cdc3e0ce520f3d6db2" gracePeriod=30 Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.114241 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ab55a565-2af1-48bb-a31e-d0a8c738912c" containerName="sg-core" containerID="cri-o://3d968bc03dd61485ba8e3844312dbc1b38cdc9d8aa3444b3a7d1a559b854f939" gracePeriod=30 Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.114263 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ab55a565-2af1-48bb-a31e-d0a8c738912c" containerName="proxy-httpd" containerID="cri-o://260ab728f2a4aeafed812935744ae400da8ab9e340d4b06c6f957d022541b9cc" gracePeriod=30 Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.131315 4727 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rm6kr\" (UniqueName: \"kubernetes.io/projected/3219ae94-1940-49e8-851c-102a14d22e75-kube-api-access-rm6kr\") on node \"crc\" DevicePath \"\"" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.131350 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3219ae94-1940-49e8-851c-102a14d22e75-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.131360 4727 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3219ae94-1940-49e8-851c-102a14d22e75-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.301508 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-76cd59449d-bqxf9"] Nov 21 20:27:34 crc kubenswrapper[4727]: E1121 20:27:34.302207 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3219ae94-1940-49e8-851c-102a14d22e75" containerName="barbican-db-sync" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.302222 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="3219ae94-1940-49e8-851c-102a14d22e75" containerName="barbican-db-sync" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.302505 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="3219ae94-1940-49e8-851c-102a14d22e75" containerName="barbican-db-sync" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.303649 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-76cd59449d-bqxf9" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.313365 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-q6dpd" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.313562 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.313572 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.323662 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-556d8f578c-rnhrx"] Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.325554 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-556d8f578c-rnhrx" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.331324 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.336181 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76a929c8-c9a2-480f-b413-9df259d38d39-logs\") pod \"barbican-worker-556d8f578c-rnhrx\" (UID: \"76a929c8-c9a2-480f-b413-9df259d38d39\") " pod="openstack/barbican-worker-556d8f578c-rnhrx" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.336229 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krcw2\" (UniqueName: \"kubernetes.io/projected/76a929c8-c9a2-480f-b413-9df259d38d39-kube-api-access-krcw2\") pod \"barbican-worker-556d8f578c-rnhrx\" (UID: \"76a929c8-c9a2-480f-b413-9df259d38d39\") " pod="openstack/barbican-worker-556d8f578c-rnhrx" Nov 21 20:27:34 crc 
kubenswrapper[4727]: I1121 20:27:34.336251 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c0776376-5bcf-42fb-95fa-537a1b9764e2-config-data-custom\") pod \"barbican-keystone-listener-76cd59449d-bqxf9\" (UID: \"c0776376-5bcf-42fb-95fa-537a1b9764e2\") " pod="openstack/barbican-keystone-listener-76cd59449d-bqxf9" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.336272 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/76a929c8-c9a2-480f-b413-9df259d38d39-config-data-custom\") pod \"barbican-worker-556d8f578c-rnhrx\" (UID: \"76a929c8-c9a2-480f-b413-9df259d38d39\") " pod="openstack/barbican-worker-556d8f578c-rnhrx" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.336310 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0776376-5bcf-42fb-95fa-537a1b9764e2-logs\") pod \"barbican-keystone-listener-76cd59449d-bqxf9\" (UID: \"c0776376-5bcf-42fb-95fa-537a1b9764e2\") " pod="openstack/barbican-keystone-listener-76cd59449d-bqxf9" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.336339 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mwcw\" (UniqueName: \"kubernetes.io/projected/c0776376-5bcf-42fb-95fa-537a1b9764e2-kube-api-access-4mwcw\") pod \"barbican-keystone-listener-76cd59449d-bqxf9\" (UID: \"c0776376-5bcf-42fb-95fa-537a1b9764e2\") " pod="openstack/barbican-keystone-listener-76cd59449d-bqxf9" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.336354 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76a929c8-c9a2-480f-b413-9df259d38d39-combined-ca-bundle\") pod 
\"barbican-worker-556d8f578c-rnhrx\" (UID: \"76a929c8-c9a2-480f-b413-9df259d38d39\") " pod="openstack/barbican-worker-556d8f578c-rnhrx" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.336393 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0776376-5bcf-42fb-95fa-537a1b9764e2-config-data\") pod \"barbican-keystone-listener-76cd59449d-bqxf9\" (UID: \"c0776376-5bcf-42fb-95fa-537a1b9764e2\") " pod="openstack/barbican-keystone-listener-76cd59449d-bqxf9" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.336409 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0776376-5bcf-42fb-95fa-537a1b9764e2-combined-ca-bundle\") pod \"barbican-keystone-listener-76cd59449d-bqxf9\" (UID: \"c0776376-5bcf-42fb-95fa-537a1b9764e2\") " pod="openstack/barbican-keystone-listener-76cd59449d-bqxf9" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.336430 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76a929c8-c9a2-480f-b413-9df259d38d39-config-data\") pod \"barbican-worker-556d8f578c-rnhrx\" (UID: \"76a929c8-c9a2-480f-b413-9df259d38d39\") " pod="openstack/barbican-worker-556d8f578c-rnhrx" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.341606 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-556d8f578c-rnhrx"] Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.358515 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-76cd59449d-bqxf9"] Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.390403 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-55b7c"] Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.392525 4727 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-55b7c" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.398821 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-55b7c"] Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.437905 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mwcw\" (UniqueName: \"kubernetes.io/projected/c0776376-5bcf-42fb-95fa-537a1b9764e2-kube-api-access-4mwcw\") pod \"barbican-keystone-listener-76cd59449d-bqxf9\" (UID: \"c0776376-5bcf-42fb-95fa-537a1b9764e2\") " pod="openstack/barbican-keystone-listener-76cd59449d-bqxf9" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.438137 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76a929c8-c9a2-480f-b413-9df259d38d39-combined-ca-bundle\") pod \"barbican-worker-556d8f578c-rnhrx\" (UID: \"76a929c8-c9a2-480f-b413-9df259d38d39\") " pod="openstack/barbican-worker-556d8f578c-rnhrx" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.438226 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/936596ec-3e5c-41ac-b454-5ca072843e7a-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-55b7c\" (UID: \"936596ec-3e5c-41ac-b454-5ca072843e7a\") " pod="openstack/dnsmasq-dns-848cf88cfc-55b7c" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.438332 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0776376-5bcf-42fb-95fa-537a1b9764e2-config-data\") pod \"barbican-keystone-listener-76cd59449d-bqxf9\" (UID: \"c0776376-5bcf-42fb-95fa-537a1b9764e2\") " pod="openstack/barbican-keystone-listener-76cd59449d-bqxf9" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.438432 
4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0776376-5bcf-42fb-95fa-537a1b9764e2-combined-ca-bundle\") pod \"barbican-keystone-listener-76cd59449d-bqxf9\" (UID: \"c0776376-5bcf-42fb-95fa-537a1b9764e2\") " pod="openstack/barbican-keystone-listener-76cd59449d-bqxf9" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.438517 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76a929c8-c9a2-480f-b413-9df259d38d39-config-data\") pod \"barbican-worker-556d8f578c-rnhrx\" (UID: \"76a929c8-c9a2-480f-b413-9df259d38d39\") " pod="openstack/barbican-worker-556d8f578c-rnhrx" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.438670 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/936596ec-3e5c-41ac-b454-5ca072843e7a-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-55b7c\" (UID: \"936596ec-3e5c-41ac-b454-5ca072843e7a\") " pod="openstack/dnsmasq-dns-848cf88cfc-55b7c" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.438755 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/936596ec-3e5c-41ac-b454-5ca072843e7a-config\") pod \"dnsmasq-dns-848cf88cfc-55b7c\" (UID: \"936596ec-3e5c-41ac-b454-5ca072843e7a\") " pod="openstack/dnsmasq-dns-848cf88cfc-55b7c" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.438862 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/936596ec-3e5c-41ac-b454-5ca072843e7a-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-55b7c\" (UID: \"936596ec-3e5c-41ac-b454-5ca072843e7a\") " pod="openstack/dnsmasq-dns-848cf88cfc-55b7c" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 
20:27:34.438947 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w87hq\" (UniqueName: \"kubernetes.io/projected/936596ec-3e5c-41ac-b454-5ca072843e7a-kube-api-access-w87hq\") pod \"dnsmasq-dns-848cf88cfc-55b7c\" (UID: \"936596ec-3e5c-41ac-b454-5ca072843e7a\") " pod="openstack/dnsmasq-dns-848cf88cfc-55b7c" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.439048 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76a929c8-c9a2-480f-b413-9df259d38d39-logs\") pod \"barbican-worker-556d8f578c-rnhrx\" (UID: \"76a929c8-c9a2-480f-b413-9df259d38d39\") " pod="openstack/barbican-worker-556d8f578c-rnhrx" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.439151 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krcw2\" (UniqueName: \"kubernetes.io/projected/76a929c8-c9a2-480f-b413-9df259d38d39-kube-api-access-krcw2\") pod \"barbican-worker-556d8f578c-rnhrx\" (UID: \"76a929c8-c9a2-480f-b413-9df259d38d39\") " pod="openstack/barbican-worker-556d8f578c-rnhrx" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.439263 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c0776376-5bcf-42fb-95fa-537a1b9764e2-config-data-custom\") pod \"barbican-keystone-listener-76cd59449d-bqxf9\" (UID: \"c0776376-5bcf-42fb-95fa-537a1b9764e2\") " pod="openstack/barbican-keystone-listener-76cd59449d-bqxf9" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.439346 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/76a929c8-c9a2-480f-b413-9df259d38d39-config-data-custom\") pod \"barbican-worker-556d8f578c-rnhrx\" (UID: \"76a929c8-c9a2-480f-b413-9df259d38d39\") " pod="openstack/barbican-worker-556d8f578c-rnhrx" Nov 21 
20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.439480 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/936596ec-3e5c-41ac-b454-5ca072843e7a-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-55b7c\" (UID: \"936596ec-3e5c-41ac-b454-5ca072843e7a\") " pod="openstack/dnsmasq-dns-848cf88cfc-55b7c" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.439564 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0776376-5bcf-42fb-95fa-537a1b9764e2-logs\") pod \"barbican-keystone-listener-76cd59449d-bqxf9\" (UID: \"c0776376-5bcf-42fb-95fa-537a1b9764e2\") " pod="openstack/barbican-keystone-listener-76cd59449d-bqxf9" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.442401 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76a929c8-c9a2-480f-b413-9df259d38d39-logs\") pod \"barbican-worker-556d8f578c-rnhrx\" (UID: \"76a929c8-c9a2-480f-b413-9df259d38d39\") " pod="openstack/barbican-worker-556d8f578c-rnhrx" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.444630 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76a929c8-c9a2-480f-b413-9df259d38d39-config-data\") pod \"barbican-worker-556d8f578c-rnhrx\" (UID: \"76a929c8-c9a2-480f-b413-9df259d38d39\") " pod="openstack/barbican-worker-556d8f578c-rnhrx" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.446840 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0776376-5bcf-42fb-95fa-537a1b9764e2-logs\") pod \"barbican-keystone-listener-76cd59449d-bqxf9\" (UID: \"c0776376-5bcf-42fb-95fa-537a1b9764e2\") " pod="openstack/barbican-keystone-listener-76cd59449d-bqxf9" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 
20:27:34.449134 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0776376-5bcf-42fb-95fa-537a1b9764e2-combined-ca-bundle\") pod \"barbican-keystone-listener-76cd59449d-bqxf9\" (UID: \"c0776376-5bcf-42fb-95fa-537a1b9764e2\") " pod="openstack/barbican-keystone-listener-76cd59449d-bqxf9" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.449139 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c0776376-5bcf-42fb-95fa-537a1b9764e2-config-data-custom\") pod \"barbican-keystone-listener-76cd59449d-bqxf9\" (UID: \"c0776376-5bcf-42fb-95fa-537a1b9764e2\") " pod="openstack/barbican-keystone-listener-76cd59449d-bqxf9" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.450735 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/76a929c8-c9a2-480f-b413-9df259d38d39-config-data-custom\") pod \"barbican-worker-556d8f578c-rnhrx\" (UID: \"76a929c8-c9a2-480f-b413-9df259d38d39\") " pod="openstack/barbican-worker-556d8f578c-rnhrx" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.460647 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76a929c8-c9a2-480f-b413-9df259d38d39-combined-ca-bundle\") pod \"barbican-worker-556d8f578c-rnhrx\" (UID: \"76a929c8-c9a2-480f-b413-9df259d38d39\") " pod="openstack/barbican-worker-556d8f578c-rnhrx" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.461502 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mwcw\" (UniqueName: \"kubernetes.io/projected/c0776376-5bcf-42fb-95fa-537a1b9764e2-kube-api-access-4mwcw\") pod \"barbican-keystone-listener-76cd59449d-bqxf9\" (UID: \"c0776376-5bcf-42fb-95fa-537a1b9764e2\") " pod="openstack/barbican-keystone-listener-76cd59449d-bqxf9" 
Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.472060 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0776376-5bcf-42fb-95fa-537a1b9764e2-config-data\") pod \"barbican-keystone-listener-76cd59449d-bqxf9\" (UID: \"c0776376-5bcf-42fb-95fa-537a1b9764e2\") " pod="openstack/barbican-keystone-listener-76cd59449d-bqxf9" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.476828 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krcw2\" (UniqueName: \"kubernetes.io/projected/76a929c8-c9a2-480f-b413-9df259d38d39-kube-api-access-krcw2\") pod \"barbican-worker-556d8f578c-rnhrx\" (UID: \"76a929c8-c9a2-480f-b413-9df259d38d39\") " pod="openstack/barbican-worker-556d8f578c-rnhrx" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.498642 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-59c5b86cb6-fqnd7"] Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.500776 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-59c5b86cb6-fqnd7" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.510880 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.511580 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-59c5b86cb6-fqnd7"] Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.541270 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/936596ec-3e5c-41ac-b454-5ca072843e7a-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-55b7c\" (UID: \"936596ec-3e5c-41ac-b454-5ca072843e7a\") " pod="openstack/dnsmasq-dns-848cf88cfc-55b7c" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.541326 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a9717694-2709-4e01-a03c-ffd0a9ea5c42-config-data-custom\") pod \"barbican-api-59c5b86cb6-fqnd7\" (UID: \"a9717694-2709-4e01-a03c-ffd0a9ea5c42\") " pod="openstack/barbican-api-59c5b86cb6-fqnd7" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.541368 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w87hq\" (UniqueName: \"kubernetes.io/projected/936596ec-3e5c-41ac-b454-5ca072843e7a-kube-api-access-w87hq\") pod \"dnsmasq-dns-848cf88cfc-55b7c\" (UID: \"936596ec-3e5c-41ac-b454-5ca072843e7a\") " pod="openstack/dnsmasq-dns-848cf88cfc-55b7c" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.541389 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9717694-2709-4e01-a03c-ffd0a9ea5c42-combined-ca-bundle\") pod \"barbican-api-59c5b86cb6-fqnd7\" (UID: \"a9717694-2709-4e01-a03c-ffd0a9ea5c42\") " 
pod="openstack/barbican-api-59c5b86cb6-fqnd7" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.541496 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/936596ec-3e5c-41ac-b454-5ca072843e7a-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-55b7c\" (UID: \"936596ec-3e5c-41ac-b454-5ca072843e7a\") " pod="openstack/dnsmasq-dns-848cf88cfc-55b7c" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.541549 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/936596ec-3e5c-41ac-b454-5ca072843e7a-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-55b7c\" (UID: \"936596ec-3e5c-41ac-b454-5ca072843e7a\") " pod="openstack/dnsmasq-dns-848cf88cfc-55b7c" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.541571 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9717694-2709-4e01-a03c-ffd0a9ea5c42-config-data\") pod \"barbican-api-59c5b86cb6-fqnd7\" (UID: \"a9717694-2709-4e01-a03c-ffd0a9ea5c42\") " pod="openstack/barbican-api-59c5b86cb6-fqnd7" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.541634 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpw9l\" (UniqueName: \"kubernetes.io/projected/a9717694-2709-4e01-a03c-ffd0a9ea5c42-kube-api-access-mpw9l\") pod \"barbican-api-59c5b86cb6-fqnd7\" (UID: \"a9717694-2709-4e01-a03c-ffd0a9ea5c42\") " pod="openstack/barbican-api-59c5b86cb6-fqnd7" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.541683 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9717694-2709-4e01-a03c-ffd0a9ea5c42-logs\") pod \"barbican-api-59c5b86cb6-fqnd7\" (UID: 
\"a9717694-2709-4e01-a03c-ffd0a9ea5c42\") " pod="openstack/barbican-api-59c5b86cb6-fqnd7" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.541758 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/936596ec-3e5c-41ac-b454-5ca072843e7a-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-55b7c\" (UID: \"936596ec-3e5c-41ac-b454-5ca072843e7a\") " pod="openstack/dnsmasq-dns-848cf88cfc-55b7c" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.541800 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/936596ec-3e5c-41ac-b454-5ca072843e7a-config\") pod \"dnsmasq-dns-848cf88cfc-55b7c\" (UID: \"936596ec-3e5c-41ac-b454-5ca072843e7a\") " pod="openstack/dnsmasq-dns-848cf88cfc-55b7c" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.542782 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/936596ec-3e5c-41ac-b454-5ca072843e7a-config\") pod \"dnsmasq-dns-848cf88cfc-55b7c\" (UID: \"936596ec-3e5c-41ac-b454-5ca072843e7a\") " pod="openstack/dnsmasq-dns-848cf88cfc-55b7c" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.543407 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/936596ec-3e5c-41ac-b454-5ca072843e7a-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-55b7c\" (UID: \"936596ec-3e5c-41ac-b454-5ca072843e7a\") " pod="openstack/dnsmasq-dns-848cf88cfc-55b7c" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.544720 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/936596ec-3e5c-41ac-b454-5ca072843e7a-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-55b7c\" (UID: \"936596ec-3e5c-41ac-b454-5ca072843e7a\") " pod="openstack/dnsmasq-dns-848cf88cfc-55b7c" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 
20:27:34.545122 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/936596ec-3e5c-41ac-b454-5ca072843e7a-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-55b7c\" (UID: \"936596ec-3e5c-41ac-b454-5ca072843e7a\") " pod="openstack/dnsmasq-dns-848cf88cfc-55b7c" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.547258 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/936596ec-3e5c-41ac-b454-5ca072843e7a-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-55b7c\" (UID: \"936596ec-3e5c-41ac-b454-5ca072843e7a\") " pod="openstack/dnsmasq-dns-848cf88cfc-55b7c" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.580745 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w87hq\" (UniqueName: \"kubernetes.io/projected/936596ec-3e5c-41ac-b454-5ca072843e7a-kube-api-access-w87hq\") pod \"dnsmasq-dns-848cf88cfc-55b7c\" (UID: \"936596ec-3e5c-41ac-b454-5ca072843e7a\") " pod="openstack/dnsmasq-dns-848cf88cfc-55b7c" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.646554 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9717694-2709-4e01-a03c-ffd0a9ea5c42-config-data\") pod \"barbican-api-59c5b86cb6-fqnd7\" (UID: \"a9717694-2709-4e01-a03c-ffd0a9ea5c42\") " pod="openstack/barbican-api-59c5b86cb6-fqnd7" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.646625 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpw9l\" (UniqueName: \"kubernetes.io/projected/a9717694-2709-4e01-a03c-ffd0a9ea5c42-kube-api-access-mpw9l\") pod \"barbican-api-59c5b86cb6-fqnd7\" (UID: \"a9717694-2709-4e01-a03c-ffd0a9ea5c42\") " pod="openstack/barbican-api-59c5b86cb6-fqnd7" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.646657 4727 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9717694-2709-4e01-a03c-ffd0a9ea5c42-logs\") pod \"barbican-api-59c5b86cb6-fqnd7\" (UID: \"a9717694-2709-4e01-a03c-ffd0a9ea5c42\") " pod="openstack/barbican-api-59c5b86cb6-fqnd7" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.646719 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a9717694-2709-4e01-a03c-ffd0a9ea5c42-config-data-custom\") pod \"barbican-api-59c5b86cb6-fqnd7\" (UID: \"a9717694-2709-4e01-a03c-ffd0a9ea5c42\") " pod="openstack/barbican-api-59c5b86cb6-fqnd7" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.646752 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9717694-2709-4e01-a03c-ffd0a9ea5c42-combined-ca-bundle\") pod \"barbican-api-59c5b86cb6-fqnd7\" (UID: \"a9717694-2709-4e01-a03c-ffd0a9ea5c42\") " pod="openstack/barbican-api-59c5b86cb6-fqnd7" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.647383 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9717694-2709-4e01-a03c-ffd0a9ea5c42-logs\") pod \"barbican-api-59c5b86cb6-fqnd7\" (UID: \"a9717694-2709-4e01-a03c-ffd0a9ea5c42\") " pod="openstack/barbican-api-59c5b86cb6-fqnd7" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.650266 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-76cd59449d-bqxf9" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.658010 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9717694-2709-4e01-a03c-ffd0a9ea5c42-config-data\") pod \"barbican-api-59c5b86cb6-fqnd7\" (UID: \"a9717694-2709-4e01-a03c-ffd0a9ea5c42\") " pod="openstack/barbican-api-59c5b86cb6-fqnd7" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.660786 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-556d8f578c-rnhrx" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.662988 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a9717694-2709-4e01-a03c-ffd0a9ea5c42-config-data-custom\") pod \"barbican-api-59c5b86cb6-fqnd7\" (UID: \"a9717694-2709-4e01-a03c-ffd0a9ea5c42\") " pod="openstack/barbican-api-59c5b86cb6-fqnd7" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.669566 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9717694-2709-4e01-a03c-ffd0a9ea5c42-combined-ca-bundle\") pod \"barbican-api-59c5b86cb6-fqnd7\" (UID: \"a9717694-2709-4e01-a03c-ffd0a9ea5c42\") " pod="openstack/barbican-api-59c5b86cb6-fqnd7" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.714538 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-55b7c" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.747269 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpw9l\" (UniqueName: \"kubernetes.io/projected/a9717694-2709-4e01-a03c-ffd0a9ea5c42-kube-api-access-mpw9l\") pod \"barbican-api-59c5b86cb6-fqnd7\" (UID: \"a9717694-2709-4e01-a03c-ffd0a9ea5c42\") " pod="openstack/barbican-api-59c5b86cb6-fqnd7" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.827167 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-59c5b86cb6-fqnd7" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.908470 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-kjphx" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.960723 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f819a7d6-b6ab-409c-aaf2-e5044d9317d5-db-sync-config-data\") pod \"f819a7d6-b6ab-409c-aaf2-e5044d9317d5\" (UID: \"f819a7d6-b6ab-409c-aaf2-e5044d9317d5\") " Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.961199 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f819a7d6-b6ab-409c-aaf2-e5044d9317d5-config-data\") pod \"f819a7d6-b6ab-409c-aaf2-e5044d9317d5\" (UID: \"f819a7d6-b6ab-409c-aaf2-e5044d9317d5\") " Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.961344 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f819a7d6-b6ab-409c-aaf2-e5044d9317d5-combined-ca-bundle\") pod \"f819a7d6-b6ab-409c-aaf2-e5044d9317d5\" (UID: \"f819a7d6-b6ab-409c-aaf2-e5044d9317d5\") " Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.961385 4727 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-fbnkl\" (UniqueName: \"kubernetes.io/projected/f819a7d6-b6ab-409c-aaf2-e5044d9317d5-kube-api-access-fbnkl\") pod \"f819a7d6-b6ab-409c-aaf2-e5044d9317d5\" (UID: \"f819a7d6-b6ab-409c-aaf2-e5044d9317d5\") " Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.961460 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f819a7d6-b6ab-409c-aaf2-e5044d9317d5-scripts\") pod \"f819a7d6-b6ab-409c-aaf2-e5044d9317d5\" (UID: \"f819a7d6-b6ab-409c-aaf2-e5044d9317d5\") " Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.961546 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f819a7d6-b6ab-409c-aaf2-e5044d9317d5-etc-machine-id\") pod \"f819a7d6-b6ab-409c-aaf2-e5044d9317d5\" (UID: \"f819a7d6-b6ab-409c-aaf2-e5044d9317d5\") " Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.962189 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f819a7d6-b6ab-409c-aaf2-e5044d9317d5-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "f819a7d6-b6ab-409c-aaf2-e5044d9317d5" (UID: "f819a7d6-b6ab-409c-aaf2-e5044d9317d5"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.968603 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f819a7d6-b6ab-409c-aaf2-e5044d9317d5-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "f819a7d6-b6ab-409c-aaf2-e5044d9317d5" (UID: "f819a7d6-b6ab-409c-aaf2-e5044d9317d5"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.974790 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f819a7d6-b6ab-409c-aaf2-e5044d9317d5-kube-api-access-fbnkl" (OuterVolumeSpecName: "kube-api-access-fbnkl") pod "f819a7d6-b6ab-409c-aaf2-e5044d9317d5" (UID: "f819a7d6-b6ab-409c-aaf2-e5044d9317d5"). InnerVolumeSpecName "kube-api-access-fbnkl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:27:34 crc kubenswrapper[4727]: I1121 20:27:34.975637 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f819a7d6-b6ab-409c-aaf2-e5044d9317d5-scripts" (OuterVolumeSpecName: "scripts") pod "f819a7d6-b6ab-409c-aaf2-e5044d9317d5" (UID: "f819a7d6-b6ab-409c-aaf2-e5044d9317d5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.020472 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f819a7d6-b6ab-409c-aaf2-e5044d9317d5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f819a7d6-b6ab-409c-aaf2-e5044d9317d5" (UID: "f819a7d6-b6ab-409c-aaf2-e5044d9317d5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.040116 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-kzpzx" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.062864 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba5291ec-9ad1-4ce3-8794-6b6ca611b277-combined-ca-bundle\") pod \"ba5291ec-9ad1-4ce3-8794-6b6ca611b277\" (UID: \"ba5291ec-9ad1-4ce3-8794-6b6ca611b277\") " Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.063012 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba5291ec-9ad1-4ce3-8794-6b6ca611b277-config-data\") pod \"ba5291ec-9ad1-4ce3-8794-6b6ca611b277\" (UID: \"ba5291ec-9ad1-4ce3-8794-6b6ca611b277\") " Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.063065 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-254k6\" (UniqueName: \"kubernetes.io/projected/ba5291ec-9ad1-4ce3-8794-6b6ca611b277-kube-api-access-254k6\") pod \"ba5291ec-9ad1-4ce3-8794-6b6ca611b277\" (UID: \"ba5291ec-9ad1-4ce3-8794-6b6ca611b277\") " Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.064208 4727 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f819a7d6-b6ab-409c-aaf2-e5044d9317d5-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.064222 4727 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f819a7d6-b6ab-409c-aaf2-e5044d9317d5-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.064241 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f819a7d6-b6ab-409c-aaf2-e5044d9317d5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 
20:27:35.064250 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbnkl\" (UniqueName: \"kubernetes.io/projected/f819a7d6-b6ab-409c-aaf2-e5044d9317d5-kube-api-access-fbnkl\") on node \"crc\" DevicePath \"\"" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.064981 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f819a7d6-b6ab-409c-aaf2-e5044d9317d5-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.068199 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba5291ec-9ad1-4ce3-8794-6b6ca611b277-kube-api-access-254k6" (OuterVolumeSpecName: "kube-api-access-254k6") pod "ba5291ec-9ad1-4ce3-8794-6b6ca611b277" (UID: "ba5291ec-9ad1-4ce3-8794-6b6ca611b277"). InnerVolumeSpecName "kube-api-access-254k6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.081180 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f819a7d6-b6ab-409c-aaf2-e5044d9317d5-config-data" (OuterVolumeSpecName: "config-data") pod "f819a7d6-b6ab-409c-aaf2-e5044d9317d5" (UID: "f819a7d6-b6ab-409c-aaf2-e5044d9317d5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.138081 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-kjphx" event={"ID":"f819a7d6-b6ab-409c-aaf2-e5044d9317d5","Type":"ContainerDied","Data":"97234f41f23a6fdb1313d8ec481de06897693d092964aa934dfbfc9e81d148e9"} Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.138117 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97234f41f23a6fdb1313d8ec481de06897693d092964aa934dfbfc9e81d148e9" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.138129 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-kjphx" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.140290 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba5291ec-9ad1-4ce3-8794-6b6ca611b277-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ba5291ec-9ad1-4ce3-8794-6b6ca611b277" (UID: "ba5291ec-9ad1-4ce3-8794-6b6ca611b277"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.163565 4727 generic.go:334] "Generic (PLEG): container finished" podID="ab55a565-2af1-48bb-a31e-d0a8c738912c" containerID="260ab728f2a4aeafed812935744ae400da8ab9e340d4b06c6f957d022541b9cc" exitCode=0 Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.163600 4727 generic.go:334] "Generic (PLEG): container finished" podID="ab55a565-2af1-48bb-a31e-d0a8c738912c" containerID="3d968bc03dd61485ba8e3844312dbc1b38cdc9d8aa3444b3a7d1a559b854f939" exitCode=2 Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.163664 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab55a565-2af1-48bb-a31e-d0a8c738912c","Type":"ContainerDied","Data":"260ab728f2a4aeafed812935744ae400da8ab9e340d4b06c6f957d022541b9cc"} Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.163688 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab55a565-2af1-48bb-a31e-d0a8c738912c","Type":"ContainerDied","Data":"3d968bc03dd61485ba8e3844312dbc1b38cdc9d8aa3444b3a7d1a559b854f939"} Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.167890 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba5291ec-9ad1-4ce3-8794-6b6ca611b277-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.167919 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f819a7d6-b6ab-409c-aaf2-e5044d9317d5-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.167929 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-254k6\" (UniqueName: \"kubernetes.io/projected/ba5291ec-9ad1-4ce3-8794-6b6ca611b277-kube-api-access-254k6\") on node \"crc\" DevicePath \"\"" Nov 21 20:27:35 crc 
kubenswrapper[4727]: I1121 20:27:35.171528 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-kzpzx" event={"ID":"ba5291ec-9ad1-4ce3-8794-6b6ca611b277","Type":"ContainerDied","Data":"e231612893f8605e49ff827b4acadcf8e12579fb53f2ed061545a19c226c6a53"} Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.171557 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e231612893f8605e49ff827b4acadcf8e12579fb53f2ed061545a19c226c6a53" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.171606 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-kzpzx" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.222310 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba5291ec-9ad1-4ce3-8794-6b6ca611b277-config-data" (OuterVolumeSpecName: "config-data") pod "ba5291ec-9ad1-4ce3-8794-6b6ca611b277" (UID: "ba5291ec-9ad1-4ce3-8794-6b6ca611b277"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.269922 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba5291ec-9ad1-4ce3-8794-6b6ca611b277-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.455291 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 21 20:27:35 crc kubenswrapper[4727]: E1121 20:27:35.456184 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba5291ec-9ad1-4ce3-8794-6b6ca611b277" containerName="heat-db-sync" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.456204 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba5291ec-9ad1-4ce3-8794-6b6ca611b277" containerName="heat-db-sync" Nov 21 20:27:35 crc kubenswrapper[4727]: E1121 20:27:35.456241 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f819a7d6-b6ab-409c-aaf2-e5044d9317d5" containerName="cinder-db-sync" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.456249 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="f819a7d6-b6ab-409c-aaf2-e5044d9317d5" containerName="cinder-db-sync" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.456545 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="f819a7d6-b6ab-409c-aaf2-e5044d9317d5" containerName="cinder-db-sync" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.456562 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba5291ec-9ad1-4ce3-8794-6b6ca611b277" containerName="heat-db-sync" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.458199 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.464743 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-tvpnn" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.464948 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.465002 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.489430 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.560202 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98fd7eeb-6d98-4398-af52-f369b88f6d37-scripts\") pod \"cinder-scheduler-0\" (UID: \"98fd7eeb-6d98-4398-af52-f369b88f6d37\") " pod="openstack/cinder-scheduler-0" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.560354 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/98fd7eeb-6d98-4398-af52-f369b88f6d37-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"98fd7eeb-6d98-4398-af52-f369b88f6d37\") " pod="openstack/cinder-scheduler-0" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.560391 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98fd7eeb-6d98-4398-af52-f369b88f6d37-config-data\") pod \"cinder-scheduler-0\" (UID: \"98fd7eeb-6d98-4398-af52-f369b88f6d37\") " pod="openstack/cinder-scheduler-0" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.560463 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/98fd7eeb-6d98-4398-af52-f369b88f6d37-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"98fd7eeb-6d98-4398-af52-f369b88f6d37\") " pod="openstack/cinder-scheduler-0" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.560541 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb68f\" (UniqueName: \"kubernetes.io/projected/98fd7eeb-6d98-4398-af52-f369b88f6d37-kube-api-access-qb68f\") pod \"cinder-scheduler-0\" (UID: \"98fd7eeb-6d98-4398-af52-f369b88f6d37\") " pod="openstack/cinder-scheduler-0" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.560621 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98fd7eeb-6d98-4398-af52-f369b88f6d37-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"98fd7eeb-6d98-4398-af52-f369b88f6d37\") " pod="openstack/cinder-scheduler-0" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.628859 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-76cd59449d-bqxf9"] Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.628896 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.636562 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-55b7c"] Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.664177 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-49tk2"] Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.666233 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-49tk2" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.668217 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98fd7eeb-6d98-4398-af52-f369b88f6d37-scripts\") pod \"cinder-scheduler-0\" (UID: \"98fd7eeb-6d98-4398-af52-f369b88f6d37\") " pod="openstack/cinder-scheduler-0" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.668278 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/98fd7eeb-6d98-4398-af52-f369b88f6d37-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"98fd7eeb-6d98-4398-af52-f369b88f6d37\") " pod="openstack/cinder-scheduler-0" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.668300 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98fd7eeb-6d98-4398-af52-f369b88f6d37-config-data\") pod \"cinder-scheduler-0\" (UID: \"98fd7eeb-6d98-4398-af52-f369b88f6d37\") " pod="openstack/cinder-scheduler-0" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.668334 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/98fd7eeb-6d98-4398-af52-f369b88f6d37-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"98fd7eeb-6d98-4398-af52-f369b88f6d37\") " pod="openstack/cinder-scheduler-0" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.668364 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qb68f\" (UniqueName: \"kubernetes.io/projected/98fd7eeb-6d98-4398-af52-f369b88f6d37-kube-api-access-qb68f\") pod \"cinder-scheduler-0\" (UID: \"98fd7eeb-6d98-4398-af52-f369b88f6d37\") " pod="openstack/cinder-scheduler-0" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.668400 4727 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98fd7eeb-6d98-4398-af52-f369b88f6d37-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"98fd7eeb-6d98-4398-af52-f369b88f6d37\") " pod="openstack/cinder-scheduler-0" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.671446 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/98fd7eeb-6d98-4398-af52-f369b88f6d37-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"98fd7eeb-6d98-4398-af52-f369b88f6d37\") " pod="openstack/cinder-scheduler-0" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.675610 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98fd7eeb-6d98-4398-af52-f369b88f6d37-scripts\") pod \"cinder-scheduler-0\" (UID: \"98fd7eeb-6d98-4398-af52-f369b88f6d37\") " pod="openstack/cinder-scheduler-0" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.680872 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/98fd7eeb-6d98-4398-af52-f369b88f6d37-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"98fd7eeb-6d98-4398-af52-f369b88f6d37\") " pod="openstack/cinder-scheduler-0" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.680973 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-49tk2"] Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.684747 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98fd7eeb-6d98-4398-af52-f369b88f6d37-config-data\") pod \"cinder-scheduler-0\" (UID: \"98fd7eeb-6d98-4398-af52-f369b88f6d37\") " pod="openstack/cinder-scheduler-0" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.691929 4727 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/barbican-worker-556d8f578c-rnhrx"] Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.692375 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98fd7eeb-6d98-4398-af52-f369b88f6d37-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"98fd7eeb-6d98-4398-af52-f369b88f6d37\") " pod="openstack/cinder-scheduler-0" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.715391 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qb68f\" (UniqueName: \"kubernetes.io/projected/98fd7eeb-6d98-4398-af52-f369b88f6d37-kube-api-access-qb68f\") pod \"cinder-scheduler-0\" (UID: \"98fd7eeb-6d98-4398-af52-f369b88f6d37\") " pod="openstack/cinder-scheduler-0" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.718002 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.753453 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.756858 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.763359 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.770087 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44ac5098-bcbb-4981-804e-3da5706fa3cb-config\") pod \"dnsmasq-dns-6578955fd5-49tk2\" (UID: \"44ac5098-bcbb-4981-804e-3da5706fa3cb\") " pod="openstack/dnsmasq-dns-6578955fd5-49tk2" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.770264 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44ac5098-bcbb-4981-804e-3da5706fa3cb-dns-svc\") pod \"dnsmasq-dns-6578955fd5-49tk2\" (UID: \"44ac5098-bcbb-4981-804e-3da5706fa3cb\") " pod="openstack/dnsmasq-dns-6578955fd5-49tk2" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.770350 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44ac5098-bcbb-4981-804e-3da5706fa3cb-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-49tk2\" (UID: \"44ac5098-bcbb-4981-804e-3da5706fa3cb\") " pod="openstack/dnsmasq-dns-6578955fd5-49tk2" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.770478 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44ac5098-bcbb-4981-804e-3da5706fa3cb-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-49tk2\" (UID: \"44ac5098-bcbb-4981-804e-3da5706fa3cb\") " pod="openstack/dnsmasq-dns-6578955fd5-49tk2" Nov 21 20:27:35 crc 
kubenswrapper[4727]: I1121 20:27:35.770647 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkjc2\" (UniqueName: \"kubernetes.io/projected/44ac5098-bcbb-4981-804e-3da5706fa3cb-kube-api-access-lkjc2\") pod \"dnsmasq-dns-6578955fd5-49tk2\" (UID: \"44ac5098-bcbb-4981-804e-3da5706fa3cb\") " pod="openstack/dnsmasq-dns-6578955fd5-49tk2" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.770758 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/44ac5098-bcbb-4981-804e-3da5706fa3cb-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-49tk2\" (UID: \"44ac5098-bcbb-4981-804e-3da5706fa3cb\") " pod="openstack/dnsmasq-dns-6578955fd5-49tk2" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.838040 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.881016 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/20e5ca35-9016-499e-908a-15ad18d11d04-etc-machine-id\") pod \"cinder-api-0\" (UID: \"20e5ca35-9016-499e-908a-15ad18d11d04\") " pod="openstack/cinder-api-0" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.881105 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20e5ca35-9016-499e-908a-15ad18d11d04-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"20e5ca35-9016-499e-908a-15ad18d11d04\") " pod="openstack/cinder-api-0" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.881137 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkjc2\" (UniqueName: 
\"kubernetes.io/projected/44ac5098-bcbb-4981-804e-3da5706fa3cb-kube-api-access-lkjc2\") pod \"dnsmasq-dns-6578955fd5-49tk2\" (UID: \"44ac5098-bcbb-4981-804e-3da5706fa3cb\") " pod="openstack/dnsmasq-dns-6578955fd5-49tk2" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.881177 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/44ac5098-bcbb-4981-804e-3da5706fa3cb-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-49tk2\" (UID: \"44ac5098-bcbb-4981-804e-3da5706fa3cb\") " pod="openstack/dnsmasq-dns-6578955fd5-49tk2" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.881201 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20e5ca35-9016-499e-908a-15ad18d11d04-config-data\") pod \"cinder-api-0\" (UID: \"20e5ca35-9016-499e-908a-15ad18d11d04\") " pod="openstack/cinder-api-0" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.881227 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/20e5ca35-9016-499e-908a-15ad18d11d04-config-data-custom\") pod \"cinder-api-0\" (UID: \"20e5ca35-9016-499e-908a-15ad18d11d04\") " pod="openstack/cinder-api-0" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.881256 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44ac5098-bcbb-4981-804e-3da5706fa3cb-config\") pod \"dnsmasq-dns-6578955fd5-49tk2\" (UID: \"44ac5098-bcbb-4981-804e-3da5706fa3cb\") " pod="openstack/dnsmasq-dns-6578955fd5-49tk2" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.881272 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44ac5098-bcbb-4981-804e-3da5706fa3cb-dns-svc\") pod 
\"dnsmasq-dns-6578955fd5-49tk2\" (UID: \"44ac5098-bcbb-4981-804e-3da5706fa3cb\") " pod="openstack/dnsmasq-dns-6578955fd5-49tk2" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.881296 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44ac5098-bcbb-4981-804e-3da5706fa3cb-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-49tk2\" (UID: \"44ac5098-bcbb-4981-804e-3da5706fa3cb\") " pod="openstack/dnsmasq-dns-6578955fd5-49tk2" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.881321 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20e5ca35-9016-499e-908a-15ad18d11d04-logs\") pod \"cinder-api-0\" (UID: \"20e5ca35-9016-499e-908a-15ad18d11d04\") " pod="openstack/cinder-api-0" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.881345 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vvk9\" (UniqueName: \"kubernetes.io/projected/20e5ca35-9016-499e-908a-15ad18d11d04-kube-api-access-8vvk9\") pod \"cinder-api-0\" (UID: \"20e5ca35-9016-499e-908a-15ad18d11d04\") " pod="openstack/cinder-api-0" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.881377 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20e5ca35-9016-499e-908a-15ad18d11d04-scripts\") pod \"cinder-api-0\" (UID: \"20e5ca35-9016-499e-908a-15ad18d11d04\") " pod="openstack/cinder-api-0" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.882331 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44ac5098-bcbb-4981-804e-3da5706fa3cb-config\") pod \"dnsmasq-dns-6578955fd5-49tk2\" (UID: \"44ac5098-bcbb-4981-804e-3da5706fa3cb\") " pod="openstack/dnsmasq-dns-6578955fd5-49tk2" Nov 21 
20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.882642 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44ac5098-bcbb-4981-804e-3da5706fa3cb-dns-svc\") pod \"dnsmasq-dns-6578955fd5-49tk2\" (UID: \"44ac5098-bcbb-4981-804e-3da5706fa3cb\") " pod="openstack/dnsmasq-dns-6578955fd5-49tk2" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.882739 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44ac5098-bcbb-4981-804e-3da5706fa3cb-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-49tk2\" (UID: \"44ac5098-bcbb-4981-804e-3da5706fa3cb\") " pod="openstack/dnsmasq-dns-6578955fd5-49tk2" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.883090 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/44ac5098-bcbb-4981-804e-3da5706fa3cb-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-49tk2\" (UID: \"44ac5098-bcbb-4981-804e-3da5706fa3cb\") " pod="openstack/dnsmasq-dns-6578955fd5-49tk2" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.883877 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44ac5098-bcbb-4981-804e-3da5706fa3cb-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-49tk2\" (UID: \"44ac5098-bcbb-4981-804e-3da5706fa3cb\") " pod="openstack/dnsmasq-dns-6578955fd5-49tk2" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.883989 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44ac5098-bcbb-4981-804e-3da5706fa3cb-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-49tk2\" (UID: \"44ac5098-bcbb-4981-804e-3da5706fa3cb\") " pod="openstack/dnsmasq-dns-6578955fd5-49tk2" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.944396 4727 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-lkjc2\" (UniqueName: \"kubernetes.io/projected/44ac5098-bcbb-4981-804e-3da5706fa3cb-kube-api-access-lkjc2\") pod \"dnsmasq-dns-6578955fd5-49tk2\" (UID: \"44ac5098-bcbb-4981-804e-3da5706fa3cb\") " pod="openstack/dnsmasq-dns-6578955fd5-49tk2" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.951212 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-59c5b86cb6-fqnd7"] Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.985618 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/20e5ca35-9016-499e-908a-15ad18d11d04-etc-machine-id\") pod \"cinder-api-0\" (UID: \"20e5ca35-9016-499e-908a-15ad18d11d04\") " pod="openstack/cinder-api-0" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.985731 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20e5ca35-9016-499e-908a-15ad18d11d04-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"20e5ca35-9016-499e-908a-15ad18d11d04\") " pod="openstack/cinder-api-0" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.985811 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20e5ca35-9016-499e-908a-15ad18d11d04-config-data\") pod \"cinder-api-0\" (UID: \"20e5ca35-9016-499e-908a-15ad18d11d04\") " pod="openstack/cinder-api-0" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.985923 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/20e5ca35-9016-499e-908a-15ad18d11d04-config-data-custom\") pod \"cinder-api-0\" (UID: \"20e5ca35-9016-499e-908a-15ad18d11d04\") " pod="openstack/cinder-api-0" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.986043 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20e5ca35-9016-499e-908a-15ad18d11d04-logs\") pod \"cinder-api-0\" (UID: \"20e5ca35-9016-499e-908a-15ad18d11d04\") " pod="openstack/cinder-api-0" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.986066 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vvk9\" (UniqueName: \"kubernetes.io/projected/20e5ca35-9016-499e-908a-15ad18d11d04-kube-api-access-8vvk9\") pod \"cinder-api-0\" (UID: \"20e5ca35-9016-499e-908a-15ad18d11d04\") " pod="openstack/cinder-api-0" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.986113 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20e5ca35-9016-499e-908a-15ad18d11d04-scripts\") pod \"cinder-api-0\" (UID: \"20e5ca35-9016-499e-908a-15ad18d11d04\") " pod="openstack/cinder-api-0" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.989570 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20e5ca35-9016-499e-908a-15ad18d11d04-logs\") pod \"cinder-api-0\" (UID: \"20e5ca35-9016-499e-908a-15ad18d11d04\") " pod="openstack/cinder-api-0" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.990202 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/20e5ca35-9016-499e-908a-15ad18d11d04-etc-machine-id\") pod \"cinder-api-0\" (UID: \"20e5ca35-9016-499e-908a-15ad18d11d04\") " pod="openstack/cinder-api-0" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.997635 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20e5ca35-9016-499e-908a-15ad18d11d04-scripts\") pod \"cinder-api-0\" (UID: \"20e5ca35-9016-499e-908a-15ad18d11d04\") " pod="openstack/cinder-api-0" Nov 21 20:27:35 crc 
kubenswrapper[4727]: I1121 20:27:35.997997 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20e5ca35-9016-499e-908a-15ad18d11d04-config-data\") pod \"cinder-api-0\" (UID: \"20e5ca35-9016-499e-908a-15ad18d11d04\") " pod="openstack/cinder-api-0" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.998650 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/20e5ca35-9016-499e-908a-15ad18d11d04-config-data-custom\") pod \"cinder-api-0\" (UID: \"20e5ca35-9016-499e-908a-15ad18d11d04\") " pod="openstack/cinder-api-0" Nov 21 20:27:35 crc kubenswrapper[4727]: I1121 20:27:35.998886 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20e5ca35-9016-499e-908a-15ad18d11d04-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"20e5ca35-9016-499e-908a-15ad18d11d04\") " pod="openstack/cinder-api-0" Nov 21 20:27:36 crc kubenswrapper[4727]: I1121 20:27:36.006149 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-55b7c"] Nov 21 20:27:36 crc kubenswrapper[4727]: I1121 20:27:36.009431 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vvk9\" (UniqueName: \"kubernetes.io/projected/20e5ca35-9016-499e-908a-15ad18d11d04-kube-api-access-8vvk9\") pod \"cinder-api-0\" (UID: \"20e5ca35-9016-499e-908a-15ad18d11d04\") " pod="openstack/cinder-api-0" Nov 21 20:27:36 crc kubenswrapper[4727]: I1121 20:27:36.112952 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-49tk2" Nov 21 20:27:36 crc kubenswrapper[4727]: I1121 20:27:36.126034 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 21 20:27:36 crc kubenswrapper[4727]: I1121 20:27:36.193718 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-59c5b86cb6-fqnd7" event={"ID":"a9717694-2709-4e01-a03c-ffd0a9ea5c42","Type":"ContainerStarted","Data":"a2828c1c58509b1cd6477398db173cf4cac65adab3a6d1d843f298e581e731aa"} Nov 21 20:27:36 crc kubenswrapper[4727]: I1121 20:27:36.195046 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-76cd59449d-bqxf9" event={"ID":"c0776376-5bcf-42fb-95fa-537a1b9764e2","Type":"ContainerStarted","Data":"740ac881e1bd241c3e4827c88c3bb660366286de7d4fd9fbcb9392bef47acdf4"} Nov 21 20:27:36 crc kubenswrapper[4727]: I1121 20:27:36.196483 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-55b7c" event={"ID":"936596ec-3e5c-41ac-b454-5ca072843e7a","Type":"ContainerStarted","Data":"1a5b2dd8327da8ec3e6c6873e7c645436d640bcf470d470129bad42fcac14d97"} Nov 21 20:27:36 crc kubenswrapper[4727]: I1121 20:27:36.202156 4727 generic.go:334] "Generic (PLEG): container finished" podID="ab55a565-2af1-48bb-a31e-d0a8c738912c" containerID="484eed23344aaf217be1c9fcd736765c9259cbacd696b8cdc3e0ce520f3d6db2" exitCode=0 Nov 21 20:27:36 crc kubenswrapper[4727]: I1121 20:27:36.202197 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab55a565-2af1-48bb-a31e-d0a8c738912c","Type":"ContainerDied","Data":"484eed23344aaf217be1c9fcd736765c9259cbacd696b8cdc3e0ce520f3d6db2"} Nov 21 20:27:36 crc kubenswrapper[4727]: I1121 20:27:36.207029 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-556d8f578c-rnhrx" event={"ID":"76a929c8-c9a2-480f-b413-9df259d38d39","Type":"ContainerStarted","Data":"c55385cfbfa9a1d0a6552224680d43615fd764f15312f6e2cef3c76dbce08419"} Nov 21 20:27:36 crc kubenswrapper[4727]: I1121 20:27:36.450252 4727 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 20:27:36 crc kubenswrapper[4727]: I1121 20:27:36.548761 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 21 20:27:36 crc kubenswrapper[4727]: I1121 20:27:36.604662 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab55a565-2af1-48bb-a31e-d0a8c738912c-run-httpd\") pod \"ab55a565-2af1-48bb-a31e-d0a8c738912c\" (UID: \"ab55a565-2af1-48bb-a31e-d0a8c738912c\") " Nov 21 20:27:36 crc kubenswrapper[4727]: I1121 20:27:36.604718 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ab55a565-2af1-48bb-a31e-d0a8c738912c-sg-core-conf-yaml\") pod \"ab55a565-2af1-48bb-a31e-d0a8c738912c\" (UID: \"ab55a565-2af1-48bb-a31e-d0a8c738912c\") " Nov 21 20:27:36 crc kubenswrapper[4727]: I1121 20:27:36.604790 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab55a565-2af1-48bb-a31e-d0a8c738912c-scripts\") pod \"ab55a565-2af1-48bb-a31e-d0a8c738912c\" (UID: \"ab55a565-2af1-48bb-a31e-d0a8c738912c\") " Nov 21 20:27:36 crc kubenswrapper[4727]: I1121 20:27:36.604853 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vncrb\" (UniqueName: \"kubernetes.io/projected/ab55a565-2af1-48bb-a31e-d0a8c738912c-kube-api-access-vncrb\") pod \"ab55a565-2af1-48bb-a31e-d0a8c738912c\" (UID: \"ab55a565-2af1-48bb-a31e-d0a8c738912c\") " Nov 21 20:27:36 crc kubenswrapper[4727]: I1121 20:27:36.604905 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab55a565-2af1-48bb-a31e-d0a8c738912c-log-httpd\") pod \"ab55a565-2af1-48bb-a31e-d0a8c738912c\" (UID: \"ab55a565-2af1-48bb-a31e-d0a8c738912c\") " Nov 21 20:27:36 crc 
kubenswrapper[4727]: I1121 20:27:36.604927 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab55a565-2af1-48bb-a31e-d0a8c738912c-combined-ca-bundle\") pod \"ab55a565-2af1-48bb-a31e-d0a8c738912c\" (UID: \"ab55a565-2af1-48bb-a31e-d0a8c738912c\") " Nov 21 20:27:36 crc kubenswrapper[4727]: I1121 20:27:36.604985 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab55a565-2af1-48bb-a31e-d0a8c738912c-config-data\") pod \"ab55a565-2af1-48bb-a31e-d0a8c738912c\" (UID: \"ab55a565-2af1-48bb-a31e-d0a8c738912c\") " Nov 21 20:27:36 crc kubenswrapper[4727]: I1121 20:27:36.606402 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab55a565-2af1-48bb-a31e-d0a8c738912c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ab55a565-2af1-48bb-a31e-d0a8c738912c" (UID: "ab55a565-2af1-48bb-a31e-d0a8c738912c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:27:36 crc kubenswrapper[4727]: I1121 20:27:36.610305 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab55a565-2af1-48bb-a31e-d0a8c738912c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ab55a565-2af1-48bb-a31e-d0a8c738912c" (UID: "ab55a565-2af1-48bb-a31e-d0a8c738912c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:27:36 crc kubenswrapper[4727]: I1121 20:27:36.616269 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab55a565-2af1-48bb-a31e-d0a8c738912c-scripts" (OuterVolumeSpecName: "scripts") pod "ab55a565-2af1-48bb-a31e-d0a8c738912c" (UID: "ab55a565-2af1-48bb-a31e-d0a8c738912c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:27:36 crc kubenswrapper[4727]: I1121 20:27:36.616419 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab55a565-2af1-48bb-a31e-d0a8c738912c-kube-api-access-vncrb" (OuterVolumeSpecName: "kube-api-access-vncrb") pod "ab55a565-2af1-48bb-a31e-d0a8c738912c" (UID: "ab55a565-2af1-48bb-a31e-d0a8c738912c"). InnerVolumeSpecName "kube-api-access-vncrb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:27:36 crc kubenswrapper[4727]: I1121 20:27:36.699181 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab55a565-2af1-48bb-a31e-d0a8c738912c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ab55a565-2af1-48bb-a31e-d0a8c738912c" (UID: "ab55a565-2af1-48bb-a31e-d0a8c738912c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:27:36 crc kubenswrapper[4727]: I1121 20:27:36.707687 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vncrb\" (UniqueName: \"kubernetes.io/projected/ab55a565-2af1-48bb-a31e-d0a8c738912c-kube-api-access-vncrb\") on node \"crc\" DevicePath \"\"" Nov 21 20:27:36 crc kubenswrapper[4727]: I1121 20:27:36.707719 4727 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab55a565-2af1-48bb-a31e-d0a8c738912c-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 21 20:27:36 crc kubenswrapper[4727]: I1121 20:27:36.707728 4727 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab55a565-2af1-48bb-a31e-d0a8c738912c-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 21 20:27:36 crc kubenswrapper[4727]: I1121 20:27:36.707738 4727 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ab55a565-2af1-48bb-a31e-d0a8c738912c-sg-core-conf-yaml\") on 
node \"crc\" DevicePath \"\"" Nov 21 20:27:36 crc kubenswrapper[4727]: I1121 20:27:36.707748 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab55a565-2af1-48bb-a31e-d0a8c738912c-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 20:27:36 crc kubenswrapper[4727]: I1121 20:27:36.776157 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab55a565-2af1-48bb-a31e-d0a8c738912c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ab55a565-2af1-48bb-a31e-d0a8c738912c" (UID: "ab55a565-2af1-48bb-a31e-d0a8c738912c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:27:36 crc kubenswrapper[4727]: I1121 20:27:36.839497 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab55a565-2af1-48bb-a31e-d0a8c738912c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:27:36 crc kubenswrapper[4727]: I1121 20:27:36.895109 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab55a565-2af1-48bb-a31e-d0a8c738912c-config-data" (OuterVolumeSpecName: "config-data") pod "ab55a565-2af1-48bb-a31e-d0a8c738912c" (UID: "ab55a565-2af1-48bb-a31e-d0a8c738912c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:27:36 crc kubenswrapper[4727]: I1121 20:27:36.925980 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-49tk2"] Nov 21 20:27:36 crc kubenswrapper[4727]: I1121 20:27:36.943810 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab55a565-2af1-48bb-a31e-d0a8c738912c-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 20:27:37 crc kubenswrapper[4727]: I1121 20:27:37.085107 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 21 20:27:37 crc kubenswrapper[4727]: I1121 20:27:37.231867 4727 generic.go:334] "Generic (PLEG): container finished" podID="936596ec-3e5c-41ac-b454-5ca072843e7a" containerID="db949be0d8c9c0a232e5b0803831fa447c71572d96baf25f1a4f231eb6202373" exitCode=0 Nov 21 20:27:37 crc kubenswrapper[4727]: I1121 20:27:37.231995 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-55b7c" event={"ID":"936596ec-3e5c-41ac-b454-5ca072843e7a","Type":"ContainerDied","Data":"db949be0d8c9c0a232e5b0803831fa447c71572d96baf25f1a4f231eb6202373"} Nov 21 20:27:37 crc kubenswrapper[4727]: I1121 20:27:37.238889 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-49tk2" event={"ID":"44ac5098-bcbb-4981-804e-3da5706fa3cb","Type":"ContainerStarted","Data":"fac2ec30e6ff94cca5a6c8c9025c4b91a6eded2b75c0db4db47dafc94bd346bf"} Nov 21 20:27:37 crc kubenswrapper[4727]: I1121 20:27:37.252665 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab55a565-2af1-48bb-a31e-d0a8c738912c","Type":"ContainerDied","Data":"3052dd83ceb177cca7d883a2ae31cb04ecfee6a7a3601f83e578cbf434fd1ded"} Nov 21 20:27:37 crc kubenswrapper[4727]: I1121 20:27:37.252732 4727 scope.go:117] "RemoveContainer" containerID="260ab728f2a4aeafed812935744ae400da8ab9e340d4b06c6f957d022541b9cc" Nov 21 20:27:37 
crc kubenswrapper[4727]: I1121 20:27:37.252919 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 20:27:37 crc kubenswrapper[4727]: I1121 20:27:37.259293 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"20e5ca35-9016-499e-908a-15ad18d11d04","Type":"ContainerStarted","Data":"a669ec1e5152008f20fcb8654db1d75659d22ece74a54d947f34a63d565fecee"} Nov 21 20:27:37 crc kubenswrapper[4727]: I1121 20:27:37.263009 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"98fd7eeb-6d98-4398-af52-f369b88f6d37","Type":"ContainerStarted","Data":"5b372bc7bf41aca1568d989d188bc16859b6f37467bfa3e4388668d70bc6ff1e"} Nov 21 20:27:37 crc kubenswrapper[4727]: I1121 20:27:37.267883 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-59c5b86cb6-fqnd7" event={"ID":"a9717694-2709-4e01-a03c-ffd0a9ea5c42","Type":"ContainerStarted","Data":"d3d01d2ec5717e02e15f98823e2a16ac99c22596096b22e12ed9863161c31561"} Nov 21 20:27:37 crc kubenswrapper[4727]: I1121 20:27:37.327706 4727 scope.go:117] "RemoveContainer" containerID="3d968bc03dd61485ba8e3844312dbc1b38cdc9d8aa3444b3a7d1a559b854f939" Nov 21 20:27:37 crc kubenswrapper[4727]: I1121 20:27:37.354578 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 21 20:27:37 crc kubenswrapper[4727]: I1121 20:27:37.369090 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 21 20:27:37 crc kubenswrapper[4727]: I1121 20:27:37.379902 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 21 20:27:37 crc kubenswrapper[4727]: E1121 20:27:37.382265 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab55a565-2af1-48bb-a31e-d0a8c738912c" containerName="proxy-httpd" Nov 21 20:27:37 crc kubenswrapper[4727]: I1121 20:27:37.382297 4727 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="ab55a565-2af1-48bb-a31e-d0a8c738912c" containerName="proxy-httpd" Nov 21 20:27:37 crc kubenswrapper[4727]: E1121 20:27:37.382344 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab55a565-2af1-48bb-a31e-d0a8c738912c" containerName="sg-core" Nov 21 20:27:37 crc kubenswrapper[4727]: I1121 20:27:37.382352 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab55a565-2af1-48bb-a31e-d0a8c738912c" containerName="sg-core" Nov 21 20:27:37 crc kubenswrapper[4727]: E1121 20:27:37.382381 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab55a565-2af1-48bb-a31e-d0a8c738912c" containerName="ceilometer-notification-agent" Nov 21 20:27:37 crc kubenswrapper[4727]: I1121 20:27:37.382387 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab55a565-2af1-48bb-a31e-d0a8c738912c" containerName="ceilometer-notification-agent" Nov 21 20:27:37 crc kubenswrapper[4727]: I1121 20:27:37.383385 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab55a565-2af1-48bb-a31e-d0a8c738912c" containerName="sg-core" Nov 21 20:27:37 crc kubenswrapper[4727]: I1121 20:27:37.383437 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab55a565-2af1-48bb-a31e-d0a8c738912c" containerName="proxy-httpd" Nov 21 20:27:37 crc kubenswrapper[4727]: I1121 20:27:37.383469 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab55a565-2af1-48bb-a31e-d0a8c738912c" containerName="ceilometer-notification-agent" Nov 21 20:27:37 crc kubenswrapper[4727]: I1121 20:27:37.387062 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 21 20:27:37 crc kubenswrapper[4727]: I1121 20:27:37.389615 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 20:27:37 crc kubenswrapper[4727]: I1121 20:27:37.391421 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 21 20:27:37 crc kubenswrapper[4727]: I1121 20:27:37.391925 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 21 20:27:37 crc kubenswrapper[4727]: I1121 20:27:37.537137 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab55a565-2af1-48bb-a31e-d0a8c738912c" path="/var/lib/kubelet/pods/ab55a565-2af1-48bb-a31e-d0a8c738912c/volumes" Nov 21 20:27:37 crc kubenswrapper[4727]: I1121 20:27:37.558825 4727 scope.go:117] "RemoveContainer" containerID="484eed23344aaf217be1c9fcd736765c9259cbacd696b8cdc3e0ce520f3d6db2" Nov 21 20:27:37 crc kubenswrapper[4727]: I1121 20:27:37.563441 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rfjb\" (UniqueName: \"kubernetes.io/projected/bce1af82-ae9a-4eda-906c-e500a2727f27-kube-api-access-2rfjb\") pod \"ceilometer-0\" (UID: \"bce1af82-ae9a-4eda-906c-e500a2727f27\") " pod="openstack/ceilometer-0" Nov 21 20:27:37 crc kubenswrapper[4727]: I1121 20:27:37.563504 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bce1af82-ae9a-4eda-906c-e500a2727f27-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bce1af82-ae9a-4eda-906c-e500a2727f27\") " pod="openstack/ceilometer-0" Nov 21 20:27:37 crc kubenswrapper[4727]: I1121 20:27:37.563587 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bce1af82-ae9a-4eda-906c-e500a2727f27-log-httpd\") pod 
\"ceilometer-0\" (UID: \"bce1af82-ae9a-4eda-906c-e500a2727f27\") " pod="openstack/ceilometer-0" Nov 21 20:27:37 crc kubenswrapper[4727]: I1121 20:27:37.563630 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bce1af82-ae9a-4eda-906c-e500a2727f27-scripts\") pod \"ceilometer-0\" (UID: \"bce1af82-ae9a-4eda-906c-e500a2727f27\") " pod="openstack/ceilometer-0" Nov 21 20:27:37 crc kubenswrapper[4727]: I1121 20:27:37.563652 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bce1af82-ae9a-4eda-906c-e500a2727f27-config-data\") pod \"ceilometer-0\" (UID: \"bce1af82-ae9a-4eda-906c-e500a2727f27\") " pod="openstack/ceilometer-0" Nov 21 20:27:37 crc kubenswrapper[4727]: I1121 20:27:37.563679 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bce1af82-ae9a-4eda-906c-e500a2727f27-run-httpd\") pod \"ceilometer-0\" (UID: \"bce1af82-ae9a-4eda-906c-e500a2727f27\") " pod="openstack/ceilometer-0" Nov 21 20:27:37 crc kubenswrapper[4727]: I1121 20:27:37.563706 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bce1af82-ae9a-4eda-906c-e500a2727f27-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bce1af82-ae9a-4eda-906c-e500a2727f27\") " pod="openstack/ceilometer-0" Nov 21 20:27:37 crc kubenswrapper[4727]: I1121 20:27:37.655103 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-8476b67874-f2dtk" Nov 21 20:27:37 crc kubenswrapper[4727]: I1121 20:27:37.665398 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bce1af82-ae9a-4eda-906c-e500a2727f27-log-httpd\") pod 
\"ceilometer-0\" (UID: \"bce1af82-ae9a-4eda-906c-e500a2727f27\") " pod="openstack/ceilometer-0" Nov 21 20:27:37 crc kubenswrapper[4727]: I1121 20:27:37.665476 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bce1af82-ae9a-4eda-906c-e500a2727f27-scripts\") pod \"ceilometer-0\" (UID: \"bce1af82-ae9a-4eda-906c-e500a2727f27\") " pod="openstack/ceilometer-0" Nov 21 20:27:37 crc kubenswrapper[4727]: I1121 20:27:37.665499 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bce1af82-ae9a-4eda-906c-e500a2727f27-config-data\") pod \"ceilometer-0\" (UID: \"bce1af82-ae9a-4eda-906c-e500a2727f27\") " pod="openstack/ceilometer-0" Nov 21 20:27:37 crc kubenswrapper[4727]: I1121 20:27:37.665530 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bce1af82-ae9a-4eda-906c-e500a2727f27-run-httpd\") pod \"ceilometer-0\" (UID: \"bce1af82-ae9a-4eda-906c-e500a2727f27\") " pod="openstack/ceilometer-0" Nov 21 20:27:37 crc kubenswrapper[4727]: I1121 20:27:37.665559 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bce1af82-ae9a-4eda-906c-e500a2727f27-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bce1af82-ae9a-4eda-906c-e500a2727f27\") " pod="openstack/ceilometer-0" Nov 21 20:27:37 crc kubenswrapper[4727]: I1121 20:27:37.665602 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rfjb\" (UniqueName: \"kubernetes.io/projected/bce1af82-ae9a-4eda-906c-e500a2727f27-kube-api-access-2rfjb\") pod \"ceilometer-0\" (UID: \"bce1af82-ae9a-4eda-906c-e500a2727f27\") " pod="openstack/ceilometer-0" Nov 21 20:27:37 crc kubenswrapper[4727]: I1121 20:27:37.665651 4727 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bce1af82-ae9a-4eda-906c-e500a2727f27-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bce1af82-ae9a-4eda-906c-e500a2727f27\") " pod="openstack/ceilometer-0" Nov 21 20:27:37 crc kubenswrapper[4727]: I1121 20:27:37.666065 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bce1af82-ae9a-4eda-906c-e500a2727f27-log-httpd\") pod \"ceilometer-0\" (UID: \"bce1af82-ae9a-4eda-906c-e500a2727f27\") " pod="openstack/ceilometer-0" Nov 21 20:27:37 crc kubenswrapper[4727]: I1121 20:27:37.671236 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bce1af82-ae9a-4eda-906c-e500a2727f27-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bce1af82-ae9a-4eda-906c-e500a2727f27\") " pod="openstack/ceilometer-0" Nov 21 20:27:37 crc kubenswrapper[4727]: I1121 20:27:37.672802 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bce1af82-ae9a-4eda-906c-e500a2727f27-run-httpd\") pod \"ceilometer-0\" (UID: \"bce1af82-ae9a-4eda-906c-e500a2727f27\") " pod="openstack/ceilometer-0" Nov 21 20:27:37 crc kubenswrapper[4727]: I1121 20:27:37.672822 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bce1af82-ae9a-4eda-906c-e500a2727f27-config-data\") pod \"ceilometer-0\" (UID: \"bce1af82-ae9a-4eda-906c-e500a2727f27\") " pod="openstack/ceilometer-0" Nov 21 20:27:37 crc kubenswrapper[4727]: I1121 20:27:37.674371 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bce1af82-ae9a-4eda-906c-e500a2727f27-scripts\") pod \"ceilometer-0\" (UID: \"bce1af82-ae9a-4eda-906c-e500a2727f27\") " pod="openstack/ceilometer-0" Nov 21 20:27:37 crc kubenswrapper[4727]: I1121 20:27:37.685135 4727 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bce1af82-ae9a-4eda-906c-e500a2727f27-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bce1af82-ae9a-4eda-906c-e500a2727f27\") " pod="openstack/ceilometer-0" Nov 21 20:27:37 crc kubenswrapper[4727]: I1121 20:27:37.689134 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rfjb\" (UniqueName: \"kubernetes.io/projected/bce1af82-ae9a-4eda-906c-e500a2727f27-kube-api-access-2rfjb\") pod \"ceilometer-0\" (UID: \"bce1af82-ae9a-4eda-906c-e500a2727f27\") " pod="openstack/ceilometer-0" Nov 21 20:27:37 crc kubenswrapper[4727]: I1121 20:27:37.737365 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 20:27:38 crc kubenswrapper[4727]: I1121 20:27:38.294004 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-59c5b86cb6-fqnd7" event={"ID":"a9717694-2709-4e01-a03c-ffd0a9ea5c42","Type":"ContainerStarted","Data":"ff0ceab44c29e15592946609ed1f77dce143d2f10a94426cbc2705b7928467fd"} Nov 21 20:27:38 crc kubenswrapper[4727]: I1121 20:27:38.294488 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-59c5b86cb6-fqnd7" Nov 21 20:27:38 crc kubenswrapper[4727]: I1121 20:27:38.294534 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-59c5b86cb6-fqnd7" Nov 21 20:27:38 crc kubenswrapper[4727]: I1121 20:27:38.307465 4727 generic.go:334] "Generic (PLEG): container finished" podID="44ac5098-bcbb-4981-804e-3da5706fa3cb" containerID="b159cfd8b568b3169a48b142a8e6de67751bb8f9f8146af0d5784013b350bddf" exitCode=0 Nov 21 20:27:38 crc kubenswrapper[4727]: I1121 20:27:38.307545 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-49tk2" 
event={"ID":"44ac5098-bcbb-4981-804e-3da5706fa3cb","Type":"ContainerDied","Data":"b159cfd8b568b3169a48b142a8e6de67751bb8f9f8146af0d5784013b350bddf"} Nov 21 20:27:38 crc kubenswrapper[4727]: I1121 20:27:38.322865 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-59c5b86cb6-fqnd7" podStartSLOduration=4.322848149 podStartE2EDuration="4.322848149s" podCreationTimestamp="2025-11-21 20:27:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:27:38.317624512 +0000 UTC m=+1263.503809556" watchObservedRunningTime="2025-11-21 20:27:38.322848149 +0000 UTC m=+1263.509033193" Nov 21 20:27:38 crc kubenswrapper[4727]: I1121 20:27:38.902228 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-55b7c" Nov 21 20:27:38 crc kubenswrapper[4727]: I1121 20:27:38.930807 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 21 20:27:39 crc kubenswrapper[4727]: I1121 20:27:39.029454 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/936596ec-3e5c-41ac-b454-5ca072843e7a-config\") pod \"936596ec-3e5c-41ac-b454-5ca072843e7a\" (UID: \"936596ec-3e5c-41ac-b454-5ca072843e7a\") " Nov 21 20:27:39 crc kubenswrapper[4727]: I1121 20:27:39.031309 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/936596ec-3e5c-41ac-b454-5ca072843e7a-dns-swift-storage-0\") pod \"936596ec-3e5c-41ac-b454-5ca072843e7a\" (UID: \"936596ec-3e5c-41ac-b454-5ca072843e7a\") " Nov 21 20:27:39 crc kubenswrapper[4727]: I1121 20:27:39.031477 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/936596ec-3e5c-41ac-b454-5ca072843e7a-ovsdbserver-nb\") pod \"936596ec-3e5c-41ac-b454-5ca072843e7a\" (UID: \"936596ec-3e5c-41ac-b454-5ca072843e7a\") " Nov 21 20:27:39 crc kubenswrapper[4727]: I1121 20:27:39.031665 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/936596ec-3e5c-41ac-b454-5ca072843e7a-ovsdbserver-sb\") pod \"936596ec-3e5c-41ac-b454-5ca072843e7a\" (UID: \"936596ec-3e5c-41ac-b454-5ca072843e7a\") " Nov 21 20:27:39 crc kubenswrapper[4727]: I1121 20:27:39.031938 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/936596ec-3e5c-41ac-b454-5ca072843e7a-dns-svc\") pod \"936596ec-3e5c-41ac-b454-5ca072843e7a\" (UID: \"936596ec-3e5c-41ac-b454-5ca072843e7a\") " Nov 21 20:27:39 crc kubenswrapper[4727]: I1121 20:27:39.032006 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w87hq\" (UniqueName: \"kubernetes.io/projected/936596ec-3e5c-41ac-b454-5ca072843e7a-kube-api-access-w87hq\") pod \"936596ec-3e5c-41ac-b454-5ca072843e7a\" (UID: \"936596ec-3e5c-41ac-b454-5ca072843e7a\") " Nov 21 20:27:39 crc kubenswrapper[4727]: I1121 20:27:39.043982 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/936596ec-3e5c-41ac-b454-5ca072843e7a-kube-api-access-w87hq" (OuterVolumeSpecName: "kube-api-access-w87hq") pod "936596ec-3e5c-41ac-b454-5ca072843e7a" (UID: "936596ec-3e5c-41ac-b454-5ca072843e7a"). InnerVolumeSpecName "kube-api-access-w87hq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:27:39 crc kubenswrapper[4727]: I1121 20:27:39.092775 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/936596ec-3e5c-41ac-b454-5ca072843e7a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "936596ec-3e5c-41ac-b454-5ca072843e7a" (UID: "936596ec-3e5c-41ac-b454-5ca072843e7a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:27:39 crc kubenswrapper[4727]: I1121 20:27:39.093857 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/936596ec-3e5c-41ac-b454-5ca072843e7a-config" (OuterVolumeSpecName: "config") pod "936596ec-3e5c-41ac-b454-5ca072843e7a" (UID: "936596ec-3e5c-41ac-b454-5ca072843e7a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:27:39 crc kubenswrapper[4727]: I1121 20:27:39.105109 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/936596ec-3e5c-41ac-b454-5ca072843e7a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "936596ec-3e5c-41ac-b454-5ca072843e7a" (UID: "936596ec-3e5c-41ac-b454-5ca072843e7a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:27:39 crc kubenswrapper[4727]: I1121 20:27:39.107578 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/936596ec-3e5c-41ac-b454-5ca072843e7a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "936596ec-3e5c-41ac-b454-5ca072843e7a" (UID: "936596ec-3e5c-41ac-b454-5ca072843e7a"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:27:39 crc kubenswrapper[4727]: I1121 20:27:39.135458 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/936596ec-3e5c-41ac-b454-5ca072843e7a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 21 20:27:39 crc kubenswrapper[4727]: I1121 20:27:39.135491 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w87hq\" (UniqueName: \"kubernetes.io/projected/936596ec-3e5c-41ac-b454-5ca072843e7a-kube-api-access-w87hq\") on node \"crc\" DevicePath \"\"" Nov 21 20:27:39 crc kubenswrapper[4727]: I1121 20:27:39.135504 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/936596ec-3e5c-41ac-b454-5ca072843e7a-config\") on node \"crc\" DevicePath \"\"" Nov 21 20:27:39 crc kubenswrapper[4727]: I1121 20:27:39.135513 4727 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/936596ec-3e5c-41ac-b454-5ca072843e7a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 21 20:27:39 crc kubenswrapper[4727]: I1121 20:27:39.135523 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/936596ec-3e5c-41ac-b454-5ca072843e7a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 21 20:27:39 crc kubenswrapper[4727]: I1121 20:27:39.207391 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/936596ec-3e5c-41ac-b454-5ca072843e7a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "936596ec-3e5c-41ac-b454-5ca072843e7a" (UID: "936596ec-3e5c-41ac-b454-5ca072843e7a"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:27:39 crc kubenswrapper[4727]: I1121 20:27:39.238213 4727 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/936596ec-3e5c-41ac-b454-5ca072843e7a-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 20:27:39 crc kubenswrapper[4727]: I1121 20:27:39.329289 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-55b7c" Nov 21 20:27:39 crc kubenswrapper[4727]: I1121 20:27:39.330580 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-55b7c" event={"ID":"936596ec-3e5c-41ac-b454-5ca072843e7a","Type":"ContainerDied","Data":"1a5b2dd8327da8ec3e6c6873e7c645436d640bcf470d470129bad42fcac14d97"} Nov 21 20:27:39 crc kubenswrapper[4727]: I1121 20:27:39.330626 4727 scope.go:117] "RemoveContainer" containerID="db949be0d8c9c0a232e5b0803831fa447c71572d96baf25f1a4f231eb6202373" Nov 21 20:27:39 crc kubenswrapper[4727]: I1121 20:27:39.347478 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-49tk2" event={"ID":"44ac5098-bcbb-4981-804e-3da5706fa3cb","Type":"ContainerStarted","Data":"c649f81dc3ba39d640f0b2c46623595ca63eb68c676958a35bc738320ce4ae88"} Nov 21 20:27:39 crc kubenswrapper[4727]: I1121 20:27:39.347695 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6578955fd5-49tk2" Nov 21 20:27:39 crc kubenswrapper[4727]: I1121 20:27:39.381976 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 20:27:39 crc kubenswrapper[4727]: I1121 20:27:39.395378 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-556d8f578c-rnhrx" event={"ID":"76a929c8-c9a2-480f-b413-9df259d38d39","Type":"ContainerStarted","Data":"4940ce55c3176271f994c159c7d828bb7cc332e321a5fc42d8bde9ce45c17485"} Nov 21 20:27:39 crc kubenswrapper[4727]: I1121 
20:27:39.420317 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-55b7c"] Nov 21 20:27:39 crc kubenswrapper[4727]: I1121 20:27:39.452907 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-55b7c"] Nov 21 20:27:39 crc kubenswrapper[4727]: I1121 20:27:39.477105 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6578955fd5-49tk2" podStartSLOduration=4.477085292 podStartE2EDuration="4.477085292s" podCreationTimestamp="2025-11-21 20:27:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:27:39.469657563 +0000 UTC m=+1264.655842607" watchObservedRunningTime="2025-11-21 20:27:39.477085292 +0000 UTC m=+1264.663270336" Nov 21 20:27:39 crc kubenswrapper[4727]: I1121 20:27:39.570872 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="936596ec-3e5c-41ac-b454-5ca072843e7a" path="/var/lib/kubelet/pods/936596ec-3e5c-41ac-b454-5ca072843e7a/volumes" Nov 21 20:27:40 crc kubenswrapper[4727]: I1121 20:27:40.475507 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-556d8f578c-rnhrx" event={"ID":"76a929c8-c9a2-480f-b413-9df259d38d39","Type":"ContainerStarted","Data":"99613fb53f39127674255a386786b8fbda87e06f44668f0b1ed19a03adca0ceb"} Nov 21 20:27:40 crc kubenswrapper[4727]: I1121 20:27:40.490617 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"20e5ca35-9016-499e-908a-15ad18d11d04","Type":"ContainerStarted","Data":"369215486f3b16df159fba213ea1cbd356f1fa2ee51bbe03cf576e2dcf446b6d"} Nov 21 20:27:40 crc kubenswrapper[4727]: I1121 20:27:40.510106 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7d66895777-vztk9" Nov 21 20:27:40 crc kubenswrapper[4727]: I1121 20:27:40.516069 4727 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"98fd7eeb-6d98-4398-af52-f369b88f6d37","Type":"ContainerStarted","Data":"4dc5b1e072ab77ae18e04b1f3acd8a7ff182ad15cddb4443e452392a74de4263"} Nov 21 20:27:40 crc kubenswrapper[4727]: I1121 20:27:40.518284 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-556d8f578c-rnhrx" podStartSLOduration=3.286762439 podStartE2EDuration="6.518246725s" podCreationTimestamp="2025-11-21 20:27:34 +0000 UTC" firstStartedPulling="2025-11-21 20:27:35.618144214 +0000 UTC m=+1260.804329258" lastFinishedPulling="2025-11-21 20:27:38.8496285 +0000 UTC m=+1264.035813544" observedRunningTime="2025-11-21 20:27:40.499743277 +0000 UTC m=+1265.685928321" watchObservedRunningTime="2025-11-21 20:27:40.518246725 +0000 UTC m=+1265.704431769" Nov 21 20:27:40 crc kubenswrapper[4727]: I1121 20:27:40.528042 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-76cd59449d-bqxf9" event={"ID":"c0776376-5bcf-42fb-95fa-537a1b9764e2","Type":"ContainerStarted","Data":"edc67d88af290e659d5de1788b4c17ba581ba229756207d6dec63a94758ecd0f"} Nov 21 20:27:40 crc kubenswrapper[4727]: I1121 20:27:40.528119 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-76cd59449d-bqxf9" event={"ID":"c0776376-5bcf-42fb-95fa-537a1b9764e2","Type":"ContainerStarted","Data":"a07b1e5bd709366fbd0ee1516459bf5020c050a08ef2d89910472234b19d7d2a"} Nov 21 20:27:40 crc kubenswrapper[4727]: I1121 20:27:40.545255 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bce1af82-ae9a-4eda-906c-e500a2727f27","Type":"ContainerStarted","Data":"255a4285971958a34687aa10855c3a5749ba59c104a8da55232dd211190ae0f3"} Nov 21 20:27:40 crc kubenswrapper[4727]: I1121 20:27:40.621061 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-76cd59449d-bqxf9" 
podStartSLOduration=3.278020288 podStartE2EDuration="6.621040138s" podCreationTimestamp="2025-11-21 20:27:34 +0000 UTC" firstStartedPulling="2025-11-21 20:27:35.533521403 +0000 UTC m=+1260.719706447" lastFinishedPulling="2025-11-21 20:27:38.876541253 +0000 UTC m=+1264.062726297" observedRunningTime="2025-11-21 20:27:40.57204219 +0000 UTC m=+1265.758227234" watchObservedRunningTime="2025-11-21 20:27:40.621040138 +0000 UTC m=+1265.807225182" Nov 21 20:27:40 crc kubenswrapper[4727]: I1121 20:27:40.699099 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-8476b67874-f2dtk"] Nov 21 20:27:40 crc kubenswrapper[4727]: I1121 20:27:40.699763 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-8476b67874-f2dtk" podUID="b4d6995f-166c-410a-adee-3733a25c28df" containerName="neutron-api" containerID="cri-o://364f5a4a47bb2ca8f0327214f490df2d67e76c1d1aa242e30cd8041e08a9bad4" gracePeriod=30 Nov 21 20:27:40 crc kubenswrapper[4727]: I1121 20:27:40.700057 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-8476b67874-f2dtk" podUID="b4d6995f-166c-410a-adee-3733a25c28df" containerName="neutron-httpd" containerID="cri-o://dfe330a320ec02ed64a189754512202a3865e294e595fac2ad0da8d964dd0603" gracePeriod=30 Nov 21 20:27:41 crc kubenswrapper[4727]: I1121 20:27:41.515046 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6cc6c58b8b-47npz"] Nov 21 20:27:41 crc kubenswrapper[4727]: E1121 20:27:41.516212 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="936596ec-3e5c-41ac-b454-5ca072843e7a" containerName="init" Nov 21 20:27:41 crc kubenswrapper[4727]: I1121 20:27:41.516234 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="936596ec-3e5c-41ac-b454-5ca072843e7a" containerName="init" Nov 21 20:27:41 crc kubenswrapper[4727]: I1121 20:27:41.516494 4727 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="936596ec-3e5c-41ac-b454-5ca072843e7a" containerName="init" Nov 21 20:27:41 crc kubenswrapper[4727]: I1121 20:27:41.517872 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6cc6c58b8b-47npz" Nov 21 20:27:41 crc kubenswrapper[4727]: I1121 20:27:41.520810 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Nov 21 20:27:41 crc kubenswrapper[4727]: I1121 20:27:41.522609 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Nov 21 20:27:41 crc kubenswrapper[4727]: I1121 20:27:41.548822 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c49bf05b-cf2c-4e00-a141-a7249d2eb68f-public-tls-certs\") pod \"barbican-api-6cc6c58b8b-47npz\" (UID: \"c49bf05b-cf2c-4e00-a141-a7249d2eb68f\") " pod="openstack/barbican-api-6cc6c58b8b-47npz" Nov 21 20:27:41 crc kubenswrapper[4727]: I1121 20:27:41.549094 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c49bf05b-cf2c-4e00-a141-a7249d2eb68f-logs\") pod \"barbican-api-6cc6c58b8b-47npz\" (UID: \"c49bf05b-cf2c-4e00-a141-a7249d2eb68f\") " pod="openstack/barbican-api-6cc6c58b8b-47npz" Nov 21 20:27:41 crc kubenswrapper[4727]: I1121 20:27:41.549296 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c49bf05b-cf2c-4e00-a141-a7249d2eb68f-config-data\") pod \"barbican-api-6cc6c58b8b-47npz\" (UID: \"c49bf05b-cf2c-4e00-a141-a7249d2eb68f\") " pod="openstack/barbican-api-6cc6c58b8b-47npz" Nov 21 20:27:41 crc kubenswrapper[4727]: I1121 20:27:41.549540 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/c49bf05b-cf2c-4e00-a141-a7249d2eb68f-config-data-custom\") pod \"barbican-api-6cc6c58b8b-47npz\" (UID: \"c49bf05b-cf2c-4e00-a141-a7249d2eb68f\") " pod="openstack/barbican-api-6cc6c58b8b-47npz" Nov 21 20:27:41 crc kubenswrapper[4727]: I1121 20:27:41.549646 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c49bf05b-cf2c-4e00-a141-a7249d2eb68f-internal-tls-certs\") pod \"barbican-api-6cc6c58b8b-47npz\" (UID: \"c49bf05b-cf2c-4e00-a141-a7249d2eb68f\") " pod="openstack/barbican-api-6cc6c58b8b-47npz" Nov 21 20:27:41 crc kubenswrapper[4727]: I1121 20:27:41.549813 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dp2m7\" (UniqueName: \"kubernetes.io/projected/c49bf05b-cf2c-4e00-a141-a7249d2eb68f-kube-api-access-dp2m7\") pod \"barbican-api-6cc6c58b8b-47npz\" (UID: \"c49bf05b-cf2c-4e00-a141-a7249d2eb68f\") " pod="openstack/barbican-api-6cc6c58b8b-47npz" Nov 21 20:27:41 crc kubenswrapper[4727]: I1121 20:27:41.549882 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c49bf05b-cf2c-4e00-a141-a7249d2eb68f-combined-ca-bundle\") pod \"barbican-api-6cc6c58b8b-47npz\" (UID: \"c49bf05b-cf2c-4e00-a141-a7249d2eb68f\") " pod="openstack/barbican-api-6cc6c58b8b-47npz" Nov 21 20:27:41 crc kubenswrapper[4727]: I1121 20:27:41.563094 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bce1af82-ae9a-4eda-906c-e500a2727f27","Type":"ContainerStarted","Data":"09df844fb72c99fbb281a5d5d2262f31e73fc33a9962897ebf4ad4ca2530c41b"} Nov 21 20:27:41 crc kubenswrapper[4727]: I1121 20:27:41.565131 4727 generic.go:334] "Generic (PLEG): container finished" podID="b4d6995f-166c-410a-adee-3733a25c28df" 
containerID="dfe330a320ec02ed64a189754512202a3865e294e595fac2ad0da8d964dd0603" exitCode=0 Nov 21 20:27:41 crc kubenswrapper[4727]: I1121 20:27:41.565189 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8476b67874-f2dtk" event={"ID":"b4d6995f-166c-410a-adee-3733a25c28df","Type":"ContainerDied","Data":"dfe330a320ec02ed64a189754512202a3865e294e595fac2ad0da8d964dd0603"} Nov 21 20:27:41 crc kubenswrapper[4727]: I1121 20:27:41.568057 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"98fd7eeb-6d98-4398-af52-f369b88f6d37","Type":"ContainerStarted","Data":"761e55b2dce91b5cc3d521b332f0606c97997c104aee2a7c65b79a43635b9ba7"} Nov 21 20:27:41 crc kubenswrapper[4727]: I1121 20:27:41.574070 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"20e5ca35-9016-499e-908a-15ad18d11d04","Type":"ContainerStarted","Data":"b4816a197616bd3e75b9a4bbf9b7b1e6fc961138bccc0ce948bc961650827314"} Nov 21 20:27:41 crc kubenswrapper[4727]: I1121 20:27:41.574568 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="20e5ca35-9016-499e-908a-15ad18d11d04" containerName="cinder-api-log" containerID="cri-o://369215486f3b16df159fba213ea1cbd356f1fa2ee51bbe03cf576e2dcf446b6d" gracePeriod=30 Nov 21 20:27:41 crc kubenswrapper[4727]: I1121 20:27:41.574748 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="20e5ca35-9016-499e-908a-15ad18d11d04" containerName="cinder-api" containerID="cri-o://b4816a197616bd3e75b9a4bbf9b7b1e6fc961138bccc0ce948bc961650827314" gracePeriod=30 Nov 21 20:27:41 crc kubenswrapper[4727]: I1121 20:27:41.594033 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6cc6c58b8b-47npz"] Nov 21 20:27:41 crc kubenswrapper[4727]: I1121 20:27:41.597428 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/cinder-scheduler-0" podStartSLOduration=4.491116984 podStartE2EDuration="6.59741563s" podCreationTimestamp="2025-11-21 20:27:35 +0000 UTC" firstStartedPulling="2025-11-21 20:27:36.527224165 +0000 UTC m=+1261.713409209" lastFinishedPulling="2025-11-21 20:27:38.633522801 +0000 UTC m=+1263.819707855" observedRunningTime="2025-11-21 20:27:41.596382744 +0000 UTC m=+1266.782567788" watchObservedRunningTime="2025-11-21 20:27:41.59741563 +0000 UTC m=+1266.783600674" Nov 21 20:27:41 crc kubenswrapper[4727]: I1121 20:27:41.639628 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=6.639607042 podStartE2EDuration="6.639607042s" podCreationTimestamp="2025-11-21 20:27:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:27:41.625851179 +0000 UTC m=+1266.812036223" watchObservedRunningTime="2025-11-21 20:27:41.639607042 +0000 UTC m=+1266.825792086" Nov 21 20:27:41 crc kubenswrapper[4727]: I1121 20:27:41.651025 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c49bf05b-cf2c-4e00-a141-a7249d2eb68f-logs\") pod \"barbican-api-6cc6c58b8b-47npz\" (UID: \"c49bf05b-cf2c-4e00-a141-a7249d2eb68f\") " pod="openstack/barbican-api-6cc6c58b8b-47npz" Nov 21 20:27:41 crc kubenswrapper[4727]: I1121 20:27:41.651107 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c49bf05b-cf2c-4e00-a141-a7249d2eb68f-config-data\") pod \"barbican-api-6cc6c58b8b-47npz\" (UID: \"c49bf05b-cf2c-4e00-a141-a7249d2eb68f\") " pod="openstack/barbican-api-6cc6c58b8b-47npz" Nov 21 20:27:41 crc kubenswrapper[4727]: I1121 20:27:41.651187 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/c49bf05b-cf2c-4e00-a141-a7249d2eb68f-config-data-custom\") pod \"barbican-api-6cc6c58b8b-47npz\" (UID: \"c49bf05b-cf2c-4e00-a141-a7249d2eb68f\") " pod="openstack/barbican-api-6cc6c58b8b-47npz" Nov 21 20:27:41 crc kubenswrapper[4727]: I1121 20:27:41.651221 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c49bf05b-cf2c-4e00-a141-a7249d2eb68f-internal-tls-certs\") pod \"barbican-api-6cc6c58b8b-47npz\" (UID: \"c49bf05b-cf2c-4e00-a141-a7249d2eb68f\") " pod="openstack/barbican-api-6cc6c58b8b-47npz" Nov 21 20:27:41 crc kubenswrapper[4727]: I1121 20:27:41.651252 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dp2m7\" (UniqueName: \"kubernetes.io/projected/c49bf05b-cf2c-4e00-a141-a7249d2eb68f-kube-api-access-dp2m7\") pod \"barbican-api-6cc6c58b8b-47npz\" (UID: \"c49bf05b-cf2c-4e00-a141-a7249d2eb68f\") " pod="openstack/barbican-api-6cc6c58b8b-47npz" Nov 21 20:27:41 crc kubenswrapper[4727]: I1121 20:27:41.651275 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c49bf05b-cf2c-4e00-a141-a7249d2eb68f-combined-ca-bundle\") pod \"barbican-api-6cc6c58b8b-47npz\" (UID: \"c49bf05b-cf2c-4e00-a141-a7249d2eb68f\") " pod="openstack/barbican-api-6cc6c58b8b-47npz" Nov 21 20:27:41 crc kubenswrapper[4727]: I1121 20:27:41.651412 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c49bf05b-cf2c-4e00-a141-a7249d2eb68f-public-tls-certs\") pod \"barbican-api-6cc6c58b8b-47npz\" (UID: \"c49bf05b-cf2c-4e00-a141-a7249d2eb68f\") " pod="openstack/barbican-api-6cc6c58b8b-47npz" Nov 21 20:27:41 crc kubenswrapper[4727]: I1121 20:27:41.652334 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/c49bf05b-cf2c-4e00-a141-a7249d2eb68f-logs\") pod \"barbican-api-6cc6c58b8b-47npz\" (UID: \"c49bf05b-cf2c-4e00-a141-a7249d2eb68f\") " pod="openstack/barbican-api-6cc6c58b8b-47npz" Nov 21 20:27:41 crc kubenswrapper[4727]: I1121 20:27:41.658634 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c49bf05b-cf2c-4e00-a141-a7249d2eb68f-public-tls-certs\") pod \"barbican-api-6cc6c58b8b-47npz\" (UID: \"c49bf05b-cf2c-4e00-a141-a7249d2eb68f\") " pod="openstack/barbican-api-6cc6c58b8b-47npz" Nov 21 20:27:41 crc kubenswrapper[4727]: I1121 20:27:41.659709 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c49bf05b-cf2c-4e00-a141-a7249d2eb68f-internal-tls-certs\") pod \"barbican-api-6cc6c58b8b-47npz\" (UID: \"c49bf05b-cf2c-4e00-a141-a7249d2eb68f\") " pod="openstack/barbican-api-6cc6c58b8b-47npz" Nov 21 20:27:41 crc kubenswrapper[4727]: I1121 20:27:41.662940 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c49bf05b-cf2c-4e00-a141-a7249d2eb68f-config-data-custom\") pod \"barbican-api-6cc6c58b8b-47npz\" (UID: \"c49bf05b-cf2c-4e00-a141-a7249d2eb68f\") " pod="openstack/barbican-api-6cc6c58b8b-47npz" Nov 21 20:27:41 crc kubenswrapper[4727]: I1121 20:27:41.667914 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c49bf05b-cf2c-4e00-a141-a7249d2eb68f-config-data\") pod \"barbican-api-6cc6c58b8b-47npz\" (UID: \"c49bf05b-cf2c-4e00-a141-a7249d2eb68f\") " pod="openstack/barbican-api-6cc6c58b8b-47npz" Nov 21 20:27:41 crc kubenswrapper[4727]: I1121 20:27:41.673831 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c49bf05b-cf2c-4e00-a141-a7249d2eb68f-combined-ca-bundle\") pod 
\"barbican-api-6cc6c58b8b-47npz\" (UID: \"c49bf05b-cf2c-4e00-a141-a7249d2eb68f\") " pod="openstack/barbican-api-6cc6c58b8b-47npz" Nov 21 20:27:41 crc kubenswrapper[4727]: I1121 20:27:41.677782 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dp2m7\" (UniqueName: \"kubernetes.io/projected/c49bf05b-cf2c-4e00-a141-a7249d2eb68f-kube-api-access-dp2m7\") pod \"barbican-api-6cc6c58b8b-47npz\" (UID: \"c49bf05b-cf2c-4e00-a141-a7249d2eb68f\") " pod="openstack/barbican-api-6cc6c58b8b-47npz" Nov 21 20:27:41 crc kubenswrapper[4727]: I1121 20:27:41.835709 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6cc6c58b8b-47npz" Nov 21 20:27:42 crc kubenswrapper[4727]: I1121 20:27:42.609283 4727 generic.go:334] "Generic (PLEG): container finished" podID="20e5ca35-9016-499e-908a-15ad18d11d04" containerID="369215486f3b16df159fba213ea1cbd356f1fa2ee51bbe03cf576e2dcf446b6d" exitCode=143 Nov 21 20:27:42 crc kubenswrapper[4727]: I1121 20:27:42.609603 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"20e5ca35-9016-499e-908a-15ad18d11d04","Type":"ContainerDied","Data":"369215486f3b16df159fba213ea1cbd356f1fa2ee51bbe03cf576e2dcf446b6d"} Nov 21 20:27:42 crc kubenswrapper[4727]: I1121 20:27:42.619654 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bce1af82-ae9a-4eda-906c-e500a2727f27","Type":"ContainerStarted","Data":"48ba325f33bfd261de135825539fe6159d5408213e28c733ddcdad1dcba3a1fa"} Nov 21 20:27:42 crc kubenswrapper[4727]: I1121 20:27:42.619699 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bce1af82-ae9a-4eda-906c-e500a2727f27","Type":"ContainerStarted","Data":"a90fadad2fb926a3723ac0221e60db1ea2c37ff7d0dfd0bb5cffe015a81dc788"} Nov 21 20:27:42 crc kubenswrapper[4727]: I1121 20:27:42.751664 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/barbican-api-6cc6c58b8b-47npz"] Nov 21 20:27:43 crc kubenswrapper[4727]: I1121 20:27:43.631087 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6cc6c58b8b-47npz" event={"ID":"c49bf05b-cf2c-4e00-a141-a7249d2eb68f","Type":"ContainerStarted","Data":"6c7c7b1dfbb34e2a6e7143e36ce8f0e2cb35dbf455525e62fc2f487bc927826b"} Nov 21 20:27:43 crc kubenswrapper[4727]: I1121 20:27:43.631818 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6cc6c58b8b-47npz" event={"ID":"c49bf05b-cf2c-4e00-a141-a7249d2eb68f","Type":"ContainerStarted","Data":"f132e98e47ec77a6db70feb35ecc7ccd938b5e0b608892d800dd74f921315b3f"} Nov 21 20:27:44 crc kubenswrapper[4727]: I1121 20:27:44.641532 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6cc6c58b8b-47npz" event={"ID":"c49bf05b-cf2c-4e00-a141-a7249d2eb68f","Type":"ContainerStarted","Data":"0074c553f9a7738ad0811e5577bb37a6a980f51e7a4a37f64b65dcb9f2287bd5"} Nov 21 20:27:44 crc kubenswrapper[4727]: I1121 20:27:44.642075 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6cc6c58b8b-47npz" Nov 21 20:27:44 crc kubenswrapper[4727]: I1121 20:27:44.642106 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6cc6c58b8b-47npz" Nov 21 20:27:44 crc kubenswrapper[4727]: I1121 20:27:44.644643 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bce1af82-ae9a-4eda-906c-e500a2727f27","Type":"ContainerStarted","Data":"401a7927f2971b874815c83b791587996a2c69b15c07e08d296581862a553320"} Nov 21 20:27:44 crc kubenswrapper[4727]: I1121 20:27:44.644887 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 21 20:27:44 crc kubenswrapper[4727]: I1121 20:27:44.660118 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6cc6c58b8b-47npz" 
podStartSLOduration=3.660102712 podStartE2EDuration="3.660102712s" podCreationTimestamp="2025-11-21 20:27:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:27:44.657756956 +0000 UTC m=+1269.843941990" watchObservedRunningTime="2025-11-21 20:27:44.660102712 +0000 UTC m=+1269.846287746" Nov 21 20:27:44 crc kubenswrapper[4727]: I1121 20:27:44.685240 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.676806959 podStartE2EDuration="7.685216571s" podCreationTimestamp="2025-11-21 20:27:37 +0000 UTC" firstStartedPulling="2025-11-21 20:27:39.752151192 +0000 UTC m=+1264.938336236" lastFinishedPulling="2025-11-21 20:27:43.760560804 +0000 UTC m=+1268.946745848" observedRunningTime="2025-11-21 20:27:44.678506238 +0000 UTC m=+1269.864691282" watchObservedRunningTime="2025-11-21 20:27:44.685216571 +0000 UTC m=+1269.871401625" Nov 21 20:27:45 crc kubenswrapper[4727]: I1121 20:27:45.840057 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 21 20:27:45 crc kubenswrapper[4727]: I1121 20:27:45.842857 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="98fd7eeb-6d98-4398-af52-f369b88f6d37" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.199:8080/\": dial tcp 10.217.0.199:8080: connect: connection refused" Nov 21 20:27:46 crc kubenswrapper[4727]: I1121 20:27:46.114202 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6578955fd5-49tk2" Nov 21 20:27:46 crc kubenswrapper[4727]: I1121 20:27:46.127003 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 21 20:27:46 crc kubenswrapper[4727]: I1121 20:27:46.271653 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-6b7b667979-tqzmz"] Nov 21 20:27:46 crc kubenswrapper[4727]: I1121 20:27:46.271881 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b7b667979-tqzmz" podUID="4914babf-4a62-47e3-a89e-cb3faff1a26b" containerName="dnsmasq-dns" containerID="cri-o://6493e474cea267cbeabb363673ad40059d815601c51c951e0d2a448a5eafed40" gracePeriod=10 Nov 21 20:27:46 crc kubenswrapper[4727]: I1121 20:27:46.687027 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-tqzmz" event={"ID":"4914babf-4a62-47e3-a89e-cb3faff1a26b","Type":"ContainerDied","Data":"6493e474cea267cbeabb363673ad40059d815601c51c951e0d2a448a5eafed40"} Nov 21 20:27:46 crc kubenswrapper[4727]: I1121 20:27:46.686942 4727 generic.go:334] "Generic (PLEG): container finished" podID="4914babf-4a62-47e3-a89e-cb3faff1a26b" containerID="6493e474cea267cbeabb363673ad40059d815601c51c951e0d2a448a5eafed40" exitCode=0 Nov 21 20:27:47 crc kubenswrapper[4727]: I1121 20:27:47.099762 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-tqzmz" Nov 21 20:27:47 crc kubenswrapper[4727]: I1121 20:27:47.114522 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4914babf-4a62-47e3-a89e-cb3faff1a26b-config\") pod \"4914babf-4a62-47e3-a89e-cb3faff1a26b\" (UID: \"4914babf-4a62-47e3-a89e-cb3faff1a26b\") " Nov 21 20:27:47 crc kubenswrapper[4727]: I1121 20:27:47.114610 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4914babf-4a62-47e3-a89e-cb3faff1a26b-ovsdbserver-nb\") pod \"4914babf-4a62-47e3-a89e-cb3faff1a26b\" (UID: \"4914babf-4a62-47e3-a89e-cb3faff1a26b\") " Nov 21 20:27:47 crc kubenswrapper[4727]: I1121 20:27:47.114666 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8zvn\" (UniqueName: \"kubernetes.io/projected/4914babf-4a62-47e3-a89e-cb3faff1a26b-kube-api-access-t8zvn\") pod \"4914babf-4a62-47e3-a89e-cb3faff1a26b\" (UID: \"4914babf-4a62-47e3-a89e-cb3faff1a26b\") " Nov 21 20:27:47 crc kubenswrapper[4727]: I1121 20:27:47.114683 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4914babf-4a62-47e3-a89e-cb3faff1a26b-ovsdbserver-sb\") pod \"4914babf-4a62-47e3-a89e-cb3faff1a26b\" (UID: \"4914babf-4a62-47e3-a89e-cb3faff1a26b\") " Nov 21 20:27:47 crc kubenswrapper[4727]: I1121 20:27:47.114769 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4914babf-4a62-47e3-a89e-cb3faff1a26b-dns-swift-storage-0\") pod \"4914babf-4a62-47e3-a89e-cb3faff1a26b\" (UID: \"4914babf-4a62-47e3-a89e-cb3faff1a26b\") " Nov 21 20:27:47 crc kubenswrapper[4727]: I1121 20:27:47.114810 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4914babf-4a62-47e3-a89e-cb3faff1a26b-dns-svc\") pod \"4914babf-4a62-47e3-a89e-cb3faff1a26b\" (UID: \"4914babf-4a62-47e3-a89e-cb3faff1a26b\") " Nov 21 20:27:47 crc kubenswrapper[4727]: I1121 20:27:47.136154 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4914babf-4a62-47e3-a89e-cb3faff1a26b-kube-api-access-t8zvn" (OuterVolumeSpecName: "kube-api-access-t8zvn") pod "4914babf-4a62-47e3-a89e-cb3faff1a26b" (UID: "4914babf-4a62-47e3-a89e-cb3faff1a26b"). InnerVolumeSpecName "kube-api-access-t8zvn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:27:47 crc kubenswrapper[4727]: I1121 20:27:47.210827 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4914babf-4a62-47e3-a89e-cb3faff1a26b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4914babf-4a62-47e3-a89e-cb3faff1a26b" (UID: "4914babf-4a62-47e3-a89e-cb3faff1a26b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:27:47 crc kubenswrapper[4727]: I1121 20:27:47.217303 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4914babf-4a62-47e3-a89e-cb3faff1a26b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 21 20:27:47 crc kubenswrapper[4727]: I1121 20:27:47.217333 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8zvn\" (UniqueName: \"kubernetes.io/projected/4914babf-4a62-47e3-a89e-cb3faff1a26b-kube-api-access-t8zvn\") on node \"crc\" DevicePath \"\"" Nov 21 20:27:47 crc kubenswrapper[4727]: I1121 20:27:47.220455 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4914babf-4a62-47e3-a89e-cb3faff1a26b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4914babf-4a62-47e3-a89e-cb3faff1a26b" (UID: "4914babf-4a62-47e3-a89e-cb3faff1a26b"). 
InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:27:47 crc kubenswrapper[4727]: I1121 20:27:47.226411 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4914babf-4a62-47e3-a89e-cb3faff1a26b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4914babf-4a62-47e3-a89e-cb3faff1a26b" (UID: "4914babf-4a62-47e3-a89e-cb3faff1a26b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:27:47 crc kubenswrapper[4727]: I1121 20:27:47.233695 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4914babf-4a62-47e3-a89e-cb3faff1a26b-config" (OuterVolumeSpecName: "config") pod "4914babf-4a62-47e3-a89e-cb3faff1a26b" (UID: "4914babf-4a62-47e3-a89e-cb3faff1a26b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:27:47 crc kubenswrapper[4727]: I1121 20:27:47.257214 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4914babf-4a62-47e3-a89e-cb3faff1a26b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4914babf-4a62-47e3-a89e-cb3faff1a26b" (UID: "4914babf-4a62-47e3-a89e-cb3faff1a26b"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:27:47 crc kubenswrapper[4727]: I1121 20:27:47.318332 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4914babf-4a62-47e3-a89e-cb3faff1a26b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 21 20:27:47 crc kubenswrapper[4727]: I1121 20:27:47.318378 4727 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4914babf-4a62-47e3-a89e-cb3faff1a26b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 21 20:27:47 crc kubenswrapper[4727]: I1121 20:27:47.318403 4727 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4914babf-4a62-47e3-a89e-cb3faff1a26b-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 20:27:47 crc kubenswrapper[4727]: I1121 20:27:47.318412 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4914babf-4a62-47e3-a89e-cb3faff1a26b-config\") on node \"crc\" DevicePath \"\"" Nov 21 20:27:47 crc kubenswrapper[4727]: I1121 20:27:47.700649 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-tqzmz" event={"ID":"4914babf-4a62-47e3-a89e-cb3faff1a26b","Type":"ContainerDied","Data":"1368bf28777697609762bfa841ed5d786356d207fd71575d751cd7c62b4beac8"} Nov 21 20:27:47 crc kubenswrapper[4727]: I1121 20:27:47.700946 4727 scope.go:117] "RemoveContainer" containerID="6493e474cea267cbeabb363673ad40059d815601c51c951e0d2a448a5eafed40" Nov 21 20:27:47 crc kubenswrapper[4727]: I1121 20:27:47.700978 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-tqzmz" Nov 21 20:27:47 crc kubenswrapper[4727]: I1121 20:27:47.725026 4727 scope.go:117] "RemoveContainer" containerID="f8d67b027cf2b9481c9e6fd0062ab720daee9692b833c7907c57cb403101d743" Nov 21 20:27:47 crc kubenswrapper[4727]: I1121 20:27:47.734063 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-tqzmz"] Nov 21 20:27:47 crc kubenswrapper[4727]: I1121 20:27:47.746841 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-tqzmz"] Nov 21 20:27:47 crc kubenswrapper[4727]: I1121 20:27:47.850685 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-59c5b86cb6-fqnd7" Nov 21 20:27:48 crc kubenswrapper[4727]: I1121 20:27:48.050233 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-59c5b86cb6-fqnd7" Nov 21 20:27:49 crc kubenswrapper[4727]: I1121 20:27:49.537497 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4914babf-4a62-47e3-a89e-cb3faff1a26b" path="/var/lib/kubelet/pods/4914babf-4a62-47e3-a89e-cb3faff1a26b/volumes" Nov 21 20:27:49 crc kubenswrapper[4727]: I1121 20:27:49.976670 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Nov 21 20:27:51 crc kubenswrapper[4727]: I1121 20:27:51.077236 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 21 20:27:51 crc kubenswrapper[4727]: I1121 20:27:51.150491 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 21 20:27:51 crc kubenswrapper[4727]: I1121 20:27:51.744313 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="98fd7eeb-6d98-4398-af52-f369b88f6d37" containerName="cinder-scheduler" 
containerID="cri-o://4dc5b1e072ab77ae18e04b1f3acd8a7ff182ad15cddb4443e452392a74de4263" gracePeriod=30 Nov 21 20:27:51 crc kubenswrapper[4727]: I1121 20:27:51.744443 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="98fd7eeb-6d98-4398-af52-f369b88f6d37" containerName="probe" containerID="cri-o://761e55b2dce91b5cc3d521b332f0606c97997c104aee2a7c65b79a43635b9ba7" gracePeriod=30 Nov 21 20:27:53 crc kubenswrapper[4727]: I1121 20:27:53.425765 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6cc6c58b8b-47npz" Nov 21 20:27:53 crc kubenswrapper[4727]: I1121 20:27:53.636682 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6cc6c58b8b-47npz" Nov 21 20:27:53 crc kubenswrapper[4727]: I1121 20:27:53.750167 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-59c5b86cb6-fqnd7"] Nov 21 20:27:53 crc kubenswrapper[4727]: I1121 20:27:53.750441 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-59c5b86cb6-fqnd7" podUID="a9717694-2709-4e01-a03c-ffd0a9ea5c42" containerName="barbican-api-log" containerID="cri-o://d3d01d2ec5717e02e15f98823e2a16ac99c22596096b22e12ed9863161c31561" gracePeriod=30 Nov 21 20:27:53 crc kubenswrapper[4727]: I1121 20:27:53.750959 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-59c5b86cb6-fqnd7" podUID="a9717694-2709-4e01-a03c-ffd0a9ea5c42" containerName="barbican-api" containerID="cri-o://ff0ceab44c29e15592946609ed1f77dce143d2f10a94426cbc2705b7928467fd" gracePeriod=30 Nov 21 20:27:53 crc kubenswrapper[4727]: I1121 20:27:53.795710 4727 generic.go:334] "Generic (PLEG): container finished" podID="98fd7eeb-6d98-4398-af52-f369b88f6d37" containerID="761e55b2dce91b5cc3d521b332f0606c97997c104aee2a7c65b79a43635b9ba7" exitCode=0 Nov 21 20:27:53 crc kubenswrapper[4727]: 
I1121 20:27:53.795746 4727 generic.go:334] "Generic (PLEG): container finished" podID="98fd7eeb-6d98-4398-af52-f369b88f6d37" containerID="4dc5b1e072ab77ae18e04b1f3acd8a7ff182ad15cddb4443e452392a74de4263" exitCode=0 Nov 21 20:27:53 crc kubenswrapper[4727]: I1121 20:27:53.796745 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"98fd7eeb-6d98-4398-af52-f369b88f6d37","Type":"ContainerDied","Data":"761e55b2dce91b5cc3d521b332f0606c97997c104aee2a7c65b79a43635b9ba7"} Nov 21 20:27:53 crc kubenswrapper[4727]: I1121 20:27:53.796786 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"98fd7eeb-6d98-4398-af52-f369b88f6d37","Type":"ContainerDied","Data":"4dc5b1e072ab77ae18e04b1f3acd8a7ff182ad15cddb4443e452392a74de4263"} Nov 21 20:27:53 crc kubenswrapper[4727]: I1121 20:27:53.796800 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"98fd7eeb-6d98-4398-af52-f369b88f6d37","Type":"ContainerDied","Data":"5b372bc7bf41aca1568d989d188bc16859b6f37467bfa3e4388668d70bc6ff1e"} Nov 21 20:27:53 crc kubenswrapper[4727]: I1121 20:27:53.796811 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b372bc7bf41aca1568d989d188bc16859b6f37467bfa3e4388668d70bc6ff1e" Nov 21 20:27:53 crc kubenswrapper[4727]: I1121 20:27:53.868442 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 21 20:27:54 crc kubenswrapper[4727]: I1121 20:27:54.009728 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98fd7eeb-6d98-4398-af52-f369b88f6d37-config-data\") pod \"98fd7eeb-6d98-4398-af52-f369b88f6d37\" (UID: \"98fd7eeb-6d98-4398-af52-f369b88f6d37\") " Nov 21 20:27:54 crc kubenswrapper[4727]: I1121 20:27:54.009784 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/98fd7eeb-6d98-4398-af52-f369b88f6d37-etc-machine-id\") pod \"98fd7eeb-6d98-4398-af52-f369b88f6d37\" (UID: \"98fd7eeb-6d98-4398-af52-f369b88f6d37\") " Nov 21 20:27:54 crc kubenswrapper[4727]: I1121 20:27:54.009814 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98fd7eeb-6d98-4398-af52-f369b88f6d37-scripts\") pod \"98fd7eeb-6d98-4398-af52-f369b88f6d37\" (UID: \"98fd7eeb-6d98-4398-af52-f369b88f6d37\") " Nov 21 20:27:54 crc kubenswrapper[4727]: I1121 20:27:54.009904 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qb68f\" (UniqueName: \"kubernetes.io/projected/98fd7eeb-6d98-4398-af52-f369b88f6d37-kube-api-access-qb68f\") pod \"98fd7eeb-6d98-4398-af52-f369b88f6d37\" (UID: \"98fd7eeb-6d98-4398-af52-f369b88f6d37\") " Nov 21 20:27:54 crc kubenswrapper[4727]: I1121 20:27:54.009941 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98fd7eeb-6d98-4398-af52-f369b88f6d37-combined-ca-bundle\") pod \"98fd7eeb-6d98-4398-af52-f369b88f6d37\" (UID: \"98fd7eeb-6d98-4398-af52-f369b88f6d37\") " Nov 21 20:27:54 crc kubenswrapper[4727]: I1121 20:27:54.010009 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/98fd7eeb-6d98-4398-af52-f369b88f6d37-config-data-custom\") pod \"98fd7eeb-6d98-4398-af52-f369b88f6d37\" (UID: \"98fd7eeb-6d98-4398-af52-f369b88f6d37\") " Nov 21 20:27:54 crc kubenswrapper[4727]: I1121 20:27:54.010050 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98fd7eeb-6d98-4398-af52-f369b88f6d37-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "98fd7eeb-6d98-4398-af52-f369b88f6d37" (UID: "98fd7eeb-6d98-4398-af52-f369b88f6d37"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 20:27:54 crc kubenswrapper[4727]: I1121 20:27:54.022259 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98fd7eeb-6d98-4398-af52-f369b88f6d37-scripts" (OuterVolumeSpecName: "scripts") pod "98fd7eeb-6d98-4398-af52-f369b88f6d37" (UID: "98fd7eeb-6d98-4398-af52-f369b88f6d37"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:27:54 crc kubenswrapper[4727]: I1121 20:27:54.026059 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98fd7eeb-6d98-4398-af52-f369b88f6d37-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "98fd7eeb-6d98-4398-af52-f369b88f6d37" (UID: "98fd7eeb-6d98-4398-af52-f369b88f6d37"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:27:54 crc kubenswrapper[4727]: I1121 20:27:54.046109 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98fd7eeb-6d98-4398-af52-f369b88f6d37-kube-api-access-qb68f" (OuterVolumeSpecName: "kube-api-access-qb68f") pod "98fd7eeb-6d98-4398-af52-f369b88f6d37" (UID: "98fd7eeb-6d98-4398-af52-f369b88f6d37"). InnerVolumeSpecName "kube-api-access-qb68f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:27:54 crc kubenswrapper[4727]: I1121 20:27:54.113116 4727 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/98fd7eeb-6d98-4398-af52-f369b88f6d37-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 21 20:27:54 crc kubenswrapper[4727]: I1121 20:27:54.113162 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98fd7eeb-6d98-4398-af52-f369b88f6d37-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 20:27:54 crc kubenswrapper[4727]: I1121 20:27:54.113175 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qb68f\" (UniqueName: \"kubernetes.io/projected/98fd7eeb-6d98-4398-af52-f369b88f6d37-kube-api-access-qb68f\") on node \"crc\" DevicePath \"\"" Nov 21 20:27:54 crc kubenswrapper[4727]: I1121 20:27:54.113186 4727 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/98fd7eeb-6d98-4398-af52-f369b88f6d37-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 21 20:27:54 crc kubenswrapper[4727]: I1121 20:27:54.154276 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98fd7eeb-6d98-4398-af52-f369b88f6d37-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "98fd7eeb-6d98-4398-af52-f369b88f6d37" (UID: "98fd7eeb-6d98-4398-af52-f369b88f6d37"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:27:54 crc kubenswrapper[4727]: I1121 20:27:54.217579 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98fd7eeb-6d98-4398-af52-f369b88f6d37-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:27:54 crc kubenswrapper[4727]: I1121 20:27:54.234475 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98fd7eeb-6d98-4398-af52-f369b88f6d37-config-data" (OuterVolumeSpecName: "config-data") pod "98fd7eeb-6d98-4398-af52-f369b88f6d37" (UID: "98fd7eeb-6d98-4398-af52-f369b88f6d37"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:27:54 crc kubenswrapper[4727]: I1121 20:27:54.320547 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98fd7eeb-6d98-4398-af52-f369b88f6d37-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 20:27:54 crc kubenswrapper[4727]: I1121 20:27:54.817192 4727 generic.go:334] "Generic (PLEG): container finished" podID="a9717694-2709-4e01-a03c-ffd0a9ea5c42" containerID="d3d01d2ec5717e02e15f98823e2a16ac99c22596096b22e12ed9863161c31561" exitCode=143 Nov 21 20:27:54 crc kubenswrapper[4727]: I1121 20:27:54.817309 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 21 20:27:54 crc kubenswrapper[4727]: I1121 20:27:54.825560 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-59c5b86cb6-fqnd7" event={"ID":"a9717694-2709-4e01-a03c-ffd0a9ea5c42","Type":"ContainerDied","Data":"d3d01d2ec5717e02e15f98823e2a16ac99c22596096b22e12ed9863161c31561"} Nov 21 20:27:54 crc kubenswrapper[4727]: I1121 20:27:54.867902 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 21 20:27:54 crc kubenswrapper[4727]: I1121 20:27:54.877832 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 21 20:27:54 crc kubenswrapper[4727]: I1121 20:27:54.911774 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 21 20:27:54 crc kubenswrapper[4727]: E1121 20:27:54.912603 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98fd7eeb-6d98-4398-af52-f369b88f6d37" containerName="cinder-scheduler" Nov 21 20:27:54 crc kubenswrapper[4727]: I1121 20:27:54.912626 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="98fd7eeb-6d98-4398-af52-f369b88f6d37" containerName="cinder-scheduler" Nov 21 20:27:54 crc kubenswrapper[4727]: E1121 20:27:54.912639 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4914babf-4a62-47e3-a89e-cb3faff1a26b" containerName="init" Nov 21 20:27:54 crc kubenswrapper[4727]: I1121 20:27:54.912645 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="4914babf-4a62-47e3-a89e-cb3faff1a26b" containerName="init" Nov 21 20:27:54 crc kubenswrapper[4727]: E1121 20:27:54.912663 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98fd7eeb-6d98-4398-af52-f369b88f6d37" containerName="probe" Nov 21 20:27:54 crc kubenswrapper[4727]: I1121 20:27:54.912672 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="98fd7eeb-6d98-4398-af52-f369b88f6d37" containerName="probe" Nov 21 20:27:54 
crc kubenswrapper[4727]: E1121 20:27:54.912685 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4914babf-4a62-47e3-a89e-cb3faff1a26b" containerName="dnsmasq-dns" Nov 21 20:27:54 crc kubenswrapper[4727]: I1121 20:27:54.912695 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="4914babf-4a62-47e3-a89e-cb3faff1a26b" containerName="dnsmasq-dns" Nov 21 20:27:54 crc kubenswrapper[4727]: I1121 20:27:54.913009 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="4914babf-4a62-47e3-a89e-cb3faff1a26b" containerName="dnsmasq-dns" Nov 21 20:27:54 crc kubenswrapper[4727]: I1121 20:27:54.913042 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="98fd7eeb-6d98-4398-af52-f369b88f6d37" containerName="probe" Nov 21 20:27:54 crc kubenswrapper[4727]: I1121 20:27:54.913060 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="98fd7eeb-6d98-4398-af52-f369b88f6d37" containerName="cinder-scheduler" Nov 21 20:27:54 crc kubenswrapper[4727]: I1121 20:27:54.914285 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 21 20:27:54 crc kubenswrapper[4727]: I1121 20:27:54.916907 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 21 20:27:54 crc kubenswrapper[4727]: I1121 20:27:54.926697 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 21 20:27:55 crc kubenswrapper[4727]: I1121 20:27:55.047976 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3570e4bf-cf68-4cdf-a37d-f63090685a4c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"3570e4bf-cf68-4cdf-a37d-f63090685a4c\") " pod="openstack/cinder-scheduler-0" Nov 21 20:27:55 crc kubenswrapper[4727]: I1121 20:27:55.048280 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nf6kv\" (UniqueName: \"kubernetes.io/projected/3570e4bf-cf68-4cdf-a37d-f63090685a4c-kube-api-access-nf6kv\") pod \"cinder-scheduler-0\" (UID: \"3570e4bf-cf68-4cdf-a37d-f63090685a4c\") " pod="openstack/cinder-scheduler-0" Nov 21 20:27:55 crc kubenswrapper[4727]: I1121 20:27:55.048339 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3570e4bf-cf68-4cdf-a37d-f63090685a4c-scripts\") pod \"cinder-scheduler-0\" (UID: \"3570e4bf-cf68-4cdf-a37d-f63090685a4c\") " pod="openstack/cinder-scheduler-0" Nov 21 20:27:55 crc kubenswrapper[4727]: I1121 20:27:55.048370 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3570e4bf-cf68-4cdf-a37d-f63090685a4c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"3570e4bf-cf68-4cdf-a37d-f63090685a4c\") " pod="openstack/cinder-scheduler-0" Nov 21 20:27:55 crc kubenswrapper[4727]: I1121 
20:27:55.048388 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3570e4bf-cf68-4cdf-a37d-f63090685a4c-config-data\") pod \"cinder-scheduler-0\" (UID: \"3570e4bf-cf68-4cdf-a37d-f63090685a4c\") " pod="openstack/cinder-scheduler-0" Nov 21 20:27:55 crc kubenswrapper[4727]: I1121 20:27:55.048416 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3570e4bf-cf68-4cdf-a37d-f63090685a4c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"3570e4bf-cf68-4cdf-a37d-f63090685a4c\") " pod="openstack/cinder-scheduler-0" Nov 21 20:27:55 crc kubenswrapper[4727]: I1121 20:27:55.150274 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nf6kv\" (UniqueName: \"kubernetes.io/projected/3570e4bf-cf68-4cdf-a37d-f63090685a4c-kube-api-access-nf6kv\") pod \"cinder-scheduler-0\" (UID: \"3570e4bf-cf68-4cdf-a37d-f63090685a4c\") " pod="openstack/cinder-scheduler-0" Nov 21 20:27:55 crc kubenswrapper[4727]: I1121 20:27:55.150381 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3570e4bf-cf68-4cdf-a37d-f63090685a4c-scripts\") pod \"cinder-scheduler-0\" (UID: \"3570e4bf-cf68-4cdf-a37d-f63090685a4c\") " pod="openstack/cinder-scheduler-0" Nov 21 20:27:55 crc kubenswrapper[4727]: I1121 20:27:55.150443 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3570e4bf-cf68-4cdf-a37d-f63090685a4c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"3570e4bf-cf68-4cdf-a37d-f63090685a4c\") " pod="openstack/cinder-scheduler-0" Nov 21 20:27:55 crc kubenswrapper[4727]: I1121 20:27:55.150473 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/3570e4bf-cf68-4cdf-a37d-f63090685a4c-config-data\") pod \"cinder-scheduler-0\" (UID: \"3570e4bf-cf68-4cdf-a37d-f63090685a4c\") " pod="openstack/cinder-scheduler-0" Nov 21 20:27:55 crc kubenswrapper[4727]: I1121 20:27:55.150517 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3570e4bf-cf68-4cdf-a37d-f63090685a4c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"3570e4bf-cf68-4cdf-a37d-f63090685a4c\") " pod="openstack/cinder-scheduler-0" Nov 21 20:27:55 crc kubenswrapper[4727]: I1121 20:27:55.150677 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3570e4bf-cf68-4cdf-a37d-f63090685a4c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"3570e4bf-cf68-4cdf-a37d-f63090685a4c\") " pod="openstack/cinder-scheduler-0" Nov 21 20:27:55 crc kubenswrapper[4727]: I1121 20:27:55.150800 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3570e4bf-cf68-4cdf-a37d-f63090685a4c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"3570e4bf-cf68-4cdf-a37d-f63090685a4c\") " pod="openstack/cinder-scheduler-0" Nov 21 20:27:55 crc kubenswrapper[4727]: I1121 20:27:55.155448 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3570e4bf-cf68-4cdf-a37d-f63090685a4c-scripts\") pod \"cinder-scheduler-0\" (UID: \"3570e4bf-cf68-4cdf-a37d-f63090685a4c\") " pod="openstack/cinder-scheduler-0" Nov 21 20:27:55 crc kubenswrapper[4727]: I1121 20:27:55.155528 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3570e4bf-cf68-4cdf-a37d-f63090685a4c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"3570e4bf-cf68-4cdf-a37d-f63090685a4c\") " 
pod="openstack/cinder-scheduler-0" Nov 21 20:27:55 crc kubenswrapper[4727]: I1121 20:27:55.159462 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3570e4bf-cf68-4cdf-a37d-f63090685a4c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"3570e4bf-cf68-4cdf-a37d-f63090685a4c\") " pod="openstack/cinder-scheduler-0" Nov 21 20:27:55 crc kubenswrapper[4727]: I1121 20:27:55.169249 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3570e4bf-cf68-4cdf-a37d-f63090685a4c-config-data\") pod \"cinder-scheduler-0\" (UID: \"3570e4bf-cf68-4cdf-a37d-f63090685a4c\") " pod="openstack/cinder-scheduler-0" Nov 21 20:27:55 crc kubenswrapper[4727]: I1121 20:27:55.183150 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nf6kv\" (UniqueName: \"kubernetes.io/projected/3570e4bf-cf68-4cdf-a37d-f63090685a4c-kube-api-access-nf6kv\") pod \"cinder-scheduler-0\" (UID: \"3570e4bf-cf68-4cdf-a37d-f63090685a4c\") " pod="openstack/cinder-scheduler-0" Nov 21 20:27:55 crc kubenswrapper[4727]: I1121 20:27:55.245583 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 21 20:27:55 crc kubenswrapper[4727]: I1121 20:27:55.532822 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98fd7eeb-6d98-4398-af52-f369b88f6d37" path="/var/lib/kubelet/pods/98fd7eeb-6d98-4398-af52-f369b88f6d37/volumes" Nov 21 20:27:55 crc kubenswrapper[4727]: I1121 20:27:55.879311 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 21 20:27:56 crc kubenswrapper[4727]: I1121 20:27:56.804007 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-59895c4888-ffr5c" Nov 21 20:27:56 crc kubenswrapper[4727]: I1121 20:27:56.812689 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-59895c4888-ffr5c" Nov 21 20:27:56 crc kubenswrapper[4727]: I1121 20:27:56.913057 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3570e4bf-cf68-4cdf-a37d-f63090685a4c","Type":"ContainerStarted","Data":"a6362a981087ea00562d5040bc4d1689fa4edf1bd948b55e8a4e1465dd07e53e"} Nov 21 20:27:56 crc kubenswrapper[4727]: I1121 20:27:56.913415 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3570e4bf-cf68-4cdf-a37d-f63090685a4c","Type":"ContainerStarted","Data":"d1edf9f2d9e1a38027ce5e3757828aa22dddb9272c1f09712469fa756d6b31bf"} Nov 21 20:27:57 crc kubenswrapper[4727]: I1121 20:27:57.452851 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-59c5b86cb6-fqnd7" podUID="a9717694-2709-4e01-a03c-ffd0a9ea5c42" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.198:9311/healthcheck\": read tcp 10.217.0.2:45686->10.217.0.198:9311: read: connection reset by peer" Nov 21 20:27:57 crc kubenswrapper[4727]: I1121 20:27:57.452878 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-59c5b86cb6-fqnd7" 
podUID="a9717694-2709-4e01-a03c-ffd0a9ea5c42" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.198:9311/healthcheck\": read tcp 10.217.0.2:45688->10.217.0.198:9311: read: connection reset by peer" Nov 21 20:27:57 crc kubenswrapper[4727]: I1121 20:27:57.927185 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3570e4bf-cf68-4cdf-a37d-f63090685a4c","Type":"ContainerStarted","Data":"add3f04b28ed4c53e9d3ad73b384e564d65876ce9c5e38c98e48559ff12a10bb"} Nov 21 20:27:57 crc kubenswrapper[4727]: I1121 20:27:57.929398 4727 generic.go:334] "Generic (PLEG): container finished" podID="a9717694-2709-4e01-a03c-ffd0a9ea5c42" containerID="ff0ceab44c29e15592946609ed1f77dce143d2f10a94426cbc2705b7928467fd" exitCode=0 Nov 21 20:27:57 crc kubenswrapper[4727]: I1121 20:27:57.929434 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-59c5b86cb6-fqnd7" event={"ID":"a9717694-2709-4e01-a03c-ffd0a9ea5c42","Type":"ContainerDied","Data":"ff0ceab44c29e15592946609ed1f77dce143d2f10a94426cbc2705b7928467fd"} Nov 21 20:27:57 crc kubenswrapper[4727]: I1121 20:27:57.929454 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-59c5b86cb6-fqnd7" event={"ID":"a9717694-2709-4e01-a03c-ffd0a9ea5c42","Type":"ContainerDied","Data":"a2828c1c58509b1cd6477398db173cf4cac65adab3a6d1d843f298e581e731aa"} Nov 21 20:27:57 crc kubenswrapper[4727]: I1121 20:27:57.929473 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2828c1c58509b1cd6477398db173cf4cac65adab3a6d1d843f298e581e731aa" Nov 21 20:27:57 crc kubenswrapper[4727]: I1121 20:27:57.955984 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.955945795 podStartE2EDuration="3.955945795s" podCreationTimestamp="2025-11-21 20:27:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:27:57.94991812 +0000 UTC m=+1283.136103164" watchObservedRunningTime="2025-11-21 20:27:57.955945795 +0000 UTC m=+1283.142130839" Nov 21 20:27:57 crc kubenswrapper[4727]: I1121 20:27:57.996728 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-59c5b86cb6-fqnd7" Nov 21 20:27:58 crc kubenswrapper[4727]: I1121 20:27:58.051228 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-cc454d9c9-g5hjz" Nov 21 20:27:58 crc kubenswrapper[4727]: I1121 20:27:58.060564 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9717694-2709-4e01-a03c-ffd0a9ea5c42-logs\") pod \"a9717694-2709-4e01-a03c-ffd0a9ea5c42\" (UID: \"a9717694-2709-4e01-a03c-ffd0a9ea5c42\") " Nov 21 20:27:58 crc kubenswrapper[4727]: I1121 20:27:58.060613 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9717694-2709-4e01-a03c-ffd0a9ea5c42-combined-ca-bundle\") pod \"a9717694-2709-4e01-a03c-ffd0a9ea5c42\" (UID: \"a9717694-2709-4e01-a03c-ffd0a9ea5c42\") " Nov 21 20:27:58 crc kubenswrapper[4727]: I1121 20:27:58.060637 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpw9l\" (UniqueName: \"kubernetes.io/projected/a9717694-2709-4e01-a03c-ffd0a9ea5c42-kube-api-access-mpw9l\") pod \"a9717694-2709-4e01-a03c-ffd0a9ea5c42\" (UID: \"a9717694-2709-4e01-a03c-ffd0a9ea5c42\") " Nov 21 20:27:58 crc kubenswrapper[4727]: I1121 20:27:58.060721 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a9717694-2709-4e01-a03c-ffd0a9ea5c42-config-data-custom\") pod \"a9717694-2709-4e01-a03c-ffd0a9ea5c42\" (UID: 
\"a9717694-2709-4e01-a03c-ffd0a9ea5c42\") " Nov 21 20:27:58 crc kubenswrapper[4727]: I1121 20:27:58.060813 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9717694-2709-4e01-a03c-ffd0a9ea5c42-config-data\") pod \"a9717694-2709-4e01-a03c-ffd0a9ea5c42\" (UID: \"a9717694-2709-4e01-a03c-ffd0a9ea5c42\") " Nov 21 20:27:58 crc kubenswrapper[4727]: I1121 20:27:58.062154 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9717694-2709-4e01-a03c-ffd0a9ea5c42-logs" (OuterVolumeSpecName: "logs") pod "a9717694-2709-4e01-a03c-ffd0a9ea5c42" (UID: "a9717694-2709-4e01-a03c-ffd0a9ea5c42"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:27:58 crc kubenswrapper[4727]: I1121 20:27:58.074194 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9717694-2709-4e01-a03c-ffd0a9ea5c42-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a9717694-2709-4e01-a03c-ffd0a9ea5c42" (UID: "a9717694-2709-4e01-a03c-ffd0a9ea5c42"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:27:58 crc kubenswrapper[4727]: I1121 20:27:58.102993 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9717694-2709-4e01-a03c-ffd0a9ea5c42-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a9717694-2709-4e01-a03c-ffd0a9ea5c42" (UID: "a9717694-2709-4e01-a03c-ffd0a9ea5c42"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:27:58 crc kubenswrapper[4727]: I1121 20:27:58.118120 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9717694-2709-4e01-a03c-ffd0a9ea5c42-kube-api-access-mpw9l" (OuterVolumeSpecName: "kube-api-access-mpw9l") pod "a9717694-2709-4e01-a03c-ffd0a9ea5c42" (UID: "a9717694-2709-4e01-a03c-ffd0a9ea5c42"). InnerVolumeSpecName "kube-api-access-mpw9l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:27:58 crc kubenswrapper[4727]: I1121 20:27:58.163817 4727 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9717694-2709-4e01-a03c-ffd0a9ea5c42-logs\") on node \"crc\" DevicePath \"\"" Nov 21 20:27:58 crc kubenswrapper[4727]: I1121 20:27:58.164084 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9717694-2709-4e01-a03c-ffd0a9ea5c42-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:27:58 crc kubenswrapper[4727]: I1121 20:27:58.164154 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpw9l\" (UniqueName: \"kubernetes.io/projected/a9717694-2709-4e01-a03c-ffd0a9ea5c42-kube-api-access-mpw9l\") on node \"crc\" DevicePath \"\"" Nov 21 20:27:58 crc kubenswrapper[4727]: I1121 20:27:58.164213 4727 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a9717694-2709-4e01-a03c-ffd0a9ea5c42-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 21 20:27:58 crc kubenswrapper[4727]: I1121 20:27:58.215149 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9717694-2709-4e01-a03c-ffd0a9ea5c42-config-data" (OuterVolumeSpecName: "config-data") pod "a9717694-2709-4e01-a03c-ffd0a9ea5c42" (UID: "a9717694-2709-4e01-a03c-ffd0a9ea5c42"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:27:58 crc kubenswrapper[4727]: I1121 20:27:58.266557 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9717694-2709-4e01-a03c-ffd0a9ea5c42-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 20:27:58 crc kubenswrapper[4727]: I1121 20:27:58.938880 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-59c5b86cb6-fqnd7" Nov 21 20:27:58 crc kubenswrapper[4727]: I1121 20:27:58.972686 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-59c5b86cb6-fqnd7"] Nov 21 20:27:58 crc kubenswrapper[4727]: I1121 20:27:58.981972 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-59c5b86cb6-fqnd7"] Nov 21 20:27:59 crc kubenswrapper[4727]: I1121 20:27:59.515290 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9717694-2709-4e01-a03c-ffd0a9ea5c42" path="/var/lib/kubelet/pods/a9717694-2709-4e01-a03c-ffd0a9ea5c42/volumes" Nov 21 20:28:00 crc kubenswrapper[4727]: I1121 20:28:00.246517 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 21 20:28:02 crc kubenswrapper[4727]: I1121 20:28:02.318606 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 21 20:28:02 crc kubenswrapper[4727]: E1121 20:28:02.319350 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9717694-2709-4e01-a03c-ffd0a9ea5c42" containerName="barbican-api" Nov 21 20:28:02 crc kubenswrapper[4727]: I1121 20:28:02.319364 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9717694-2709-4e01-a03c-ffd0a9ea5c42" containerName="barbican-api" Nov 21 20:28:02 crc kubenswrapper[4727]: E1121 20:28:02.319378 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9717694-2709-4e01-a03c-ffd0a9ea5c42" 
containerName="barbican-api-log" Nov 21 20:28:02 crc kubenswrapper[4727]: I1121 20:28:02.319384 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9717694-2709-4e01-a03c-ffd0a9ea5c42" containerName="barbican-api-log" Nov 21 20:28:02 crc kubenswrapper[4727]: I1121 20:28:02.319609 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9717694-2709-4e01-a03c-ffd0a9ea5c42" containerName="barbican-api-log" Nov 21 20:28:02 crc kubenswrapper[4727]: I1121 20:28:02.319628 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9717694-2709-4e01-a03c-ffd0a9ea5c42" containerName="barbican-api" Nov 21 20:28:02 crc kubenswrapper[4727]: I1121 20:28:02.320343 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 21 20:28:02 crc kubenswrapper[4727]: I1121 20:28:02.323497 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-lsz55" Nov 21 20:28:02 crc kubenswrapper[4727]: I1121 20:28:02.325269 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Nov 21 20:28:02 crc kubenswrapper[4727]: I1121 20:28:02.335252 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Nov 21 20:28:02 crc kubenswrapper[4727]: I1121 20:28:02.343120 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 21 20:28:02 crc kubenswrapper[4727]: I1121 20:28:02.457533 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9a87496-67fd-477f-8de2-21e51d0af200-combined-ca-bundle\") pod \"openstackclient\" (UID: \"e9a87496-67fd-477f-8de2-21e51d0af200\") " pod="openstack/openstackclient" Nov 21 20:28:02 crc kubenswrapper[4727]: I1121 20:28:02.457597 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e9a87496-67fd-477f-8de2-21e51d0af200-openstack-config\") pod \"openstackclient\" (UID: \"e9a87496-67fd-477f-8de2-21e51d0af200\") " pod="openstack/openstackclient" Nov 21 20:28:02 crc kubenswrapper[4727]: I1121 20:28:02.457624 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e9a87496-67fd-477f-8de2-21e51d0af200-openstack-config-secret\") pod \"openstackclient\" (UID: \"e9a87496-67fd-477f-8de2-21e51d0af200\") " pod="openstack/openstackclient" Nov 21 20:28:02 crc kubenswrapper[4727]: I1121 20:28:02.457664 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlrht\" (UniqueName: \"kubernetes.io/projected/e9a87496-67fd-477f-8de2-21e51d0af200-kube-api-access-nlrht\") pod \"openstackclient\" (UID: \"e9a87496-67fd-477f-8de2-21e51d0af200\") " pod="openstack/openstackclient" Nov 21 20:28:02 crc kubenswrapper[4727]: I1121 20:28:02.559166 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9a87496-67fd-477f-8de2-21e51d0af200-combined-ca-bundle\") pod \"openstackclient\" (UID: \"e9a87496-67fd-477f-8de2-21e51d0af200\") " pod="openstack/openstackclient" Nov 21 20:28:02 crc kubenswrapper[4727]: I1121 20:28:02.559223 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e9a87496-67fd-477f-8de2-21e51d0af200-openstack-config\") pod \"openstackclient\" (UID: \"e9a87496-67fd-477f-8de2-21e51d0af200\") " pod="openstack/openstackclient" Nov 21 20:28:02 crc kubenswrapper[4727]: I1121 20:28:02.559248 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" 
(UniqueName: \"kubernetes.io/secret/e9a87496-67fd-477f-8de2-21e51d0af200-openstack-config-secret\") pod \"openstackclient\" (UID: \"e9a87496-67fd-477f-8de2-21e51d0af200\") " pod="openstack/openstackclient" Nov 21 20:28:02 crc kubenswrapper[4727]: I1121 20:28:02.559284 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlrht\" (UniqueName: \"kubernetes.io/projected/e9a87496-67fd-477f-8de2-21e51d0af200-kube-api-access-nlrht\") pod \"openstackclient\" (UID: \"e9a87496-67fd-477f-8de2-21e51d0af200\") " pod="openstack/openstackclient" Nov 21 20:28:02 crc kubenswrapper[4727]: I1121 20:28:02.560933 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e9a87496-67fd-477f-8de2-21e51d0af200-openstack-config\") pod \"openstackclient\" (UID: \"e9a87496-67fd-477f-8de2-21e51d0af200\") " pod="openstack/openstackclient" Nov 21 20:28:02 crc kubenswrapper[4727]: I1121 20:28:02.565232 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9a87496-67fd-477f-8de2-21e51d0af200-combined-ca-bundle\") pod \"openstackclient\" (UID: \"e9a87496-67fd-477f-8de2-21e51d0af200\") " pod="openstack/openstackclient" Nov 21 20:28:02 crc kubenswrapper[4727]: I1121 20:28:02.567777 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e9a87496-67fd-477f-8de2-21e51d0af200-openstack-config-secret\") pod \"openstackclient\" (UID: \"e9a87496-67fd-477f-8de2-21e51d0af200\") " pod="openstack/openstackclient" Nov 21 20:28:02 crc kubenswrapper[4727]: I1121 20:28:02.578271 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlrht\" (UniqueName: \"kubernetes.io/projected/e9a87496-67fd-477f-8de2-21e51d0af200-kube-api-access-nlrht\") pod \"openstackclient\" (UID: 
\"e9a87496-67fd-477f-8de2-21e51d0af200\") " pod="openstack/openstackclient" Nov 21 20:28:02 crc kubenswrapper[4727]: I1121 20:28:02.642491 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 21 20:28:02 crc kubenswrapper[4727]: I1121 20:28:02.713811 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Nov 21 20:28:02 crc kubenswrapper[4727]: I1121 20:28:02.748335 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Nov 21 20:28:02 crc kubenswrapper[4727]: I1121 20:28:02.764044 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 21 20:28:02 crc kubenswrapper[4727]: I1121 20:28:02.765394 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 21 20:28:02 crc kubenswrapper[4727]: E1121 20:28:02.811089 4727 log.go:32] "RunPodSandbox from runtime service failed" err=< Nov 21 20:28:02 crc kubenswrapper[4727]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_e9a87496-67fd-477f-8de2-21e51d0af200_0(84be45d164fc0682916ed81d7167f93fe23a97ee248eeac445e040fdee76ee23): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"84be45d164fc0682916ed81d7167f93fe23a97ee248eeac445e040fdee76ee23" Netns:"/var/run/netns/5b51e86c-e111-4e5b-b43d-7bc36076e4ac" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=84be45d164fc0682916ed81d7167f93fe23a97ee248eeac445e040fdee76ee23;K8S_POD_UID=e9a87496-67fd-477f-8de2-21e51d0af200" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/e9a87496-67fd-477f-8de2-21e51d0af200]: expected pod UID 
"e9a87496-67fd-477f-8de2-21e51d0af200" but got "a4349594-5d5b-4a77-8571-88be061ab039" from Kube API
Nov 21 20:28:02 crc kubenswrapper[4727]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Nov 21 20:28:02 crc kubenswrapper[4727]: >
Nov 21 20:28:02 crc kubenswrapper[4727]: E1121 20:28:02.811192 4727 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=<
Nov 21 20:28:02 crc kubenswrapper[4727]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_e9a87496-67fd-477f-8de2-21e51d0af200_0(84be45d164fc0682916ed81d7167f93fe23a97ee248eeac445e040fdee76ee23): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"84be45d164fc0682916ed81d7167f93fe23a97ee248eeac445e040fdee76ee23" Netns:"/var/run/netns/5b51e86c-e111-4e5b-b43d-7bc36076e4ac" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=84be45d164fc0682916ed81d7167f93fe23a97ee248eeac445e040fdee76ee23;K8S_POD_UID=e9a87496-67fd-477f-8de2-21e51d0af200" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/e9a87496-67fd-477f-8de2-21e51d0af200]: expected pod UID "e9a87496-67fd-477f-8de2-21e51d0af200" but got "a4349594-5d5b-4a77-8571-88be061ab039" from Kube API
Nov 21 20:28:02 crc kubenswrapper[4727]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Nov 21 20:28:02 crc kubenswrapper[4727]: > pod="openstack/openstackclient"
Nov 21 20:28:02 crc kubenswrapper[4727]: I1121 20:28:02.826956 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Nov 21 20:28:02 crc kubenswrapper[4727]: I1121 20:28:02.866433 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjzvd\" (UniqueName: \"kubernetes.io/projected/a4349594-5d5b-4a77-8571-88be061ab039-kube-api-access-sjzvd\") pod \"openstackclient\" (UID: \"a4349594-5d5b-4a77-8571-88be061ab039\") " pod="openstack/openstackclient"
Nov 21 20:28:02 crc kubenswrapper[4727]: I1121 20:28:02.866862 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a4349594-5d5b-4a77-8571-88be061ab039-openstack-config-secret\") pod \"openstackclient\" (UID: \"a4349594-5d5b-4a77-8571-88be061ab039\") " pod="openstack/openstackclient"
Nov 21 20:28:02 crc kubenswrapper[4727]: I1121 20:28:02.867016 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4349594-5d5b-4a77-8571-88be061ab039-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a4349594-5d5b-4a77-8571-88be061ab039\") " pod="openstack/openstackclient"
Nov 21 20:28:02 crc kubenswrapper[4727]: I1121 20:28:02.867136 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a4349594-5d5b-4a77-8571-88be061ab039-openstack-config\") pod \"openstackclient\" (UID: \"a4349594-5d5b-4a77-8571-88be061ab039\") " pod="openstack/openstackclient"
Nov 21 20:28:02 crc kubenswrapper[4727]: I1121 20:28:02.969280 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjzvd\" (UniqueName: \"kubernetes.io/projected/a4349594-5d5b-4a77-8571-88be061ab039-kube-api-access-sjzvd\") pod \"openstackclient\" (UID: \"a4349594-5d5b-4a77-8571-88be061ab039\") " pod="openstack/openstackclient"
Nov 21 20:28:02 crc kubenswrapper[4727]: I1121 20:28:02.969356 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a4349594-5d5b-4a77-8571-88be061ab039-openstack-config-secret\") pod \"openstackclient\" (UID: \"a4349594-5d5b-4a77-8571-88be061ab039\") " pod="openstack/openstackclient"
Nov 21 20:28:02 crc kubenswrapper[4727]: I1121 20:28:02.969376 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4349594-5d5b-4a77-8571-88be061ab039-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a4349594-5d5b-4a77-8571-88be061ab039\") " pod="openstack/openstackclient"
Nov 21 20:28:02 crc kubenswrapper[4727]: I1121 20:28:02.969393 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a4349594-5d5b-4a77-8571-88be061ab039-openstack-config\") pod \"openstackclient\" (UID: \"a4349594-5d5b-4a77-8571-88be061ab039\") " pod="openstack/openstackclient"
Nov 21 20:28:02 crc kubenswrapper[4727]: I1121 20:28:02.970468 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a4349594-5d5b-4a77-8571-88be061ab039-openstack-config\") pod \"openstackclient\" (UID: \"a4349594-5d5b-4a77-8571-88be061ab039\") " pod="openstack/openstackclient"
Nov 21 20:28:02 crc kubenswrapper[4727]: I1121 20:28:02.973718 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a4349594-5d5b-4a77-8571-88be061ab039-openstack-config-secret\") pod \"openstackclient\" (UID: \"a4349594-5d5b-4a77-8571-88be061ab039\") " pod="openstack/openstackclient"
Nov 21 20:28:02 crc kubenswrapper[4727]: I1121 20:28:02.974536 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4349594-5d5b-4a77-8571-88be061ab039-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a4349594-5d5b-4a77-8571-88be061ab039\") " pod="openstack/openstackclient"
Nov 21 20:28:02 crc kubenswrapper[4727]: I1121 20:28:02.987733 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjzvd\" (UniqueName: \"kubernetes.io/projected/a4349594-5d5b-4a77-8571-88be061ab039-kube-api-access-sjzvd\") pod \"openstackclient\" (UID: \"a4349594-5d5b-4a77-8571-88be061ab039\") " pod="openstack/openstackclient"
Nov 21 20:28:03 crc kubenswrapper[4727]: I1121 20:28:03.011103 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Nov 21 20:28:03 crc kubenswrapper[4727]: I1121 20:28:03.017072 4727 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="e9a87496-67fd-477f-8de2-21e51d0af200" podUID="a4349594-5d5b-4a77-8571-88be061ab039"
Nov 21 20:28:03 crc kubenswrapper[4727]: I1121 20:28:03.022753 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Nov 21 20:28:03 crc kubenswrapper[4727]: I1121 20:28:03.071788 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e9a87496-67fd-477f-8de2-21e51d0af200-openstack-config-secret\") pod \"e9a87496-67fd-477f-8de2-21e51d0af200\" (UID: \"e9a87496-67fd-477f-8de2-21e51d0af200\") "
Nov 21 20:28:03 crc kubenswrapper[4727]: I1121 20:28:03.071989 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlrht\" (UniqueName: \"kubernetes.io/projected/e9a87496-67fd-477f-8de2-21e51d0af200-kube-api-access-nlrht\") pod \"e9a87496-67fd-477f-8de2-21e51d0af200\" (UID: \"e9a87496-67fd-477f-8de2-21e51d0af200\") "
Nov 21 20:28:03 crc kubenswrapper[4727]: I1121 20:28:03.072029 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9a87496-67fd-477f-8de2-21e51d0af200-combined-ca-bundle\") pod \"e9a87496-67fd-477f-8de2-21e51d0af200\" (UID: \"e9a87496-67fd-477f-8de2-21e51d0af200\") "
Nov 21 20:28:03 crc kubenswrapper[4727]: I1121 20:28:03.072227 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e9a87496-67fd-477f-8de2-21e51d0af200-openstack-config\") pod \"e9a87496-67fd-477f-8de2-21e51d0af200\" (UID: \"e9a87496-67fd-477f-8de2-21e51d0af200\") "
Nov 21 20:28:03 crc kubenswrapper[4727]: I1121 20:28:03.073530 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9a87496-67fd-477f-8de2-21e51d0af200-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "e9a87496-67fd-477f-8de2-21e51d0af200" (UID: "e9a87496-67fd-477f-8de2-21e51d0af200"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 21 20:28:03 crc kubenswrapper[4727]: I1121 20:28:03.079132 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9a87496-67fd-477f-8de2-21e51d0af200-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "e9a87496-67fd-477f-8de2-21e51d0af200" (UID: "e9a87496-67fd-477f-8de2-21e51d0af200"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 20:28:03 crc kubenswrapper[4727]: I1121 20:28:03.079259 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9a87496-67fd-477f-8de2-21e51d0af200-kube-api-access-nlrht" (OuterVolumeSpecName: "kube-api-access-nlrht") pod "e9a87496-67fd-477f-8de2-21e51d0af200" (UID: "e9a87496-67fd-477f-8de2-21e51d0af200"). InnerVolumeSpecName "kube-api-access-nlrht". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 20:28:03 crc kubenswrapper[4727]: I1121 20:28:03.080102 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9a87496-67fd-477f-8de2-21e51d0af200-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e9a87496-67fd-477f-8de2-21e51d0af200" (UID: "e9a87496-67fd-477f-8de2-21e51d0af200"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 20:28:03 crc kubenswrapper[4727]: I1121 20:28:03.153040 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Nov 21 20:28:03 crc kubenswrapper[4727]: I1121 20:28:03.188272 4727 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e9a87496-67fd-477f-8de2-21e51d0af200-openstack-config\") on node \"crc\" DevicePath \"\""
Nov 21 20:28:03 crc kubenswrapper[4727]: I1121 20:28:03.188313 4727 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e9a87496-67fd-477f-8de2-21e51d0af200-openstack-config-secret\") on node \"crc\" DevicePath \"\""
Nov 21 20:28:03 crc kubenswrapper[4727]: I1121 20:28:03.188328 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nlrht\" (UniqueName: \"kubernetes.io/projected/e9a87496-67fd-477f-8de2-21e51d0af200-kube-api-access-nlrht\") on node \"crc\" DevicePath \"\""
Nov 21 20:28:03 crc kubenswrapper[4727]: I1121 20:28:03.188340 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9a87496-67fd-477f-8de2-21e51d0af200-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 21 20:28:03 crc kubenswrapper[4727]: I1121 20:28:03.516631 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9a87496-67fd-477f-8de2-21e51d0af200" path="/var/lib/kubelet/pods/e9a87496-67fd-477f-8de2-21e51d0af200/volumes"
Nov 21 20:28:03 crc kubenswrapper[4727]: I1121 20:28:03.735670 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Nov 21 20:28:03 crc kubenswrapper[4727]: W1121 20:28:03.737918 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda4349594_5d5b_4a77_8571_88be061ab039.slice/crio-23d534c70970a029c28a5b0ea008f9eaa02b0fa2458990b88b6df7fb346da2e9 WatchSource:0}: Error finding container 23d534c70970a029c28a5b0ea008f9eaa02b0fa2458990b88b6df7fb346da2e9: Status 404 returned error can't find the container with id 23d534c70970a029c28a5b0ea008f9eaa02b0fa2458990b88b6df7fb346da2e9
Nov 21 20:28:04 crc kubenswrapper[4727]: I1121 20:28:04.025440 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"a4349594-5d5b-4a77-8571-88be061ab039","Type":"ContainerStarted","Data":"23d534c70970a029c28a5b0ea008f9eaa02b0fa2458990b88b6df7fb346da2e9"}
Nov 21 20:28:04 crc kubenswrapper[4727]: I1121 20:28:04.025477 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Nov 21 20:28:04 crc kubenswrapper[4727]: I1121 20:28:04.035683 4727 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="e9a87496-67fd-477f-8de2-21e51d0af200" podUID="a4349594-5d5b-4a77-8571-88be061ab039"
Nov 21 20:28:05 crc kubenswrapper[4727]: I1121 20:28:05.514217 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Nov 21 20:28:06 crc kubenswrapper[4727]: I1121 20:28:06.146696 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 21 20:28:06 crc kubenswrapper[4727]: I1121 20:28:06.148128 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bce1af82-ae9a-4eda-906c-e500a2727f27" containerName="ceilometer-central-agent" containerID="cri-o://09df844fb72c99fbb281a5d5d2262f31e73fc33a9962897ebf4ad4ca2530c41b" gracePeriod=30
Nov 21 20:28:06 crc kubenswrapper[4727]: I1121 20:28:06.148179 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bce1af82-ae9a-4eda-906c-e500a2727f27" containerName="proxy-httpd" containerID="cri-o://401a7927f2971b874815c83b791587996a2c69b15c07e08d296581862a553320" gracePeriod=30
Nov 21 20:28:06 crc kubenswrapper[4727]: I1121 20:28:06.148229 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bce1af82-ae9a-4eda-906c-e500a2727f27" containerName="sg-core" containerID="cri-o://48ba325f33bfd261de135825539fe6159d5408213e28c733ddcdad1dcba3a1fa" gracePeriod=30
Nov 21 20:28:06 crc kubenswrapper[4727]: I1121 20:28:06.148242 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bce1af82-ae9a-4eda-906c-e500a2727f27" containerName="ceilometer-notification-agent" containerID="cri-o://a90fadad2fb926a3723ac0221e60db1ea2c37ff7d0dfd0bb5cffe015a81dc788" gracePeriod=30
Nov 21 20:28:06 crc kubenswrapper[4727]: I1121 20:28:06.164102 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="bce1af82-ae9a-4eda-906c-e500a2727f27" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.202:3000/\": EOF"
Nov 21 20:28:07 crc kubenswrapper[4727]: I1121 20:28:07.061111 4727 generic.go:334] "Generic (PLEG): container finished" podID="bce1af82-ae9a-4eda-906c-e500a2727f27" containerID="401a7927f2971b874815c83b791587996a2c69b15c07e08d296581862a553320" exitCode=0
Nov 21 20:28:07 crc kubenswrapper[4727]: I1121 20:28:07.061427 4727 generic.go:334] "Generic (PLEG): container finished" podID="bce1af82-ae9a-4eda-906c-e500a2727f27" containerID="48ba325f33bfd261de135825539fe6159d5408213e28c733ddcdad1dcba3a1fa" exitCode=2
Nov 21 20:28:07 crc kubenswrapper[4727]: I1121 20:28:07.061437 4727 generic.go:334] "Generic (PLEG): container finished" podID="bce1af82-ae9a-4eda-906c-e500a2727f27" containerID="09df844fb72c99fbb281a5d5d2262f31e73fc33a9962897ebf4ad4ca2530c41b" exitCode=0
Nov 21 20:28:07 crc kubenswrapper[4727]: I1121 20:28:07.061207 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bce1af82-ae9a-4eda-906c-e500a2727f27","Type":"ContainerDied","Data":"401a7927f2971b874815c83b791587996a2c69b15c07e08d296581862a553320"}
Nov 21 20:28:07 crc kubenswrapper[4727]: I1121 20:28:07.061487 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bce1af82-ae9a-4eda-906c-e500a2727f27","Type":"ContainerDied","Data":"48ba325f33bfd261de135825539fe6159d5408213e28c733ddcdad1dcba3a1fa"}
Nov 21 20:28:07 crc kubenswrapper[4727]: I1121 20:28:07.061501 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bce1af82-ae9a-4eda-906c-e500a2727f27","Type":"ContainerDied","Data":"09df844fb72c99fbb281a5d5d2262f31e73fc33a9962897ebf4ad4ca2530c41b"}
Nov 21 20:28:07 crc kubenswrapper[4727]: I1121 20:28:07.603486 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-8476b67874-f2dtk" podUID="b4d6995f-166c-410a-adee-3733a25c28df" containerName="neutron-httpd" probeResult="failure" output="Get \"http://10.217.0.191:9696/\": dial tcp 10.217.0.191:9696: connect: connection refused"
Nov 21 20:28:07 crc kubenswrapper[4727]: I1121 20:28:07.738860 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="bce1af82-ae9a-4eda-906c-e500a2727f27" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.202:3000/\": dial tcp 10.217.0.202:3000: connect: connection refused"
Nov 21 20:28:08 crc kubenswrapper[4727]: I1121 20:28:08.524898 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-9c89d96c5-h6lxw"]
Nov 21 20:28:08 crc kubenswrapper[4727]: I1121 20:28:08.532625 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-9c89d96c5-h6lxw"
Nov 21 20:28:08 crc kubenswrapper[4727]: I1121 20:28:08.536239 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc"
Nov 21 20:28:08 crc kubenswrapper[4727]: I1121 20:28:08.536293 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Nov 21 20:28:08 crc kubenswrapper[4727]: I1121 20:28:08.536317 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc"
Nov 21 20:28:08 crc kubenswrapper[4727]: I1121 20:28:08.539487 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-9c89d96c5-h6lxw"]
Nov 21 20:28:08 crc kubenswrapper[4727]: I1121 20:28:08.634432 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2ef4264-0e20-4b0d-ac0b-6107eeb60fc8-run-httpd\") pod \"swift-proxy-9c89d96c5-h6lxw\" (UID: \"d2ef4264-0e20-4b0d-ac0b-6107eeb60fc8\") " pod="openstack/swift-proxy-9c89d96c5-h6lxw"
Nov 21 20:28:08 crc kubenswrapper[4727]: I1121 20:28:08.634481 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d2ef4264-0e20-4b0d-ac0b-6107eeb60fc8-etc-swift\") pod \"swift-proxy-9c89d96c5-h6lxw\" (UID: \"d2ef4264-0e20-4b0d-ac0b-6107eeb60fc8\") " pod="openstack/swift-proxy-9c89d96c5-h6lxw"
Nov 21 20:28:08 crc kubenswrapper[4727]: I1121 20:28:08.634901 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2ef4264-0e20-4b0d-ac0b-6107eeb60fc8-log-httpd\") pod \"swift-proxy-9c89d96c5-h6lxw\" (UID: \"d2ef4264-0e20-4b0d-ac0b-6107eeb60fc8\") " pod="openstack/swift-proxy-9c89d96c5-h6lxw"
Nov 21 20:28:08 crc kubenswrapper[4727]: I1121 20:28:08.635037 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2ef4264-0e20-4b0d-ac0b-6107eeb60fc8-combined-ca-bundle\") pod \"swift-proxy-9c89d96c5-h6lxw\" (UID: \"d2ef4264-0e20-4b0d-ac0b-6107eeb60fc8\") " pod="openstack/swift-proxy-9c89d96c5-h6lxw"
Nov 21 20:28:08 crc kubenswrapper[4727]: I1121 20:28:08.635093 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2ef4264-0e20-4b0d-ac0b-6107eeb60fc8-public-tls-certs\") pod \"swift-proxy-9c89d96c5-h6lxw\" (UID: \"d2ef4264-0e20-4b0d-ac0b-6107eeb60fc8\") " pod="openstack/swift-proxy-9c89d96c5-h6lxw"
Nov 21 20:28:08 crc kubenswrapper[4727]: I1121 20:28:08.635160 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2ef4264-0e20-4b0d-ac0b-6107eeb60fc8-internal-tls-certs\") pod \"swift-proxy-9c89d96c5-h6lxw\" (UID: \"d2ef4264-0e20-4b0d-ac0b-6107eeb60fc8\") " pod="openstack/swift-proxy-9c89d96c5-h6lxw"
Nov 21 20:28:08 crc kubenswrapper[4727]: I1121 20:28:08.635229 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llk6d\" (UniqueName: \"kubernetes.io/projected/d2ef4264-0e20-4b0d-ac0b-6107eeb60fc8-kube-api-access-llk6d\") pod \"swift-proxy-9c89d96c5-h6lxw\" (UID: \"d2ef4264-0e20-4b0d-ac0b-6107eeb60fc8\") " pod="openstack/swift-proxy-9c89d96c5-h6lxw"
Nov 21 20:28:08 crc kubenswrapper[4727]: I1121 20:28:08.635272 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2ef4264-0e20-4b0d-ac0b-6107eeb60fc8-config-data\") pod \"swift-proxy-9c89d96c5-h6lxw\" (UID: \"d2ef4264-0e20-4b0d-ac0b-6107eeb60fc8\") " pod="openstack/swift-proxy-9c89d96c5-h6lxw"
Nov 21 20:28:08 crc kubenswrapper[4727]: I1121 20:28:08.737320 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2ef4264-0e20-4b0d-ac0b-6107eeb60fc8-log-httpd\") pod \"swift-proxy-9c89d96c5-h6lxw\" (UID: \"d2ef4264-0e20-4b0d-ac0b-6107eeb60fc8\") " pod="openstack/swift-proxy-9c89d96c5-h6lxw"
Nov 21 20:28:08 crc kubenswrapper[4727]: I1121 20:28:08.737377 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2ef4264-0e20-4b0d-ac0b-6107eeb60fc8-combined-ca-bundle\") pod \"swift-proxy-9c89d96c5-h6lxw\" (UID: \"d2ef4264-0e20-4b0d-ac0b-6107eeb60fc8\") " pod="openstack/swift-proxy-9c89d96c5-h6lxw"
Nov 21 20:28:08 crc kubenswrapper[4727]: I1121 20:28:08.737402 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2ef4264-0e20-4b0d-ac0b-6107eeb60fc8-public-tls-certs\") pod \"swift-proxy-9c89d96c5-h6lxw\" (UID: \"d2ef4264-0e20-4b0d-ac0b-6107eeb60fc8\") " pod="openstack/swift-proxy-9c89d96c5-h6lxw"
Nov 21 20:28:08 crc kubenswrapper[4727]: I1121 20:28:08.737437 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2ef4264-0e20-4b0d-ac0b-6107eeb60fc8-internal-tls-certs\") pod \"swift-proxy-9c89d96c5-h6lxw\" (UID: \"d2ef4264-0e20-4b0d-ac0b-6107eeb60fc8\") " pod="openstack/swift-proxy-9c89d96c5-h6lxw"
Nov 21 20:28:08 crc kubenswrapper[4727]: I1121 20:28:08.737476 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llk6d\" (UniqueName: \"kubernetes.io/projected/d2ef4264-0e20-4b0d-ac0b-6107eeb60fc8-kube-api-access-llk6d\") pod \"swift-proxy-9c89d96c5-h6lxw\" (UID: \"d2ef4264-0e20-4b0d-ac0b-6107eeb60fc8\") " pod="openstack/swift-proxy-9c89d96c5-h6lxw"
Nov 21 20:28:08 crc kubenswrapper[4727]: I1121 20:28:08.737505 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2ef4264-0e20-4b0d-ac0b-6107eeb60fc8-config-data\") pod \"swift-proxy-9c89d96c5-h6lxw\" (UID: \"d2ef4264-0e20-4b0d-ac0b-6107eeb60fc8\") " pod="openstack/swift-proxy-9c89d96c5-h6lxw"
Nov 21 20:28:08 crc kubenswrapper[4727]: I1121 20:28:08.737585 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2ef4264-0e20-4b0d-ac0b-6107eeb60fc8-run-httpd\") pod \"swift-proxy-9c89d96c5-h6lxw\" (UID: \"d2ef4264-0e20-4b0d-ac0b-6107eeb60fc8\") " pod="openstack/swift-proxy-9c89d96c5-h6lxw"
Nov 21 20:28:08 crc kubenswrapper[4727]: I1121 20:28:08.737614 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d2ef4264-0e20-4b0d-ac0b-6107eeb60fc8-etc-swift\") pod \"swift-proxy-9c89d96c5-h6lxw\" (UID: \"d2ef4264-0e20-4b0d-ac0b-6107eeb60fc8\") " pod="openstack/swift-proxy-9c89d96c5-h6lxw"
Nov 21 20:28:08 crc kubenswrapper[4727]: I1121 20:28:08.738071 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2ef4264-0e20-4b0d-ac0b-6107eeb60fc8-run-httpd\") pod \"swift-proxy-9c89d96c5-h6lxw\" (UID: \"d2ef4264-0e20-4b0d-ac0b-6107eeb60fc8\") " pod="openstack/swift-proxy-9c89d96c5-h6lxw"
Nov 21 20:28:08 crc kubenswrapper[4727]: I1121 20:28:08.738900 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2ef4264-0e20-4b0d-ac0b-6107eeb60fc8-log-httpd\") pod \"swift-proxy-9c89d96c5-h6lxw\" (UID: \"d2ef4264-0e20-4b0d-ac0b-6107eeb60fc8\") " pod="openstack/swift-proxy-9c89d96c5-h6lxw"
Nov 21 20:28:08 crc kubenswrapper[4727]: I1121 20:28:08.746733 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2ef4264-0e20-4b0d-ac0b-6107eeb60fc8-config-data\") pod \"swift-proxy-9c89d96c5-h6lxw\" (UID: \"d2ef4264-0e20-4b0d-ac0b-6107eeb60fc8\") " pod="openstack/swift-proxy-9c89d96c5-h6lxw"
Nov 21 20:28:08 crc kubenswrapper[4727]: I1121 20:28:08.750509 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2ef4264-0e20-4b0d-ac0b-6107eeb60fc8-public-tls-certs\") pod \"swift-proxy-9c89d96c5-h6lxw\" (UID: \"d2ef4264-0e20-4b0d-ac0b-6107eeb60fc8\") " pod="openstack/swift-proxy-9c89d96c5-h6lxw"
Nov 21 20:28:08 crc kubenswrapper[4727]: I1121 20:28:08.750706 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2ef4264-0e20-4b0d-ac0b-6107eeb60fc8-combined-ca-bundle\") pod \"swift-proxy-9c89d96c5-h6lxw\" (UID: \"d2ef4264-0e20-4b0d-ac0b-6107eeb60fc8\") " pod="openstack/swift-proxy-9c89d96c5-h6lxw"
Nov 21 20:28:08 crc kubenswrapper[4727]: I1121 20:28:08.752199 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2ef4264-0e20-4b0d-ac0b-6107eeb60fc8-internal-tls-certs\") pod \"swift-proxy-9c89d96c5-h6lxw\" (UID: \"d2ef4264-0e20-4b0d-ac0b-6107eeb60fc8\") " pod="openstack/swift-proxy-9c89d96c5-h6lxw"
Nov 21 20:28:08 crc kubenswrapper[4727]: I1121 20:28:08.754029 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d2ef4264-0e20-4b0d-ac0b-6107eeb60fc8-etc-swift\") pod \"swift-proxy-9c89d96c5-h6lxw\" (UID: \"d2ef4264-0e20-4b0d-ac0b-6107eeb60fc8\") " pod="openstack/swift-proxy-9c89d96c5-h6lxw"
Nov 21 20:28:08 crc kubenswrapper[4727]: I1121 20:28:08.757807 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llk6d\" (UniqueName: \"kubernetes.io/projected/d2ef4264-0e20-4b0d-ac0b-6107eeb60fc8-kube-api-access-llk6d\") pod \"swift-proxy-9c89d96c5-h6lxw\" (UID: \"d2ef4264-0e20-4b0d-ac0b-6107eeb60fc8\") " pod="openstack/swift-proxy-9c89d96c5-h6lxw"
Nov 21 20:28:08 crc kubenswrapper[4727]: I1121 20:28:08.861276 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-9c89d96c5-h6lxw"
Nov 21 20:28:11 crc kubenswrapper[4727]: I1121 20:28:11.129211 4727 generic.go:334] "Generic (PLEG): container finished" podID="bce1af82-ae9a-4eda-906c-e500a2727f27" containerID="a90fadad2fb926a3723ac0221e60db1ea2c37ff7d0dfd0bb5cffe015a81dc788" exitCode=0
Nov 21 20:28:11 crc kubenswrapper[4727]: I1121 20:28:11.129641 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bce1af82-ae9a-4eda-906c-e500a2727f27","Type":"ContainerDied","Data":"a90fadad2fb926a3723ac0221e60db1ea2c37ff7d0dfd0bb5cffe015a81dc788"}
Nov 21 20:28:11 crc kubenswrapper[4727]: I1121 20:28:11.132572 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-8476b67874-f2dtk_b4d6995f-166c-410a-adee-3733a25c28df/neutron-api/0.log"
Nov 21 20:28:11 crc kubenswrapper[4727]: I1121 20:28:11.132626 4727 generic.go:334] "Generic (PLEG): container finished" podID="b4d6995f-166c-410a-adee-3733a25c28df" containerID="364f5a4a47bb2ca8f0327214f490df2d67e76c1d1aa242e30cd8041e08a9bad4" exitCode=137
Nov 21 20:28:11 crc kubenswrapper[4727]: I1121 20:28:11.132657 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8476b67874-f2dtk" event={"ID":"b4d6995f-166c-410a-adee-3733a25c28df","Type":"ContainerDied","Data":"364f5a4a47bb2ca8f0327214f490df2d67e76c1d1aa242e30cd8041e08a9bad4"}
Nov 21 20:28:11 crc kubenswrapper[4727]: E1121 20:28:11.894111 4727 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20e5ca35_9016_499e_908a_15ad18d11d04.slice/crio-conmon-b4816a197616bd3e75b9a4bbf9b7b1e6fc961138bccc0ce948bc961650827314.scope\": RecentStats: unable to find data in memory cache]"
Nov 21 20:28:12 crc kubenswrapper[4727]: I1121 20:28:12.147700 4727 generic.go:334] "Generic (PLEG): container finished" podID="20e5ca35-9016-499e-908a-15ad18d11d04" containerID="b4816a197616bd3e75b9a4bbf9b7b1e6fc961138bccc0ce948bc961650827314" exitCode=137
Nov 21 20:28:12 crc kubenswrapper[4727]: I1121 20:28:12.147766 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"20e5ca35-9016-499e-908a-15ad18d11d04","Type":"ContainerDied","Data":"b4816a197616bd3e75b9a4bbf9b7b1e6fc961138bccc0ce948bc961650827314"}
Nov 21 20:28:12 crc kubenswrapper[4727]: I1121 20:28:12.597050 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 21 20:28:12 crc kubenswrapper[4727]: I1121 20:28:12.597317 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="f7909040-500b-4a0d-878f-9a4c2d8b6a9b" containerName="glance-log" containerID="cri-o://7bfec23f6aa8aebd03ea88d77fdaa02c68e0dfb290cd68fc9d15e404684d0cc4" gracePeriod=30
Nov 21 20:28:12 crc kubenswrapper[4727]: I1121 20:28:12.597385 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="f7909040-500b-4a0d-878f-9a4c2d8b6a9b" containerName="glance-httpd" containerID="cri-o://b07957761d5ecd2b6177d21677c95483a2b9b6b460eef9fa8fa7cdddf2eeecf9" gracePeriod=30
Nov 21 20:28:13 crc kubenswrapper[4727]: I1121 20:28:13.161099 4727 generic.go:334] "Generic (PLEG): container finished" podID="f7909040-500b-4a0d-878f-9a4c2d8b6a9b" containerID="7bfec23f6aa8aebd03ea88d77fdaa02c68e0dfb290cd68fc9d15e404684d0cc4" exitCode=143
Nov 21 20:28:13 crc kubenswrapper[4727]: I1121 20:28:13.161199 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f7909040-500b-4a0d-878f-9a4c2d8b6a9b","Type":"ContainerDied","Data":"7bfec23f6aa8aebd03ea88d77fdaa02c68e0dfb290cd68fc9d15e404684d0cc4"}
Nov 21 20:28:14 crc kubenswrapper[4727]: I1121 20:28:14.424884 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 21 20:28:14 crc kubenswrapper[4727]: I1121 20:28:14.433924 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Nov 21 20:28:14 crc kubenswrapper[4727]: I1121 20:28:14.469042 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bce1af82-ae9a-4eda-906c-e500a2727f27-sg-core-conf-yaml\") pod \"bce1af82-ae9a-4eda-906c-e500a2727f27\" (UID: \"bce1af82-ae9a-4eda-906c-e500a2727f27\") "
Nov 21 20:28:14 crc kubenswrapper[4727]: I1121 20:28:14.469235 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bce1af82-ae9a-4eda-906c-e500a2727f27-log-httpd\") pod \"bce1af82-ae9a-4eda-906c-e500a2727f27\" (UID: \"bce1af82-ae9a-4eda-906c-e500a2727f27\") "
Nov 21 20:28:14 crc kubenswrapper[4727]: I1121 20:28:14.469289 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rfjb\" (UniqueName: \"kubernetes.io/projected/bce1af82-ae9a-4eda-906c-e500a2727f27-kube-api-access-2rfjb\") pod \"bce1af82-ae9a-4eda-906c-e500a2727f27\" (UID: \"bce1af82-ae9a-4eda-906c-e500a2727f27\") "
Nov 21 20:28:14 crc kubenswrapper[4727]: I1121 20:28:14.469307 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bce1af82-ae9a-4eda-906c-e500a2727f27-run-httpd\") pod \"bce1af82-ae9a-4eda-906c-e500a2727f27\" (UID: \"bce1af82-ae9a-4eda-906c-e500a2727f27\") "
Nov 21 20:28:14 crc kubenswrapper[4727]: I1121 20:28:14.469333 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bce1af82-ae9a-4eda-906c-e500a2727f27-scripts\") pod \"bce1af82-ae9a-4eda-906c-e500a2727f27\" (UID: \"bce1af82-ae9a-4eda-906c-e500a2727f27\") "
Nov 21 20:28:14 crc kubenswrapper[4727]: I1121 20:28:14.469388 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bce1af82-ae9a-4eda-906c-e500a2727f27-combined-ca-bundle\") pod \"bce1af82-ae9a-4eda-906c-e500a2727f27\" (UID: \"bce1af82-ae9a-4eda-906c-e500a2727f27\") "
Nov 21 20:28:14 crc kubenswrapper[4727]: I1121 20:28:14.469433 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bce1af82-ae9a-4eda-906c-e500a2727f27-config-data\") pod \"bce1af82-ae9a-4eda-906c-e500a2727f27\" (UID: \"bce1af82-ae9a-4eda-906c-e500a2727f27\") "
Nov 21 20:28:14 crc kubenswrapper[4727]: I1121 20:28:14.474249 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bce1af82-ae9a-4eda-906c-e500a2727f27-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "bce1af82-ae9a-4eda-906c-e500a2727f27" (UID: "bce1af82-ae9a-4eda-906c-e500a2727f27"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 21 20:28:14 crc kubenswrapper[4727]: I1121 20:28:14.475159 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bce1af82-ae9a-4eda-906c-e500a2727f27-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "bce1af82-ae9a-4eda-906c-e500a2727f27" (UID: "bce1af82-ae9a-4eda-906c-e500a2727f27"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 21 20:28:14 crc kubenswrapper[4727]: I1121 20:28:14.478121 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bce1af82-ae9a-4eda-906c-e500a2727f27-scripts" (OuterVolumeSpecName: "scripts") pod "bce1af82-ae9a-4eda-906c-e500a2727f27" (UID: "bce1af82-ae9a-4eda-906c-e500a2727f27"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 20:28:14 crc kubenswrapper[4727]: I1121 20:28:14.480079 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bce1af82-ae9a-4eda-906c-e500a2727f27-kube-api-access-2rfjb" (OuterVolumeSpecName: "kube-api-access-2rfjb") pod "bce1af82-ae9a-4eda-906c-e500a2727f27" (UID: "bce1af82-ae9a-4eda-906c-e500a2727f27"). InnerVolumeSpecName "kube-api-access-2rfjb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 20:28:14 crc kubenswrapper[4727]: I1121 20:28:14.513217 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bce1af82-ae9a-4eda-906c-e500a2727f27-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "bce1af82-ae9a-4eda-906c-e500a2727f27" (UID: "bce1af82-ae9a-4eda-906c-e500a2727f27"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 20:28:14 crc kubenswrapper[4727]: I1121 20:28:14.571435 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20e5ca35-9016-499e-908a-15ad18d11d04-combined-ca-bundle\") pod \"20e5ca35-9016-499e-908a-15ad18d11d04\" (UID: \"20e5ca35-9016-499e-908a-15ad18d11d04\") "
Nov 21 20:28:14 crc kubenswrapper[4727]: I1121 20:28:14.571611 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20e5ca35-9016-499e-908a-15ad18d11d04-logs\") pod \"20e5ca35-9016-499e-908a-15ad18d11d04\" (UID: \"20e5ca35-9016-499e-908a-15ad18d11d04\") "
Nov 21 20:28:14 crc kubenswrapper[4727]: I1121 20:28:14.571637 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20e5ca35-9016-499e-908a-15ad18d11d04-scripts\") pod \"20e5ca35-9016-499e-908a-15ad18d11d04\" (UID: \"20e5ca35-9016-499e-908a-15ad18d11d04\") "
Nov 21 20:28:14 crc kubenswrapper[4727]: I1121 20:28:14.571661 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vvk9\" (UniqueName: \"kubernetes.io/projected/20e5ca35-9016-499e-908a-15ad18d11d04-kube-api-access-8vvk9\") pod \"20e5ca35-9016-499e-908a-15ad18d11d04\" (UID: \"20e5ca35-9016-499e-908a-15ad18d11d04\") "
Nov 21 20:28:14 crc kubenswrapper[4727]: I1121 20:28:14.571680 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/20e5ca35-9016-499e-908a-15ad18d11d04-config-data-custom\") pod \"20e5ca35-9016-499e-908a-15ad18d11d04\" (UID: \"20e5ca35-9016-499e-908a-15ad18d11d04\") "
Nov 21 20:28:14 crc kubenswrapper[4727]: I1121 20:28:14.571835 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/20e5ca35-9016-499e-908a-15ad18d11d04-config-data\") pod \"20e5ca35-9016-499e-908a-15ad18d11d04\" (UID: \"20e5ca35-9016-499e-908a-15ad18d11d04\") " Nov 21 20:28:14 crc kubenswrapper[4727]: I1121 20:28:14.571852 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/20e5ca35-9016-499e-908a-15ad18d11d04-etc-machine-id\") pod \"20e5ca35-9016-499e-908a-15ad18d11d04\" (UID: \"20e5ca35-9016-499e-908a-15ad18d11d04\") " Nov 21 20:28:14 crc kubenswrapper[4727]: I1121 20:28:14.572414 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20e5ca35-9016-499e-908a-15ad18d11d04-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "20e5ca35-9016-499e-908a-15ad18d11d04" (UID: "20e5ca35-9016-499e-908a-15ad18d11d04"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 20:28:14 crc kubenswrapper[4727]: I1121 20:28:14.573084 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20e5ca35-9016-499e-908a-15ad18d11d04-logs" (OuterVolumeSpecName: "logs") pod "20e5ca35-9016-499e-908a-15ad18d11d04" (UID: "20e5ca35-9016-499e-908a-15ad18d11d04"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:28:14 crc kubenswrapper[4727]: I1121 20:28:14.573303 4727 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bce1af82-ae9a-4eda-906c-e500a2727f27-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:14 crc kubenswrapper[4727]: I1121 20:28:14.573876 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rfjb\" (UniqueName: \"kubernetes.io/projected/bce1af82-ae9a-4eda-906c-e500a2727f27-kube-api-access-2rfjb\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:14 crc kubenswrapper[4727]: I1121 20:28:14.573898 4727 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bce1af82-ae9a-4eda-906c-e500a2727f27-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:14 crc kubenswrapper[4727]: I1121 20:28:14.573909 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bce1af82-ae9a-4eda-906c-e500a2727f27-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:14 crc kubenswrapper[4727]: I1121 20:28:14.573919 4727 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20e5ca35-9016-499e-908a-15ad18d11d04-logs\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:14 crc kubenswrapper[4727]: I1121 20:28:14.573928 4727 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bce1af82-ae9a-4eda-906c-e500a2727f27-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:14 crc kubenswrapper[4727]: I1121 20:28:14.573937 4727 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/20e5ca35-9016-499e-908a-15ad18d11d04-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:14 crc kubenswrapper[4727]: I1121 20:28:14.576117 4727 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20e5ca35-9016-499e-908a-15ad18d11d04-kube-api-access-8vvk9" (OuterVolumeSpecName: "kube-api-access-8vvk9") pod "20e5ca35-9016-499e-908a-15ad18d11d04" (UID: "20e5ca35-9016-499e-908a-15ad18d11d04"). InnerVolumeSpecName "kube-api-access-8vvk9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:28:14 crc kubenswrapper[4727]: I1121 20:28:14.577745 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20e5ca35-9016-499e-908a-15ad18d11d04-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "20e5ca35-9016-499e-908a-15ad18d11d04" (UID: "20e5ca35-9016-499e-908a-15ad18d11d04"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:28:14 crc kubenswrapper[4727]: I1121 20:28:14.579878 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20e5ca35-9016-499e-908a-15ad18d11d04-scripts" (OuterVolumeSpecName: "scripts") pod "20e5ca35-9016-499e-908a-15ad18d11d04" (UID: "20e5ca35-9016-499e-908a-15ad18d11d04"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:28:14 crc kubenswrapper[4727]: I1121 20:28:14.593936 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bce1af82-ae9a-4eda-906c-e500a2727f27-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bce1af82-ae9a-4eda-906c-e500a2727f27" (UID: "bce1af82-ae9a-4eda-906c-e500a2727f27"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:28:14 crc kubenswrapper[4727]: I1121 20:28:14.609591 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20e5ca35-9016-499e-908a-15ad18d11d04-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "20e5ca35-9016-499e-908a-15ad18d11d04" (UID: "20e5ca35-9016-499e-908a-15ad18d11d04"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:28:14 crc kubenswrapper[4727]: I1121 20:28:14.616607 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bce1af82-ae9a-4eda-906c-e500a2727f27-config-data" (OuterVolumeSpecName: "config-data") pod "bce1af82-ae9a-4eda-906c-e500a2727f27" (UID: "bce1af82-ae9a-4eda-906c-e500a2727f27"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:28:14 crc kubenswrapper[4727]: I1121 20:28:14.631159 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-8476b67874-f2dtk_b4d6995f-166c-410a-adee-3733a25c28df/neutron-api/0.log" Nov 21 20:28:14 crc kubenswrapper[4727]: I1121 20:28:14.631223 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8476b67874-f2dtk" Nov 21 20:28:14 crc kubenswrapper[4727]: I1121 20:28:14.658692 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20e5ca35-9016-499e-908a-15ad18d11d04-config-data" (OuterVolumeSpecName: "config-data") pod "20e5ca35-9016-499e-908a-15ad18d11d04" (UID: "20e5ca35-9016-499e-908a-15ad18d11d04"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:28:14 crc kubenswrapper[4727]: I1121 20:28:14.675465 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b4d6995f-166c-410a-adee-3733a25c28df-config\") pod \"b4d6995f-166c-410a-adee-3733a25c28df\" (UID: \"b4d6995f-166c-410a-adee-3733a25c28df\") " Nov 21 20:28:14 crc kubenswrapper[4727]: I1121 20:28:14.675541 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4d6995f-166c-410a-adee-3733a25c28df-combined-ca-bundle\") pod \"b4d6995f-166c-410a-adee-3733a25c28df\" (UID: \"b4d6995f-166c-410a-adee-3733a25c28df\") " Nov 21 20:28:14 crc kubenswrapper[4727]: I1121 20:28:14.675616 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4kdk8\" (UniqueName: \"kubernetes.io/projected/b4d6995f-166c-410a-adee-3733a25c28df-kube-api-access-4kdk8\") pod \"b4d6995f-166c-410a-adee-3733a25c28df\" (UID: \"b4d6995f-166c-410a-adee-3733a25c28df\") " Nov 21 20:28:14 crc kubenswrapper[4727]: I1121 20:28:14.675665 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4d6995f-166c-410a-adee-3733a25c28df-ovndb-tls-certs\") pod \"b4d6995f-166c-410a-adee-3733a25c28df\" (UID: \"b4d6995f-166c-410a-adee-3733a25c28df\") " Nov 21 20:28:14 crc kubenswrapper[4727]: I1121 20:28:14.675857 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b4d6995f-166c-410a-adee-3733a25c28df-httpd-config\") pod \"b4d6995f-166c-410a-adee-3733a25c28df\" (UID: \"b4d6995f-166c-410a-adee-3733a25c28df\") " Nov 21 20:28:14 crc kubenswrapper[4727]: I1121 20:28:14.676337 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/20e5ca35-9016-499e-908a-15ad18d11d04-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:14 crc kubenswrapper[4727]: I1121 20:28:14.676357 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20e5ca35-9016-499e-908a-15ad18d11d04-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:14 crc kubenswrapper[4727]: I1121 20:28:14.676367 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bce1af82-ae9a-4eda-906c-e500a2727f27-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:14 crc kubenswrapper[4727]: I1121 20:28:14.676376 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bce1af82-ae9a-4eda-906c-e500a2727f27-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:14 crc kubenswrapper[4727]: I1121 20:28:14.676384 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20e5ca35-9016-499e-908a-15ad18d11d04-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:14 crc kubenswrapper[4727]: I1121 20:28:14.676395 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vvk9\" (UniqueName: \"kubernetes.io/projected/20e5ca35-9016-499e-908a-15ad18d11d04-kube-api-access-8vvk9\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:14 crc kubenswrapper[4727]: I1121 20:28:14.676403 4727 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/20e5ca35-9016-499e-908a-15ad18d11d04-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:14 crc kubenswrapper[4727]: I1121 20:28:14.679710 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4d6995f-166c-410a-adee-3733a25c28df-httpd-config" (OuterVolumeSpecName: "httpd-config") pod 
"b4d6995f-166c-410a-adee-3733a25c28df" (UID: "b4d6995f-166c-410a-adee-3733a25c28df"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:28:14 crc kubenswrapper[4727]: I1121 20:28:14.680034 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4d6995f-166c-410a-adee-3733a25c28df-kube-api-access-4kdk8" (OuterVolumeSpecName: "kube-api-access-4kdk8") pod "b4d6995f-166c-410a-adee-3733a25c28df" (UID: "b4d6995f-166c-410a-adee-3733a25c28df"). InnerVolumeSpecName "kube-api-access-4kdk8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:28:14 crc kubenswrapper[4727]: I1121 20:28:14.734791 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4d6995f-166c-410a-adee-3733a25c28df-config" (OuterVolumeSpecName: "config") pod "b4d6995f-166c-410a-adee-3733a25c28df" (UID: "b4d6995f-166c-410a-adee-3733a25c28df"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:28:14 crc kubenswrapper[4727]: I1121 20:28:14.737909 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4d6995f-166c-410a-adee-3733a25c28df-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b4d6995f-166c-410a-adee-3733a25c28df" (UID: "b4d6995f-166c-410a-adee-3733a25c28df"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:28:14 crc kubenswrapper[4727]: I1121 20:28:14.770708 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4d6995f-166c-410a-adee-3733a25c28df-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "b4d6995f-166c-410a-adee-3733a25c28df" (UID: "b4d6995f-166c-410a-adee-3733a25c28df"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:28:14 crc kubenswrapper[4727]: I1121 20:28:14.778251 4727 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b4d6995f-166c-410a-adee-3733a25c28df-httpd-config\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:14 crc kubenswrapper[4727]: I1121 20:28:14.778280 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/b4d6995f-166c-410a-adee-3733a25c28df-config\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:14 crc kubenswrapper[4727]: I1121 20:28:14.778290 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4d6995f-166c-410a-adee-3733a25c28df-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:14 crc kubenswrapper[4727]: I1121 20:28:14.778300 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4kdk8\" (UniqueName: \"kubernetes.io/projected/b4d6995f-166c-410a-adee-3733a25c28df-kube-api-access-4kdk8\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:14 crc kubenswrapper[4727]: I1121 20:28:14.778309 4727 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4d6995f-166c-410a-adee-3733a25c28df-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:14 crc kubenswrapper[4727]: I1121 20:28:14.862688 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-9c89d96c5-h6lxw"] Nov 21 20:28:14 crc kubenswrapper[4727]: I1121 20:28:14.999333 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 21 20:28:14 crc kubenswrapper[4727]: I1121 20:28:14.999601 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="407cfc42-937a-4e61-b257-042675020db0" containerName="glance-log" 
containerID="cri-o://6d9f218a85ef4f630e3c8e515cb83e57b79860124f0dea59b6d3cbf362fe1094" gracePeriod=30 Nov 21 20:28:14 crc kubenswrapper[4727]: I1121 20:28:14.999736 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="407cfc42-937a-4e61-b257-042675020db0" containerName="glance-httpd" containerID="cri-o://5e540f8a1538066950326c07069aa0bf770f19fd10b217f424b71c2982dd7bfb" gracePeriod=30 Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.207710 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-9c89d96c5-h6lxw" event={"ID":"d2ef4264-0e20-4b0d-ac0b-6107eeb60fc8","Type":"ContainerStarted","Data":"548c91200cac01cc41ff8fc66f421a6bd1a5bc32208e5404648a234346e168a7"} Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.208091 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-9c89d96c5-h6lxw" event={"ID":"d2ef4264-0e20-4b0d-ac0b-6107eeb60fc8","Type":"ContainerStarted","Data":"c25f5abaa1e42c95dfa6e0623d07ab2d0f736d30277a878f38d4dae163b888ae"} Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.211524 4727 generic.go:334] "Generic (PLEG): container finished" podID="407cfc42-937a-4e61-b257-042675020db0" containerID="6d9f218a85ef4f630e3c8e515cb83e57b79860124f0dea59b6d3cbf362fe1094" exitCode=143 Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.211628 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"407cfc42-937a-4e61-b257-042675020db0","Type":"ContainerDied","Data":"6d9f218a85ef4f630e3c8e515cb83e57b79860124f0dea59b6d3cbf362fe1094"} Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.213563 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-8476b67874-f2dtk_b4d6995f-166c-410a-adee-3733a25c28df/neutron-api/0.log" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.213643 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-8476b67874-f2dtk" event={"ID":"b4d6995f-166c-410a-adee-3733a25c28df","Type":"ContainerDied","Data":"289f70c671ce4d1159ffaba84f9a15426cf2fb0fe16e2278cf9db832c919720a"} Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.213677 4727 scope.go:117] "RemoveContainer" containerID="dfe330a320ec02ed64a189754512202a3865e294e595fac2ad0da8d964dd0603" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.213826 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8476b67874-f2dtk" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.218259 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"a4349594-5d5b-4a77-8571-88be061ab039","Type":"ContainerStarted","Data":"4eeade6f6abdbbcbce154e97c186654a43f73582a025048decf44f86441e7d9b"} Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.222703 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.222713 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"20e5ca35-9016-499e-908a-15ad18d11d04","Type":"ContainerDied","Data":"a669ec1e5152008f20fcb8654db1d75659d22ece74a54d947f34a63d565fecee"} Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.229910 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bce1af82-ae9a-4eda-906c-e500a2727f27","Type":"ContainerDied","Data":"255a4285971958a34687aa10855c3a5749ba59c104a8da55232dd211190ae0f3"} Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.230029 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.255705 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.897549617 podStartE2EDuration="13.255685603s" podCreationTimestamp="2025-11-21 20:28:02 +0000 UTC" firstStartedPulling="2025-11-21 20:28:03.739482125 +0000 UTC m=+1288.925667169" lastFinishedPulling="2025-11-21 20:28:14.097618111 +0000 UTC m=+1299.283803155" observedRunningTime="2025-11-21 20:28:15.237031062 +0000 UTC m=+1300.423216116" watchObservedRunningTime="2025-11-21 20:28:15.255685603 +0000 UTC m=+1300.441870647" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.273846 4727 scope.go:117] "RemoveContainer" containerID="364f5a4a47bb2ca8f0327214f490df2d67e76c1d1aa242e30cd8041e08a9bad4" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.280568 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-8476b67874-f2dtk"] Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.290366 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-8476b67874-f2dtk"] Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.322492 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.328084 4727 scope.go:117] "RemoveContainer" containerID="b4816a197616bd3e75b9a4bbf9b7b1e6fc961138bccc0ce948bc961650827314" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.334288 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.359485 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.363413 4727 scope.go:117] "RemoveContainer" containerID="369215486f3b16df159fba213ea1cbd356f1fa2ee51bbe03cf576e2dcf446b6d" Nov 21 20:28:15 crc 
kubenswrapper[4727]: I1121 20:28:15.373282 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.393055 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 21 20:28:15 crc kubenswrapper[4727]: E1121 20:28:15.393559 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20e5ca35-9016-499e-908a-15ad18d11d04" containerName="cinder-api-log" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.393574 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="20e5ca35-9016-499e-908a-15ad18d11d04" containerName="cinder-api-log" Nov 21 20:28:15 crc kubenswrapper[4727]: E1121 20:28:15.393593 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bce1af82-ae9a-4eda-906c-e500a2727f27" containerName="proxy-httpd" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.393601 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="bce1af82-ae9a-4eda-906c-e500a2727f27" containerName="proxy-httpd" Nov 21 20:28:15 crc kubenswrapper[4727]: E1121 20:28:15.393614 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4d6995f-166c-410a-adee-3733a25c28df" containerName="neutron-api" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.393620 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4d6995f-166c-410a-adee-3733a25c28df" containerName="neutron-api" Nov 21 20:28:15 crc kubenswrapper[4727]: E1121 20:28:15.393633 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bce1af82-ae9a-4eda-906c-e500a2727f27" containerName="ceilometer-central-agent" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.393639 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="bce1af82-ae9a-4eda-906c-e500a2727f27" containerName="ceilometer-central-agent" Nov 21 20:28:15 crc kubenswrapper[4727]: E1121 20:28:15.393649 4727 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="bce1af82-ae9a-4eda-906c-e500a2727f27" containerName="sg-core" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.393654 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="bce1af82-ae9a-4eda-906c-e500a2727f27" containerName="sg-core" Nov 21 20:28:15 crc kubenswrapper[4727]: E1121 20:28:15.393666 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4d6995f-166c-410a-adee-3733a25c28df" containerName="neutron-httpd" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.393673 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4d6995f-166c-410a-adee-3733a25c28df" containerName="neutron-httpd" Nov 21 20:28:15 crc kubenswrapper[4727]: E1121 20:28:15.393696 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20e5ca35-9016-499e-908a-15ad18d11d04" containerName="cinder-api" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.393702 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="20e5ca35-9016-499e-908a-15ad18d11d04" containerName="cinder-api" Nov 21 20:28:15 crc kubenswrapper[4727]: E1121 20:28:15.393718 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bce1af82-ae9a-4eda-906c-e500a2727f27" containerName="ceilometer-notification-agent" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.393724 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="bce1af82-ae9a-4eda-906c-e500a2727f27" containerName="ceilometer-notification-agent" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.393918 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="20e5ca35-9016-499e-908a-15ad18d11d04" containerName="cinder-api" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.393938 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4d6995f-166c-410a-adee-3733a25c28df" containerName="neutron-api" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.393950 4727 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="bce1af82-ae9a-4eda-906c-e500a2727f27" containerName="sg-core" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.393984 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4d6995f-166c-410a-adee-3733a25c28df" containerName="neutron-httpd" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.393999 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="bce1af82-ae9a-4eda-906c-e500a2727f27" containerName="ceilometer-notification-agent" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.394016 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="bce1af82-ae9a-4eda-906c-e500a2727f27" containerName="ceilometer-central-agent" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.394030 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="bce1af82-ae9a-4eda-906c-e500a2727f27" containerName="proxy-httpd" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.394048 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="20e5ca35-9016-499e-908a-15ad18d11d04" containerName="cinder-api-log" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.395503 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.400868 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.403781 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.403827 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.416378 4727 scope.go:117] "RemoveContainer" containerID="401a7927f2971b874815c83b791587996a2c69b15c07e08d296581862a553320" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.429095 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.442464 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.446609 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.448850 4727 scope.go:117] "RemoveContainer" containerID="48ba325f33bfd261de135825539fe6159d5408213e28c733ddcdad1dcba3a1fa" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.452116 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.452652 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.485259 4727 scope.go:117] "RemoveContainer" containerID="a90fadad2fb926a3723ac0221e60db1ea2c37ff7d0dfd0bb5cffe015a81dc788" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.502037 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/515a4f45-bb89-4e9d-87b7-4a4e11f0f8c6-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"515a4f45-bb89-4e9d-87b7-4a4e11f0f8c6\") " pod="openstack/cinder-api-0" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.502099 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edb55688-0b4d-4c94-aa7c-d7d373a0ff45-config-data\") pod \"ceilometer-0\" (UID: \"edb55688-0b4d-4c94-aa7c-d7d373a0ff45\") " pod="openstack/ceilometer-0" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.502151 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/515a4f45-bb89-4e9d-87b7-4a4e11f0f8c6-logs\") pod \"cinder-api-0\" (UID: \"515a4f45-bb89-4e9d-87b7-4a4e11f0f8c6\") " pod="openstack/cinder-api-0" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.502184 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-rlds8\" (UniqueName: \"kubernetes.io/projected/515a4f45-bb89-4e9d-87b7-4a4e11f0f8c6-kube-api-access-rlds8\") pod \"cinder-api-0\" (UID: \"515a4f45-bb89-4e9d-87b7-4a4e11f0f8c6\") " pod="openstack/cinder-api-0" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.502207 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edb55688-0b4d-4c94-aa7c-d7d373a0ff45-scripts\") pod \"ceilometer-0\" (UID: \"edb55688-0b4d-4c94-aa7c-d7d373a0ff45\") " pod="openstack/ceilometer-0" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.502242 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/edb55688-0b4d-4c94-aa7c-d7d373a0ff45-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"edb55688-0b4d-4c94-aa7c-d7d373a0ff45\") " pod="openstack/ceilometer-0" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.502304 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/515a4f45-bb89-4e9d-87b7-4a4e11f0f8c6-public-tls-certs\") pod \"cinder-api-0\" (UID: \"515a4f45-bb89-4e9d-87b7-4a4e11f0f8c6\") " pod="openstack/cinder-api-0" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.502383 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/515a4f45-bb89-4e9d-87b7-4a4e11f0f8c6-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"515a4f45-bb89-4e9d-87b7-4a4e11f0f8c6\") " pod="openstack/cinder-api-0" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.502432 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/515a4f45-bb89-4e9d-87b7-4a4e11f0f8c6-etc-machine-id\") pod \"cinder-api-0\" (UID: \"515a4f45-bb89-4e9d-87b7-4a4e11f0f8c6\") " pod="openstack/cinder-api-0" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.502520 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/515a4f45-bb89-4e9d-87b7-4a4e11f0f8c6-config-data\") pod \"cinder-api-0\" (UID: \"515a4f45-bb89-4e9d-87b7-4a4e11f0f8c6\") " pod="openstack/cinder-api-0" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.502607 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlbgc\" (UniqueName: \"kubernetes.io/projected/edb55688-0b4d-4c94-aa7c-d7d373a0ff45-kube-api-access-xlbgc\") pod \"ceilometer-0\" (UID: \"edb55688-0b4d-4c94-aa7c-d7d373a0ff45\") " pod="openstack/ceilometer-0" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.502643 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/edb55688-0b4d-4c94-aa7c-d7d373a0ff45-log-httpd\") pod \"ceilometer-0\" (UID: \"edb55688-0b4d-4c94-aa7c-d7d373a0ff45\") " pod="openstack/ceilometer-0" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.502662 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/515a4f45-bb89-4e9d-87b7-4a4e11f0f8c6-config-data-custom\") pod \"cinder-api-0\" (UID: \"515a4f45-bb89-4e9d-87b7-4a4e11f0f8c6\") " pod="openstack/cinder-api-0" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.502794 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edb55688-0b4d-4c94-aa7c-d7d373a0ff45-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"edb55688-0b4d-4c94-aa7c-d7d373a0ff45\") " pod="openstack/ceilometer-0" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.502856 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/515a4f45-bb89-4e9d-87b7-4a4e11f0f8c6-scripts\") pod \"cinder-api-0\" (UID: \"515a4f45-bb89-4e9d-87b7-4a4e11f0f8c6\") " pod="openstack/cinder-api-0" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.503002 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/edb55688-0b4d-4c94-aa7c-d7d373a0ff45-run-httpd\") pod \"ceilometer-0\" (UID: \"edb55688-0b4d-4c94-aa7c-d7d373a0ff45\") " pod="openstack/ceilometer-0" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.507187 4727 scope.go:117] "RemoveContainer" containerID="09df844fb72c99fbb281a5d5d2262f31e73fc33a9962897ebf4ad4ca2530c41b" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.515245 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20e5ca35-9016-499e-908a-15ad18d11d04" path="/var/lib/kubelet/pods/20e5ca35-9016-499e-908a-15ad18d11d04/volumes" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.516069 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4d6995f-166c-410a-adee-3733a25c28df" path="/var/lib/kubelet/pods/b4d6995f-166c-410a-adee-3733a25c28df/volumes" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.516797 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bce1af82-ae9a-4eda-906c-e500a2727f27" path="/var/lib/kubelet/pods/bce1af82-ae9a-4eda-906c-e500a2727f27/volumes" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.518934 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.592359 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/ceilometer-0"] Nov 21 20:28:15 crc kubenswrapper[4727]: E1121 20:28:15.593472 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle config-data kube-api-access-xlbgc log-httpd run-httpd scripts sg-core-conf-yaml], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/ceilometer-0" podUID="edb55688-0b4d-4c94-aa7c-d7d373a0ff45" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.605255 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/edb55688-0b4d-4c94-aa7c-d7d373a0ff45-run-httpd\") pod \"ceilometer-0\" (UID: \"edb55688-0b4d-4c94-aa7c-d7d373a0ff45\") " pod="openstack/ceilometer-0" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.605318 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/515a4f45-bb89-4e9d-87b7-4a4e11f0f8c6-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"515a4f45-bb89-4e9d-87b7-4a4e11f0f8c6\") " pod="openstack/cinder-api-0" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.605394 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edb55688-0b4d-4c94-aa7c-d7d373a0ff45-config-data\") pod \"ceilometer-0\" (UID: \"edb55688-0b4d-4c94-aa7c-d7d373a0ff45\") " pod="openstack/ceilometer-0" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.605425 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/515a4f45-bb89-4e9d-87b7-4a4e11f0f8c6-logs\") pod \"cinder-api-0\" (UID: \"515a4f45-bb89-4e9d-87b7-4a4e11f0f8c6\") " pod="openstack/cinder-api-0" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.605574 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlds8\" (UniqueName: 
\"kubernetes.io/projected/515a4f45-bb89-4e9d-87b7-4a4e11f0f8c6-kube-api-access-rlds8\") pod \"cinder-api-0\" (UID: \"515a4f45-bb89-4e9d-87b7-4a4e11f0f8c6\") " pod="openstack/cinder-api-0" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.605644 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edb55688-0b4d-4c94-aa7c-d7d373a0ff45-scripts\") pod \"ceilometer-0\" (UID: \"edb55688-0b4d-4c94-aa7c-d7d373a0ff45\") " pod="openstack/ceilometer-0" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.605678 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/edb55688-0b4d-4c94-aa7c-d7d373a0ff45-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"edb55688-0b4d-4c94-aa7c-d7d373a0ff45\") " pod="openstack/ceilometer-0" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.605720 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/515a4f45-bb89-4e9d-87b7-4a4e11f0f8c6-public-tls-certs\") pod \"cinder-api-0\" (UID: \"515a4f45-bb89-4e9d-87b7-4a4e11f0f8c6\") " pod="openstack/cinder-api-0" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.605748 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/515a4f45-bb89-4e9d-87b7-4a4e11f0f8c6-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"515a4f45-bb89-4e9d-87b7-4a4e11f0f8c6\") " pod="openstack/cinder-api-0" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.605810 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/515a4f45-bb89-4e9d-87b7-4a4e11f0f8c6-etc-machine-id\") pod \"cinder-api-0\" (UID: \"515a4f45-bb89-4e9d-87b7-4a4e11f0f8c6\") " pod="openstack/cinder-api-0" Nov 21 20:28:15 crc 
kubenswrapper[4727]: I1121 20:28:15.605921 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/515a4f45-bb89-4e9d-87b7-4a4e11f0f8c6-config-data\") pod \"cinder-api-0\" (UID: \"515a4f45-bb89-4e9d-87b7-4a4e11f0f8c6\") " pod="openstack/cinder-api-0" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.606052 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlbgc\" (UniqueName: \"kubernetes.io/projected/edb55688-0b4d-4c94-aa7c-d7d373a0ff45-kube-api-access-xlbgc\") pod \"ceilometer-0\" (UID: \"edb55688-0b4d-4c94-aa7c-d7d373a0ff45\") " pod="openstack/ceilometer-0" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.606090 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/edb55688-0b4d-4c94-aa7c-d7d373a0ff45-log-httpd\") pod \"ceilometer-0\" (UID: \"edb55688-0b4d-4c94-aa7c-d7d373a0ff45\") " pod="openstack/ceilometer-0" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.606132 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/515a4f45-bb89-4e9d-87b7-4a4e11f0f8c6-config-data-custom\") pod \"cinder-api-0\" (UID: \"515a4f45-bb89-4e9d-87b7-4a4e11f0f8c6\") " pod="openstack/cinder-api-0" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.606396 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edb55688-0b4d-4c94-aa7c-d7d373a0ff45-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"edb55688-0b4d-4c94-aa7c-d7d373a0ff45\") " pod="openstack/ceilometer-0" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.606471 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/515a4f45-bb89-4e9d-87b7-4a4e11f0f8c6-scripts\") pod \"cinder-api-0\" (UID: \"515a4f45-bb89-4e9d-87b7-4a4e11f0f8c6\") " pod="openstack/cinder-api-0" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.607697 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/515a4f45-bb89-4e9d-87b7-4a4e11f0f8c6-etc-machine-id\") pod \"cinder-api-0\" (UID: \"515a4f45-bb89-4e9d-87b7-4a4e11f0f8c6\") " pod="openstack/cinder-api-0" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.608140 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/edb55688-0b4d-4c94-aa7c-d7d373a0ff45-run-httpd\") pod \"ceilometer-0\" (UID: \"edb55688-0b4d-4c94-aa7c-d7d373a0ff45\") " pod="openstack/ceilometer-0" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.608601 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/edb55688-0b4d-4c94-aa7c-d7d373a0ff45-log-httpd\") pod \"ceilometer-0\" (UID: \"edb55688-0b4d-4c94-aa7c-d7d373a0ff45\") " pod="openstack/ceilometer-0" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.609215 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/515a4f45-bb89-4e9d-87b7-4a4e11f0f8c6-logs\") pod \"cinder-api-0\" (UID: \"515a4f45-bb89-4e9d-87b7-4a4e11f0f8c6\") " pod="openstack/cinder-api-0" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.615192 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/edb55688-0b4d-4c94-aa7c-d7d373a0ff45-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"edb55688-0b4d-4c94-aa7c-d7d373a0ff45\") " pod="openstack/ceilometer-0" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.615350 4727 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/515a4f45-bb89-4e9d-87b7-4a4e11f0f8c6-public-tls-certs\") pod \"cinder-api-0\" (UID: \"515a4f45-bb89-4e9d-87b7-4a4e11f0f8c6\") " pod="openstack/cinder-api-0" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.615512 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/515a4f45-bb89-4e9d-87b7-4a4e11f0f8c6-config-data-custom\") pod \"cinder-api-0\" (UID: \"515a4f45-bb89-4e9d-87b7-4a4e11f0f8c6\") " pod="openstack/cinder-api-0" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.615921 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/515a4f45-bb89-4e9d-87b7-4a4e11f0f8c6-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"515a4f45-bb89-4e9d-87b7-4a4e11f0f8c6\") " pod="openstack/cinder-api-0" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.616194 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edb55688-0b4d-4c94-aa7c-d7d373a0ff45-scripts\") pod \"ceilometer-0\" (UID: \"edb55688-0b4d-4c94-aa7c-d7d373a0ff45\") " pod="openstack/ceilometer-0" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.616201 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/515a4f45-bb89-4e9d-87b7-4a4e11f0f8c6-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"515a4f45-bb89-4e9d-87b7-4a4e11f0f8c6\") " pod="openstack/cinder-api-0" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.619557 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edb55688-0b4d-4c94-aa7c-d7d373a0ff45-config-data\") pod \"ceilometer-0\" (UID: \"edb55688-0b4d-4c94-aa7c-d7d373a0ff45\") " pod="openstack/ceilometer-0" Nov 21 20:28:15 crc 
kubenswrapper[4727]: I1121 20:28:15.620701 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/515a4f45-bb89-4e9d-87b7-4a4e11f0f8c6-scripts\") pod \"cinder-api-0\" (UID: \"515a4f45-bb89-4e9d-87b7-4a4e11f0f8c6\") " pod="openstack/cinder-api-0" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.622213 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/515a4f45-bb89-4e9d-87b7-4a4e11f0f8c6-config-data\") pod \"cinder-api-0\" (UID: \"515a4f45-bb89-4e9d-87b7-4a4e11f0f8c6\") " pod="openstack/cinder-api-0" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.623822 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edb55688-0b4d-4c94-aa7c-d7d373a0ff45-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"edb55688-0b4d-4c94-aa7c-d7d373a0ff45\") " pod="openstack/ceilometer-0" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.631711 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlbgc\" (UniqueName: \"kubernetes.io/projected/edb55688-0b4d-4c94-aa7c-d7d373a0ff45-kube-api-access-xlbgc\") pod \"ceilometer-0\" (UID: \"edb55688-0b4d-4c94-aa7c-d7d373a0ff45\") " pod="openstack/ceilometer-0" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.633503 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlds8\" (UniqueName: \"kubernetes.io/projected/515a4f45-bb89-4e9d-87b7-4a4e11f0f8c6-kube-api-access-rlds8\") pod \"cinder-api-0\" (UID: \"515a4f45-bb89-4e9d-87b7-4a4e11f0f8c6\") " pod="openstack/cinder-api-0" Nov 21 20:28:15 crc kubenswrapper[4727]: I1121 20:28:15.741900 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 21 20:28:16 crc kubenswrapper[4727]: I1121 20:28:16.210656 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 21 20:28:16 crc kubenswrapper[4727]: I1121 20:28:16.260456 4727 generic.go:334] "Generic (PLEG): container finished" podID="f7909040-500b-4a0d-878f-9a4c2d8b6a9b" containerID="b07957761d5ecd2b6177d21677c95483a2b9b6b460eef9fa8fa7cdddf2eeecf9" exitCode=0 Nov 21 20:28:16 crc kubenswrapper[4727]: I1121 20:28:16.260538 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f7909040-500b-4a0d-878f-9a4c2d8b6a9b","Type":"ContainerDied","Data":"b07957761d5ecd2b6177d21677c95483a2b9b6b460eef9fa8fa7cdddf2eeecf9"} Nov 21 20:28:16 crc kubenswrapper[4727]: I1121 20:28:16.286127 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-9c89d96c5-h6lxw" event={"ID":"d2ef4264-0e20-4b0d-ac0b-6107eeb60fc8","Type":"ContainerStarted","Data":"9b04ca20cacc578b01b99c9d995e45dd2a0ffbeb3c361ce2ba564b12d2957d55"} Nov 21 20:28:16 crc kubenswrapper[4727]: I1121 20:28:16.286276 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 20:28:16 crc kubenswrapper[4727]: I1121 20:28:16.325938 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-9c89d96c5-h6lxw" podStartSLOduration=8.325917811 podStartE2EDuration="8.325917811s" podCreationTimestamp="2025-11-21 20:28:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:28:16.317608171 +0000 UTC m=+1301.503793225" watchObservedRunningTime="2025-11-21 20:28:16.325917811 +0000 UTC m=+1301.512102855" Nov 21 20:28:16 crc kubenswrapper[4727]: I1121 20:28:16.405158 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 21 20:28:16 crc kubenswrapper[4727]: I1121 20:28:16.412998 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 21 20:28:16 crc kubenswrapper[4727]: I1121 20:28:16.526334 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edb55688-0b4d-4c94-aa7c-d7d373a0ff45-combined-ca-bundle\") pod \"edb55688-0b4d-4c94-aa7c-d7d373a0ff45\" (UID: \"edb55688-0b4d-4c94-aa7c-d7d373a0ff45\") " Nov 21 20:28:16 crc kubenswrapper[4727]: I1121 20:28:16.528473 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"f7909040-500b-4a0d-878f-9a4c2d8b6a9b\" (UID: \"f7909040-500b-4a0d-878f-9a4c2d8b6a9b\") " Nov 21 20:28:16 crc kubenswrapper[4727]: I1121 20:28:16.528923 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edb55688-0b4d-4c94-aa7c-d7d373a0ff45-config-data\") pod \"edb55688-0b4d-4c94-aa7c-d7d373a0ff45\" (UID: \"edb55688-0b4d-4c94-aa7c-d7d373a0ff45\") " Nov 21 20:28:16 crc kubenswrapper[4727]: I1121 20:28:16.529031 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/edb55688-0b4d-4c94-aa7c-d7d373a0ff45-sg-core-conf-yaml\") pod \"edb55688-0b4d-4c94-aa7c-d7d373a0ff45\" (UID: \"edb55688-0b4d-4c94-aa7c-d7d373a0ff45\") " Nov 21 20:28:16 crc kubenswrapper[4727]: I1121 20:28:16.529081 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7909040-500b-4a0d-878f-9a4c2d8b6a9b-public-tls-certs\") pod \"f7909040-500b-4a0d-878f-9a4c2d8b6a9b\" (UID: \"f7909040-500b-4a0d-878f-9a4c2d8b6a9b\") " Nov 21 20:28:16 crc 
kubenswrapper[4727]: I1121 20:28:16.529175 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdbd6\" (UniqueName: \"kubernetes.io/projected/f7909040-500b-4a0d-878f-9a4c2d8b6a9b-kube-api-access-jdbd6\") pod \"f7909040-500b-4a0d-878f-9a4c2d8b6a9b\" (UID: \"f7909040-500b-4a0d-878f-9a4c2d8b6a9b\") " Nov 21 20:28:16 crc kubenswrapper[4727]: I1121 20:28:16.529220 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7909040-500b-4a0d-878f-9a4c2d8b6a9b-scripts\") pod \"f7909040-500b-4a0d-878f-9a4c2d8b6a9b\" (UID: \"f7909040-500b-4a0d-878f-9a4c2d8b6a9b\") " Nov 21 20:28:16 crc kubenswrapper[4727]: I1121 20:28:16.529313 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7909040-500b-4a0d-878f-9a4c2d8b6a9b-combined-ca-bundle\") pod \"f7909040-500b-4a0d-878f-9a4c2d8b6a9b\" (UID: \"f7909040-500b-4a0d-878f-9a4c2d8b6a9b\") " Nov 21 20:28:16 crc kubenswrapper[4727]: I1121 20:28:16.529339 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/edb55688-0b4d-4c94-aa7c-d7d373a0ff45-run-httpd\") pod \"edb55688-0b4d-4c94-aa7c-d7d373a0ff45\" (UID: \"edb55688-0b4d-4c94-aa7c-d7d373a0ff45\") " Nov 21 20:28:16 crc kubenswrapper[4727]: I1121 20:28:16.529392 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7909040-500b-4a0d-878f-9a4c2d8b6a9b-config-data\") pod \"f7909040-500b-4a0d-878f-9a4c2d8b6a9b\" (UID: \"f7909040-500b-4a0d-878f-9a4c2d8b6a9b\") " Nov 21 20:28:16 crc kubenswrapper[4727]: I1121 20:28:16.529450 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/edb55688-0b4d-4c94-aa7c-d7d373a0ff45-log-httpd\") pod 
\"edb55688-0b4d-4c94-aa7c-d7d373a0ff45\" (UID: \"edb55688-0b4d-4c94-aa7c-d7d373a0ff45\") " Nov 21 20:28:16 crc kubenswrapper[4727]: I1121 20:28:16.529519 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f7909040-500b-4a0d-878f-9a4c2d8b6a9b-httpd-run\") pod \"f7909040-500b-4a0d-878f-9a4c2d8b6a9b\" (UID: \"f7909040-500b-4a0d-878f-9a4c2d8b6a9b\") " Nov 21 20:28:16 crc kubenswrapper[4727]: I1121 20:28:16.529615 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7909040-500b-4a0d-878f-9a4c2d8b6a9b-logs\") pod \"f7909040-500b-4a0d-878f-9a4c2d8b6a9b\" (UID: \"f7909040-500b-4a0d-878f-9a4c2d8b6a9b\") " Nov 21 20:28:16 crc kubenswrapper[4727]: I1121 20:28:16.529647 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlbgc\" (UniqueName: \"kubernetes.io/projected/edb55688-0b4d-4c94-aa7c-d7d373a0ff45-kube-api-access-xlbgc\") pod \"edb55688-0b4d-4c94-aa7c-d7d373a0ff45\" (UID: \"edb55688-0b4d-4c94-aa7c-d7d373a0ff45\") " Nov 21 20:28:16 crc kubenswrapper[4727]: I1121 20:28:16.529731 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edb55688-0b4d-4c94-aa7c-d7d373a0ff45-scripts\") pod \"edb55688-0b4d-4c94-aa7c-d7d373a0ff45\" (UID: \"edb55688-0b4d-4c94-aa7c-d7d373a0ff45\") " Nov 21 20:28:16 crc kubenswrapper[4727]: I1121 20:28:16.536157 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7909040-500b-4a0d-878f-9a4c2d8b6a9b-scripts" (OuterVolumeSpecName: "scripts") pod "f7909040-500b-4a0d-878f-9a4c2d8b6a9b" (UID: "f7909040-500b-4a0d-878f-9a4c2d8b6a9b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:28:16 crc kubenswrapper[4727]: I1121 20:28:16.536269 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edb55688-0b4d-4c94-aa7c-d7d373a0ff45-config-data" (OuterVolumeSpecName: "config-data") pod "edb55688-0b4d-4c94-aa7c-d7d373a0ff45" (UID: "edb55688-0b4d-4c94-aa7c-d7d373a0ff45"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:28:16 crc kubenswrapper[4727]: I1121 20:28:16.537185 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edb55688-0b4d-4c94-aa7c-d7d373a0ff45-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "edb55688-0b4d-4c94-aa7c-d7d373a0ff45" (UID: "edb55688-0b4d-4c94-aa7c-d7d373a0ff45"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:28:16 crc kubenswrapper[4727]: I1121 20:28:16.537193 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/edb55688-0b4d-4c94-aa7c-d7d373a0ff45-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "edb55688-0b4d-4c94-aa7c-d7d373a0ff45" (UID: "edb55688-0b4d-4c94-aa7c-d7d373a0ff45"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:28:16 crc kubenswrapper[4727]: I1121 20:28:16.537484 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/edb55688-0b4d-4c94-aa7c-d7d373a0ff45-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "edb55688-0b4d-4c94-aa7c-d7d373a0ff45" (UID: "edb55688-0b4d-4c94-aa7c-d7d373a0ff45"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:28:16 crc kubenswrapper[4727]: I1121 20:28:16.537824 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7909040-500b-4a0d-878f-9a4c2d8b6a9b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "f7909040-500b-4a0d-878f-9a4c2d8b6a9b" (UID: "f7909040-500b-4a0d-878f-9a4c2d8b6a9b"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:28:16 crc kubenswrapper[4727]: I1121 20:28:16.538023 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7909040-500b-4a0d-878f-9a4c2d8b6a9b-logs" (OuterVolumeSpecName: "logs") pod "f7909040-500b-4a0d-878f-9a4c2d8b6a9b" (UID: "f7909040-500b-4a0d-878f-9a4c2d8b6a9b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:28:16 crc kubenswrapper[4727]: I1121 20:28:16.538146 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "f7909040-500b-4a0d-878f-9a4c2d8b6a9b" (UID: "f7909040-500b-4a0d-878f-9a4c2d8b6a9b"). InnerVolumeSpecName "local-storage12-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 21 20:28:16 crc kubenswrapper[4727]: I1121 20:28:16.538921 4727 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Nov 21 20:28:16 crc kubenswrapper[4727]: I1121 20:28:16.538951 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edb55688-0b4d-4c94-aa7c-d7d373a0ff45-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:16 crc kubenswrapper[4727]: I1121 20:28:16.538992 4727 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/edb55688-0b4d-4c94-aa7c-d7d373a0ff45-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:16 crc kubenswrapper[4727]: I1121 20:28:16.539007 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7909040-500b-4a0d-878f-9a4c2d8b6a9b-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:16 crc kubenswrapper[4727]: I1121 20:28:16.539017 4727 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/edb55688-0b4d-4c94-aa7c-d7d373a0ff45-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:16 crc kubenswrapper[4727]: I1121 20:28:16.539030 4727 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/edb55688-0b4d-4c94-aa7c-d7d373a0ff45-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:16 crc kubenswrapper[4727]: I1121 20:28:16.539060 4727 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f7909040-500b-4a0d-878f-9a4c2d8b6a9b-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:16 crc kubenswrapper[4727]: I1121 20:28:16.539072 4727 reconciler_common.go:293] "Volume detached for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/f7909040-500b-4a0d-878f-9a4c2d8b6a9b-logs\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:16 crc kubenswrapper[4727]: I1121 20:28:16.544789 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edb55688-0b4d-4c94-aa7c-d7d373a0ff45-scripts" (OuterVolumeSpecName: "scripts") pod "edb55688-0b4d-4c94-aa7c-d7d373a0ff45" (UID: "edb55688-0b4d-4c94-aa7c-d7d373a0ff45"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:28:16 crc kubenswrapper[4727]: I1121 20:28:16.555070 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edb55688-0b4d-4c94-aa7c-d7d373a0ff45-kube-api-access-xlbgc" (OuterVolumeSpecName: "kube-api-access-xlbgc") pod "edb55688-0b4d-4c94-aa7c-d7d373a0ff45" (UID: "edb55688-0b4d-4c94-aa7c-d7d373a0ff45"). InnerVolumeSpecName "kube-api-access-xlbgc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:28:16 crc kubenswrapper[4727]: I1121 20:28:16.555168 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7909040-500b-4a0d-878f-9a4c2d8b6a9b-kube-api-access-jdbd6" (OuterVolumeSpecName: "kube-api-access-jdbd6") pod "f7909040-500b-4a0d-878f-9a4c2d8b6a9b" (UID: "f7909040-500b-4a0d-878f-9a4c2d8b6a9b"). InnerVolumeSpecName "kube-api-access-jdbd6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:28:16 crc kubenswrapper[4727]: I1121 20:28:16.585908 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7909040-500b-4a0d-878f-9a4c2d8b6a9b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f7909040-500b-4a0d-878f-9a4c2d8b6a9b" (UID: "f7909040-500b-4a0d-878f-9a4c2d8b6a9b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:28:16 crc kubenswrapper[4727]: I1121 20:28:16.586711 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edb55688-0b4d-4c94-aa7c-d7d373a0ff45-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "edb55688-0b4d-4c94-aa7c-d7d373a0ff45" (UID: "edb55688-0b4d-4c94-aa7c-d7d373a0ff45"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:28:16 crc kubenswrapper[4727]: I1121 20:28:16.592647 4727 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Nov 21 20:28:16 crc kubenswrapper[4727]: I1121 20:28:16.608141 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7909040-500b-4a0d-878f-9a4c2d8b6a9b-config-data" (OuterVolumeSpecName: "config-data") pod "f7909040-500b-4a0d-878f-9a4c2d8b6a9b" (UID: "f7909040-500b-4a0d-878f-9a4c2d8b6a9b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:28:16 crc kubenswrapper[4727]: I1121 20:28:16.634030 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7909040-500b-4a0d-878f-9a4c2d8b6a9b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f7909040-500b-4a0d-878f-9a4c2d8b6a9b" (UID: "f7909040-500b-4a0d-878f-9a4c2d8b6a9b"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:28:16 crc kubenswrapper[4727]: I1121 20:28:16.641306 4727 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7909040-500b-4a0d-878f-9a4c2d8b6a9b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:16 crc kubenswrapper[4727]: I1121 20:28:16.641337 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdbd6\" (UniqueName: \"kubernetes.io/projected/f7909040-500b-4a0d-878f-9a4c2d8b6a9b-kube-api-access-jdbd6\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:16 crc kubenswrapper[4727]: I1121 20:28:16.641349 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7909040-500b-4a0d-878f-9a4c2d8b6a9b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:16 crc kubenswrapper[4727]: I1121 20:28:16.641361 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7909040-500b-4a0d-878f-9a4c2d8b6a9b-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:16 crc kubenswrapper[4727]: I1121 20:28:16.641369 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xlbgc\" (UniqueName: \"kubernetes.io/projected/edb55688-0b4d-4c94-aa7c-d7d373a0ff45-kube-api-access-xlbgc\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:16 crc kubenswrapper[4727]: I1121 20:28:16.641377 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edb55688-0b4d-4c94-aa7c-d7d373a0ff45-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:16 crc kubenswrapper[4727]: I1121 20:28:16.641386 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edb55688-0b4d-4c94-aa7c-d7d373a0ff45-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:16 crc kubenswrapper[4727]: I1121 
20:28:16.641394 4727 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.304118 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"515a4f45-bb89-4e9d-87b7-4a4e11f0f8c6","Type":"ContainerStarted","Data":"a922dec8523f3bf2be3b6905b687920b49b3c26799207cf79e9ae17dd30fce8d"} Nov 21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.304408 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"515a4f45-bb89-4e9d-87b7-4a4e11f0f8c6","Type":"ContainerStarted","Data":"22f1246c010ddca67d15ec93bf84d9bb4d6a1a2cb640af65b476d1f0a5e47718"} Nov 21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.310167 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.310210 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.310182 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f7909040-500b-4a0d-878f-9a4c2d8b6a9b","Type":"ContainerDied","Data":"f51de53b553b7d36d2741250e5bf65e2f891c498849e7fe820f62447e36bc90b"} Nov 21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.310260 4727 scope.go:117] "RemoveContainer" containerID="b07957761d5ecd2b6177d21677c95483a2b9b6b460eef9fa8fa7cdddf2eeecf9" Nov 21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.312315 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-9c89d96c5-h6lxw" Nov 21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.312344 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-9c89d96c5-h6lxw" Nov 21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.364449 4727 scope.go:117] "RemoveContainer" containerID="7bfec23f6aa8aebd03ea88d77fdaa02c68e0dfb290cd68fc9d15e404684d0cc4" Nov 21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.418082 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.433608 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.443542 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.451724 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.462226 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 21 20:28:17 crc kubenswrapper[4727]: E1121 20:28:17.463017 4727 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f7909040-500b-4a0d-878f-9a4c2d8b6a9b" containerName="glance-log" Nov 21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.463048 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7909040-500b-4a0d-878f-9a4c2d8b6a9b" containerName="glance-log" Nov 21 20:28:17 crc kubenswrapper[4727]: E1121 20:28:17.463099 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7909040-500b-4a0d-878f-9a4c2d8b6a9b" containerName="glance-httpd" Nov 21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.463109 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7909040-500b-4a0d-878f-9a4c2d8b6a9b" containerName="glance-httpd" Nov 21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.463519 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7909040-500b-4a0d-878f-9a4c2d8b6a9b" containerName="glance-log" Nov 21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.463566 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7909040-500b-4a0d-878f-9a4c2d8b6a9b" containerName="glance-httpd" Nov 21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.470322 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.470473 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.474371 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.474386 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.477459 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.483455 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.484549 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.489837 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.489947 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.534580 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edb55688-0b4d-4c94-aa7c-d7d373a0ff45" path="/var/lib/kubelet/pods/edb55688-0b4d-4c94-aa7c-d7d373a0ff45/volumes" Nov 21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.535133 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7909040-500b-4a0d-878f-9a4c2d8b6a9b" path="/var/lib/kubelet/pods/f7909040-500b-4a0d-878f-9a4c2d8b6a9b/volumes" Nov 21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.578500 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbaf0e5f-5f38-406f-84d8-eb76acf9727d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"bbaf0e5f-5f38-406f-84d8-eb76acf9727d\") " pod="openstack/glance-default-external-api-0" Nov 21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.578570 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbaf0e5f-5f38-406f-84d8-eb76acf9727d-scripts\") pod \"glance-default-external-api-0\" (UID: \"bbaf0e5f-5f38-406f-84d8-eb76acf9727d\") " pod="openstack/glance-default-external-api-0" Nov 21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.578607 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/413b8645-2f23-45ae-803d-bc0c140ad29f-log-httpd\") pod \"ceilometer-0\" (UID: \"413b8645-2f23-45ae-803d-bc0c140ad29f\") " pod="openstack/ceilometer-0" Nov 21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.578634 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cf48\" (UniqueName: \"kubernetes.io/projected/413b8645-2f23-45ae-803d-bc0c140ad29f-kube-api-access-8cf48\") pod \"ceilometer-0\" (UID: \"413b8645-2f23-45ae-803d-bc0c140ad29f\") " pod="openstack/ceilometer-0" Nov 21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.578762 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbaf0e5f-5f38-406f-84d8-eb76acf9727d-config-data\") pod \"glance-default-external-api-0\" (UID: \"bbaf0e5f-5f38-406f-84d8-eb76acf9727d\") " pod="openstack/glance-default-external-api-0" Nov 21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.578797 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/413b8645-2f23-45ae-803d-bc0c140ad29f-run-httpd\") pod \"ceilometer-0\" (UID: \"413b8645-2f23-45ae-803d-bc0c140ad29f\") " pod="openstack/ceilometer-0" Nov 21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.578862 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbaf0e5f-5f38-406f-84d8-eb76acf9727d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"bbaf0e5f-5f38-406f-84d8-eb76acf9727d\") " pod="openstack/glance-default-external-api-0" Nov 21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.579040 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"bbaf0e5f-5f38-406f-84d8-eb76acf9727d\") " pod="openstack/glance-default-external-api-0" Nov 21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.579066 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/413b8645-2f23-45ae-803d-bc0c140ad29f-scripts\") pod \"ceilometer-0\" (UID: \"413b8645-2f23-45ae-803d-bc0c140ad29f\") " pod="openstack/ceilometer-0" Nov 21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.579097 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbaf0e5f-5f38-406f-84d8-eb76acf9727d-logs\") pod \"glance-default-external-api-0\" (UID: \"bbaf0e5f-5f38-406f-84d8-eb76acf9727d\") " pod="openstack/glance-default-external-api-0" Nov 21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.579130 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/413b8645-2f23-45ae-803d-bc0c140ad29f-config-data\") pod \"ceilometer-0\" (UID: \"413b8645-2f23-45ae-803d-bc0c140ad29f\") " pod="openstack/ceilometer-0" Nov 21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.579159 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/413b8645-2f23-45ae-803d-bc0c140ad29f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"413b8645-2f23-45ae-803d-bc0c140ad29f\") " pod="openstack/ceilometer-0" Nov 21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.579191 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bbaf0e5f-5f38-406f-84d8-eb76acf9727d-httpd-run\") pod 
\"glance-default-external-api-0\" (UID: \"bbaf0e5f-5f38-406f-84d8-eb76acf9727d\") " pod="openstack/glance-default-external-api-0" Nov 21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.579225 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qwmb\" (UniqueName: \"kubernetes.io/projected/bbaf0e5f-5f38-406f-84d8-eb76acf9727d-kube-api-access-2qwmb\") pod \"glance-default-external-api-0\" (UID: \"bbaf0e5f-5f38-406f-84d8-eb76acf9727d\") " pod="openstack/glance-default-external-api-0" Nov 21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.579250 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/413b8645-2f23-45ae-803d-bc0c140ad29f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"413b8645-2f23-45ae-803d-bc0c140ad29f\") " pod="openstack/ceilometer-0" Nov 21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.683431 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"bbaf0e5f-5f38-406f-84d8-eb76acf9727d\") " pod="openstack/glance-default-external-api-0" Nov 21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.683556 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/413b8645-2f23-45ae-803d-bc0c140ad29f-scripts\") pod \"ceilometer-0\" (UID: \"413b8645-2f23-45ae-803d-bc0c140ad29f\") " pod="openstack/ceilometer-0" Nov 21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.683601 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbaf0e5f-5f38-406f-84d8-eb76acf9727d-logs\") pod \"glance-default-external-api-0\" (UID: \"bbaf0e5f-5f38-406f-84d8-eb76acf9727d\") " 
pod="openstack/glance-default-external-api-0" Nov 21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.683643 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/413b8645-2f23-45ae-803d-bc0c140ad29f-config-data\") pod \"ceilometer-0\" (UID: \"413b8645-2f23-45ae-803d-bc0c140ad29f\") " pod="openstack/ceilometer-0" Nov 21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.683675 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/413b8645-2f23-45ae-803d-bc0c140ad29f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"413b8645-2f23-45ae-803d-bc0c140ad29f\") " pod="openstack/ceilometer-0" Nov 21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.683722 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bbaf0e5f-5f38-406f-84d8-eb76acf9727d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"bbaf0e5f-5f38-406f-84d8-eb76acf9727d\") " pod="openstack/glance-default-external-api-0" Nov 21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.683745 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"bbaf0e5f-5f38-406f-84d8-eb76acf9727d\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-external-api-0" Nov 21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.683763 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/413b8645-2f23-45ae-803d-bc0c140ad29f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"413b8645-2f23-45ae-803d-bc0c140ad29f\") " pod="openstack/ceilometer-0" Nov 21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.683793 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-2qwmb\" (UniqueName: \"kubernetes.io/projected/bbaf0e5f-5f38-406f-84d8-eb76acf9727d-kube-api-access-2qwmb\") pod \"glance-default-external-api-0\" (UID: \"bbaf0e5f-5f38-406f-84d8-eb76acf9727d\") " pod="openstack/glance-default-external-api-0" Nov 21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.683866 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbaf0e5f-5f38-406f-84d8-eb76acf9727d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"bbaf0e5f-5f38-406f-84d8-eb76acf9727d\") " pod="openstack/glance-default-external-api-0" Nov 21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.683936 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbaf0e5f-5f38-406f-84d8-eb76acf9727d-scripts\") pod \"glance-default-external-api-0\" (UID: \"bbaf0e5f-5f38-406f-84d8-eb76acf9727d\") " pod="openstack/glance-default-external-api-0" Nov 21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.684000 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/413b8645-2f23-45ae-803d-bc0c140ad29f-log-httpd\") pod \"ceilometer-0\" (UID: \"413b8645-2f23-45ae-803d-bc0c140ad29f\") " pod="openstack/ceilometer-0" Nov 21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.684029 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cf48\" (UniqueName: \"kubernetes.io/projected/413b8645-2f23-45ae-803d-bc0c140ad29f-kube-api-access-8cf48\") pod \"ceilometer-0\" (UID: \"413b8645-2f23-45ae-803d-bc0c140ad29f\") " pod="openstack/ceilometer-0" Nov 21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.684140 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/413b8645-2f23-45ae-803d-bc0c140ad29f-run-httpd\") pod \"ceilometer-0\" (UID: \"413b8645-2f23-45ae-803d-bc0c140ad29f\") " pod="openstack/ceilometer-0" Nov 21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.684162 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbaf0e5f-5f38-406f-84d8-eb76acf9727d-config-data\") pod \"glance-default-external-api-0\" (UID: \"bbaf0e5f-5f38-406f-84d8-eb76acf9727d\") " pod="openstack/glance-default-external-api-0" Nov 21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.684260 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbaf0e5f-5f38-406f-84d8-eb76acf9727d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"bbaf0e5f-5f38-406f-84d8-eb76acf9727d\") " pod="openstack/glance-default-external-api-0" Nov 21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.685218 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/413b8645-2f23-45ae-803d-bc0c140ad29f-log-httpd\") pod \"ceilometer-0\" (UID: \"413b8645-2f23-45ae-803d-bc0c140ad29f\") " pod="openstack/ceilometer-0" Nov 21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.686224 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/413b8645-2f23-45ae-803d-bc0c140ad29f-run-httpd\") pod \"ceilometer-0\" (UID: \"413b8645-2f23-45ae-803d-bc0c140ad29f\") " pod="openstack/ceilometer-0" Nov 21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.686504 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbaf0e5f-5f38-406f-84d8-eb76acf9727d-logs\") pod \"glance-default-external-api-0\" (UID: \"bbaf0e5f-5f38-406f-84d8-eb76acf9727d\") " pod="openstack/glance-default-external-api-0" Nov 
21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.686696 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bbaf0e5f-5f38-406f-84d8-eb76acf9727d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"bbaf0e5f-5f38-406f-84d8-eb76acf9727d\") " pod="openstack/glance-default-external-api-0" Nov 21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.693515 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/413b8645-2f23-45ae-803d-bc0c140ad29f-scripts\") pod \"ceilometer-0\" (UID: \"413b8645-2f23-45ae-803d-bc0c140ad29f\") " pod="openstack/ceilometer-0" Nov 21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.693988 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/413b8645-2f23-45ae-803d-bc0c140ad29f-config-data\") pod \"ceilometer-0\" (UID: \"413b8645-2f23-45ae-803d-bc0c140ad29f\") " pod="openstack/ceilometer-0" Nov 21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.704929 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/413b8645-2f23-45ae-803d-bc0c140ad29f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"413b8645-2f23-45ae-803d-bc0c140ad29f\") " pod="openstack/ceilometer-0" Nov 21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.705038 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qwmb\" (UniqueName: \"kubernetes.io/projected/bbaf0e5f-5f38-406f-84d8-eb76acf9727d-kube-api-access-2qwmb\") pod \"glance-default-external-api-0\" (UID: \"bbaf0e5f-5f38-406f-84d8-eb76acf9727d\") " pod="openstack/glance-default-external-api-0" Nov 21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.705522 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/bbaf0e5f-5f38-406f-84d8-eb76acf9727d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"bbaf0e5f-5f38-406f-84d8-eb76acf9727d\") " pod="openstack/glance-default-external-api-0" Nov 21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.705543 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/413b8645-2f23-45ae-803d-bc0c140ad29f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"413b8645-2f23-45ae-803d-bc0c140ad29f\") " pod="openstack/ceilometer-0" Nov 21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.707327 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbaf0e5f-5f38-406f-84d8-eb76acf9727d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"bbaf0e5f-5f38-406f-84d8-eb76acf9727d\") " pod="openstack/glance-default-external-api-0" Nov 21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.709295 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbaf0e5f-5f38-406f-84d8-eb76acf9727d-config-data\") pod \"glance-default-external-api-0\" (UID: \"bbaf0e5f-5f38-406f-84d8-eb76acf9727d\") " pod="openstack/glance-default-external-api-0" Nov 21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.709427 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbaf0e5f-5f38-406f-84d8-eb76acf9727d-scripts\") pod \"glance-default-external-api-0\" (UID: \"bbaf0e5f-5f38-406f-84d8-eb76acf9727d\") " pod="openstack/glance-default-external-api-0" Nov 21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.716826 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cf48\" (UniqueName: \"kubernetes.io/projected/413b8645-2f23-45ae-803d-bc0c140ad29f-kube-api-access-8cf48\") pod \"ceilometer-0\" (UID: 
\"413b8645-2f23-45ae-803d-bc0c140ad29f\") " pod="openstack/ceilometer-0" Nov 21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.741148 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"bbaf0e5f-5f38-406f-84d8-eb76acf9727d\") " pod="openstack/glance-default-external-api-0" Nov 21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.803769 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 20:28:17 crc kubenswrapper[4727]: I1121 20:28:17.816649 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 21 20:28:18 crc kubenswrapper[4727]: I1121 20:28:18.165675 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="407cfc42-937a-4e61-b257-042675020db0" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.189:9292/healthcheck\": read tcp 10.217.0.2:45808->10.217.0.189:9292: read: connection reset by peer" Nov 21 20:28:18 crc kubenswrapper[4727]: I1121 20:28:18.166112 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="407cfc42-937a-4e61-b257-042675020db0" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.189:9292/healthcheck\": read tcp 10.217.0.2:45810->10.217.0.189:9292: read: connection reset by peer" Nov 21 20:28:18 crc kubenswrapper[4727]: I1121 20:28:18.361936 4727 generic.go:334] "Generic (PLEG): container finished" podID="407cfc42-937a-4e61-b257-042675020db0" containerID="5e540f8a1538066950326c07069aa0bf770f19fd10b217f424b71c2982dd7bfb" exitCode=0 Nov 21 20:28:18 crc kubenswrapper[4727]: I1121 20:28:18.362068 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"407cfc42-937a-4e61-b257-042675020db0","Type":"ContainerDied","Data":"5e540f8a1538066950326c07069aa0bf770f19fd10b217f424b71c2982dd7bfb"} Nov 21 20:28:18 crc kubenswrapper[4727]: I1121 20:28:18.378065 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"515a4f45-bb89-4e9d-87b7-4a4e11f0f8c6","Type":"ContainerStarted","Data":"bc51e9ee54b4016ba95d329a53705ab6109a369025f42eefb62c6a3b12b12224"} Nov 21 20:28:18 crc kubenswrapper[4727]: I1121 20:28:18.379219 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 21 20:28:18 crc kubenswrapper[4727]: I1121 20:28:18.408926 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.408904506 podStartE2EDuration="3.408904506s" podCreationTimestamp="2025-11-21 20:28:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:28:18.400425191 +0000 UTC m=+1303.586610235" watchObservedRunningTime="2025-11-21 20:28:18.408904506 +0000 UTC m=+1303.595089560" Nov 21 20:28:18 crc kubenswrapper[4727]: W1121 20:28:18.461164 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod413b8645_2f23_45ae_803d_bc0c140ad29f.slice/crio-87c56eb050a62f6bb9c9b9161a52a598b11a8728ae216a15df516429d8868100 WatchSource:0}: Error finding container 87c56eb050a62f6bb9c9b9161a52a598b11a8728ae216a15df516429d8868100: Status 404 returned error can't find the container with id 87c56eb050a62f6bb9c9b9161a52a598b11a8728ae216a15df516429d8868100 Nov 21 20:28:18 crc kubenswrapper[4727]: I1121 20:28:18.466926 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 20:28:18 crc kubenswrapper[4727]: I1121 20:28:18.544096 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/glance-default-external-api-0"] Nov 21 20:28:18 crc kubenswrapper[4727]: I1121 20:28:18.823751 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 21 20:28:18 crc kubenswrapper[4727]: I1121 20:28:18.930400 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/407cfc42-937a-4e61-b257-042675020db0-httpd-run\") pod \"407cfc42-937a-4e61-b257-042675020db0\" (UID: \"407cfc42-937a-4e61-b257-042675020db0\") " Nov 21 20:28:18 crc kubenswrapper[4727]: I1121 20:28:18.930441 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/407cfc42-937a-4e61-b257-042675020db0-config-data\") pod \"407cfc42-937a-4e61-b257-042675020db0\" (UID: \"407cfc42-937a-4e61-b257-042675020db0\") " Nov 21 20:28:18 crc kubenswrapper[4727]: I1121 20:28:18.930476 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/407cfc42-937a-4e61-b257-042675020db0-scripts\") pod \"407cfc42-937a-4e61-b257-042675020db0\" (UID: \"407cfc42-937a-4e61-b257-042675020db0\") " Nov 21 20:28:18 crc kubenswrapper[4727]: I1121 20:28:18.930515 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/407cfc42-937a-4e61-b257-042675020db0-combined-ca-bundle\") pod \"407cfc42-937a-4e61-b257-042675020db0\" (UID: \"407cfc42-937a-4e61-b257-042675020db0\") " Nov 21 20:28:18 crc kubenswrapper[4727]: I1121 20:28:18.930555 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/407cfc42-937a-4e61-b257-042675020db0-internal-tls-certs\") pod \"407cfc42-937a-4e61-b257-042675020db0\" (UID: \"407cfc42-937a-4e61-b257-042675020db0\") " Nov 21 20:28:18 
crc kubenswrapper[4727]: I1121 20:28:18.930630 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/407cfc42-937a-4e61-b257-042675020db0-logs\") pod \"407cfc42-937a-4e61-b257-042675020db0\" (UID: \"407cfc42-937a-4e61-b257-042675020db0\") "
Nov 21 20:28:18 crc kubenswrapper[4727]: I1121 20:28:18.930670 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bh5gt\" (UniqueName: \"kubernetes.io/projected/407cfc42-937a-4e61-b257-042675020db0-kube-api-access-bh5gt\") pod \"407cfc42-937a-4e61-b257-042675020db0\" (UID: \"407cfc42-937a-4e61-b257-042675020db0\") "
Nov 21 20:28:18 crc kubenswrapper[4727]: I1121 20:28:18.931072 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"407cfc42-937a-4e61-b257-042675020db0\" (UID: \"407cfc42-937a-4e61-b257-042675020db0\") "
Nov 21 20:28:18 crc kubenswrapper[4727]: I1121 20:28:18.936902 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/407cfc42-937a-4e61-b257-042675020db0-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "407cfc42-937a-4e61-b257-042675020db0" (UID: "407cfc42-937a-4e61-b257-042675020db0"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 21 20:28:18 crc kubenswrapper[4727]: I1121 20:28:18.938033 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/407cfc42-937a-4e61-b257-042675020db0-logs" (OuterVolumeSpecName: "logs") pod "407cfc42-937a-4e61-b257-042675020db0" (UID: "407cfc42-937a-4e61-b257-042675020db0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 21 20:28:18 crc kubenswrapper[4727]: I1121 20:28:18.959075 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "407cfc42-937a-4e61-b257-042675020db0" (UID: "407cfc42-937a-4e61-b257-042675020db0"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Nov 21 20:28:18 crc kubenswrapper[4727]: I1121 20:28:18.963591 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/407cfc42-937a-4e61-b257-042675020db0-scripts" (OuterVolumeSpecName: "scripts") pod "407cfc42-937a-4e61-b257-042675020db0" (UID: "407cfc42-937a-4e61-b257-042675020db0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 20:28:18 crc kubenswrapper[4727]: I1121 20:28:18.975256 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/407cfc42-937a-4e61-b257-042675020db0-kube-api-access-bh5gt" (OuterVolumeSpecName: "kube-api-access-bh5gt") pod "407cfc42-937a-4e61-b257-042675020db0" (UID: "407cfc42-937a-4e61-b257-042675020db0"). InnerVolumeSpecName "kube-api-access-bh5gt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 20:28:18 crc kubenswrapper[4727]: I1121 20:28:18.986233 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/407cfc42-937a-4e61-b257-042675020db0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "407cfc42-937a-4e61-b257-042675020db0" (UID: "407cfc42-937a-4e61-b257-042675020db0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 20:28:19 crc kubenswrapper[4727]: I1121 20:28:19.014181 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/407cfc42-937a-4e61-b257-042675020db0-config-data" (OuterVolumeSpecName: "config-data") pod "407cfc42-937a-4e61-b257-042675020db0" (UID: "407cfc42-937a-4e61-b257-042675020db0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 20:28:19 crc kubenswrapper[4727]: I1121 20:28:19.027598 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/407cfc42-937a-4e61-b257-042675020db0-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "407cfc42-937a-4e61-b257-042675020db0" (UID: "407cfc42-937a-4e61-b257-042675020db0"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 20:28:19 crc kubenswrapper[4727]: I1121 20:28:19.039527 4727 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/407cfc42-937a-4e61-b257-042675020db0-httpd-run\") on node \"crc\" DevicePath \"\""
Nov 21 20:28:19 crc kubenswrapper[4727]: I1121 20:28:19.039556 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/407cfc42-937a-4e61-b257-042675020db0-config-data\") on node \"crc\" DevicePath \"\""
Nov 21 20:28:19 crc kubenswrapper[4727]: I1121 20:28:19.039593 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/407cfc42-937a-4e61-b257-042675020db0-scripts\") on node \"crc\" DevicePath \"\""
Nov 21 20:28:19 crc kubenswrapper[4727]: I1121 20:28:19.039604 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/407cfc42-937a-4e61-b257-042675020db0-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 21 20:28:19 crc kubenswrapper[4727]: I1121 20:28:19.039616 4727 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/407cfc42-937a-4e61-b257-042675020db0-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Nov 21 20:28:19 crc kubenswrapper[4727]: I1121 20:28:19.039629 4727 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/407cfc42-937a-4e61-b257-042675020db0-logs\") on node \"crc\" DevicePath \"\""
Nov 21 20:28:19 crc kubenswrapper[4727]: I1121 20:28:19.039638 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bh5gt\" (UniqueName: \"kubernetes.io/projected/407cfc42-937a-4e61-b257-042675020db0-kube-api-access-bh5gt\") on node \"crc\" DevicePath \"\""
Nov 21 20:28:19 crc kubenswrapper[4727]: I1121 20:28:19.039696 4727 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" "
Nov 21 20:28:19 crc kubenswrapper[4727]: I1121 20:28:19.114904 4727 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc"
Nov 21 20:28:19 crc kubenswrapper[4727]: I1121 20:28:19.142416 4727 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\""
Nov 21 20:28:19 crc kubenswrapper[4727]: I1121 20:28:19.391664 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"407cfc42-937a-4e61-b257-042675020db0","Type":"ContainerDied","Data":"cc140ac5fa13e63ad5f9c64b07ff0df42ff769c15f28d1ff3a3fd2c5a42c2a7b"}
Nov 21 20:28:19 crc kubenswrapper[4727]: I1121 20:28:19.391732 4727 scope.go:117] "RemoveContainer" containerID="5e540f8a1538066950326c07069aa0bf770f19fd10b217f424b71c2982dd7bfb"
Nov 21 20:28:19 crc kubenswrapper[4727]: I1121 20:28:19.391684 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Nov 21 20:28:19 crc kubenswrapper[4727]: I1121 20:28:19.393452 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bbaf0e5f-5f38-406f-84d8-eb76acf9727d","Type":"ContainerStarted","Data":"90a72bd87ed23710f9878650b13f78f6412db5b2e7386f72bc1020f367dc7999"}
Nov 21 20:28:19 crc kubenswrapper[4727]: I1121 20:28:19.396271 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"413b8645-2f23-45ae-803d-bc0c140ad29f","Type":"ContainerStarted","Data":"87c56eb050a62f6bb9c9b9161a52a598b11a8728ae216a15df516429d8868100"}
Nov 21 20:28:19 crc kubenswrapper[4727]: I1121 20:28:19.430051 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 21 20:28:19 crc kubenswrapper[4727]: I1121 20:28:19.448194 4727 scope.go:117] "RemoveContainer" containerID="6d9f218a85ef4f630e3c8e515cb83e57b79860124f0dea59b6d3cbf362fe1094"
Nov 21 20:28:19 crc kubenswrapper[4727]: I1121 20:28:19.466265 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 21 20:28:19 crc kubenswrapper[4727]: I1121 20:28:19.495852 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 21 20:28:19 crc kubenswrapper[4727]: E1121 20:28:19.496452 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="407cfc42-937a-4e61-b257-042675020db0" containerName="glance-httpd"
Nov 21 20:28:19 crc kubenswrapper[4727]: I1121 20:28:19.496479 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="407cfc42-937a-4e61-b257-042675020db0" containerName="glance-httpd"
Nov 21 20:28:19 crc kubenswrapper[4727]: E1121 20:28:19.496491 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="407cfc42-937a-4e61-b257-042675020db0" containerName="glance-log"
Nov 21 20:28:19 crc kubenswrapper[4727]: I1121 20:28:19.496500 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="407cfc42-937a-4e61-b257-042675020db0" containerName="glance-log"
Nov 21 20:28:19 crc kubenswrapper[4727]: I1121 20:28:19.496779 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="407cfc42-937a-4e61-b257-042675020db0" containerName="glance-httpd"
Nov 21 20:28:19 crc kubenswrapper[4727]: I1121 20:28:19.496814 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="407cfc42-937a-4e61-b257-042675020db0" containerName="glance-log"
Nov 21 20:28:19 crc kubenswrapper[4727]: I1121 20:28:19.498489 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Nov 21 20:28:19 crc kubenswrapper[4727]: I1121 20:28:19.504216 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Nov 21 20:28:19 crc kubenswrapper[4727]: I1121 20:28:19.504502 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Nov 21 20:28:19 crc kubenswrapper[4727]: I1121 20:28:19.520017 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="407cfc42-937a-4e61-b257-042675020db0" path="/var/lib/kubelet/pods/407cfc42-937a-4e61-b257-042675020db0/volumes"
Nov 21 20:28:19 crc kubenswrapper[4727]: I1121 20:28:19.544555 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 21 20:28:19 crc kubenswrapper[4727]: I1121 20:28:19.656169 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drph7\" (UniqueName: \"kubernetes.io/projected/242aec5e-01e3-4559-9031-40a5c69c5f0a-kube-api-access-drph7\") pod \"glance-default-internal-api-0\" (UID: \"242aec5e-01e3-4559-9031-40a5c69c5f0a\") " pod="openstack/glance-default-internal-api-0"
Nov 21 20:28:19 crc kubenswrapper[4727]: I1121 20:28:19.656553 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/242aec5e-01e3-4559-9031-40a5c69c5f0a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"242aec5e-01e3-4559-9031-40a5c69c5f0a\") " pod="openstack/glance-default-internal-api-0"
Nov 21 20:28:19 crc kubenswrapper[4727]: I1121 20:28:19.656625 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/242aec5e-01e3-4559-9031-40a5c69c5f0a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"242aec5e-01e3-4559-9031-40a5c69c5f0a\") " pod="openstack/glance-default-internal-api-0"
Nov 21 20:28:19 crc kubenswrapper[4727]: I1121 20:28:19.656669 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/242aec5e-01e3-4559-9031-40a5c69c5f0a-logs\") pod \"glance-default-internal-api-0\" (UID: \"242aec5e-01e3-4559-9031-40a5c69c5f0a\") " pod="openstack/glance-default-internal-api-0"
Nov 21 20:28:19 crc kubenswrapper[4727]: I1121 20:28:19.656693 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"242aec5e-01e3-4559-9031-40a5c69c5f0a\") " pod="openstack/glance-default-internal-api-0"
Nov 21 20:28:19 crc kubenswrapper[4727]: I1121 20:28:19.656751 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/242aec5e-01e3-4559-9031-40a5c69c5f0a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"242aec5e-01e3-4559-9031-40a5c69c5f0a\") " pod="openstack/glance-default-internal-api-0"
Nov 21 20:28:19 crc kubenswrapper[4727]: I1121 20:28:19.656774 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/242aec5e-01e3-4559-9031-40a5c69c5f0a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"242aec5e-01e3-4559-9031-40a5c69c5f0a\") " pod="openstack/glance-default-internal-api-0"
Nov 21 20:28:19 crc kubenswrapper[4727]: I1121 20:28:19.656830 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/242aec5e-01e3-4559-9031-40a5c69c5f0a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"242aec5e-01e3-4559-9031-40a5c69c5f0a\") " pod="openstack/glance-default-internal-api-0"
Nov 21 20:28:19 crc kubenswrapper[4727]: I1121 20:28:19.758724 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/242aec5e-01e3-4559-9031-40a5c69c5f0a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"242aec5e-01e3-4559-9031-40a5c69c5f0a\") " pod="openstack/glance-default-internal-api-0"
Nov 21 20:28:19 crc kubenswrapper[4727]: I1121 20:28:19.758807 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/242aec5e-01e3-4559-9031-40a5c69c5f0a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"242aec5e-01e3-4559-9031-40a5c69c5f0a\") " pod="openstack/glance-default-internal-api-0"
Nov 21 20:28:19 crc kubenswrapper[4727]: I1121 20:28:19.758858 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drph7\" (UniqueName: \"kubernetes.io/projected/242aec5e-01e3-4559-9031-40a5c69c5f0a-kube-api-access-drph7\") pod \"glance-default-internal-api-0\" (UID: \"242aec5e-01e3-4559-9031-40a5c69c5f0a\") " pod="openstack/glance-default-internal-api-0"
Nov 21 20:28:19 crc kubenswrapper[4727]: I1121 20:28:19.759022 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/242aec5e-01e3-4559-9031-40a5c69c5f0a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"242aec5e-01e3-4559-9031-40a5c69c5f0a\") " pod="openstack/glance-default-internal-api-0"
Nov 21 20:28:19 crc kubenswrapper[4727]: I1121 20:28:19.759092 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/242aec5e-01e3-4559-9031-40a5c69c5f0a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"242aec5e-01e3-4559-9031-40a5c69c5f0a\") " pod="openstack/glance-default-internal-api-0"
Nov 21 20:28:19 crc kubenswrapper[4727]: I1121 20:28:19.759136 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/242aec5e-01e3-4559-9031-40a5c69c5f0a-logs\") pod \"glance-default-internal-api-0\" (UID: \"242aec5e-01e3-4559-9031-40a5c69c5f0a\") " pod="openstack/glance-default-internal-api-0"
Nov 21 20:28:19 crc kubenswrapper[4727]: I1121 20:28:19.759168 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"242aec5e-01e3-4559-9031-40a5c69c5f0a\") " pod="openstack/glance-default-internal-api-0"
Nov 21 20:28:19 crc kubenswrapper[4727]: I1121 20:28:19.759230 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/242aec5e-01e3-4559-9031-40a5c69c5f0a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"242aec5e-01e3-4559-9031-40a5c69c5f0a\") " pod="openstack/glance-default-internal-api-0"
Nov 21 20:28:19 crc kubenswrapper[4727]: I1121 20:28:19.759390 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/242aec5e-01e3-4559-9031-40a5c69c5f0a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"242aec5e-01e3-4559-9031-40a5c69c5f0a\") " pod="openstack/glance-default-internal-api-0"
Nov 21 20:28:19 crc kubenswrapper[4727]: I1121 20:28:19.759729 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"242aec5e-01e3-4559-9031-40a5c69c5f0a\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0"
Nov 21 20:28:19 crc kubenswrapper[4727]: I1121 20:28:19.759760 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/242aec5e-01e3-4559-9031-40a5c69c5f0a-logs\") pod \"glance-default-internal-api-0\" (UID: \"242aec5e-01e3-4559-9031-40a5c69c5f0a\") " pod="openstack/glance-default-internal-api-0"
Nov 21 20:28:19 crc kubenswrapper[4727]: I1121 20:28:19.763614 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/242aec5e-01e3-4559-9031-40a5c69c5f0a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"242aec5e-01e3-4559-9031-40a5c69c5f0a\") " pod="openstack/glance-default-internal-api-0"
Nov 21 20:28:19 crc kubenswrapper[4727]: I1121 20:28:19.765607 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/242aec5e-01e3-4559-9031-40a5c69c5f0a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"242aec5e-01e3-4559-9031-40a5c69c5f0a\") " pod="openstack/glance-default-internal-api-0"
Nov 21 20:28:19 crc kubenswrapper[4727]: I1121 20:28:19.770175 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/242aec5e-01e3-4559-9031-40a5c69c5f0a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"242aec5e-01e3-4559-9031-40a5c69c5f0a\") " pod="openstack/glance-default-internal-api-0"
Nov 21 20:28:19 crc kubenswrapper[4727]: I1121 20:28:19.771578 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/242aec5e-01e3-4559-9031-40a5c69c5f0a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"242aec5e-01e3-4559-9031-40a5c69c5f0a\") " pod="openstack/glance-default-internal-api-0"
Nov 21 20:28:19 crc kubenswrapper[4727]: I1121 20:28:19.777844 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drph7\" (UniqueName: \"kubernetes.io/projected/242aec5e-01e3-4559-9031-40a5c69c5f0a-kube-api-access-drph7\") pod \"glance-default-internal-api-0\" (UID: \"242aec5e-01e3-4559-9031-40a5c69c5f0a\") " pod="openstack/glance-default-internal-api-0"
Nov 21 20:28:19 crc kubenswrapper[4727]: I1121 20:28:19.811320 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"242aec5e-01e3-4559-9031-40a5c69c5f0a\") " pod="openstack/glance-default-internal-api-0"
Nov 21 20:28:19 crc kubenswrapper[4727]: I1121 20:28:19.838919 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Nov 21 20:28:20 crc kubenswrapper[4727]: I1121 20:28:20.459980 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bbaf0e5f-5f38-406f-84d8-eb76acf9727d","Type":"ContainerStarted","Data":"e2e5574cffaf922a92369c45b669d8486e04b70ba5f2078b9c604a27c1020ba9"}
Nov 21 20:28:20 crc kubenswrapper[4727]: I1121 20:28:20.505071 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"413b8645-2f23-45ae-803d-bc0c140ad29f","Type":"ContainerStarted","Data":"57a439357d6da7ce7f4900417efb34bceaa46c480206725ededd77693bfe6a23"}
Nov 21 20:28:20 crc kubenswrapper[4727]: I1121 20:28:20.524501 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-6d6ff87d5b-5qckr"]
Nov 21 20:28:20 crc kubenswrapper[4727]: I1121 20:28:20.526458 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6d6ff87d5b-5qckr"
Nov 21 20:28:20 crc kubenswrapper[4727]: I1121 20:28:20.536427 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data"
Nov 21 20:28:20 crc kubenswrapper[4727]: I1121 20:28:20.536662 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-c96nh"
Nov 21 20:28:20 crc kubenswrapper[4727]: I1121 20:28:20.536769 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data"
Nov 21 20:28:20 crc kubenswrapper[4727]: I1121 20:28:20.558014 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-6d6ff87d5b-5qckr"]
Nov 21 20:28:20 crc kubenswrapper[4727]: I1121 20:28:20.613109 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-g49lf"]
Nov 21 20:28:20 crc kubenswrapper[4727]: I1121 20:28:20.620808 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-g49lf"
Nov 21 20:28:20 crc kubenswrapper[4727]: I1121 20:28:20.661974 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d85cee99-5eae-4395-b2a7-733aa041212f-combined-ca-bundle\") pod \"heat-engine-6d6ff87d5b-5qckr\" (UID: \"d85cee99-5eae-4395-b2a7-733aa041212f\") " pod="openstack/heat-engine-6d6ff87d5b-5qckr"
Nov 21 20:28:20 crc kubenswrapper[4727]: I1121 20:28:20.662095 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4v66\" (UniqueName: \"kubernetes.io/projected/d85cee99-5eae-4395-b2a7-733aa041212f-kube-api-access-v4v66\") pod \"heat-engine-6d6ff87d5b-5qckr\" (UID: \"d85cee99-5eae-4395-b2a7-733aa041212f\") " pod="openstack/heat-engine-6d6ff87d5b-5qckr"
Nov 21 20:28:20 crc kubenswrapper[4727]: I1121 20:28:20.662146 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d85cee99-5eae-4395-b2a7-733aa041212f-config-data-custom\") pod \"heat-engine-6d6ff87d5b-5qckr\" (UID: \"d85cee99-5eae-4395-b2a7-733aa041212f\") " pod="openstack/heat-engine-6d6ff87d5b-5qckr"
Nov 21 20:28:20 crc kubenswrapper[4727]: I1121 20:28:20.662224 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d85cee99-5eae-4395-b2a7-733aa041212f-config-data\") pod \"heat-engine-6d6ff87d5b-5qckr\" (UID: \"d85cee99-5eae-4395-b2a7-733aa041212f\") " pod="openstack/heat-engine-6d6ff87d5b-5qckr"
Nov 21 20:28:20 crc kubenswrapper[4727]: I1121 20:28:20.726487 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-g49lf"]
Nov 21 20:28:20 crc kubenswrapper[4727]: I1121 20:28:20.760176 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-66b5cc9484-l2wsd"]
Nov 21 20:28:20 crc kubenswrapper[4727]: I1121 20:28:20.762154 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-66b5cc9484-l2wsd"
Nov 21 20:28:20 crc kubenswrapper[4727]: I1121 20:28:20.764427 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d85cee99-5eae-4395-b2a7-733aa041212f-config-data-custom\") pod \"heat-engine-6d6ff87d5b-5qckr\" (UID: \"d85cee99-5eae-4395-b2a7-733aa041212f\") " pod="openstack/heat-engine-6d6ff87d5b-5qckr"
Nov 21 20:28:20 crc kubenswrapper[4727]: I1121 20:28:20.764519 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d85cee99-5eae-4395-b2a7-733aa041212f-config-data\") pod \"heat-engine-6d6ff87d5b-5qckr\" (UID: \"d85cee99-5eae-4395-b2a7-733aa041212f\") " pod="openstack/heat-engine-6d6ff87d5b-5qckr"
Nov 21 20:28:20 crc kubenswrapper[4727]: I1121 20:28:20.764572 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4da2f3f3-a035-4d39-8756-6eab26cef3dc-config\") pod \"dnsmasq-dns-688b9f5b49-g49lf\" (UID: \"4da2f3f3-a035-4d39-8756-6eab26cef3dc\") " pod="openstack/dnsmasq-dns-688b9f5b49-g49lf"
Nov 21 20:28:20 crc kubenswrapper[4727]: I1121 20:28:20.764601 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d85cee99-5eae-4395-b2a7-733aa041212f-combined-ca-bundle\") pod \"heat-engine-6d6ff87d5b-5qckr\" (UID: \"d85cee99-5eae-4395-b2a7-733aa041212f\") " pod="openstack/heat-engine-6d6ff87d5b-5qckr"
Nov 21 20:28:20 crc kubenswrapper[4727]: I1121 20:28:20.764653 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4da2f3f3-a035-4d39-8756-6eab26cef3dc-dns-svc\") pod \"dnsmasq-dns-688b9f5b49-g49lf\" (UID: \"4da2f3f3-a035-4d39-8756-6eab26cef3dc\") " pod="openstack/dnsmasq-dns-688b9f5b49-g49lf"
Nov 21 20:28:20 crc kubenswrapper[4727]: I1121 20:28:20.764681 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4da2f3f3-a035-4d39-8756-6eab26cef3dc-ovsdbserver-nb\") pod \"dnsmasq-dns-688b9f5b49-g49lf\" (UID: \"4da2f3f3-a035-4d39-8756-6eab26cef3dc\") " pod="openstack/dnsmasq-dns-688b9f5b49-g49lf"
Nov 21 20:28:20 crc kubenswrapper[4727]: I1121 20:28:20.764704 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfhd7\" (UniqueName: \"kubernetes.io/projected/4da2f3f3-a035-4d39-8756-6eab26cef3dc-kube-api-access-hfhd7\") pod \"dnsmasq-dns-688b9f5b49-g49lf\" (UID: \"4da2f3f3-a035-4d39-8756-6eab26cef3dc\") " pod="openstack/dnsmasq-dns-688b9f5b49-g49lf"
Nov 21 20:28:20 crc kubenswrapper[4727]: I1121 20:28:20.764726 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4da2f3f3-a035-4d39-8756-6eab26cef3dc-ovsdbserver-sb\") pod \"dnsmasq-dns-688b9f5b49-g49lf\" (UID: \"4da2f3f3-a035-4d39-8756-6eab26cef3dc\") " pod="openstack/dnsmasq-dns-688b9f5b49-g49lf"
Nov 21 20:28:20 crc kubenswrapper[4727]: I1121 20:28:20.764763 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4v66\" (UniqueName: \"kubernetes.io/projected/d85cee99-5eae-4395-b2a7-733aa041212f-kube-api-access-v4v66\") pod \"heat-engine-6d6ff87d5b-5qckr\" (UID: \"d85cee99-5eae-4395-b2a7-733aa041212f\") " pod="openstack/heat-engine-6d6ff87d5b-5qckr"
Nov 21 20:28:20 crc kubenswrapper[4727]: I1121 20:28:20.764782 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4da2f3f3-a035-4d39-8756-6eab26cef3dc-dns-swift-storage-0\") pod \"dnsmasq-dns-688b9f5b49-g49lf\" (UID: \"4da2f3f3-a035-4d39-8756-6eab26cef3dc\") " pod="openstack/dnsmasq-dns-688b9f5b49-g49lf"
Nov 21 20:28:20 crc kubenswrapper[4727]: I1121 20:28:20.769604 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data"
Nov 21 20:28:20 crc kubenswrapper[4727]: I1121 20:28:20.780082 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d85cee99-5eae-4395-b2a7-733aa041212f-config-data\") pod \"heat-engine-6d6ff87d5b-5qckr\" (UID: \"d85cee99-5eae-4395-b2a7-733aa041212f\") " pod="openstack/heat-engine-6d6ff87d5b-5qckr"
Nov 21 20:28:20 crc kubenswrapper[4727]: I1121 20:28:20.784526 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d85cee99-5eae-4395-b2a7-733aa041212f-config-data-custom\") pod \"heat-engine-6d6ff87d5b-5qckr\" (UID: \"d85cee99-5eae-4395-b2a7-733aa041212f\") " pod="openstack/heat-engine-6d6ff87d5b-5qckr"
Nov 21 20:28:20 crc kubenswrapper[4727]: I1121 20:28:20.805194 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d85cee99-5eae-4395-b2a7-733aa041212f-combined-ca-bundle\") pod \"heat-engine-6d6ff87d5b-5qckr\" (UID: \"d85cee99-5eae-4395-b2a7-733aa041212f\") " pod="openstack/heat-engine-6d6ff87d5b-5qckr"
Nov 21 20:28:20 crc kubenswrapper[4727]: I1121 20:28:20.805265 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-6c5d5d79b4-sdspk"]
Nov 21 20:28:20 crc kubenswrapper[4727]: I1121 20:28:20.849177 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6c5d5d79b4-sdspk"
Nov 21 20:28:20 crc kubenswrapper[4727]: I1121 20:28:20.852807 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4v66\" (UniqueName: \"kubernetes.io/projected/d85cee99-5eae-4395-b2a7-733aa041212f-kube-api-access-v4v66\") pod \"heat-engine-6d6ff87d5b-5qckr\" (UID: \"d85cee99-5eae-4395-b2a7-733aa041212f\") " pod="openstack/heat-engine-6d6ff87d5b-5qckr"
Nov 21 20:28:20 crc kubenswrapper[4727]: I1121 20:28:20.855551 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data"
Nov 21 20:28:20 crc kubenswrapper[4727]: I1121 20:28:20.864156 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-66b5cc9484-l2wsd"]
Nov 21 20:28:20 crc kubenswrapper[4727]: I1121 20:28:20.908343 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f22c01e-e191-42ab-8e0e-f678abc961b2-combined-ca-bundle\") pod \"heat-cfnapi-66b5cc9484-l2wsd\" (UID: \"5f22c01e-e191-42ab-8e0e-f678abc961b2\") " pod="openstack/heat-cfnapi-66b5cc9484-l2wsd"
Nov 21 20:28:20 crc kubenswrapper[4727]: I1121 20:28:20.908450 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4da2f3f3-a035-4d39-8756-6eab26cef3dc-dns-svc\") pod \"dnsmasq-dns-688b9f5b49-g49lf\" (UID: \"4da2f3f3-a035-4d39-8756-6eab26cef3dc\") " pod="openstack/dnsmasq-dns-688b9f5b49-g49lf"
Nov 21 20:28:20 crc kubenswrapper[4727]: I1121 20:28:20.908522 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4da2f3f3-a035-4d39-8756-6eab26cef3dc-ovsdbserver-nb\") pod \"dnsmasq-dns-688b9f5b49-g49lf\" (UID: \"4da2f3f3-a035-4d39-8756-6eab26cef3dc\") " pod="openstack/dnsmasq-dns-688b9f5b49-g49lf"
Nov 21 20:28:20 crc kubenswrapper[4727]: I1121 20:28:20.908564 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfhd7\" (UniqueName: \"kubernetes.io/projected/4da2f3f3-a035-4d39-8756-6eab26cef3dc-kube-api-access-hfhd7\") pod \"dnsmasq-dns-688b9f5b49-g49lf\" (UID: \"4da2f3f3-a035-4d39-8756-6eab26cef3dc\") " pod="openstack/dnsmasq-dns-688b9f5b49-g49lf"
Nov 21 20:28:20 crc kubenswrapper[4727]: I1121 20:28:20.908606 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4da2f3f3-a035-4d39-8756-6eab26cef3dc-ovsdbserver-sb\") pod \"dnsmasq-dns-688b9f5b49-g49lf\" (UID: \"4da2f3f3-a035-4d39-8756-6eab26cef3dc\") " pod="openstack/dnsmasq-dns-688b9f5b49-g49lf"
Nov 21 20:28:20 crc kubenswrapper[4727]: I1121 20:28:20.908715 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4da2f3f3-a035-4d39-8756-6eab26cef3dc-dns-swift-storage-0\") pod \"dnsmasq-dns-688b9f5b49-g49lf\" (UID: \"4da2f3f3-a035-4d39-8756-6eab26cef3dc\") " pod="openstack/dnsmasq-dns-688b9f5b49-g49lf"
Nov 21 20:28:20 crc kubenswrapper[4727]: I1121 20:28:20.908804 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5f22c01e-e191-42ab-8e0e-f678abc961b2-config-data-custom\") pod \"heat-cfnapi-66b5cc9484-l2wsd\" (UID: \"5f22c01e-e191-42ab-8e0e-f678abc961b2\") " pod="openstack/heat-cfnapi-66b5cc9484-l2wsd"
Nov 21 20:28:20 crc kubenswrapper[4727]: I1121 20:28:20.938040 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4da2f3f3-a035-4d39-8756-6eab26cef3dc-ovsdbserver-sb\") pod \"dnsmasq-dns-688b9f5b49-g49lf\" (UID: \"4da2f3f3-a035-4d39-8756-6eab26cef3dc\") " pod="openstack/dnsmasq-dns-688b9f5b49-g49lf"
Nov 21 20:28:20 crc kubenswrapper[4727]: I1121 20:28:20.940133 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4da2f3f3-a035-4d39-8756-6eab26cef3dc-dns-svc\") pod \"dnsmasq-dns-688b9f5b49-g49lf\" (UID: \"4da2f3f3-a035-4d39-8756-6eab26cef3dc\") " pod="openstack/dnsmasq-dns-688b9f5b49-g49lf"
Nov 21 20:28:20 crc kubenswrapper[4727]: I1121 20:28:20.941149 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6d6ff87d5b-5qckr"
Nov 21 20:28:20 crc kubenswrapper[4727]: I1121 20:28:20.947571 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4da2f3f3-a035-4d39-8756-6eab26cef3dc-ovsdbserver-nb\") pod \"dnsmasq-dns-688b9f5b49-g49lf\" (UID: \"4da2f3f3-a035-4d39-8756-6eab26cef3dc\") " pod="openstack/dnsmasq-dns-688b9f5b49-g49lf"
Nov 21 20:28:20 crc kubenswrapper[4727]: I1121 20:28:20.952353 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dw98b\" (UniqueName: \"kubernetes.io/projected/5f22c01e-e191-42ab-8e0e-f678abc961b2-kube-api-access-dw98b\") pod \"heat-cfnapi-66b5cc9484-l2wsd\" (UID: \"5f22c01e-e191-42ab-8e0e-f678abc961b2\") " pod="openstack/heat-cfnapi-66b5cc9484-l2wsd"
Nov 21 20:28:20 crc kubenswrapper[4727]: I1121 20:28:20.952833 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f22c01e-e191-42ab-8e0e-f678abc961b2-config-data\") pod \"heat-cfnapi-66b5cc9484-l2wsd\" (UID: \"5f22c01e-e191-42ab-8e0e-f678abc961b2\") " pod="openstack/heat-cfnapi-66b5cc9484-l2wsd"
Nov 21 20:28:20 crc kubenswrapper[4727]: I1121 20:28:20.953005 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4da2f3f3-a035-4d39-8756-6eab26cef3dc-config\") pod \"dnsmasq-dns-688b9f5b49-g49lf\" (UID: \"4da2f3f3-a035-4d39-8756-6eab26cef3dc\") " pod="openstack/dnsmasq-dns-688b9f5b49-g49lf"
Nov 21 20:28:20 crc kubenswrapper[4727]: I1121 20:28:20.955401 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4da2f3f3-a035-4d39-8756-6eab26cef3dc-dns-swift-storage-0\") pod \"dnsmasq-dns-688b9f5b49-g49lf\" (UID: \"4da2f3f3-a035-4d39-8756-6eab26cef3dc\") " pod="openstack/dnsmasq-dns-688b9f5b49-g49lf"
Nov 21 20:28:20 crc kubenswrapper[4727]: I1121 20:28:20.958007 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4da2f3f3-a035-4d39-8756-6eab26cef3dc-config\") pod \"dnsmasq-dns-688b9f5b49-g49lf\" (UID: \"4da2f3f3-a035-4d39-8756-6eab26cef3dc\") " pod="openstack/dnsmasq-dns-688b9f5b49-g49lf"
Nov 21 20:28:20 crc kubenswrapper[4727]: I1121 20:28:20.978877 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6c5d5d79b4-sdspk"]
Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.016246 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfhd7\" (UniqueName: \"kubernetes.io/projected/4da2f3f3-a035-4d39-8756-6eab26cef3dc-kube-api-access-hfhd7\") pod \"dnsmasq-dns-688b9f5b49-g49lf\" (UID: \"4da2f3f3-a035-4d39-8756-6eab26cef3dc\") " pod="openstack/dnsmasq-dns-688b9f5b49-g49lf"
Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.024455 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.063406 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b697061-39b3-440d-bc9c-e31848742e7d-combined-ca-bundle\") pod \"heat-api-6c5d5d79b4-sdspk\" (UID: \"2b697061-39b3-440d-bc9c-e31848742e7d\") " pod="openstack/heat-api-6c5d5d79b4-sdspk"
Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.063646 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5f22c01e-e191-42ab-8e0e-f678abc961b2-config-data-custom\") pod \"heat-cfnapi-66b5cc9484-l2wsd\" (UID: \"5f22c01e-e191-42ab-8e0e-f678abc961b2\") " pod="openstack/heat-cfnapi-66b5cc9484-l2wsd"
Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.063793 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b697061-39b3-440d-bc9c-e31848742e7d-config-data\") pod \"heat-api-6c5d5d79b4-sdspk\" (UID: \"2b697061-39b3-440d-bc9c-e31848742e7d\") " pod="openstack/heat-api-6c5d5d79b4-sdspk"
Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.063986 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dw98b\" (UniqueName: \"kubernetes.io/projected/5f22c01e-e191-42ab-8e0e-f678abc961b2-kube-api-access-dw98b\") pod \"heat-cfnapi-66b5cc9484-l2wsd\" (UID: \"5f22c01e-e191-42ab-8e0e-f678abc961b2\") " pod="openstack/heat-cfnapi-66b5cc9484-l2wsd"
Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.064039 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plmlh\" (UniqueName: \"kubernetes.io/projected/2b697061-39b3-440d-bc9c-e31848742e7d-kube-api-access-plmlh\") pod \"heat-api-6c5d5d79b4-sdspk\" (UID: \"2b697061-39b3-440d-bc9c-e31848742e7d\") " pod="openstack/heat-api-6c5d5d79b4-sdspk"
Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.064097 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2b697061-39b3-440d-bc9c-e31848742e7d-config-data-custom\") pod \"heat-api-6c5d5d79b4-sdspk\" (UID:
\"2b697061-39b3-440d-bc9c-e31848742e7d\") " pod="openstack/heat-api-6c5d5d79b4-sdspk" Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.064133 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f22c01e-e191-42ab-8e0e-f678abc961b2-config-data\") pod \"heat-cfnapi-66b5cc9484-l2wsd\" (UID: \"5f22c01e-e191-42ab-8e0e-f678abc961b2\") " pod="openstack/heat-cfnapi-66b5cc9484-l2wsd" Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.064322 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f22c01e-e191-42ab-8e0e-f678abc961b2-combined-ca-bundle\") pod \"heat-cfnapi-66b5cc9484-l2wsd\" (UID: \"5f22c01e-e191-42ab-8e0e-f678abc961b2\") " pod="openstack/heat-cfnapi-66b5cc9484-l2wsd" Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.069047 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f22c01e-e191-42ab-8e0e-f678abc961b2-combined-ca-bundle\") pod \"heat-cfnapi-66b5cc9484-l2wsd\" (UID: \"5f22c01e-e191-42ab-8e0e-f678abc961b2\") " pod="openstack/heat-cfnapi-66b5cc9484-l2wsd" Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.081054 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f22c01e-e191-42ab-8e0e-f678abc961b2-config-data\") pod \"heat-cfnapi-66b5cc9484-l2wsd\" (UID: \"5f22c01e-e191-42ab-8e0e-f678abc961b2\") " pod="openstack/heat-cfnapi-66b5cc9484-l2wsd" Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.089331 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5f22c01e-e191-42ab-8e0e-f678abc961b2-config-data-custom\") pod \"heat-cfnapi-66b5cc9484-l2wsd\" (UID: \"5f22c01e-e191-42ab-8e0e-f678abc961b2\") " 
pod="openstack/heat-cfnapi-66b5cc9484-l2wsd" Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.102647 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dw98b\" (UniqueName: \"kubernetes.io/projected/5f22c01e-e191-42ab-8e0e-f678abc961b2-kube-api-access-dw98b\") pod \"heat-cfnapi-66b5cc9484-l2wsd\" (UID: \"5f22c01e-e191-42ab-8e0e-f678abc961b2\") " pod="openstack/heat-cfnapi-66b5cc9484-l2wsd" Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.108477 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-xfpm2"] Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.123900 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-xfpm2"] Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.124071 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-xfpm2" Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.152013 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-xhxc8"] Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.153480 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-xhxc8" Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.169187 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2b697061-39b3-440d-bc9c-e31848742e7d-config-data-custom\") pod \"heat-api-6c5d5d79b4-sdspk\" (UID: \"2b697061-39b3-440d-bc9c-e31848742e7d\") " pod="openstack/heat-api-6c5d5d79b4-sdspk" Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.169330 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b697061-39b3-440d-bc9c-e31848742e7d-combined-ca-bundle\") pod \"heat-api-6c5d5d79b4-sdspk\" (UID: \"2b697061-39b3-440d-bc9c-e31848742e7d\") " pod="openstack/heat-api-6c5d5d79b4-sdspk" Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.169417 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b697061-39b3-440d-bc9c-e31848742e7d-config-data\") pod \"heat-api-6c5d5d79b4-sdspk\" (UID: \"2b697061-39b3-440d-bc9c-e31848742e7d\") " pod="openstack/heat-api-6c5d5d79b4-sdspk" Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.169454 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plmlh\" (UniqueName: \"kubernetes.io/projected/2b697061-39b3-440d-bc9c-e31848742e7d-kube-api-access-plmlh\") pod \"heat-api-6c5d5d79b4-sdspk\" (UID: \"2b697061-39b3-440d-bc9c-e31848742e7d\") " pod="openstack/heat-api-6c5d5d79b4-sdspk" Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.174786 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-xhxc8"] Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.176737 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/2b697061-39b3-440d-bc9c-e31848742e7d-config-data-custom\") pod \"heat-api-6c5d5d79b4-sdspk\" (UID: \"2b697061-39b3-440d-bc9c-e31848742e7d\") " pod="openstack/heat-api-6c5d5d79b4-sdspk" Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.178534 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b697061-39b3-440d-bc9c-e31848742e7d-combined-ca-bundle\") pod \"heat-api-6c5d5d79b4-sdspk\" (UID: \"2b697061-39b3-440d-bc9c-e31848742e7d\") " pod="openstack/heat-api-6c5d5d79b4-sdspk" Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.187071 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b697061-39b3-440d-bc9c-e31848742e7d-config-data\") pod \"heat-api-6c5d5d79b4-sdspk\" (UID: \"2b697061-39b3-440d-bc9c-e31848742e7d\") " pod="openstack/heat-api-6c5d5d79b4-sdspk" Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.193688 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plmlh\" (UniqueName: \"kubernetes.io/projected/2b697061-39b3-440d-bc9c-e31848742e7d-kube-api-access-plmlh\") pod \"heat-api-6c5d5d79b4-sdspk\" (UID: \"2b697061-39b3-440d-bc9c-e31848742e7d\") " pod="openstack/heat-api-6c5d5d79b4-sdspk" Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.200928 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-26lpd"] Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.209655 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-26lpd" Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.242393 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-ae35-account-create-tdvf7"] Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.244028 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-ae35-account-create-tdvf7" Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.249770 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.272503 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4cqb\" (UniqueName: \"kubernetes.io/projected/436b53f0-6ba8-42e8-89eb-f853b1308cbf-kube-api-access-s4cqb\") pod \"nova-api-db-create-xfpm2\" (UID: \"436b53f0-6ba8-42e8-89eb-f853b1308cbf\") " pod="openstack/nova-api-db-create-xfpm2" Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.272551 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8183461-41a5-4d09-aeef-f3d431e6d71b-operator-scripts\") pod \"nova-cell0-db-create-xhxc8\" (UID: \"f8183461-41a5-4d09-aeef-f3d431e6d71b\") " pod="openstack/nova-cell0-db-create-xhxc8" Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.272669 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/436b53f0-6ba8-42e8-89eb-f853b1308cbf-operator-scripts\") pod \"nova-api-db-create-xfpm2\" (UID: \"436b53f0-6ba8-42e8-89eb-f853b1308cbf\") " pod="openstack/nova-api-db-create-xfpm2" Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.272874 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qv2h\" (UniqueName: \"kubernetes.io/projected/f8183461-41a5-4d09-aeef-f3d431e6d71b-kube-api-access-6qv2h\") pod \"nova-cell0-db-create-xhxc8\" (UID: \"f8183461-41a5-4d09-aeef-f3d431e6d71b\") " pod="openstack/nova-cell0-db-create-xhxc8" Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.278330 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-cell1-db-create-26lpd"] Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.281575 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-g49lf" Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.291923 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6c5d5d79b4-sdspk" Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.299368 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-ae35-account-create-tdvf7"] Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.375054 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qv2h\" (UniqueName: \"kubernetes.io/projected/f8183461-41a5-4d09-aeef-f3d431e6d71b-kube-api-access-6qv2h\") pod \"nova-cell0-db-create-xhxc8\" (UID: \"f8183461-41a5-4d09-aeef-f3d431e6d71b\") " pod="openstack/nova-cell0-db-create-xhxc8" Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.375125 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4cqb\" (UniqueName: \"kubernetes.io/projected/436b53f0-6ba8-42e8-89eb-f853b1308cbf-kube-api-access-s4cqb\") pod \"nova-api-db-create-xfpm2\" (UID: \"436b53f0-6ba8-42e8-89eb-f853b1308cbf\") " pod="openstack/nova-api-db-create-xfpm2" Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.375151 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8183461-41a5-4d09-aeef-f3d431e6d71b-operator-scripts\") pod \"nova-cell0-db-create-xhxc8\" (UID: \"f8183461-41a5-4d09-aeef-f3d431e6d71b\") " pod="openstack/nova-cell0-db-create-xhxc8" Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.375218 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/21368668-5984-4a3f-915a-06c8a959fefd-operator-scripts\") pod \"nova-cell1-db-create-26lpd\" (UID: \"21368668-5984-4a3f-915a-06c8a959fefd\") " pod="openstack/nova-cell1-db-create-26lpd" Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.375277 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fvps\" (UniqueName: \"kubernetes.io/projected/21368668-5984-4a3f-915a-06c8a959fefd-kube-api-access-9fvps\") pod \"nova-cell1-db-create-26lpd\" (UID: \"21368668-5984-4a3f-915a-06c8a959fefd\") " pod="openstack/nova-cell1-db-create-26lpd" Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.375311 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/436b53f0-6ba8-42e8-89eb-f853b1308cbf-operator-scripts\") pod \"nova-api-db-create-xfpm2\" (UID: \"436b53f0-6ba8-42e8-89eb-f853b1308cbf\") " pod="openstack/nova-api-db-create-xfpm2" Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.375382 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0422bc7-ea04-4d95-b240-608a1c0d16ec-operator-scripts\") pod \"nova-api-ae35-account-create-tdvf7\" (UID: \"f0422bc7-ea04-4d95-b240-608a1c0d16ec\") " pod="openstack/nova-api-ae35-account-create-tdvf7" Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.375484 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2czdf\" (UniqueName: \"kubernetes.io/projected/f0422bc7-ea04-4d95-b240-608a1c0d16ec-kube-api-access-2czdf\") pod \"nova-api-ae35-account-create-tdvf7\" (UID: \"f0422bc7-ea04-4d95-b240-608a1c0d16ec\") " pod="openstack/nova-api-ae35-account-create-tdvf7" Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.383634 4727 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/436b53f0-6ba8-42e8-89eb-f853b1308cbf-operator-scripts\") pod \"nova-api-db-create-xfpm2\" (UID: \"436b53f0-6ba8-42e8-89eb-f853b1308cbf\") " pod="openstack/nova-api-db-create-xfpm2" Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.383738 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8183461-41a5-4d09-aeef-f3d431e6d71b-operator-scripts\") pod \"nova-cell0-db-create-xhxc8\" (UID: \"f8183461-41a5-4d09-aeef-f3d431e6d71b\") " pod="openstack/nova-cell0-db-create-xhxc8" Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.389568 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-66b5cc9484-l2wsd" Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.397328 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4cqb\" (UniqueName: \"kubernetes.io/projected/436b53f0-6ba8-42e8-89eb-f853b1308cbf-kube-api-access-s4cqb\") pod \"nova-api-db-create-xfpm2\" (UID: \"436b53f0-6ba8-42e8-89eb-f853b1308cbf\") " pod="openstack/nova-api-db-create-xfpm2" Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.406788 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qv2h\" (UniqueName: \"kubernetes.io/projected/f8183461-41a5-4d09-aeef-f3d431e6d71b-kube-api-access-6qv2h\") pod \"nova-cell0-db-create-xhxc8\" (UID: \"f8183461-41a5-4d09-aeef-f3d431e6d71b\") " pod="openstack/nova-cell0-db-create-xhxc8" Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.477158 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21368668-5984-4a3f-915a-06c8a959fefd-operator-scripts\") pod \"nova-cell1-db-create-26lpd\" (UID: \"21368668-5984-4a3f-915a-06c8a959fefd\") " pod="openstack/nova-cell1-db-create-26lpd" Nov 21 20:28:21 
crc kubenswrapper[4727]: I1121 20:28:21.477231 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fvps\" (UniqueName: \"kubernetes.io/projected/21368668-5984-4a3f-915a-06c8a959fefd-kube-api-access-9fvps\") pod \"nova-cell1-db-create-26lpd\" (UID: \"21368668-5984-4a3f-915a-06c8a959fefd\") " pod="openstack/nova-cell1-db-create-26lpd" Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.477295 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0422bc7-ea04-4d95-b240-608a1c0d16ec-operator-scripts\") pod \"nova-api-ae35-account-create-tdvf7\" (UID: \"f0422bc7-ea04-4d95-b240-608a1c0d16ec\") " pod="openstack/nova-api-ae35-account-create-tdvf7" Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.477366 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2czdf\" (UniqueName: \"kubernetes.io/projected/f0422bc7-ea04-4d95-b240-608a1c0d16ec-kube-api-access-2czdf\") pod \"nova-api-ae35-account-create-tdvf7\" (UID: \"f0422bc7-ea04-4d95-b240-608a1c0d16ec\") " pod="openstack/nova-api-ae35-account-create-tdvf7" Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.478362 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21368668-5984-4a3f-915a-06c8a959fefd-operator-scripts\") pod \"nova-cell1-db-create-26lpd\" (UID: \"21368668-5984-4a3f-915a-06c8a959fefd\") " pod="openstack/nova-cell1-db-create-26lpd" Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.478436 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0422bc7-ea04-4d95-b240-608a1c0d16ec-operator-scripts\") pod \"nova-api-ae35-account-create-tdvf7\" (UID: \"f0422bc7-ea04-4d95-b240-608a1c0d16ec\") " pod="openstack/nova-api-ae35-account-create-tdvf7" Nov 21 20:28:21 
crc kubenswrapper[4727]: I1121 20:28:21.499997 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2czdf\" (UniqueName: \"kubernetes.io/projected/f0422bc7-ea04-4d95-b240-608a1c0d16ec-kube-api-access-2czdf\") pod \"nova-api-ae35-account-create-tdvf7\" (UID: \"f0422bc7-ea04-4d95-b240-608a1c0d16ec\") " pod="openstack/nova-api-ae35-account-create-tdvf7" Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.500055 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-2ff4-account-create-ds2tt"] Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.501799 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-2ff4-account-create-ds2tt" Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.505298 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.530383 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-xfpm2" Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.535740 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fvps\" (UniqueName: \"kubernetes.io/projected/21368668-5984-4a3f-915a-06c8a959fefd-kube-api-access-9fvps\") pod \"nova-cell1-db-create-26lpd\" (UID: \"21368668-5984-4a3f-915a-06c8a959fefd\") " pod="openstack/nova-cell1-db-create-26lpd" Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.561803 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-xhxc8" Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.570205 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-2ff4-account-create-ds2tt"] Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.594787 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-26lpd" Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.607111 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-ae35-account-create-tdvf7" Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.616109 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"242aec5e-01e3-4559-9031-40a5c69c5f0a","Type":"ContainerStarted","Data":"e270757cfe4f67b2699989c9e74671234689555a8946dae19b97ed797178819d"} Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.631249 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bbaf0e5f-5f38-406f-84d8-eb76acf9727d","Type":"ContainerStarted","Data":"c746c910b14bb54844ed5edd7ae6e3d01a8281e385d9ebfff9c518bdc1b859c2"} Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.642044 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-005d-account-create-g2f2m"] Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.644298 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-005d-account-create-g2f2m" Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.655401 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.684712 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57e425f9-b08d-412d-9491-f1ffe7e5d54f-operator-scripts\") pod \"nova-cell0-2ff4-account-create-ds2tt\" (UID: \"57e425f9-b08d-412d-9491-f1ffe7e5d54f\") " pod="openstack/nova-cell0-2ff4-account-create-ds2tt" Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.684816 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbjtc\" (UniqueName: \"kubernetes.io/projected/57e425f9-b08d-412d-9491-f1ffe7e5d54f-kube-api-access-kbjtc\") pod \"nova-cell0-2ff4-account-create-ds2tt\" (UID: \"57e425f9-b08d-412d-9491-f1ffe7e5d54f\") " pod="openstack/nova-cell0-2ff4-account-create-ds2tt" Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.709561 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-005d-account-create-g2f2m"] Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.710560 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.71054283 podStartE2EDuration="4.71054283s" podCreationTimestamp="2025-11-21 20:28:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:28:21.672438588 +0000 UTC m=+1306.858623632" watchObservedRunningTime="2025-11-21 20:28:21.71054283 +0000 UTC m=+1306.896727874" Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.754400 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/heat-engine-6d6ff87d5b-5qckr"] Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.790487 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75bcn\" (UniqueName: \"kubernetes.io/projected/93d541d6-cea3-49c4-9f5c-d0484de3fafb-kube-api-access-75bcn\") pod \"nova-cell1-005d-account-create-g2f2m\" (UID: \"93d541d6-cea3-49c4-9f5c-d0484de3fafb\") " pod="openstack/nova-cell1-005d-account-create-g2f2m" Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.790581 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57e425f9-b08d-412d-9491-f1ffe7e5d54f-operator-scripts\") pod \"nova-cell0-2ff4-account-create-ds2tt\" (UID: \"57e425f9-b08d-412d-9491-f1ffe7e5d54f\") " pod="openstack/nova-cell0-2ff4-account-create-ds2tt" Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.790606 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93d541d6-cea3-49c4-9f5c-d0484de3fafb-operator-scripts\") pod \"nova-cell1-005d-account-create-g2f2m\" (UID: \"93d541d6-cea3-49c4-9f5c-d0484de3fafb\") " pod="openstack/nova-cell1-005d-account-create-g2f2m" Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.790647 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbjtc\" (UniqueName: \"kubernetes.io/projected/57e425f9-b08d-412d-9491-f1ffe7e5d54f-kube-api-access-kbjtc\") pod \"nova-cell0-2ff4-account-create-ds2tt\" (UID: \"57e425f9-b08d-412d-9491-f1ffe7e5d54f\") " pod="openstack/nova-cell0-2ff4-account-create-ds2tt" Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.791873 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57e425f9-b08d-412d-9491-f1ffe7e5d54f-operator-scripts\") pod 
\"nova-cell0-2ff4-account-create-ds2tt\" (UID: \"57e425f9-b08d-412d-9491-f1ffe7e5d54f\") " pod="openstack/nova-cell0-2ff4-account-create-ds2tt" Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.810853 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbjtc\" (UniqueName: \"kubernetes.io/projected/57e425f9-b08d-412d-9491-f1ffe7e5d54f-kube-api-access-kbjtc\") pod \"nova-cell0-2ff4-account-create-ds2tt\" (UID: \"57e425f9-b08d-412d-9491-f1ffe7e5d54f\") " pod="openstack/nova-cell0-2ff4-account-create-ds2tt" Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.841352 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-2ff4-account-create-ds2tt" Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.903002 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75bcn\" (UniqueName: \"kubernetes.io/projected/93d541d6-cea3-49c4-9f5c-d0484de3fafb-kube-api-access-75bcn\") pod \"nova-cell1-005d-account-create-g2f2m\" (UID: \"93d541d6-cea3-49c4-9f5c-d0484de3fafb\") " pod="openstack/nova-cell1-005d-account-create-g2f2m" Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.903216 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93d541d6-cea3-49c4-9f5c-d0484de3fafb-operator-scripts\") pod \"nova-cell1-005d-account-create-g2f2m\" (UID: \"93d541d6-cea3-49c4-9f5c-d0484de3fafb\") " pod="openstack/nova-cell1-005d-account-create-g2f2m" Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.904676 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93d541d6-cea3-49c4-9f5c-d0484de3fafb-operator-scripts\") pod \"nova-cell1-005d-account-create-g2f2m\" (UID: \"93d541d6-cea3-49c4-9f5c-d0484de3fafb\") " pod="openstack/nova-cell1-005d-account-create-g2f2m" Nov 21 20:28:21 crc 
kubenswrapper[4727]: I1121 20:28:21.950050 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75bcn\" (UniqueName: \"kubernetes.io/projected/93d541d6-cea3-49c4-9f5c-d0484de3fafb-kube-api-access-75bcn\") pod \"nova-cell1-005d-account-create-g2f2m\" (UID: \"93d541d6-cea3-49c4-9f5c-d0484de3fafb\") " pod="openstack/nova-cell1-005d-account-create-g2f2m" Nov 21 20:28:21 crc kubenswrapper[4727]: I1121 20:28:21.989548 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-005d-account-create-g2f2m" Nov 21 20:28:22 crc kubenswrapper[4727]: I1121 20:28:22.082573 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-g49lf"] Nov 21 20:28:22 crc kubenswrapper[4727]: I1121 20:28:22.343486 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-66b5cc9484-l2wsd"] Nov 21 20:28:22 crc kubenswrapper[4727]: W1121 20:28:22.508259 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f22c01e_e191_42ab_8e0e_f678abc961b2.slice/crio-f39998150717c1e79134ea75723e0ee569dea797b5a524973b84543541870714 WatchSource:0}: Error finding container f39998150717c1e79134ea75723e0ee569dea797b5a524973b84543541870714: Status 404 returned error can't find the container with id f39998150717c1e79134ea75723e0ee569dea797b5a524973b84543541870714 Nov 21 20:28:22 crc kubenswrapper[4727]: I1121 20:28:22.617788 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 21 20:28:22 crc kubenswrapper[4727]: I1121 20:28:22.698167 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6d6ff87d5b-5qckr" event={"ID":"d85cee99-5eae-4395-b2a7-733aa041212f","Type":"ContainerStarted","Data":"d5fc8a3bf4c44deaa2172e31e0de9eedd3c94b4a91d6d07922fea7a09b302371"} Nov 21 20:28:22 crc kubenswrapper[4727]: I1121 20:28:22.698420 4727 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6d6ff87d5b-5qckr" event={"ID":"d85cee99-5eae-4395-b2a7-733aa041212f","Type":"ContainerStarted","Data":"f265d32556b97cf944aced070ade8d94a2f821aaa7d9c21beb91b5560edd1b43"} Nov 21 20:28:22 crc kubenswrapper[4727]: I1121 20:28:22.698535 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-6d6ff87d5b-5qckr" Nov 21 20:28:22 crc kubenswrapper[4727]: I1121 20:28:22.703897 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"242aec5e-01e3-4559-9031-40a5c69c5f0a","Type":"ContainerStarted","Data":"a70a2ea1d4df3b35074e576e123a6b5130089783d5816b60eaf0c86575a8f3fa"} Nov 21 20:28:22 crc kubenswrapper[4727]: I1121 20:28:22.713779 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-66b5cc9484-l2wsd" event={"ID":"5f22c01e-e191-42ab-8e0e-f678abc961b2","Type":"ContainerStarted","Data":"f39998150717c1e79134ea75723e0ee569dea797b5a524973b84543541870714"} Nov 21 20:28:22 crc kubenswrapper[4727]: I1121 20:28:22.731326 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"413b8645-2f23-45ae-803d-bc0c140ad29f","Type":"ContainerStarted","Data":"a6bc975e9386bea7bf3b002da225cb31f8ad2a23845416069c2a2c70bf4f1fb0"} Nov 21 20:28:22 crc kubenswrapper[4727]: I1121 20:28:22.732333 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-26lpd"] Nov 21 20:28:22 crc kubenswrapper[4727]: I1121 20:28:22.741337 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-g49lf" event={"ID":"4da2f3f3-a035-4d39-8756-6eab26cef3dc","Type":"ContainerStarted","Data":"166f7647111e2e5306a02b3fff884c3b1911d1ef3a4404afae62f1faee0f97f9"} Nov 21 20:28:22 crc kubenswrapper[4727]: I1121 20:28:22.756619 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6c5d5d79b4-sdspk"] Nov 21 20:28:22 crc 
kubenswrapper[4727]: I1121 20:28:22.764494 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-6d6ff87d5b-5qckr" podStartSLOduration=2.764472924 podStartE2EDuration="2.764472924s" podCreationTimestamp="2025-11-21 20:28:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:28:22.741294833 +0000 UTC m=+1307.927479877" watchObservedRunningTime="2025-11-21 20:28:22.764472924 +0000 UTC m=+1307.950657968" Nov 21 20:28:22 crc kubenswrapper[4727]: W1121 20:28:22.812533 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b697061_39b3_440d_bc9c_e31848742e7d.slice/crio-bfecfc8b02461c02300cb6da1daf5ea10a63b5094db50fedbae607faafd6bb12 WatchSource:0}: Error finding container bfecfc8b02461c02300cb6da1daf5ea10a63b5094db50fedbae607faafd6bb12: Status 404 returned error can't find the container with id bfecfc8b02461c02300cb6da1daf5ea10a63b5094db50fedbae607faafd6bb12 Nov 21 20:28:22 crc kubenswrapper[4727]: I1121 20:28:22.913643 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-xhxc8"] Nov 21 20:28:22 crc kubenswrapper[4727]: I1121 20:28:22.942988 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-xfpm2"] Nov 21 20:28:23 crc kubenswrapper[4727]: I1121 20:28:23.431472 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-2ff4-account-create-ds2tt"] Nov 21 20:28:23 crc kubenswrapper[4727]: I1121 20:28:23.516466 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-ae35-account-create-tdvf7"] Nov 21 20:28:23 crc kubenswrapper[4727]: I1121 20:28:23.530513 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-005d-account-create-g2f2m"] Nov 21 20:28:23 crc kubenswrapper[4727]: W1121 20:28:23.593735 4727 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0422bc7_ea04_4d95_b240_608a1c0d16ec.slice/crio-c4792424498a4d7c12ad975a1ffa23e0381046c0099a9b4e893bc51e1bbb9c51 WatchSource:0}: Error finding container c4792424498a4d7c12ad975a1ffa23e0381046c0099a9b4e893bc51e1bbb9c51: Status 404 returned error can't find the container with id c4792424498a4d7c12ad975a1ffa23e0381046c0099a9b4e893bc51e1bbb9c51 Nov 21 20:28:23 crc kubenswrapper[4727]: I1121 20:28:23.770878 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"413b8645-2f23-45ae-803d-bc0c140ad29f","Type":"ContainerStarted","Data":"4a603c0fa8cc3c9131ba9ed4a536b15484cb7945546e61bc46b106bee070b255"} Nov 21 20:28:23 crc kubenswrapper[4727]: I1121 20:28:23.778599 4727 generic.go:334] "Generic (PLEG): container finished" podID="21368668-5984-4a3f-915a-06c8a959fefd" containerID="e49e95a516dccf0f9c8f9b3e98beb68ef4576e3f760b3d2d7822fb9a6aef4062" exitCode=0 Nov 21 20:28:23 crc kubenswrapper[4727]: I1121 20:28:23.778671 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-26lpd" event={"ID":"21368668-5984-4a3f-915a-06c8a959fefd","Type":"ContainerDied","Data":"e49e95a516dccf0f9c8f9b3e98beb68ef4576e3f760b3d2d7822fb9a6aef4062"} Nov 21 20:28:23 crc kubenswrapper[4727]: I1121 20:28:23.778696 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-26lpd" event={"ID":"21368668-5984-4a3f-915a-06c8a959fefd","Type":"ContainerStarted","Data":"0d09bb4a55fe7a0e15903b22e53a031c1c2de5e3395d85858f45d13158fe1b6e"} Nov 21 20:28:23 crc kubenswrapper[4727]: I1121 20:28:23.785286 4727 generic.go:334] "Generic (PLEG): container finished" podID="4da2f3f3-a035-4d39-8756-6eab26cef3dc" containerID="873311e1f2d89f86612be9947cce3cb1d9b15a3f9281ac6ee5aa7010cb805877" exitCode=0 Nov 21 20:28:23 crc kubenswrapper[4727]: I1121 20:28:23.785409 4727 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-g49lf" event={"ID":"4da2f3f3-a035-4d39-8756-6eab26cef3dc","Type":"ContainerDied","Data":"873311e1f2d89f86612be9947cce3cb1d9b15a3f9281ac6ee5aa7010cb805877"} Nov 21 20:28:23 crc kubenswrapper[4727]: I1121 20:28:23.787534 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-2ff4-account-create-ds2tt" event={"ID":"57e425f9-b08d-412d-9491-f1ffe7e5d54f","Type":"ContainerStarted","Data":"1a3766274006b963ea7fd0cff1c1eeacc4f82efdb22d5a839e4f9d0c33cd05cf"} Nov 21 20:28:23 crc kubenswrapper[4727]: I1121 20:28:23.792952 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6c5d5d79b4-sdspk" event={"ID":"2b697061-39b3-440d-bc9c-e31848742e7d","Type":"ContainerStarted","Data":"bfecfc8b02461c02300cb6da1daf5ea10a63b5094db50fedbae607faafd6bb12"} Nov 21 20:28:23 crc kubenswrapper[4727]: I1121 20:28:23.802104 4727 generic.go:334] "Generic (PLEG): container finished" podID="436b53f0-6ba8-42e8-89eb-f853b1308cbf" containerID="e063c9647e9d3f1e3de8290d9a1bd8b337fb2fb30d664373c5fe6c83b14de23f" exitCode=0 Nov 21 20:28:23 crc kubenswrapper[4727]: I1121 20:28:23.802194 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-xfpm2" event={"ID":"436b53f0-6ba8-42e8-89eb-f853b1308cbf","Type":"ContainerDied","Data":"e063c9647e9d3f1e3de8290d9a1bd8b337fb2fb30d664373c5fe6c83b14de23f"} Nov 21 20:28:23 crc kubenswrapper[4727]: I1121 20:28:23.802247 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-xfpm2" event={"ID":"436b53f0-6ba8-42e8-89eb-f853b1308cbf","Type":"ContainerStarted","Data":"1dd0aa634b99ceeb27bbfb01e246e72f4d35d04ff57d766e4752bcfd9815cff0"} Nov 21 20:28:23 crc kubenswrapper[4727]: I1121 20:28:23.806266 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-005d-account-create-g2f2m" 
event={"ID":"93d541d6-cea3-49c4-9f5c-d0484de3fafb","Type":"ContainerStarted","Data":"6069324b7bd7ec47c81ee2084da0a2f740cf4711b6a882b74efdedb69ed91369"} Nov 21 20:28:23 crc kubenswrapper[4727]: I1121 20:28:23.815978 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-xhxc8" event={"ID":"f8183461-41a5-4d09-aeef-f3d431e6d71b","Type":"ContainerStarted","Data":"ccd9b00a1f4907b41fc0a43b80ee1742986ef1ce6c62b1ab0a9c6abf5507bcd3"} Nov 21 20:28:23 crc kubenswrapper[4727]: I1121 20:28:23.816021 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-xhxc8" event={"ID":"f8183461-41a5-4d09-aeef-f3d431e6d71b","Type":"ContainerStarted","Data":"52f73943947d05ba451a8bfbd257e77b85aa55eaa49befe010c657e5b78bf40d"} Nov 21 20:28:23 crc kubenswrapper[4727]: I1121 20:28:23.823573 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ae35-account-create-tdvf7" event={"ID":"f0422bc7-ea04-4d95-b240-608a1c0d16ec","Type":"ContainerStarted","Data":"c4792424498a4d7c12ad975a1ffa23e0381046c0099a9b4e893bc51e1bbb9c51"} Nov 21 20:28:23 crc kubenswrapper[4727]: I1121 20:28:23.894383 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-9c89d96c5-h6lxw" Nov 21 20:28:23 crc kubenswrapper[4727]: I1121 20:28:23.894457 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-9c89d96c5-h6lxw" Nov 21 20:28:25 crc kubenswrapper[4727]: I1121 20:28:25.055231 4727 generic.go:334] "Generic (PLEG): container finished" podID="57e425f9-b08d-412d-9491-f1ffe7e5d54f" containerID="e71d403f99a706c7f7f28224c7c1725779fc0616cfc86396f02a2e4254c11c2e" exitCode=0 Nov 21 20:28:25 crc kubenswrapper[4727]: I1121 20:28:25.055304 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-2ff4-account-create-ds2tt" 
event={"ID":"57e425f9-b08d-412d-9491-f1ffe7e5d54f","Type":"ContainerDied","Data":"e71d403f99a706c7f7f28224c7c1725779fc0616cfc86396f02a2e4254c11c2e"} Nov 21 20:28:25 crc kubenswrapper[4727]: I1121 20:28:25.057993 4727 generic.go:334] "Generic (PLEG): container finished" podID="93d541d6-cea3-49c4-9f5c-d0484de3fafb" containerID="04da160c7d528faa655831430f71cf90cad4ddac09dcba2d1bbb71dee3c8c54d" exitCode=0 Nov 21 20:28:25 crc kubenswrapper[4727]: I1121 20:28:25.058087 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-005d-account-create-g2f2m" event={"ID":"93d541d6-cea3-49c4-9f5c-d0484de3fafb","Type":"ContainerDied","Data":"04da160c7d528faa655831430f71cf90cad4ddac09dcba2d1bbb71dee3c8c54d"} Nov 21 20:28:25 crc kubenswrapper[4727]: I1121 20:28:25.061091 4727 generic.go:334] "Generic (PLEG): container finished" podID="f8183461-41a5-4d09-aeef-f3d431e6d71b" containerID="ccd9b00a1f4907b41fc0a43b80ee1742986ef1ce6c62b1ab0a9c6abf5507bcd3" exitCode=0 Nov 21 20:28:25 crc kubenswrapper[4727]: I1121 20:28:25.061188 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-xhxc8" event={"ID":"f8183461-41a5-4d09-aeef-f3d431e6d71b","Type":"ContainerDied","Data":"ccd9b00a1f4907b41fc0a43b80ee1742986ef1ce6c62b1ab0a9c6abf5507bcd3"} Nov 21 20:28:25 crc kubenswrapper[4727]: I1121 20:28:25.064503 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-g49lf" event={"ID":"4da2f3f3-a035-4d39-8756-6eab26cef3dc","Type":"ContainerStarted","Data":"b8f0c9890b1c660f63eeb03ee2bb94cb4dfdaf16705e410a2644ae48b11d3fec"} Nov 21 20:28:25 crc kubenswrapper[4727]: I1121 20:28:25.064625 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-688b9f5b49-g49lf" Nov 21 20:28:25 crc kubenswrapper[4727]: I1121 20:28:25.069824 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"242aec5e-01e3-4559-9031-40a5c69c5f0a","Type":"ContainerStarted","Data":"01b66834d59ef6c1f21e8d2c21ddcb3c1278331218ba86dbcd7813be2c0c6a5b"} Nov 21 20:28:25 crc kubenswrapper[4727]: I1121 20:28:25.071925 4727 generic.go:334] "Generic (PLEG): container finished" podID="f0422bc7-ea04-4d95-b240-608a1c0d16ec" containerID="365e5324291bac51a1b24ca68c394eb35d309620a5f5bba3c4b5121f53c50e89" exitCode=0 Nov 21 20:28:25 crc kubenswrapper[4727]: I1121 20:28:25.071979 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ae35-account-create-tdvf7" event={"ID":"f0422bc7-ea04-4d95-b240-608a1c0d16ec","Type":"ContainerDied","Data":"365e5324291bac51a1b24ca68c394eb35d309620a5f5bba3c4b5121f53c50e89"} Nov 21 20:28:25 crc kubenswrapper[4727]: I1121 20:28:25.108185 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.108163815 podStartE2EDuration="6.108163815s" podCreationTimestamp="2025-11-21 20:28:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:28:25.107887739 +0000 UTC m=+1310.294072793" watchObservedRunningTime="2025-11-21 20:28:25.108163815 +0000 UTC m=+1310.294348870" Nov 21 20:28:25 crc kubenswrapper[4727]: I1121 20:28:25.179757 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-688b9f5b49-g49lf" podStartSLOduration=5.179723786 podStartE2EDuration="5.179723786s" podCreationTimestamp="2025-11-21 20:28:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:28:25.158708058 +0000 UTC m=+1310.344893102" watchObservedRunningTime="2025-11-21 20:28:25.179723786 +0000 UTC m=+1310.365908840" Nov 21 20:28:25 crc kubenswrapper[4727]: I1121 20:28:25.706469 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-xhxc8" Nov 21 20:28:25 crc kubenswrapper[4727]: I1121 20:28:25.773947 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8183461-41a5-4d09-aeef-f3d431e6d71b-operator-scripts\") pod \"f8183461-41a5-4d09-aeef-f3d431e6d71b\" (UID: \"f8183461-41a5-4d09-aeef-f3d431e6d71b\") " Nov 21 20:28:25 crc kubenswrapper[4727]: I1121 20:28:25.774237 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qv2h\" (UniqueName: \"kubernetes.io/projected/f8183461-41a5-4d09-aeef-f3d431e6d71b-kube-api-access-6qv2h\") pod \"f8183461-41a5-4d09-aeef-f3d431e6d71b\" (UID: \"f8183461-41a5-4d09-aeef-f3d431e6d71b\") " Nov 21 20:28:25 crc kubenswrapper[4727]: I1121 20:28:25.776051 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8183461-41a5-4d09-aeef-f3d431e6d71b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f8183461-41a5-4d09-aeef-f3d431e6d71b" (UID: "f8183461-41a5-4d09-aeef-f3d431e6d71b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:28:25 crc kubenswrapper[4727]: I1121 20:28:25.798199 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8183461-41a5-4d09-aeef-f3d431e6d71b-kube-api-access-6qv2h" (OuterVolumeSpecName: "kube-api-access-6qv2h") pod "f8183461-41a5-4d09-aeef-f3d431e6d71b" (UID: "f8183461-41a5-4d09-aeef-f3d431e6d71b"). InnerVolumeSpecName "kube-api-access-6qv2h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:28:25 crc kubenswrapper[4727]: I1121 20:28:25.876681 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6qv2h\" (UniqueName: \"kubernetes.io/projected/f8183461-41a5-4d09-aeef-f3d431e6d71b-kube-api-access-6qv2h\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:25 crc kubenswrapper[4727]: I1121 20:28:25.876754 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8183461-41a5-4d09-aeef-f3d431e6d71b-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:26 crc kubenswrapper[4727]: I1121 20:28:26.089543 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-xhxc8" event={"ID":"f8183461-41a5-4d09-aeef-f3d431e6d71b","Type":"ContainerDied","Data":"52f73943947d05ba451a8bfbd257e77b85aa55eaa49befe010c657e5b78bf40d"} Nov 21 20:28:26 crc kubenswrapper[4727]: I1121 20:28:26.089597 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52f73943947d05ba451a8bfbd257e77b85aa55eaa49befe010c657e5b78bf40d" Nov 21 20:28:26 crc kubenswrapper[4727]: I1121 20:28:26.090651 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-xhxc8" Nov 21 20:28:27 crc kubenswrapper[4727]: I1121 20:28:27.157164 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ae35-account-create-tdvf7" event={"ID":"f0422bc7-ea04-4d95-b240-608a1c0d16ec","Type":"ContainerDied","Data":"c4792424498a4d7c12ad975a1ffa23e0381046c0099a9b4e893bc51e1bbb9c51"} Nov 21 20:28:27 crc kubenswrapper[4727]: I1121 20:28:27.157488 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4792424498a4d7c12ad975a1ffa23e0381046c0099a9b4e893bc51e1bbb9c51" Nov 21 20:28:27 crc kubenswrapper[4727]: I1121 20:28:27.160310 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-2ff4-account-create-ds2tt" event={"ID":"57e425f9-b08d-412d-9491-f1ffe7e5d54f","Type":"ContainerDied","Data":"1a3766274006b963ea7fd0cff1c1eeacc4f82efdb22d5a839e4f9d0c33cd05cf"} Nov 21 20:28:27 crc kubenswrapper[4727]: I1121 20:28:27.160333 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a3766274006b963ea7fd0cff1c1eeacc4f82efdb22d5a839e4f9d0c33cd05cf" Nov 21 20:28:27 crc kubenswrapper[4727]: I1121 20:28:27.163353 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-xfpm2" event={"ID":"436b53f0-6ba8-42e8-89eb-f853b1308cbf","Type":"ContainerDied","Data":"1dd0aa634b99ceeb27bbfb01e246e72f4d35d04ff57d766e4752bcfd9815cff0"} Nov 21 20:28:27 crc kubenswrapper[4727]: I1121 20:28:27.163411 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1dd0aa634b99ceeb27bbfb01e246e72f4d35d04ff57d766e4752bcfd9815cff0" Nov 21 20:28:27 crc kubenswrapper[4727]: I1121 20:28:27.166566 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-005d-account-create-g2f2m" event={"ID":"93d541d6-cea3-49c4-9f5c-d0484de3fafb","Type":"ContainerDied","Data":"6069324b7bd7ec47c81ee2084da0a2f740cf4711b6a882b74efdedb69ed91369"} Nov 21 
20:28:27 crc kubenswrapper[4727]: I1121 20:28:27.166596 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6069324b7bd7ec47c81ee2084da0a2f740cf4711b6a882b74efdedb69ed91369" Nov 21 20:28:27 crc kubenswrapper[4727]: I1121 20:28:27.172862 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-26lpd" event={"ID":"21368668-5984-4a3f-915a-06c8a959fefd","Type":"ContainerDied","Data":"0d09bb4a55fe7a0e15903b22e53a031c1c2de5e3395d85858f45d13158fe1b6e"} Nov 21 20:28:27 crc kubenswrapper[4727]: I1121 20:28:27.172903 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d09bb4a55fe7a0e15903b22e53a031c1c2de5e3395d85858f45d13158fe1b6e" Nov 21 20:28:27 crc kubenswrapper[4727]: I1121 20:28:27.280606 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-005d-account-create-g2f2m" Nov 21 20:28:27 crc kubenswrapper[4727]: I1121 20:28:27.345562 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75bcn\" (UniqueName: \"kubernetes.io/projected/93d541d6-cea3-49c4-9f5c-d0484de3fafb-kube-api-access-75bcn\") pod \"93d541d6-cea3-49c4-9f5c-d0484de3fafb\" (UID: \"93d541d6-cea3-49c4-9f5c-d0484de3fafb\") " Nov 21 20:28:27 crc kubenswrapper[4727]: I1121 20:28:27.345663 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93d541d6-cea3-49c4-9f5c-d0484de3fafb-operator-scripts\") pod \"93d541d6-cea3-49c4-9f5c-d0484de3fafb\" (UID: \"93d541d6-cea3-49c4-9f5c-d0484de3fafb\") " Nov 21 20:28:27 crc kubenswrapper[4727]: I1121 20:28:27.349620 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93d541d6-cea3-49c4-9f5c-d0484de3fafb-kube-api-access-75bcn" (OuterVolumeSpecName: "kube-api-access-75bcn") pod "93d541d6-cea3-49c4-9f5c-d0484de3fafb" (UID: 
"93d541d6-cea3-49c4-9f5c-d0484de3fafb"). InnerVolumeSpecName "kube-api-access-75bcn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:28:27 crc kubenswrapper[4727]: I1121 20:28:27.353150 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93d541d6-cea3-49c4-9f5c-d0484de3fafb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "93d541d6-cea3-49c4-9f5c-d0484de3fafb" (UID: "93d541d6-cea3-49c4-9f5c-d0484de3fafb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:28:27 crc kubenswrapper[4727]: I1121 20:28:27.356822 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-26lpd" Nov 21 20:28:27 crc kubenswrapper[4727]: I1121 20:28:27.424467 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-xfpm2" Nov 21 20:28:27 crc kubenswrapper[4727]: I1121 20:28:27.447090 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21368668-5984-4a3f-915a-06c8a959fefd-operator-scripts\") pod \"21368668-5984-4a3f-915a-06c8a959fefd\" (UID: \"21368668-5984-4a3f-915a-06c8a959fefd\") " Nov 21 20:28:27 crc kubenswrapper[4727]: I1121 20:28:27.447333 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9fvps\" (UniqueName: \"kubernetes.io/projected/21368668-5984-4a3f-915a-06c8a959fefd-kube-api-access-9fvps\") pod \"21368668-5984-4a3f-915a-06c8a959fefd\" (UID: \"21368668-5984-4a3f-915a-06c8a959fefd\") " Nov 21 20:28:27 crc kubenswrapper[4727]: I1121 20:28:27.447840 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75bcn\" (UniqueName: \"kubernetes.io/projected/93d541d6-cea3-49c4-9f5c-d0484de3fafb-kube-api-access-75bcn\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:27 crc 
kubenswrapper[4727]: I1121 20:28:27.447986 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93d541d6-cea3-49c4-9f5c-d0484de3fafb-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:27 crc kubenswrapper[4727]: I1121 20:28:27.451214 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21368668-5984-4a3f-915a-06c8a959fefd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "21368668-5984-4a3f-915a-06c8a959fefd" (UID: "21368668-5984-4a3f-915a-06c8a959fefd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:28:27 crc kubenswrapper[4727]: I1121 20:28:27.467134 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-2ff4-account-create-ds2tt" Nov 21 20:28:27 crc kubenswrapper[4727]: I1121 20:28:27.473431 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21368668-5984-4a3f-915a-06c8a959fefd-kube-api-access-9fvps" (OuterVolumeSpecName: "kube-api-access-9fvps") pod "21368668-5984-4a3f-915a-06c8a959fefd" (UID: "21368668-5984-4a3f-915a-06c8a959fefd"). InnerVolumeSpecName "kube-api-access-9fvps". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:28:27 crc kubenswrapper[4727]: I1121 20:28:27.478180 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-ae35-account-create-tdvf7" Nov 21 20:28:27 crc kubenswrapper[4727]: I1121 20:28:27.549049 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2czdf\" (UniqueName: \"kubernetes.io/projected/f0422bc7-ea04-4d95-b240-608a1c0d16ec-kube-api-access-2czdf\") pod \"f0422bc7-ea04-4d95-b240-608a1c0d16ec\" (UID: \"f0422bc7-ea04-4d95-b240-608a1c0d16ec\") " Nov 21 20:28:27 crc kubenswrapper[4727]: I1121 20:28:27.549108 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/436b53f0-6ba8-42e8-89eb-f853b1308cbf-operator-scripts\") pod \"436b53f0-6ba8-42e8-89eb-f853b1308cbf\" (UID: \"436b53f0-6ba8-42e8-89eb-f853b1308cbf\") " Nov 21 20:28:27 crc kubenswrapper[4727]: I1121 20:28:27.549169 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbjtc\" (UniqueName: \"kubernetes.io/projected/57e425f9-b08d-412d-9491-f1ffe7e5d54f-kube-api-access-kbjtc\") pod \"57e425f9-b08d-412d-9491-f1ffe7e5d54f\" (UID: \"57e425f9-b08d-412d-9491-f1ffe7e5d54f\") " Nov 21 20:28:27 crc kubenswrapper[4727]: I1121 20:28:27.549255 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0422bc7-ea04-4d95-b240-608a1c0d16ec-operator-scripts\") pod \"f0422bc7-ea04-4d95-b240-608a1c0d16ec\" (UID: \"f0422bc7-ea04-4d95-b240-608a1c0d16ec\") " Nov 21 20:28:27 crc kubenswrapper[4727]: I1121 20:28:27.549897 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/436b53f0-6ba8-42e8-89eb-f853b1308cbf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "436b53f0-6ba8-42e8-89eb-f853b1308cbf" (UID: "436b53f0-6ba8-42e8-89eb-f853b1308cbf"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:28:27 crc kubenswrapper[4727]: I1121 20:28:27.550191 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0422bc7-ea04-4d95-b240-608a1c0d16ec-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f0422bc7-ea04-4d95-b240-608a1c0d16ec" (UID: "f0422bc7-ea04-4d95-b240-608a1c0d16ec"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:28:27 crc kubenswrapper[4727]: I1121 20:28:27.549350 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4cqb\" (UniqueName: \"kubernetes.io/projected/436b53f0-6ba8-42e8-89eb-f853b1308cbf-kube-api-access-s4cqb\") pod \"436b53f0-6ba8-42e8-89eb-f853b1308cbf\" (UID: \"436b53f0-6ba8-42e8-89eb-f853b1308cbf\") " Nov 21 20:28:27 crc kubenswrapper[4727]: I1121 20:28:27.550269 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57e425f9-b08d-412d-9491-f1ffe7e5d54f-operator-scripts\") pod \"57e425f9-b08d-412d-9491-f1ffe7e5d54f\" (UID: \"57e425f9-b08d-412d-9491-f1ffe7e5d54f\") " Nov 21 20:28:27 crc kubenswrapper[4727]: I1121 20:28:27.551000 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9fvps\" (UniqueName: \"kubernetes.io/projected/21368668-5984-4a3f-915a-06c8a959fefd-kube-api-access-9fvps\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:27 crc kubenswrapper[4727]: I1121 20:28:27.551013 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/436b53f0-6ba8-42e8-89eb-f853b1308cbf-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:27 crc kubenswrapper[4727]: I1121 20:28:27.551023 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/f0422bc7-ea04-4d95-b240-608a1c0d16ec-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:27 crc kubenswrapper[4727]: I1121 20:28:27.551031 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21368668-5984-4a3f-915a-06c8a959fefd-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:27 crc kubenswrapper[4727]: I1121 20:28:27.551419 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57e425f9-b08d-412d-9491-f1ffe7e5d54f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "57e425f9-b08d-412d-9491-f1ffe7e5d54f" (UID: "57e425f9-b08d-412d-9491-f1ffe7e5d54f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:28:27 crc kubenswrapper[4727]: I1121 20:28:27.558644 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57e425f9-b08d-412d-9491-f1ffe7e5d54f-kube-api-access-kbjtc" (OuterVolumeSpecName: "kube-api-access-kbjtc") pod "57e425f9-b08d-412d-9491-f1ffe7e5d54f" (UID: "57e425f9-b08d-412d-9491-f1ffe7e5d54f"). InnerVolumeSpecName "kube-api-access-kbjtc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:28:27 crc kubenswrapper[4727]: I1121 20:28:27.561175 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/436b53f0-6ba8-42e8-89eb-f853b1308cbf-kube-api-access-s4cqb" (OuterVolumeSpecName: "kube-api-access-s4cqb") pod "436b53f0-6ba8-42e8-89eb-f853b1308cbf" (UID: "436b53f0-6ba8-42e8-89eb-f853b1308cbf"). InnerVolumeSpecName "kube-api-access-s4cqb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:28:27 crc kubenswrapper[4727]: I1121 20:28:27.566067 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0422bc7-ea04-4d95-b240-608a1c0d16ec-kube-api-access-2czdf" (OuterVolumeSpecName: "kube-api-access-2czdf") pod "f0422bc7-ea04-4d95-b240-608a1c0d16ec" (UID: "f0422bc7-ea04-4d95-b240-608a1c0d16ec"). InnerVolumeSpecName "kube-api-access-2czdf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:28:27 crc kubenswrapper[4727]: I1121 20:28:27.654310 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57e425f9-b08d-412d-9491-f1ffe7e5d54f-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:27 crc kubenswrapper[4727]: I1121 20:28:27.654354 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2czdf\" (UniqueName: \"kubernetes.io/projected/f0422bc7-ea04-4d95-b240-608a1c0d16ec-kube-api-access-2czdf\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:27 crc kubenswrapper[4727]: I1121 20:28:27.654372 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbjtc\" (UniqueName: \"kubernetes.io/projected/57e425f9-b08d-412d-9491-f1ffe7e5d54f-kube-api-access-kbjtc\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:27 crc kubenswrapper[4727]: I1121 20:28:27.654385 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4cqb\" (UniqueName: \"kubernetes.io/projected/436b53f0-6ba8-42e8-89eb-f853b1308cbf-kube-api-access-s4cqb\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:27 crc kubenswrapper[4727]: I1121 20:28:27.817294 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 21 20:28:27 crc kubenswrapper[4727]: I1121 20:28:27.817360 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/glance-default-external-api-0" Nov 21 20:28:27 crc kubenswrapper[4727]: I1121 20:28:27.865124 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 21 20:28:27 crc kubenswrapper[4727]: I1121 20:28:27.887918 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 21 20:28:28 crc kubenswrapper[4727]: I1121 20:28:28.188421 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-66b5cc9484-l2wsd" event={"ID":"5f22c01e-e191-42ab-8e0e-f678abc961b2","Type":"ContainerStarted","Data":"b43873a7b03f03aa9d59b39061f983e6c9338fd44b22a32d0bf7ac1a4ce5143c"} Nov 21 20:28:28 crc kubenswrapper[4727]: I1121 20:28:28.190320 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-66b5cc9484-l2wsd" Nov 21 20:28:28 crc kubenswrapper[4727]: I1121 20:28:28.192134 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6c5d5d79b4-sdspk" event={"ID":"2b697061-39b3-440d-bc9c-e31848742e7d","Type":"ContainerStarted","Data":"3cad772193ce5e44ab8e16cd7600a46b08f2c276072e2312dbde7e88bdf253c2"} Nov 21 20:28:28 crc kubenswrapper[4727]: I1121 20:28:28.193119 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-6c5d5d79b4-sdspk" Nov 21 20:28:28 crc kubenswrapper[4727]: I1121 20:28:28.197602 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-xfpm2" Nov 21 20:28:28 crc kubenswrapper[4727]: I1121 20:28:28.197654 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-ae35-account-create-tdvf7" Nov 21 20:28:28 crc kubenswrapper[4727]: I1121 20:28:28.197714 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"413b8645-2f23-45ae-803d-bc0c140ad29f","Type":"ContainerStarted","Data":"0edc1c507031c90514586e18f1e1353ff338acc50b5f2e15f80cf4d491fd205d"} Nov 21 20:28:28 crc kubenswrapper[4727]: I1121 20:28:28.197782 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-005d-account-create-g2f2m" Nov 21 20:28:28 crc kubenswrapper[4727]: I1121 20:28:28.198173 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-26lpd" Nov 21 20:28:28 crc kubenswrapper[4727]: I1121 20:28:28.199199 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-2ff4-account-create-ds2tt" Nov 21 20:28:28 crc kubenswrapper[4727]: I1121 20:28:28.200214 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="413b8645-2f23-45ae-803d-bc0c140ad29f" containerName="ceilometer-central-agent" containerID="cri-o://57a439357d6da7ce7f4900417efb34bceaa46c480206725ededd77693bfe6a23" gracePeriod=30 Nov 21 20:28:28 crc kubenswrapper[4727]: I1121 20:28:28.202076 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 21 20:28:28 crc kubenswrapper[4727]: I1121 20:28:28.202147 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 21 20:28:28 crc kubenswrapper[4727]: I1121 20:28:28.202149 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="413b8645-2f23-45ae-803d-bc0c140ad29f" containerName="sg-core" containerID="cri-o://4a603c0fa8cc3c9131ba9ed4a536b15484cb7945546e61bc46b106bee070b255" 
gracePeriod=30 Nov 21 20:28:28 crc kubenswrapper[4727]: I1121 20:28:28.202250 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="413b8645-2f23-45ae-803d-bc0c140ad29f" containerName="proxy-httpd" containerID="cri-o://0edc1c507031c90514586e18f1e1353ff338acc50b5f2e15f80cf4d491fd205d" gracePeriod=30 Nov 21 20:28:28 crc kubenswrapper[4727]: I1121 20:28:28.202268 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="413b8645-2f23-45ae-803d-bc0c140ad29f" containerName="ceilometer-notification-agent" containerID="cri-o://a6bc975e9386bea7bf3b002da225cb31f8ad2a23845416069c2a2c70bf4f1fb0" gracePeriod=30 Nov 21 20:28:28 crc kubenswrapper[4727]: I1121 20:28:28.225538 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-66b5cc9484-l2wsd" podStartSLOduration=3.673249636 podStartE2EDuration="8.225520191s" podCreationTimestamp="2025-11-21 20:28:20 +0000 UTC" firstStartedPulling="2025-11-21 20:28:22.51836102 +0000 UTC m=+1307.704546064" lastFinishedPulling="2025-11-21 20:28:27.070631585 +0000 UTC m=+1312.256816619" observedRunningTime="2025-11-21 20:28:28.220660974 +0000 UTC m=+1313.406846018" watchObservedRunningTime="2025-11-21 20:28:28.225520191 +0000 UTC m=+1313.411705235" Nov 21 20:28:28 crc kubenswrapper[4727]: I1121 20:28:28.258158 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-6c5d5d79b4-sdspk" podStartSLOduration=4.078230803 podStartE2EDuration="8.258136691s" podCreationTimestamp="2025-11-21 20:28:20 +0000 UTC" firstStartedPulling="2025-11-21 20:28:22.892115881 +0000 UTC m=+1308.078300925" lastFinishedPulling="2025-11-21 20:28:27.072021759 +0000 UTC m=+1312.258206813" observedRunningTime="2025-11-21 20:28:28.250643639 +0000 UTC m=+1313.436828683" watchObservedRunningTime="2025-11-21 20:28:28.258136691 +0000 UTC m=+1313.444321735" Nov 21 20:28:28 crc 
kubenswrapper[4727]: I1121 20:28:28.285328 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.676464447 podStartE2EDuration="11.285312198s" podCreationTimestamp="2025-11-21 20:28:17 +0000 UTC" firstStartedPulling="2025-11-21 20:28:18.463559468 +0000 UTC m=+1303.649744512" lastFinishedPulling="2025-11-21 20:28:27.072407219 +0000 UTC m=+1312.258592263" observedRunningTime="2025-11-21 20:28:28.277384035 +0000 UTC m=+1313.463569079" watchObservedRunningTime="2025-11-21 20:28:28.285312198 +0000 UTC m=+1313.471497242" Nov 21 20:28:29 crc kubenswrapper[4727]: I1121 20:28:29.036947 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Nov 21 20:28:29 crc kubenswrapper[4727]: I1121 20:28:29.238870 4727 generic.go:334] "Generic (PLEG): container finished" podID="413b8645-2f23-45ae-803d-bc0c140ad29f" containerID="0edc1c507031c90514586e18f1e1353ff338acc50b5f2e15f80cf4d491fd205d" exitCode=0 Nov 21 20:28:29 crc kubenswrapper[4727]: I1121 20:28:29.238902 4727 generic.go:334] "Generic (PLEG): container finished" podID="413b8645-2f23-45ae-803d-bc0c140ad29f" containerID="4a603c0fa8cc3c9131ba9ed4a536b15484cb7945546e61bc46b106bee070b255" exitCode=2 Nov 21 20:28:29 crc kubenswrapper[4727]: I1121 20:28:29.238912 4727 generic.go:334] "Generic (PLEG): container finished" podID="413b8645-2f23-45ae-803d-bc0c140ad29f" containerID="a6bc975e9386bea7bf3b002da225cb31f8ad2a23845416069c2a2c70bf4f1fb0" exitCode=0 Nov 21 20:28:29 crc kubenswrapper[4727]: I1121 20:28:29.239339 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"413b8645-2f23-45ae-803d-bc0c140ad29f","Type":"ContainerDied","Data":"0edc1c507031c90514586e18f1e1353ff338acc50b5f2e15f80cf4d491fd205d"} Nov 21 20:28:29 crc kubenswrapper[4727]: I1121 20:28:29.239408 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"413b8645-2f23-45ae-803d-bc0c140ad29f","Type":"ContainerDied","Data":"4a603c0fa8cc3c9131ba9ed4a536b15484cb7945546e61bc46b106bee070b255"} Nov 21 20:28:29 crc kubenswrapper[4727]: I1121 20:28:29.239423 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"413b8645-2f23-45ae-803d-bc0c140ad29f","Type":"ContainerDied","Data":"a6bc975e9386bea7bf3b002da225cb31f8ad2a23845416069c2a2c70bf4f1fb0"} Nov 21 20:28:29 crc kubenswrapper[4727]: I1121 20:28:29.461163 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-6974c7d5d8-thpd4"] Nov 21 20:28:29 crc kubenswrapper[4727]: E1121 20:28:29.461936 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21368668-5984-4a3f-915a-06c8a959fefd" containerName="mariadb-database-create" Nov 21 20:28:29 crc kubenswrapper[4727]: I1121 20:28:29.461979 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="21368668-5984-4a3f-915a-06c8a959fefd" containerName="mariadb-database-create" Nov 21 20:28:29 crc kubenswrapper[4727]: E1121 20:28:29.462012 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="436b53f0-6ba8-42e8-89eb-f853b1308cbf" containerName="mariadb-database-create" Nov 21 20:28:29 crc kubenswrapper[4727]: I1121 20:28:29.462021 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="436b53f0-6ba8-42e8-89eb-f853b1308cbf" containerName="mariadb-database-create" Nov 21 20:28:29 crc kubenswrapper[4727]: E1121 20:28:29.462044 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0422bc7-ea04-4d95-b240-608a1c0d16ec" containerName="mariadb-account-create" Nov 21 20:28:29 crc kubenswrapper[4727]: I1121 20:28:29.462052 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0422bc7-ea04-4d95-b240-608a1c0d16ec" containerName="mariadb-account-create" Nov 21 20:28:29 crc kubenswrapper[4727]: E1121 20:28:29.462067 4727 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="57e425f9-b08d-412d-9491-f1ffe7e5d54f" containerName="mariadb-account-create" Nov 21 20:28:29 crc kubenswrapper[4727]: I1121 20:28:29.462074 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="57e425f9-b08d-412d-9491-f1ffe7e5d54f" containerName="mariadb-account-create" Nov 21 20:28:29 crc kubenswrapper[4727]: E1121 20:28:29.462085 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8183461-41a5-4d09-aeef-f3d431e6d71b" containerName="mariadb-database-create" Nov 21 20:28:29 crc kubenswrapper[4727]: I1121 20:28:29.462092 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8183461-41a5-4d09-aeef-f3d431e6d71b" containerName="mariadb-database-create" Nov 21 20:28:29 crc kubenswrapper[4727]: E1121 20:28:29.462107 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93d541d6-cea3-49c4-9f5c-d0484de3fafb" containerName="mariadb-account-create" Nov 21 20:28:29 crc kubenswrapper[4727]: I1121 20:28:29.462116 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="93d541d6-cea3-49c4-9f5c-d0484de3fafb" containerName="mariadb-account-create" Nov 21 20:28:29 crc kubenswrapper[4727]: I1121 20:28:29.462416 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="57e425f9-b08d-412d-9491-f1ffe7e5d54f" containerName="mariadb-account-create" Nov 21 20:28:29 crc kubenswrapper[4727]: I1121 20:28:29.462439 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8183461-41a5-4d09-aeef-f3d431e6d71b" containerName="mariadb-database-create" Nov 21 20:28:29 crc kubenswrapper[4727]: I1121 20:28:29.462455 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0422bc7-ea04-4d95-b240-608a1c0d16ec" containerName="mariadb-account-create" Nov 21 20:28:29 crc kubenswrapper[4727]: I1121 20:28:29.462471 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="21368668-5984-4a3f-915a-06c8a959fefd" containerName="mariadb-database-create" Nov 21 20:28:29 crc kubenswrapper[4727]: I1121 
20:28:29.462486 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="436b53f0-6ba8-42e8-89eb-f853b1308cbf" containerName="mariadb-database-create" Nov 21 20:28:29 crc kubenswrapper[4727]: I1121 20:28:29.462501 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="93d541d6-cea3-49c4-9f5c-d0484de3fafb" containerName="mariadb-account-create" Nov 21 20:28:29 crc kubenswrapper[4727]: I1121 20:28:29.463506 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6974c7d5d8-thpd4" Nov 21 20:28:29 crc kubenswrapper[4727]: I1121 20:28:29.481532 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-6974c7d5d8-thpd4"] Nov 21 20:28:29 crc kubenswrapper[4727]: I1121 20:28:29.510544 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7z79\" (UniqueName: \"kubernetes.io/projected/c702948c-e12e-47a2-a3b4-4d768c50ab5f-kube-api-access-c7z79\") pod \"heat-engine-6974c7d5d8-thpd4\" (UID: \"c702948c-e12e-47a2-a3b4-4d768c50ab5f\") " pod="openstack/heat-engine-6974c7d5d8-thpd4" Nov 21 20:28:29 crc kubenswrapper[4727]: I1121 20:28:29.510713 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c702948c-e12e-47a2-a3b4-4d768c50ab5f-config-data\") pod \"heat-engine-6974c7d5d8-thpd4\" (UID: \"c702948c-e12e-47a2-a3b4-4d768c50ab5f\") " pod="openstack/heat-engine-6974c7d5d8-thpd4" Nov 21 20:28:29 crc kubenswrapper[4727]: I1121 20:28:29.510788 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c702948c-e12e-47a2-a3b4-4d768c50ab5f-config-data-custom\") pod \"heat-engine-6974c7d5d8-thpd4\" (UID: \"c702948c-e12e-47a2-a3b4-4d768c50ab5f\") " pod="openstack/heat-engine-6974c7d5d8-thpd4" Nov 21 20:28:29 crc kubenswrapper[4727]: 
I1121 20:28:29.510855 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c702948c-e12e-47a2-a3b4-4d768c50ab5f-combined-ca-bundle\") pod \"heat-engine-6974c7d5d8-thpd4\" (UID: \"c702948c-e12e-47a2-a3b4-4d768c50ab5f\") " pod="openstack/heat-engine-6974c7d5d8-thpd4" Nov 21 20:28:29 crc kubenswrapper[4727]: I1121 20:28:29.554150 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-847b7c5b5b-jfkv4"] Nov 21 20:28:29 crc kubenswrapper[4727]: I1121 20:28:29.559203 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-847b7c5b5b-jfkv4" Nov 21 20:28:29 crc kubenswrapper[4727]: I1121 20:28:29.562651 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-6c7db4b498-hshbj"] Nov 21 20:28:29 crc kubenswrapper[4727]: I1121 20:28:29.564225 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6c7db4b498-hshbj" Nov 21 20:28:29 crc kubenswrapper[4727]: I1121 20:28:29.592684 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-847b7c5b5b-jfkv4"] Nov 21 20:28:29 crc kubenswrapper[4727]: I1121 20:28:29.604450 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6c7db4b498-hshbj"] Nov 21 20:28:29 crc kubenswrapper[4727]: I1121 20:28:29.612988 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99f58971-6e29-4870-b6b4-4768e2676799-config-data\") pod \"heat-cfnapi-847b7c5b5b-jfkv4\" (UID: \"99f58971-6e29-4870-b6b4-4768e2676799\") " pod="openstack/heat-cfnapi-847b7c5b5b-jfkv4" Nov 21 20:28:29 crc kubenswrapper[4727]: I1121 20:28:29.613035 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1d8a9a27-c894-4a70-9a9b-bd1089684c8f-combined-ca-bundle\") pod \"heat-api-6c7db4b498-hshbj\" (UID: \"1d8a9a27-c894-4a70-9a9b-bd1089684c8f\") " pod="openstack/heat-api-6c7db4b498-hshbj" Nov 21 20:28:29 crc kubenswrapper[4727]: I1121 20:28:29.613096 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c702948c-e12e-47a2-a3b4-4d768c50ab5f-config-data\") pod \"heat-engine-6974c7d5d8-thpd4\" (UID: \"c702948c-e12e-47a2-a3b4-4d768c50ab5f\") " pod="openstack/heat-engine-6974c7d5d8-thpd4" Nov 21 20:28:29 crc kubenswrapper[4727]: I1121 20:28:29.613116 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/99f58971-6e29-4870-b6b4-4768e2676799-config-data-custom\") pod \"heat-cfnapi-847b7c5b5b-jfkv4\" (UID: \"99f58971-6e29-4870-b6b4-4768e2676799\") " pod="openstack/heat-cfnapi-847b7c5b5b-jfkv4" Nov 21 20:28:29 crc kubenswrapper[4727]: I1121 20:28:29.613163 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1d8a9a27-c894-4a70-9a9b-bd1089684c8f-config-data-custom\") pod \"heat-api-6c7db4b498-hshbj\" (UID: \"1d8a9a27-c894-4a70-9a9b-bd1089684c8f\") " pod="openstack/heat-api-6c7db4b498-hshbj" Nov 21 20:28:29 crc kubenswrapper[4727]: I1121 20:28:29.613181 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzw5l\" (UniqueName: \"kubernetes.io/projected/1d8a9a27-c894-4a70-9a9b-bd1089684c8f-kube-api-access-bzw5l\") pod \"heat-api-6c7db4b498-hshbj\" (UID: \"1d8a9a27-c894-4a70-9a9b-bd1089684c8f\") " pod="openstack/heat-api-6c7db4b498-hshbj" Nov 21 20:28:29 crc kubenswrapper[4727]: I1121 20:28:29.613218 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/c702948c-e12e-47a2-a3b4-4d768c50ab5f-config-data-custom\") pod \"heat-engine-6974c7d5d8-thpd4\" (UID: \"c702948c-e12e-47a2-a3b4-4d768c50ab5f\") " pod="openstack/heat-engine-6974c7d5d8-thpd4" Nov 21 20:28:29 crc kubenswrapper[4727]: I1121 20:28:29.613277 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxz4x\" (UniqueName: \"kubernetes.io/projected/99f58971-6e29-4870-b6b4-4768e2676799-kube-api-access-gxz4x\") pod \"heat-cfnapi-847b7c5b5b-jfkv4\" (UID: \"99f58971-6e29-4870-b6b4-4768e2676799\") " pod="openstack/heat-cfnapi-847b7c5b5b-jfkv4" Nov 21 20:28:29 crc kubenswrapper[4727]: I1121 20:28:29.613304 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c702948c-e12e-47a2-a3b4-4d768c50ab5f-combined-ca-bundle\") pod \"heat-engine-6974c7d5d8-thpd4\" (UID: \"c702948c-e12e-47a2-a3b4-4d768c50ab5f\") " pod="openstack/heat-engine-6974c7d5d8-thpd4" Nov 21 20:28:29 crc kubenswrapper[4727]: I1121 20:28:29.613321 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d8a9a27-c894-4a70-9a9b-bd1089684c8f-config-data\") pod \"heat-api-6c7db4b498-hshbj\" (UID: \"1d8a9a27-c894-4a70-9a9b-bd1089684c8f\") " pod="openstack/heat-api-6c7db4b498-hshbj" Nov 21 20:28:29 crc kubenswrapper[4727]: I1121 20:28:29.613421 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7z79\" (UniqueName: \"kubernetes.io/projected/c702948c-e12e-47a2-a3b4-4d768c50ab5f-kube-api-access-c7z79\") pod \"heat-engine-6974c7d5d8-thpd4\" (UID: \"c702948c-e12e-47a2-a3b4-4d768c50ab5f\") " pod="openstack/heat-engine-6974c7d5d8-thpd4" Nov 21 20:28:29 crc kubenswrapper[4727]: I1121 20:28:29.613451 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99f58971-6e29-4870-b6b4-4768e2676799-combined-ca-bundle\") pod \"heat-cfnapi-847b7c5b5b-jfkv4\" (UID: \"99f58971-6e29-4870-b6b4-4768e2676799\") " pod="openstack/heat-cfnapi-847b7c5b5b-jfkv4" Nov 21 20:28:29 crc kubenswrapper[4727]: I1121 20:28:29.624781 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c702948c-e12e-47a2-a3b4-4d768c50ab5f-combined-ca-bundle\") pod \"heat-engine-6974c7d5d8-thpd4\" (UID: \"c702948c-e12e-47a2-a3b4-4d768c50ab5f\") " pod="openstack/heat-engine-6974c7d5d8-thpd4" Nov 21 20:28:29 crc kubenswrapper[4727]: I1121 20:28:29.635276 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c702948c-e12e-47a2-a3b4-4d768c50ab5f-config-data-custom\") pod \"heat-engine-6974c7d5d8-thpd4\" (UID: \"c702948c-e12e-47a2-a3b4-4d768c50ab5f\") " pod="openstack/heat-engine-6974c7d5d8-thpd4" Nov 21 20:28:29 crc kubenswrapper[4727]: I1121 20:28:29.636893 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c702948c-e12e-47a2-a3b4-4d768c50ab5f-config-data\") pod \"heat-engine-6974c7d5d8-thpd4\" (UID: \"c702948c-e12e-47a2-a3b4-4d768c50ab5f\") " pod="openstack/heat-engine-6974c7d5d8-thpd4" Nov 21 20:28:29 crc kubenswrapper[4727]: I1121 20:28:29.637742 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7z79\" (UniqueName: \"kubernetes.io/projected/c702948c-e12e-47a2-a3b4-4d768c50ab5f-kube-api-access-c7z79\") pod \"heat-engine-6974c7d5d8-thpd4\" (UID: \"c702948c-e12e-47a2-a3b4-4d768c50ab5f\") " pod="openstack/heat-engine-6974c7d5d8-thpd4" Nov 21 20:28:29 crc kubenswrapper[4727]: I1121 20:28:29.715548 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/99f58971-6e29-4870-b6b4-4768e2676799-config-data\") pod \"heat-cfnapi-847b7c5b5b-jfkv4\" (UID: \"99f58971-6e29-4870-b6b4-4768e2676799\") " pod="openstack/heat-cfnapi-847b7c5b5b-jfkv4" Nov 21 20:28:29 crc kubenswrapper[4727]: I1121 20:28:29.715597 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d8a9a27-c894-4a70-9a9b-bd1089684c8f-combined-ca-bundle\") pod \"heat-api-6c7db4b498-hshbj\" (UID: \"1d8a9a27-c894-4a70-9a9b-bd1089684c8f\") " pod="openstack/heat-api-6c7db4b498-hshbj" Nov 21 20:28:29 crc kubenswrapper[4727]: I1121 20:28:29.715645 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/99f58971-6e29-4870-b6b4-4768e2676799-config-data-custom\") pod \"heat-cfnapi-847b7c5b5b-jfkv4\" (UID: \"99f58971-6e29-4870-b6b4-4768e2676799\") " pod="openstack/heat-cfnapi-847b7c5b5b-jfkv4" Nov 21 20:28:29 crc kubenswrapper[4727]: I1121 20:28:29.715673 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1d8a9a27-c894-4a70-9a9b-bd1089684c8f-config-data-custom\") pod \"heat-api-6c7db4b498-hshbj\" (UID: \"1d8a9a27-c894-4a70-9a9b-bd1089684c8f\") " pod="openstack/heat-api-6c7db4b498-hshbj" Nov 21 20:28:29 crc kubenswrapper[4727]: I1121 20:28:29.715693 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzw5l\" (UniqueName: \"kubernetes.io/projected/1d8a9a27-c894-4a70-9a9b-bd1089684c8f-kube-api-access-bzw5l\") pod \"heat-api-6c7db4b498-hshbj\" (UID: \"1d8a9a27-c894-4a70-9a9b-bd1089684c8f\") " pod="openstack/heat-api-6c7db4b498-hshbj" Nov 21 20:28:29 crc kubenswrapper[4727]: I1121 20:28:29.715755 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxz4x\" (UniqueName: 
\"kubernetes.io/projected/99f58971-6e29-4870-b6b4-4768e2676799-kube-api-access-gxz4x\") pod \"heat-cfnapi-847b7c5b5b-jfkv4\" (UID: \"99f58971-6e29-4870-b6b4-4768e2676799\") " pod="openstack/heat-cfnapi-847b7c5b5b-jfkv4" Nov 21 20:28:29 crc kubenswrapper[4727]: I1121 20:28:29.715778 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d8a9a27-c894-4a70-9a9b-bd1089684c8f-config-data\") pod \"heat-api-6c7db4b498-hshbj\" (UID: \"1d8a9a27-c894-4a70-9a9b-bd1089684c8f\") " pod="openstack/heat-api-6c7db4b498-hshbj" Nov 21 20:28:29 crc kubenswrapper[4727]: I1121 20:28:29.715842 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99f58971-6e29-4870-b6b4-4768e2676799-combined-ca-bundle\") pod \"heat-cfnapi-847b7c5b5b-jfkv4\" (UID: \"99f58971-6e29-4870-b6b4-4768e2676799\") " pod="openstack/heat-cfnapi-847b7c5b5b-jfkv4" Nov 21 20:28:29 crc kubenswrapper[4727]: I1121 20:28:29.720486 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99f58971-6e29-4870-b6b4-4768e2676799-combined-ca-bundle\") pod \"heat-cfnapi-847b7c5b5b-jfkv4\" (UID: \"99f58971-6e29-4870-b6b4-4768e2676799\") " pod="openstack/heat-cfnapi-847b7c5b5b-jfkv4" Nov 21 20:28:29 crc kubenswrapper[4727]: I1121 20:28:29.721104 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1d8a9a27-c894-4a70-9a9b-bd1089684c8f-config-data-custom\") pod \"heat-api-6c7db4b498-hshbj\" (UID: \"1d8a9a27-c894-4a70-9a9b-bd1089684c8f\") " pod="openstack/heat-api-6c7db4b498-hshbj" Nov 21 20:28:29 crc kubenswrapper[4727]: I1121 20:28:29.723232 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d8a9a27-c894-4a70-9a9b-bd1089684c8f-config-data\") pod 
\"heat-api-6c7db4b498-hshbj\" (UID: \"1d8a9a27-c894-4a70-9a9b-bd1089684c8f\") " pod="openstack/heat-api-6c7db4b498-hshbj" Nov 21 20:28:29 crc kubenswrapper[4727]: I1121 20:28:29.723235 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/99f58971-6e29-4870-b6b4-4768e2676799-config-data-custom\") pod \"heat-cfnapi-847b7c5b5b-jfkv4\" (UID: \"99f58971-6e29-4870-b6b4-4768e2676799\") " pod="openstack/heat-cfnapi-847b7c5b5b-jfkv4" Nov 21 20:28:29 crc kubenswrapper[4727]: I1121 20:28:29.723576 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d8a9a27-c894-4a70-9a9b-bd1089684c8f-combined-ca-bundle\") pod \"heat-api-6c7db4b498-hshbj\" (UID: \"1d8a9a27-c894-4a70-9a9b-bd1089684c8f\") " pod="openstack/heat-api-6c7db4b498-hshbj" Nov 21 20:28:29 crc kubenswrapper[4727]: I1121 20:28:29.732877 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99f58971-6e29-4870-b6b4-4768e2676799-config-data\") pod \"heat-cfnapi-847b7c5b5b-jfkv4\" (UID: \"99f58971-6e29-4870-b6b4-4768e2676799\") " pod="openstack/heat-cfnapi-847b7c5b5b-jfkv4" Nov 21 20:28:29 crc kubenswrapper[4727]: I1121 20:28:29.734919 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxz4x\" (UniqueName: \"kubernetes.io/projected/99f58971-6e29-4870-b6b4-4768e2676799-kube-api-access-gxz4x\") pod \"heat-cfnapi-847b7c5b5b-jfkv4\" (UID: \"99f58971-6e29-4870-b6b4-4768e2676799\") " pod="openstack/heat-cfnapi-847b7c5b5b-jfkv4" Nov 21 20:28:29 crc kubenswrapper[4727]: I1121 20:28:29.740677 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzw5l\" (UniqueName: \"kubernetes.io/projected/1d8a9a27-c894-4a70-9a9b-bd1089684c8f-kube-api-access-bzw5l\") pod \"heat-api-6c7db4b498-hshbj\" (UID: \"1d8a9a27-c894-4a70-9a9b-bd1089684c8f\") 
" pod="openstack/heat-api-6c7db4b498-hshbj" Nov 21 20:28:29 crc kubenswrapper[4727]: I1121 20:28:29.791539 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6974c7d5d8-thpd4" Nov 21 20:28:29 crc kubenswrapper[4727]: I1121 20:28:29.839221 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 21 20:28:29 crc kubenswrapper[4727]: I1121 20:28:29.839271 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 21 20:28:29 crc kubenswrapper[4727]: I1121 20:28:29.876588 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 21 20:28:29 crc kubenswrapper[4727]: I1121 20:28:29.892618 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-847b7c5b5b-jfkv4" Nov 21 20:28:29 crc kubenswrapper[4727]: I1121 20:28:29.897812 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6c7db4b498-hshbj" Nov 21 20:28:29 crc kubenswrapper[4727]: I1121 20:28:29.934445 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 21 20:28:30 crc kubenswrapper[4727]: I1121 20:28:30.251533 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 21 20:28:30 crc kubenswrapper[4727]: I1121 20:28:30.252282 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 21 20:28:30 crc kubenswrapper[4727]: I1121 20:28:30.479012 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-6974c7d5d8-thpd4"] Nov 21 20:28:30 crc kubenswrapper[4727]: I1121 20:28:30.608198 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6c7db4b498-hshbj"] Nov 21 20:28:30 crc kubenswrapper[4727]: I1121 20:28:30.777675 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-847b7c5b5b-jfkv4"] Nov 21 20:28:30 crc kubenswrapper[4727]: W1121 20:28:30.783880 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod99f58971_6e29_4870_b6b4_4768e2676799.slice/crio-ebf2705fb13ad8fea372f817918d2ae2ca6c180c34ce340d8c604fedae7f2b0c WatchSource:0}: Error finding container ebf2705fb13ad8fea372f817918d2ae2ca6c180c34ce340d8c604fedae7f2b0c: Status 404 returned error can't find the container with id ebf2705fb13ad8fea372f817918d2ae2ca6c180c34ce340d8c604fedae7f2b0c Nov 21 20:28:30 crc kubenswrapper[4727]: I1121 20:28:30.880982 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 21 20:28:30 crc kubenswrapper[4727]: I1121 20:28:30.881351 4727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 21 20:28:30 crc 
kubenswrapper[4727]: I1121 20:28:30.881833 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 21 20:28:31 crc kubenswrapper[4727]: I1121 20:28:31.263993 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6c7db4b498-hshbj" event={"ID":"1d8a9a27-c894-4a70-9a9b-bd1089684c8f","Type":"ContainerStarted","Data":"002ea077af22a43a01413b788718a881f07bff6d4f8fb81c49a4a08958cb6a6f"} Nov 21 20:28:31 crc kubenswrapper[4727]: I1121 20:28:31.264032 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6c7db4b498-hshbj" event={"ID":"1d8a9a27-c894-4a70-9a9b-bd1089684c8f","Type":"ContainerStarted","Data":"2cdc07fa3e791a8367ffce04417b92aedcde12d37b56713ec2fe7e91ea072a9b"} Nov 21 20:28:31 crc kubenswrapper[4727]: I1121 20:28:31.264063 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-6c7db4b498-hshbj" Nov 21 20:28:31 crc kubenswrapper[4727]: I1121 20:28:31.266385 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6974c7d5d8-thpd4" event={"ID":"c702948c-e12e-47a2-a3b4-4d768c50ab5f","Type":"ContainerStarted","Data":"b73962e9affac724b5d922238a68fd5a4caa7f1ceb51c830e57734b3f013658f"} Nov 21 20:28:31 crc kubenswrapper[4727]: I1121 20:28:31.266420 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6974c7d5d8-thpd4" event={"ID":"c702948c-e12e-47a2-a3b4-4d768c50ab5f","Type":"ContainerStarted","Data":"68253a82991d8808430f12430a15875b4e4ea4afcd896bd74edd5f541d718f24"} Nov 21 20:28:31 crc kubenswrapper[4727]: I1121 20:28:31.266842 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-6974c7d5d8-thpd4" Nov 21 20:28:31 crc kubenswrapper[4727]: I1121 20:28:31.268638 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-847b7c5b5b-jfkv4" 
event={"ID":"99f58971-6e29-4870-b6b4-4768e2676799","Type":"ContainerStarted","Data":"55b37719bb8d0d0ff24b9d197a16eac9b51ce5dc12454fe5c5febf2d3687bb91"} Nov 21 20:28:31 crc kubenswrapper[4727]: I1121 20:28:31.268678 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-847b7c5b5b-jfkv4" event={"ID":"99f58971-6e29-4870-b6b4-4768e2676799","Type":"ContainerStarted","Data":"ebf2705fb13ad8fea372f817918d2ae2ca6c180c34ce340d8c604fedae7f2b0c"} Nov 21 20:28:31 crc kubenswrapper[4727]: I1121 20:28:31.283099 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-688b9f5b49-g49lf" Nov 21 20:28:31 crc kubenswrapper[4727]: I1121 20:28:31.294750 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-6c7db4b498-hshbj" podStartSLOduration=2.294734362 podStartE2EDuration="2.294734362s" podCreationTimestamp="2025-11-21 20:28:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:28:31.28264152 +0000 UTC m=+1316.468826564" watchObservedRunningTime="2025-11-21 20:28:31.294734362 +0000 UTC m=+1316.480919406" Nov 21 20:28:31 crc kubenswrapper[4727]: I1121 20:28:31.323076 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-847b7c5b5b-jfkv4" podStartSLOduration=2.323055888 podStartE2EDuration="2.323055888s" podCreationTimestamp="2025-11-21 20:28:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:28:31.301876075 +0000 UTC m=+1316.488061119" watchObservedRunningTime="2025-11-21 20:28:31.323055888 +0000 UTC m=+1316.509240932" Nov 21 20:28:31 crc kubenswrapper[4727]: I1121 20:28:31.374761 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-6974c7d5d8-thpd4" podStartSLOduration=2.374736698 
podStartE2EDuration="2.374736698s" podCreationTimestamp="2025-11-21 20:28:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:28:31.337952038 +0000 UTC m=+1316.524137102" watchObservedRunningTime="2025-11-21 20:28:31.374736698 +0000 UTC m=+1316.560921742" Nov 21 20:28:31 crc kubenswrapper[4727]: I1121 20:28:31.466493 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-49tk2"] Nov 21 20:28:31 crc kubenswrapper[4727]: I1121 20:28:31.466739 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6578955fd5-49tk2" podUID="44ac5098-bcbb-4981-804e-3da5706fa3cb" containerName="dnsmasq-dns" containerID="cri-o://c649f81dc3ba39d640f0b2c46623595ca63eb68c676958a35bc738320ce4ae88" gracePeriod=10 Nov 21 20:28:31 crc kubenswrapper[4727]: I1121 20:28:31.713255 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qh8s2"] Nov 21 20:28:31 crc kubenswrapper[4727]: I1121 20:28:31.715367 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-qh8s2" Nov 21 20:28:31 crc kubenswrapper[4727]: I1121 20:28:31.728524 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 21 20:28:31 crc kubenswrapper[4727]: I1121 20:28:31.728580 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Nov 21 20:28:31 crc kubenswrapper[4727]: I1121 20:28:31.728613 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-nnnnw" Nov 21 20:28:31 crc kubenswrapper[4727]: I1121 20:28:31.752996 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qh8s2"] Nov 21 20:28:31 crc kubenswrapper[4727]: I1121 20:28:31.905470 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ff94a3a-a5ae-42b5-a316-e536cf0d3eda-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-qh8s2\" (UID: \"7ff94a3a-a5ae-42b5-a316-e536cf0d3eda\") " pod="openstack/nova-cell0-conductor-db-sync-qh8s2" Nov 21 20:28:31 crc kubenswrapper[4727]: I1121 20:28:31.905540 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ff94a3a-a5ae-42b5-a316-e536cf0d3eda-config-data\") pod \"nova-cell0-conductor-db-sync-qh8s2\" (UID: \"7ff94a3a-a5ae-42b5-a316-e536cf0d3eda\") " pod="openstack/nova-cell0-conductor-db-sync-qh8s2" Nov 21 20:28:31 crc kubenswrapper[4727]: I1121 20:28:31.905562 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ff94a3a-a5ae-42b5-a316-e536cf0d3eda-scripts\") pod \"nova-cell0-conductor-db-sync-qh8s2\" (UID: \"7ff94a3a-a5ae-42b5-a316-e536cf0d3eda\") " 
pod="openstack/nova-cell0-conductor-db-sync-qh8s2" Nov 21 20:28:31 crc kubenswrapper[4727]: I1121 20:28:31.905592 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2vvz\" (UniqueName: \"kubernetes.io/projected/7ff94a3a-a5ae-42b5-a316-e536cf0d3eda-kube-api-access-x2vvz\") pod \"nova-cell0-conductor-db-sync-qh8s2\" (UID: \"7ff94a3a-a5ae-42b5-a316-e536cf0d3eda\") " pod="openstack/nova-cell0-conductor-db-sync-qh8s2" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.009602 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ff94a3a-a5ae-42b5-a316-e536cf0d3eda-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-qh8s2\" (UID: \"7ff94a3a-a5ae-42b5-a316-e536cf0d3eda\") " pod="openstack/nova-cell0-conductor-db-sync-qh8s2" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.009669 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ff94a3a-a5ae-42b5-a316-e536cf0d3eda-config-data\") pod \"nova-cell0-conductor-db-sync-qh8s2\" (UID: \"7ff94a3a-a5ae-42b5-a316-e536cf0d3eda\") " pod="openstack/nova-cell0-conductor-db-sync-qh8s2" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.009696 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ff94a3a-a5ae-42b5-a316-e536cf0d3eda-scripts\") pod \"nova-cell0-conductor-db-sync-qh8s2\" (UID: \"7ff94a3a-a5ae-42b5-a316-e536cf0d3eda\") " pod="openstack/nova-cell0-conductor-db-sync-qh8s2" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.009732 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2vvz\" (UniqueName: \"kubernetes.io/projected/7ff94a3a-a5ae-42b5-a316-e536cf0d3eda-kube-api-access-x2vvz\") pod \"nova-cell0-conductor-db-sync-qh8s2\" (UID: 
\"7ff94a3a-a5ae-42b5-a316-e536cf0d3eda\") " pod="openstack/nova-cell0-conductor-db-sync-qh8s2" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.019124 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ff94a3a-a5ae-42b5-a316-e536cf0d3eda-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-qh8s2\" (UID: \"7ff94a3a-a5ae-42b5-a316-e536cf0d3eda\") " pod="openstack/nova-cell0-conductor-db-sync-qh8s2" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.019553 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ff94a3a-a5ae-42b5-a316-e536cf0d3eda-scripts\") pod \"nova-cell0-conductor-db-sync-qh8s2\" (UID: \"7ff94a3a-a5ae-42b5-a316-e536cf0d3eda\") " pod="openstack/nova-cell0-conductor-db-sync-qh8s2" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.026702 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ff94a3a-a5ae-42b5-a316-e536cf0d3eda-config-data\") pod \"nova-cell0-conductor-db-sync-qh8s2\" (UID: \"7ff94a3a-a5ae-42b5-a316-e536cf0d3eda\") " pod="openstack/nova-cell0-conductor-db-sync-qh8s2" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.029135 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2vvz\" (UniqueName: \"kubernetes.io/projected/7ff94a3a-a5ae-42b5-a316-e536cf0d3eda-kube-api-access-x2vvz\") pod \"nova-cell0-conductor-db-sync-qh8s2\" (UID: \"7ff94a3a-a5ae-42b5-a316-e536cf0d3eda\") " pod="openstack/nova-cell0-conductor-db-sync-qh8s2" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.046088 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-qh8s2" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.241285 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-49tk2" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.326367 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44ac5098-bcbb-4981-804e-3da5706fa3cb-config\") pod \"44ac5098-bcbb-4981-804e-3da5706fa3cb\" (UID: \"44ac5098-bcbb-4981-804e-3da5706fa3cb\") " Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.326510 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44ac5098-bcbb-4981-804e-3da5706fa3cb-ovsdbserver-sb\") pod \"44ac5098-bcbb-4981-804e-3da5706fa3cb\" (UID: \"44ac5098-bcbb-4981-804e-3da5706fa3cb\") " Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.326557 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkjc2\" (UniqueName: \"kubernetes.io/projected/44ac5098-bcbb-4981-804e-3da5706fa3cb-kube-api-access-lkjc2\") pod \"44ac5098-bcbb-4981-804e-3da5706fa3cb\" (UID: \"44ac5098-bcbb-4981-804e-3da5706fa3cb\") " Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.326583 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44ac5098-bcbb-4981-804e-3da5706fa3cb-ovsdbserver-nb\") pod \"44ac5098-bcbb-4981-804e-3da5706fa3cb\" (UID: \"44ac5098-bcbb-4981-804e-3da5706fa3cb\") " Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.326618 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44ac5098-bcbb-4981-804e-3da5706fa3cb-dns-svc\") pod \"44ac5098-bcbb-4981-804e-3da5706fa3cb\" (UID: \"44ac5098-bcbb-4981-804e-3da5706fa3cb\") " Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.326645 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/44ac5098-bcbb-4981-804e-3da5706fa3cb-dns-swift-storage-0\") pod \"44ac5098-bcbb-4981-804e-3da5706fa3cb\" (UID: \"44ac5098-bcbb-4981-804e-3da5706fa3cb\") " Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.385641 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44ac5098-bcbb-4981-804e-3da5706fa3cb-kube-api-access-lkjc2" (OuterVolumeSpecName: "kube-api-access-lkjc2") pod "44ac5098-bcbb-4981-804e-3da5706fa3cb" (UID: "44ac5098-bcbb-4981-804e-3da5706fa3cb"). InnerVolumeSpecName "kube-api-access-lkjc2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.417085 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6c5d5d79b4-sdspk"] Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.417288 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-6c5d5d79b4-sdspk" podUID="2b697061-39b3-440d-bc9c-e31848742e7d" containerName="heat-api" containerID="cri-o://3cad772193ce5e44ab8e16cd7600a46b08f2c276072e2312dbde7e88bdf253c2" gracePeriod=60 Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.432691 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lkjc2\" (UniqueName: \"kubernetes.io/projected/44ac5098-bcbb-4981-804e-3da5706fa3cb-kube-api-access-lkjc2\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.449240 4727 generic.go:334] "Generic (PLEG): container finished" podID="1d8a9a27-c894-4a70-9a9b-bd1089684c8f" containerID="002ea077af22a43a01413b788718a881f07bff6d4f8fb81c49a4a08958cb6a6f" exitCode=1 Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.449340 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6c7db4b498-hshbj" 
event={"ID":"1d8a9a27-c894-4a70-9a9b-bd1089684c8f","Type":"ContainerDied","Data":"002ea077af22a43a01413b788718a881f07bff6d4f8fb81c49a4a08958cb6a6f"} Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.450119 4727 scope.go:117] "RemoveContainer" containerID="002ea077af22a43a01413b788718a881f07bff6d4f8fb81c49a4a08958cb6a6f" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.506164 4727 generic.go:334] "Generic (PLEG): container finished" podID="44ac5098-bcbb-4981-804e-3da5706fa3cb" containerID="c649f81dc3ba39d640f0b2c46623595ca63eb68c676958a35bc738320ce4ae88" exitCode=0 Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.506262 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-49tk2" event={"ID":"44ac5098-bcbb-4981-804e-3da5706fa3cb","Type":"ContainerDied","Data":"c649f81dc3ba39d640f0b2c46623595ca63eb68c676958a35bc738320ce4ae88"} Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.506290 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-49tk2" event={"ID":"44ac5098-bcbb-4981-804e-3da5706fa3cb","Type":"ContainerDied","Data":"fac2ec30e6ff94cca5a6c8c9025c4b91a6eded2b75c0db4db47dafc94bd346bf"} Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.506311 4727 scope.go:117] "RemoveContainer" containerID="c649f81dc3ba39d640f0b2c46623595ca63eb68c676958a35bc738320ce4ae88" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.506440 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-49tk2" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.537126 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44ac5098-bcbb-4981-804e-3da5706fa3cb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "44ac5098-bcbb-4981-804e-3da5706fa3cb" (UID: "44ac5098-bcbb-4981-804e-3da5706fa3cb"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.541132 4727 generic.go:334] "Generic (PLEG): container finished" podID="99f58971-6e29-4870-b6b4-4768e2676799" containerID="55b37719bb8d0d0ff24b9d197a16eac9b51ce5dc12454fe5c5febf2d3687bb91" exitCode=1 Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.541912 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-847b7c5b5b-jfkv4" event={"ID":"99f58971-6e29-4870-b6b4-4768e2676799","Type":"ContainerDied","Data":"55b37719bb8d0d0ff24b9d197a16eac9b51ce5dc12454fe5c5febf2d3687bb91"} Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.541997 4727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.542007 4727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.543816 4727 scope.go:117] "RemoveContainer" containerID="55b37719bb8d0d0ff24b9d197a16eac9b51ce5dc12454fe5c5febf2d3687bb91" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.544482 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44ac5098-bcbb-4981-804e-3da5706fa3cb-ovsdbserver-nb\") pod \"44ac5098-bcbb-4981-804e-3da5706fa3cb\" (UID: \"44ac5098-bcbb-4981-804e-3da5706fa3cb\") " Nov 21 20:28:32 crc kubenswrapper[4727]: W1121 20:28:32.548516 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/44ac5098-bcbb-4981-804e-3da5706fa3cb/volumes/kubernetes.io~configmap/ovsdbserver-nb Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.548550 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44ac5098-bcbb-4981-804e-3da5706fa3cb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "44ac5098-bcbb-4981-804e-3da5706fa3cb" (UID: 
"44ac5098-bcbb-4981-804e-3da5706fa3cb"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.548769 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44ac5098-bcbb-4981-804e-3da5706fa3cb-config" (OuterVolumeSpecName: "config") pod "44ac5098-bcbb-4981-804e-3da5706fa3cb" (UID: "44ac5098-bcbb-4981-804e-3da5706fa3cb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.572017 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-66b5cc9484-l2wsd"] Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.572264 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-66b5cc9484-l2wsd" podUID="5f22c01e-e191-42ab-8e0e-f678abc961b2" containerName="heat-cfnapi" containerID="cri-o://b43873a7b03f03aa9d59b39061f983e6c9338fd44b22a32d0bf7ac1a4ce5143c" gracePeriod=60 Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.573235 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44ac5098-bcbb-4981-804e-3da5706fa3cb-config\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.573268 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44ac5098-bcbb-4981-804e-3da5706fa3cb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.588509 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-5f6b556667-ldkfv"] Nov 21 20:28:32 crc kubenswrapper[4727]: E1121 20:28:32.589045 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44ac5098-bcbb-4981-804e-3da5706fa3cb" containerName="dnsmasq-dns" Nov 21 20:28:32 crc kubenswrapper[4727]: 
I1121 20:28:32.589062 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="44ac5098-bcbb-4981-804e-3da5706fa3cb" containerName="dnsmasq-dns" Nov 21 20:28:32 crc kubenswrapper[4727]: E1121 20:28:32.589108 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44ac5098-bcbb-4981-804e-3da5706fa3cb" containerName="init" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.589115 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="44ac5098-bcbb-4981-804e-3da5706fa3cb" containerName="init" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.589325 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="44ac5098-bcbb-4981-804e-3da5706fa3cb" containerName="dnsmasq-dns" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.590158 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5f6b556667-ldkfv" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.594597 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44ac5098-bcbb-4981-804e-3da5706fa3cb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "44ac5098-bcbb-4981-804e-3da5706fa3cb" (UID: "44ac5098-bcbb-4981-804e-3da5706fa3cb"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.594718 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.611704 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.617871 4727 scope.go:117] "RemoveContainer" containerID="b159cfd8b568b3169a48b142a8e6de67751bb8f9f8146af0d5784013b350bddf" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.675537 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bcab1adc-1286-4341-aebc-4a4c0821aba8-public-tls-certs\") pod \"heat-api-5f6b556667-ldkfv\" (UID: \"bcab1adc-1286-4341-aebc-4a4c0821aba8\") " pod="openstack/heat-api-5f6b556667-ldkfv" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.675597 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bcab1adc-1286-4341-aebc-4a4c0821aba8-config-data-custom\") pod \"heat-api-5f6b556667-ldkfv\" (UID: \"bcab1adc-1286-4341-aebc-4a4c0821aba8\") " pod="openstack/heat-api-5f6b556667-ldkfv" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.675623 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcab1adc-1286-4341-aebc-4a4c0821aba8-config-data\") pod \"heat-api-5f6b556667-ldkfv\" (UID: \"bcab1adc-1286-4341-aebc-4a4c0821aba8\") " pod="openstack/heat-api-5f6b556667-ldkfv" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.675698 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/bcab1adc-1286-4341-aebc-4a4c0821aba8-internal-tls-certs\") pod \"heat-api-5f6b556667-ldkfv\" (UID: \"bcab1adc-1286-4341-aebc-4a4c0821aba8\") " pod="openstack/heat-api-5f6b556667-ldkfv" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.675717 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdr4w\" (UniqueName: \"kubernetes.io/projected/bcab1adc-1286-4341-aebc-4a4c0821aba8-kube-api-access-qdr4w\") pod \"heat-api-5f6b556667-ldkfv\" (UID: \"bcab1adc-1286-4341-aebc-4a4c0821aba8\") " pod="openstack/heat-api-5f6b556667-ldkfv" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.675844 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcab1adc-1286-4341-aebc-4a4c0821aba8-combined-ca-bundle\") pod \"heat-api-5f6b556667-ldkfv\" (UID: \"bcab1adc-1286-4341-aebc-4a4c0821aba8\") " pod="openstack/heat-api-5f6b556667-ldkfv" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.675942 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44ac5098-bcbb-4981-804e-3da5706fa3cb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.696272 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5f6b556667-ldkfv"] Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.741681 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-66b5cc9484-l2wsd" podUID="5f22c01e-e191-42ab-8e0e-f678abc961b2" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.215:8000/healthcheck\": EOF" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.757698 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44ac5098-bcbb-4981-804e-3da5706fa3cb-dns-svc" 
(OuterVolumeSpecName: "dns-svc") pod "44ac5098-bcbb-4981-804e-3da5706fa3cb" (UID: "44ac5098-bcbb-4981-804e-3da5706fa3cb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.777241 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bcab1adc-1286-4341-aebc-4a4c0821aba8-internal-tls-certs\") pod \"heat-api-5f6b556667-ldkfv\" (UID: \"bcab1adc-1286-4341-aebc-4a4c0821aba8\") " pod="openstack/heat-api-5f6b556667-ldkfv" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.777281 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdr4w\" (UniqueName: \"kubernetes.io/projected/bcab1adc-1286-4341-aebc-4a4c0821aba8-kube-api-access-qdr4w\") pod \"heat-api-5f6b556667-ldkfv\" (UID: \"bcab1adc-1286-4341-aebc-4a4c0821aba8\") " pod="openstack/heat-api-5f6b556667-ldkfv" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.777365 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcab1adc-1286-4341-aebc-4a4c0821aba8-combined-ca-bundle\") pod \"heat-api-5f6b556667-ldkfv\" (UID: \"bcab1adc-1286-4341-aebc-4a4c0821aba8\") " pod="openstack/heat-api-5f6b556667-ldkfv" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.777439 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bcab1adc-1286-4341-aebc-4a4c0821aba8-public-tls-certs\") pod \"heat-api-5f6b556667-ldkfv\" (UID: \"bcab1adc-1286-4341-aebc-4a4c0821aba8\") " pod="openstack/heat-api-5f6b556667-ldkfv" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.777465 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/bcab1adc-1286-4341-aebc-4a4c0821aba8-config-data-custom\") pod \"heat-api-5f6b556667-ldkfv\" (UID: \"bcab1adc-1286-4341-aebc-4a4c0821aba8\") " pod="openstack/heat-api-5f6b556667-ldkfv" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.777486 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcab1adc-1286-4341-aebc-4a4c0821aba8-config-data\") pod \"heat-api-5f6b556667-ldkfv\" (UID: \"bcab1adc-1286-4341-aebc-4a4c0821aba8\") " pod="openstack/heat-api-5f6b556667-ldkfv" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.777559 4727 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44ac5098-bcbb-4981-804e-3da5706fa3cb-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.792175 4727 scope.go:117] "RemoveContainer" containerID="c649f81dc3ba39d640f0b2c46623595ca63eb68c676958a35bc738320ce4ae88" Nov 21 20:28:32 crc kubenswrapper[4727]: E1121 20:28:32.793055 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c649f81dc3ba39d640f0b2c46623595ca63eb68c676958a35bc738320ce4ae88\": container with ID starting with c649f81dc3ba39d640f0b2c46623595ca63eb68c676958a35bc738320ce4ae88 not found: ID does not exist" containerID="c649f81dc3ba39d640f0b2c46623595ca63eb68c676958a35bc738320ce4ae88" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.793095 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c649f81dc3ba39d640f0b2c46623595ca63eb68c676958a35bc738320ce4ae88"} err="failed to get container status \"c649f81dc3ba39d640f0b2c46623595ca63eb68c676958a35bc738320ce4ae88\": rpc error: code = NotFound desc = could not find container \"c649f81dc3ba39d640f0b2c46623595ca63eb68c676958a35bc738320ce4ae88\": container with ID starting with 
c649f81dc3ba39d640f0b2c46623595ca63eb68c676958a35bc738320ce4ae88 not found: ID does not exist" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.793122 4727 scope.go:117] "RemoveContainer" containerID="b159cfd8b568b3169a48b142a8e6de67751bb8f9f8146af0d5784013b350bddf" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.804303 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-7d97974c6c-c2ppt"] Nov 21 20:28:32 crc kubenswrapper[4727]: E1121 20:28:32.806360 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b159cfd8b568b3169a48b142a8e6de67751bb8f9f8146af0d5784013b350bddf\": container with ID starting with b159cfd8b568b3169a48b142a8e6de67751bb8f9f8146af0d5784013b350bddf not found: ID does not exist" containerID="b159cfd8b568b3169a48b142a8e6de67751bb8f9f8146af0d5784013b350bddf" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.806419 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b159cfd8b568b3169a48b142a8e6de67751bb8f9f8146af0d5784013b350bddf"} err="failed to get container status \"b159cfd8b568b3169a48b142a8e6de67751bb8f9f8146af0d5784013b350bddf\": rpc error: code = NotFound desc = could not find container \"b159cfd8b568b3169a48b142a8e6de67751bb8f9f8146af0d5784013b350bddf\": container with ID starting with b159cfd8b568b3169a48b142a8e6de67751bb8f9f8146af0d5784013b350bddf not found: ID does not exist" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.809049 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcab1adc-1286-4341-aebc-4a4c0821aba8-combined-ca-bundle\") pod \"heat-api-5f6b556667-ldkfv\" (UID: \"bcab1adc-1286-4341-aebc-4a4c0821aba8\") " pod="openstack/heat-api-5f6b556667-ldkfv" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.809075 4727 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bcab1adc-1286-4341-aebc-4a4c0821aba8-public-tls-certs\") pod \"heat-api-5f6b556667-ldkfv\" (UID: \"bcab1adc-1286-4341-aebc-4a4c0821aba8\") " pod="openstack/heat-api-5f6b556667-ldkfv" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.810231 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7d97974c6c-c2ppt" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.811595 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcab1adc-1286-4341-aebc-4a4c0821aba8-config-data\") pod \"heat-api-5f6b556667-ldkfv\" (UID: \"bcab1adc-1286-4341-aebc-4a4c0821aba8\") " pod="openstack/heat-api-5f6b556667-ldkfv" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.812305 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bcab1adc-1286-4341-aebc-4a4c0821aba8-internal-tls-certs\") pod \"heat-api-5f6b556667-ldkfv\" (UID: \"bcab1adc-1286-4341-aebc-4a4c0821aba8\") " pod="openstack/heat-api-5f6b556667-ldkfv" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.816208 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.816337 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.822994 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7d97974c6c-c2ppt"] Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.823143 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bcab1adc-1286-4341-aebc-4a4c0821aba8-config-data-custom\") pod \"heat-api-5f6b556667-ldkfv\" (UID: 
\"bcab1adc-1286-4341-aebc-4a4c0821aba8\") " pod="openstack/heat-api-5f6b556667-ldkfv" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.827573 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdr4w\" (UniqueName: \"kubernetes.io/projected/bcab1adc-1286-4341-aebc-4a4c0821aba8-kube-api-access-qdr4w\") pod \"heat-api-5f6b556667-ldkfv\" (UID: \"bcab1adc-1286-4341-aebc-4a4c0821aba8\") " pod="openstack/heat-api-5f6b556667-ldkfv" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.833617 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44ac5098-bcbb-4981-804e-3da5706fa3cb-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "44ac5098-bcbb-4981-804e-3da5706fa3cb" (UID: "44ac5098-bcbb-4981-804e-3da5706fa3cb"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.879939 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c887cd90-b585-43da-98ff-f5c1e8fc3f70-config-data-custom\") pod \"heat-cfnapi-7d97974c6c-c2ppt\" (UID: \"c887cd90-b585-43da-98ff-f5c1e8fc3f70\") " pod="openstack/heat-cfnapi-7d97974c6c-c2ppt" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.880042 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c887cd90-b585-43da-98ff-f5c1e8fc3f70-public-tls-certs\") pod \"heat-cfnapi-7d97974c6c-c2ppt\" (UID: \"c887cd90-b585-43da-98ff-f5c1e8fc3f70\") " pod="openstack/heat-cfnapi-7d97974c6c-c2ppt" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.880076 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c887cd90-b585-43da-98ff-f5c1e8fc3f70-internal-tls-certs\") pod \"heat-cfnapi-7d97974c6c-c2ppt\" (UID: \"c887cd90-b585-43da-98ff-f5c1e8fc3f70\") " pod="openstack/heat-cfnapi-7d97974c6c-c2ppt" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.880111 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zngdl\" (UniqueName: \"kubernetes.io/projected/c887cd90-b585-43da-98ff-f5c1e8fc3f70-kube-api-access-zngdl\") pod \"heat-cfnapi-7d97974c6c-c2ppt\" (UID: \"c887cd90-b585-43da-98ff-f5c1e8fc3f70\") " pod="openstack/heat-cfnapi-7d97974c6c-c2ppt" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.880163 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c887cd90-b585-43da-98ff-f5c1e8fc3f70-config-data\") pod \"heat-cfnapi-7d97974c6c-c2ppt\" (UID: \"c887cd90-b585-43da-98ff-f5c1e8fc3f70\") " pod="openstack/heat-cfnapi-7d97974c6c-c2ppt" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.880192 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c887cd90-b585-43da-98ff-f5c1e8fc3f70-combined-ca-bundle\") pod \"heat-cfnapi-7d97974c6c-c2ppt\" (UID: \"c887cd90-b585-43da-98ff-f5c1e8fc3f70\") " pod="openstack/heat-cfnapi-7d97974c6c-c2ppt" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.880280 4727 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/44ac5098-bcbb-4981-804e-3da5706fa3cb-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.919330 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qh8s2"] Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.982340 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c887cd90-b585-43da-98ff-f5c1e8fc3f70-public-tls-certs\") pod \"heat-cfnapi-7d97974c6c-c2ppt\" (UID: \"c887cd90-b585-43da-98ff-f5c1e8fc3f70\") " pod="openstack/heat-cfnapi-7d97974c6c-c2ppt" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.982511 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c887cd90-b585-43da-98ff-f5c1e8fc3f70-internal-tls-certs\") pod \"heat-cfnapi-7d97974c6c-c2ppt\" (UID: \"c887cd90-b585-43da-98ff-f5c1e8fc3f70\") " pod="openstack/heat-cfnapi-7d97974c6c-c2ppt" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.982626 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zngdl\" (UniqueName: \"kubernetes.io/projected/c887cd90-b585-43da-98ff-f5c1e8fc3f70-kube-api-access-zngdl\") pod \"heat-cfnapi-7d97974c6c-c2ppt\" (UID: \"c887cd90-b585-43da-98ff-f5c1e8fc3f70\") " pod="openstack/heat-cfnapi-7d97974c6c-c2ppt" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.982758 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c887cd90-b585-43da-98ff-f5c1e8fc3f70-config-data\") pod \"heat-cfnapi-7d97974c6c-c2ppt\" (UID: \"c887cd90-b585-43da-98ff-f5c1e8fc3f70\") " pod="openstack/heat-cfnapi-7d97974c6c-c2ppt" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.982862 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c887cd90-b585-43da-98ff-f5c1e8fc3f70-combined-ca-bundle\") pod \"heat-cfnapi-7d97974c6c-c2ppt\" (UID: \"c887cd90-b585-43da-98ff-f5c1e8fc3f70\") " pod="openstack/heat-cfnapi-7d97974c6c-c2ppt" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.983055 4727 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c887cd90-b585-43da-98ff-f5c1e8fc3f70-config-data-custom\") pod \"heat-cfnapi-7d97974c6c-c2ppt\" (UID: \"c887cd90-b585-43da-98ff-f5c1e8fc3f70\") " pod="openstack/heat-cfnapi-7d97974c6c-c2ppt" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.987872 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c887cd90-b585-43da-98ff-f5c1e8fc3f70-public-tls-certs\") pod \"heat-cfnapi-7d97974c6c-c2ppt\" (UID: \"c887cd90-b585-43da-98ff-f5c1e8fc3f70\") " pod="openstack/heat-cfnapi-7d97974c6c-c2ppt" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.989767 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c887cd90-b585-43da-98ff-f5c1e8fc3f70-combined-ca-bundle\") pod \"heat-cfnapi-7d97974c6c-c2ppt\" (UID: \"c887cd90-b585-43da-98ff-f5c1e8fc3f70\") " pod="openstack/heat-cfnapi-7d97974c6c-c2ppt" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.993705 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c887cd90-b585-43da-98ff-f5c1e8fc3f70-config-data-custom\") pod \"heat-cfnapi-7d97974c6c-c2ppt\" (UID: \"c887cd90-b585-43da-98ff-f5c1e8fc3f70\") " pod="openstack/heat-cfnapi-7d97974c6c-c2ppt" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.994747 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c887cd90-b585-43da-98ff-f5c1e8fc3f70-config-data\") pod \"heat-cfnapi-7d97974c6c-c2ppt\" (UID: \"c887cd90-b585-43da-98ff-f5c1e8fc3f70\") " pod="openstack/heat-cfnapi-7d97974c6c-c2ppt" Nov 21 20:28:32 crc kubenswrapper[4727]: I1121 20:28:32.995254 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c887cd90-b585-43da-98ff-f5c1e8fc3f70-internal-tls-certs\") pod \"heat-cfnapi-7d97974c6c-c2ppt\" (UID: \"c887cd90-b585-43da-98ff-f5c1e8fc3f70\") " pod="openstack/heat-cfnapi-7d97974c6c-c2ppt" Nov 21 20:28:33 crc kubenswrapper[4727]: I1121 20:28:33.006739 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zngdl\" (UniqueName: \"kubernetes.io/projected/c887cd90-b585-43da-98ff-f5c1e8fc3f70-kube-api-access-zngdl\") pod \"heat-cfnapi-7d97974c6c-c2ppt\" (UID: \"c887cd90-b585-43da-98ff-f5c1e8fc3f70\") " pod="openstack/heat-cfnapi-7d97974c6c-c2ppt" Nov 21 20:28:33 crc kubenswrapper[4727]: I1121 20:28:33.019307 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5f6b556667-ldkfv" Nov 21 20:28:33 crc kubenswrapper[4727]: I1121 20:28:33.122865 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7d97974c6c-c2ppt" Nov 21 20:28:33 crc kubenswrapper[4727]: I1121 20:28:33.219848 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-49tk2"] Nov 21 20:28:33 crc kubenswrapper[4727]: I1121 20:28:33.230792 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-49tk2"] Nov 21 20:28:33 crc kubenswrapper[4727]: I1121 20:28:33.277200 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-6c5d5d79b4-sdspk" podUID="2b697061-39b3-440d-bc9c-e31848742e7d" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.216:8004/healthcheck\": read tcp 10.217.0.2:55588->10.217.0.216:8004: read: connection reset by peer" Nov 21 20:28:33 crc kubenswrapper[4727]: I1121 20:28:33.514527 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44ac5098-bcbb-4981-804e-3da5706fa3cb" path="/var/lib/kubelet/pods/44ac5098-bcbb-4981-804e-3da5706fa3cb/volumes" Nov 21 20:28:33 crc kubenswrapper[4727]: I1121 
20:28:33.586089 4727 generic.go:334] "Generic (PLEG): container finished" podID="99f58971-6e29-4870-b6b4-4768e2676799" containerID="cad46f15935d60f3a09d6e195e6a51d044f778bfd2cc3780661418ad8982adeb" exitCode=1 Nov 21 20:28:33 crc kubenswrapper[4727]: I1121 20:28:33.586382 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-847b7c5b5b-jfkv4" event={"ID":"99f58971-6e29-4870-b6b4-4768e2676799","Type":"ContainerDied","Data":"cad46f15935d60f3a09d6e195e6a51d044f778bfd2cc3780661418ad8982adeb"} Nov 21 20:28:33 crc kubenswrapper[4727]: I1121 20:28:33.586492 4727 scope.go:117] "RemoveContainer" containerID="55b37719bb8d0d0ff24b9d197a16eac9b51ce5dc12454fe5c5febf2d3687bb91" Nov 21 20:28:33 crc kubenswrapper[4727]: I1121 20:28:33.587214 4727 scope.go:117] "RemoveContainer" containerID="cad46f15935d60f3a09d6e195e6a51d044f778bfd2cc3780661418ad8982adeb" Nov 21 20:28:33 crc kubenswrapper[4727]: E1121 20:28:33.587500 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-847b7c5b5b-jfkv4_openstack(99f58971-6e29-4870-b6b4-4768e2676799)\"" pod="openstack/heat-cfnapi-847b7c5b5b-jfkv4" podUID="99f58971-6e29-4870-b6b4-4768e2676799" Nov 21 20:28:33 crc kubenswrapper[4727]: I1121 20:28:33.596497 4727 generic.go:334] "Generic (PLEG): container finished" podID="2b697061-39b3-440d-bc9c-e31848742e7d" containerID="3cad772193ce5e44ab8e16cd7600a46b08f2c276072e2312dbde7e88bdf253c2" exitCode=0 Nov 21 20:28:33 crc kubenswrapper[4727]: I1121 20:28:33.596561 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6c5d5d79b4-sdspk" event={"ID":"2b697061-39b3-440d-bc9c-e31848742e7d","Type":"ContainerDied","Data":"3cad772193ce5e44ab8e16cd7600a46b08f2c276072e2312dbde7e88bdf253c2"} Nov 21 20:28:33 crc kubenswrapper[4727]: I1121 20:28:33.597648 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/heat-api-5f6b556667-ldkfv"] Nov 21 20:28:33 crc kubenswrapper[4727]: I1121 20:28:33.598842 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-qh8s2" event={"ID":"7ff94a3a-a5ae-42b5-a316-e536cf0d3eda","Type":"ContainerStarted","Data":"629e02fe93762310fd73cb80a0a3114ee3d91612d5cba5ab98e15c0acb26ed03"} Nov 21 20:28:33 crc kubenswrapper[4727]: I1121 20:28:33.606351 4727 generic.go:334] "Generic (PLEG): container finished" podID="1d8a9a27-c894-4a70-9a9b-bd1089684c8f" containerID="3767758981b1f2c3c6ac8581ed85fc6c4495d1743787552535da663d1d931824" exitCode=1 Nov 21 20:28:33 crc kubenswrapper[4727]: I1121 20:28:33.606457 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6c7db4b498-hshbj" event={"ID":"1d8a9a27-c894-4a70-9a9b-bd1089684c8f","Type":"ContainerDied","Data":"3767758981b1f2c3c6ac8581ed85fc6c4495d1743787552535da663d1d931824"} Nov 21 20:28:33 crc kubenswrapper[4727]: I1121 20:28:33.607537 4727 scope.go:117] "RemoveContainer" containerID="3767758981b1f2c3c6ac8581ed85fc6c4495d1743787552535da663d1d931824" Nov 21 20:28:33 crc kubenswrapper[4727]: E1121 20:28:33.607842 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-6c7db4b498-hshbj_openstack(1d8a9a27-c894-4a70-9a9b-bd1089684c8f)\"" pod="openstack/heat-api-6c7db4b498-hshbj" podUID="1d8a9a27-c894-4a70-9a9b-bd1089684c8f" Nov 21 20:28:33 crc kubenswrapper[4727]: I1121 20:28:33.804924 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7d97974c6c-c2ppt"] Nov 21 20:28:33 crc kubenswrapper[4727]: W1121 20:28:33.810481 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc887cd90_b585_43da_98ff_f5c1e8fc3f70.slice/crio-320a498659ab3252dafe37c71e7ac7f64589e0f4af03de195cf07226fb0f4659 
WatchSource:0}: Error finding container 320a498659ab3252dafe37c71e7ac7f64589e0f4af03de195cf07226fb0f4659: Status 404 returned error can't find the container with id 320a498659ab3252dafe37c71e7ac7f64589e0f4af03de195cf07226fb0f4659 Nov 21 20:28:34 crc kubenswrapper[4727]: I1121 20:28:34.008849 4727 scope.go:117] "RemoveContainer" containerID="002ea077af22a43a01413b788718a881f07bff6d4f8fb81c49a4a08958cb6a6f" Nov 21 20:28:34 crc kubenswrapper[4727]: I1121 20:28:34.069143 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6c5d5d79b4-sdspk" Nov 21 20:28:34 crc kubenswrapper[4727]: I1121 20:28:34.130245 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b697061-39b3-440d-bc9c-e31848742e7d-combined-ca-bundle\") pod \"2b697061-39b3-440d-bc9c-e31848742e7d\" (UID: \"2b697061-39b3-440d-bc9c-e31848742e7d\") " Nov 21 20:28:34 crc kubenswrapper[4727]: I1121 20:28:34.130429 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plmlh\" (UniqueName: \"kubernetes.io/projected/2b697061-39b3-440d-bc9c-e31848742e7d-kube-api-access-plmlh\") pod \"2b697061-39b3-440d-bc9c-e31848742e7d\" (UID: \"2b697061-39b3-440d-bc9c-e31848742e7d\") " Nov 21 20:28:34 crc kubenswrapper[4727]: I1121 20:28:34.130703 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2b697061-39b3-440d-bc9c-e31848742e7d-config-data-custom\") pod \"2b697061-39b3-440d-bc9c-e31848742e7d\" (UID: \"2b697061-39b3-440d-bc9c-e31848742e7d\") " Nov 21 20:28:34 crc kubenswrapper[4727]: I1121 20:28:34.130766 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b697061-39b3-440d-bc9c-e31848742e7d-config-data\") pod \"2b697061-39b3-440d-bc9c-e31848742e7d\" (UID: 
\"2b697061-39b3-440d-bc9c-e31848742e7d\") " Nov 21 20:28:34 crc kubenswrapper[4727]: I1121 20:28:34.141459 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b697061-39b3-440d-bc9c-e31848742e7d-kube-api-access-plmlh" (OuterVolumeSpecName: "kube-api-access-plmlh") pod "2b697061-39b3-440d-bc9c-e31848742e7d" (UID: "2b697061-39b3-440d-bc9c-e31848742e7d"). InnerVolumeSpecName "kube-api-access-plmlh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:28:34 crc kubenswrapper[4727]: I1121 20:28:34.151148 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b697061-39b3-440d-bc9c-e31848742e7d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2b697061-39b3-440d-bc9c-e31848742e7d" (UID: "2b697061-39b3-440d-bc9c-e31848742e7d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:28:34 crc kubenswrapper[4727]: I1121 20:28:34.177978 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b697061-39b3-440d-bc9c-e31848742e7d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2b697061-39b3-440d-bc9c-e31848742e7d" (UID: "2b697061-39b3-440d-bc9c-e31848742e7d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:28:34 crc kubenswrapper[4727]: I1121 20:28:34.236504 4727 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2b697061-39b3-440d-bc9c-e31848742e7d-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:34 crc kubenswrapper[4727]: I1121 20:28:34.236535 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b697061-39b3-440d-bc9c-e31848742e7d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:34 crc kubenswrapper[4727]: I1121 20:28:34.236571 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plmlh\" (UniqueName: \"kubernetes.io/projected/2b697061-39b3-440d-bc9c-e31848742e7d-kube-api-access-plmlh\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:34 crc kubenswrapper[4727]: I1121 20:28:34.255578 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b697061-39b3-440d-bc9c-e31848742e7d-config-data" (OuterVolumeSpecName: "config-data") pod "2b697061-39b3-440d-bc9c-e31848742e7d" (UID: "2b697061-39b3-440d-bc9c-e31848742e7d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:28:34 crc kubenswrapper[4727]: I1121 20:28:34.338285 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b697061-39b3-440d-bc9c-e31848742e7d-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:34 crc kubenswrapper[4727]: I1121 20:28:34.537273 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 21 20:28:34 crc kubenswrapper[4727]: I1121 20:28:34.537421 4727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 21 20:28:34 crc kubenswrapper[4727]: I1121 20:28:34.622078 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 21 20:28:34 crc kubenswrapper[4727]: I1121 20:28:34.668692 4727 scope.go:117] "RemoveContainer" containerID="cad46f15935d60f3a09d6e195e6a51d044f778bfd2cc3780661418ad8982adeb" Nov 21 20:28:34 crc kubenswrapper[4727]: E1121 20:28:34.669014 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-847b7c5b5b-jfkv4_openstack(99f58971-6e29-4870-b6b4-4768e2676799)\"" pod="openstack/heat-cfnapi-847b7c5b5b-jfkv4" podUID="99f58971-6e29-4870-b6b4-4768e2676799" Nov 21 20:28:34 crc kubenswrapper[4727]: I1121 20:28:34.701177 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7d97974c6c-c2ppt" event={"ID":"c887cd90-b585-43da-98ff-f5c1e8fc3f70","Type":"ContainerStarted","Data":"c2b6c5c3a83ef7c15d6843ed76abe0cfac2e6f72594e609e298981fb62083225"} Nov 21 20:28:34 crc kubenswrapper[4727]: I1121 20:28:34.701230 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7d97974c6c-c2ppt" 
event={"ID":"c887cd90-b585-43da-98ff-f5c1e8fc3f70","Type":"ContainerStarted","Data":"320a498659ab3252dafe37c71e7ac7f64589e0f4af03de195cf07226fb0f4659"} Nov 21 20:28:34 crc kubenswrapper[4727]: I1121 20:28:34.702153 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-7d97974c6c-c2ppt" Nov 21 20:28:34 crc kubenswrapper[4727]: I1121 20:28:34.716913 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6c5d5d79b4-sdspk" Nov 21 20:28:34 crc kubenswrapper[4727]: I1121 20:28:34.716927 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6c5d5d79b4-sdspk" event={"ID":"2b697061-39b3-440d-bc9c-e31848742e7d","Type":"ContainerDied","Data":"bfecfc8b02461c02300cb6da1daf5ea10a63b5094db50fedbae607faafd6bb12"} Nov 21 20:28:34 crc kubenswrapper[4727]: I1121 20:28:34.717021 4727 scope.go:117] "RemoveContainer" containerID="3cad772193ce5e44ab8e16cd7600a46b08f2c276072e2312dbde7e88bdf253c2" Nov 21 20:28:34 crc kubenswrapper[4727]: I1121 20:28:34.724020 4727 scope.go:117] "RemoveContainer" containerID="3767758981b1f2c3c6ac8581ed85fc6c4495d1743787552535da663d1d931824" Nov 21 20:28:34 crc kubenswrapper[4727]: E1121 20:28:34.724285 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-6c7db4b498-hshbj_openstack(1d8a9a27-c894-4a70-9a9b-bd1089684c8f)\"" pod="openstack/heat-api-6c7db4b498-hshbj" podUID="1d8a9a27-c894-4a70-9a9b-bd1089684c8f" Nov 21 20:28:34 crc kubenswrapper[4727]: I1121 20:28:34.747133 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-7d97974c6c-c2ppt" podStartSLOduration=2.747099222 podStartE2EDuration="2.747099222s" podCreationTimestamp="2025-11-21 20:28:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2025-11-21 20:28:34.738206517 +0000 UTC m=+1319.924391551" watchObservedRunningTime="2025-11-21 20:28:34.747099222 +0000 UTC m=+1319.933284266" Nov 21 20:28:34 crc kubenswrapper[4727]: I1121 20:28:34.758690 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5f6b556667-ldkfv" event={"ID":"bcab1adc-1286-4341-aebc-4a4c0821aba8","Type":"ContainerStarted","Data":"03596a81d87b234950cc47c01d71b0c36edf7f8e7420dec9e3e84eab02eaae31"} Nov 21 20:28:34 crc kubenswrapper[4727]: I1121 20:28:34.758792 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-5f6b556667-ldkfv" Nov 21 20:28:34 crc kubenswrapper[4727]: I1121 20:28:34.758805 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5f6b556667-ldkfv" event={"ID":"bcab1adc-1286-4341-aebc-4a4c0821aba8","Type":"ContainerStarted","Data":"e35938289c869c28e1294f063e9699e17dee55d82dd6367726c9d285572257ac"} Nov 21 20:28:34 crc kubenswrapper[4727]: I1121 20:28:34.802475 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6c5d5d79b4-sdspk"] Nov 21 20:28:34 crc kubenswrapper[4727]: I1121 20:28:34.849034 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-6c5d5d79b4-sdspk"] Nov 21 20:28:34 crc kubenswrapper[4727]: I1121 20:28:34.867976 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-5f6b556667-ldkfv" podStartSLOduration=2.867934005 podStartE2EDuration="2.867934005s" podCreationTimestamp="2025-11-21 20:28:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:28:34.83134893 +0000 UTC m=+1320.017533974" watchObservedRunningTime="2025-11-21 20:28:34.867934005 +0000 UTC m=+1320.054119059" Nov 21 20:28:34 crc kubenswrapper[4727]: I1121 20:28:34.894040 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openstack/heat-cfnapi-847b7c5b5b-jfkv4" Nov 21 20:28:34 crc kubenswrapper[4727]: I1121 20:28:34.894098 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-847b7c5b5b-jfkv4" Nov 21 20:28:34 crc kubenswrapper[4727]: I1121 20:28:34.898626 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-6c7db4b498-hshbj" Nov 21 20:28:34 crc kubenswrapper[4727]: I1121 20:28:34.900141 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-6c7db4b498-hshbj" Nov 21 20:28:35 crc kubenswrapper[4727]: I1121 20:28:35.551699 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b697061-39b3-440d-bc9c-e31848742e7d" path="/var/lib/kubelet/pods/2b697061-39b3-440d-bc9c-e31848742e7d/volumes" Nov 21 20:28:35 crc kubenswrapper[4727]: I1121 20:28:35.772988 4727 scope.go:117] "RemoveContainer" containerID="cad46f15935d60f3a09d6e195e6a51d044f778bfd2cc3780661418ad8982adeb" Nov 21 20:28:35 crc kubenswrapper[4727]: E1121 20:28:35.773309 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-847b7c5b5b-jfkv4_openstack(99f58971-6e29-4870-b6b4-4768e2676799)\"" pod="openstack/heat-cfnapi-847b7c5b5b-jfkv4" podUID="99f58971-6e29-4870-b6b4-4768e2676799" Nov 21 20:28:35 crc kubenswrapper[4727]: I1121 20:28:35.773401 4727 scope.go:117] "RemoveContainer" containerID="3767758981b1f2c3c6ac8581ed85fc6c4495d1743787552535da663d1d931824" Nov 21 20:28:35 crc kubenswrapper[4727]: E1121 20:28:35.773637 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-6c7db4b498-hshbj_openstack(1d8a9a27-c894-4a70-9a9b-bd1089684c8f)\"" pod="openstack/heat-api-6c7db4b498-hshbj" 
podUID="1d8a9a27-c894-4a70-9a9b-bd1089684c8f" Nov 21 20:28:36 crc kubenswrapper[4727]: I1121 20:28:36.381748 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 20:28:36 crc kubenswrapper[4727]: I1121 20:28:36.506151 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/413b8645-2f23-45ae-803d-bc0c140ad29f-sg-core-conf-yaml\") pod \"413b8645-2f23-45ae-803d-bc0c140ad29f\" (UID: \"413b8645-2f23-45ae-803d-bc0c140ad29f\") " Nov 21 20:28:36 crc kubenswrapper[4727]: I1121 20:28:36.506213 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/413b8645-2f23-45ae-803d-bc0c140ad29f-log-httpd\") pod \"413b8645-2f23-45ae-803d-bc0c140ad29f\" (UID: \"413b8645-2f23-45ae-803d-bc0c140ad29f\") " Nov 21 20:28:36 crc kubenswrapper[4727]: I1121 20:28:36.506438 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/413b8645-2f23-45ae-803d-bc0c140ad29f-config-data\") pod \"413b8645-2f23-45ae-803d-bc0c140ad29f\" (UID: \"413b8645-2f23-45ae-803d-bc0c140ad29f\") " Nov 21 20:28:36 crc kubenswrapper[4727]: I1121 20:28:36.506461 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/413b8645-2f23-45ae-803d-bc0c140ad29f-combined-ca-bundle\") pod \"413b8645-2f23-45ae-803d-bc0c140ad29f\" (UID: \"413b8645-2f23-45ae-803d-bc0c140ad29f\") " Nov 21 20:28:36 crc kubenswrapper[4727]: I1121 20:28:36.506503 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/413b8645-2f23-45ae-803d-bc0c140ad29f-scripts\") pod \"413b8645-2f23-45ae-803d-bc0c140ad29f\" (UID: \"413b8645-2f23-45ae-803d-bc0c140ad29f\") " Nov 21 20:28:36 crc 
kubenswrapper[4727]: I1121 20:28:36.506534 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8cf48\" (UniqueName: \"kubernetes.io/projected/413b8645-2f23-45ae-803d-bc0c140ad29f-kube-api-access-8cf48\") pod \"413b8645-2f23-45ae-803d-bc0c140ad29f\" (UID: \"413b8645-2f23-45ae-803d-bc0c140ad29f\") " Nov 21 20:28:36 crc kubenswrapper[4727]: I1121 20:28:36.506618 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/413b8645-2f23-45ae-803d-bc0c140ad29f-run-httpd\") pod \"413b8645-2f23-45ae-803d-bc0c140ad29f\" (UID: \"413b8645-2f23-45ae-803d-bc0c140ad29f\") " Nov 21 20:28:36 crc kubenswrapper[4727]: I1121 20:28:36.507297 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/413b8645-2f23-45ae-803d-bc0c140ad29f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "413b8645-2f23-45ae-803d-bc0c140ad29f" (UID: "413b8645-2f23-45ae-803d-bc0c140ad29f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:28:36 crc kubenswrapper[4727]: I1121 20:28:36.507314 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/413b8645-2f23-45ae-803d-bc0c140ad29f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "413b8645-2f23-45ae-803d-bc0c140ad29f" (UID: "413b8645-2f23-45ae-803d-bc0c140ad29f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:28:36 crc kubenswrapper[4727]: I1121 20:28:36.512230 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/413b8645-2f23-45ae-803d-bc0c140ad29f-scripts" (OuterVolumeSpecName: "scripts") pod "413b8645-2f23-45ae-803d-bc0c140ad29f" (UID: "413b8645-2f23-45ae-803d-bc0c140ad29f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:28:36 crc kubenswrapper[4727]: I1121 20:28:36.512694 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/413b8645-2f23-45ae-803d-bc0c140ad29f-kube-api-access-8cf48" (OuterVolumeSpecName: "kube-api-access-8cf48") pod "413b8645-2f23-45ae-803d-bc0c140ad29f" (UID: "413b8645-2f23-45ae-803d-bc0c140ad29f"). InnerVolumeSpecName "kube-api-access-8cf48". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:28:36 crc kubenswrapper[4727]: I1121 20:28:36.551061 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/413b8645-2f23-45ae-803d-bc0c140ad29f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "413b8645-2f23-45ae-803d-bc0c140ad29f" (UID: "413b8645-2f23-45ae-803d-bc0c140ad29f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:28:36 crc kubenswrapper[4727]: I1121 20:28:36.608928 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/413b8645-2f23-45ae-803d-bc0c140ad29f-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:36 crc kubenswrapper[4727]: I1121 20:28:36.610267 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8cf48\" (UniqueName: \"kubernetes.io/projected/413b8645-2f23-45ae-803d-bc0c140ad29f-kube-api-access-8cf48\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:36 crc kubenswrapper[4727]: I1121 20:28:36.610335 4727 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/413b8645-2f23-45ae-803d-bc0c140ad29f-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:36 crc kubenswrapper[4727]: I1121 20:28:36.610394 4727 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/413b8645-2f23-45ae-803d-bc0c140ad29f-sg-core-conf-yaml\") on node 
\"crc\" DevicePath \"\"" Nov 21 20:28:36 crc kubenswrapper[4727]: I1121 20:28:36.610448 4727 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/413b8645-2f23-45ae-803d-bc0c140ad29f-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:36 crc kubenswrapper[4727]: I1121 20:28:36.616525 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/413b8645-2f23-45ae-803d-bc0c140ad29f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "413b8645-2f23-45ae-803d-bc0c140ad29f" (UID: "413b8645-2f23-45ae-803d-bc0c140ad29f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:28:36 crc kubenswrapper[4727]: I1121 20:28:36.672624 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/413b8645-2f23-45ae-803d-bc0c140ad29f-config-data" (OuterVolumeSpecName: "config-data") pod "413b8645-2f23-45ae-803d-bc0c140ad29f" (UID: "413b8645-2f23-45ae-803d-bc0c140ad29f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:28:36 crc kubenswrapper[4727]: I1121 20:28:36.712639 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/413b8645-2f23-45ae-803d-bc0c140ad29f-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:36 crc kubenswrapper[4727]: I1121 20:28:36.712984 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/413b8645-2f23-45ae-803d-bc0c140ad29f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:36 crc kubenswrapper[4727]: I1121 20:28:36.788176 4727 generic.go:334] "Generic (PLEG): container finished" podID="413b8645-2f23-45ae-803d-bc0c140ad29f" containerID="57a439357d6da7ce7f4900417efb34bceaa46c480206725ededd77693bfe6a23" exitCode=0 Nov 21 20:28:36 crc kubenswrapper[4727]: I1121 20:28:36.788351 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"413b8645-2f23-45ae-803d-bc0c140ad29f","Type":"ContainerDied","Data":"57a439357d6da7ce7f4900417efb34bceaa46c480206725ededd77693bfe6a23"} Nov 21 20:28:36 crc kubenswrapper[4727]: I1121 20:28:36.788820 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"413b8645-2f23-45ae-803d-bc0c140ad29f","Type":"ContainerDied","Data":"87c56eb050a62f6bb9c9b9161a52a598b11a8728ae216a15df516429d8868100"} Nov 21 20:28:36 crc kubenswrapper[4727]: I1121 20:28:36.788841 4727 scope.go:117] "RemoveContainer" containerID="0edc1c507031c90514586e18f1e1353ff338acc50b5f2e15f80cf4d491fd205d" Nov 21 20:28:36 crc kubenswrapper[4727]: I1121 20:28:36.788461 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 21 20:28:36 crc kubenswrapper[4727]: I1121 20:28:36.789745 4727 scope.go:117] "RemoveContainer" containerID="cad46f15935d60f3a09d6e195e6a51d044f778bfd2cc3780661418ad8982adeb" Nov 21 20:28:36 crc kubenswrapper[4727]: E1121 20:28:36.790156 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-847b7c5b5b-jfkv4_openstack(99f58971-6e29-4870-b6b4-4768e2676799)\"" pod="openstack/heat-cfnapi-847b7c5b5b-jfkv4" podUID="99f58971-6e29-4870-b6b4-4768e2676799" Nov 21 20:28:36 crc kubenswrapper[4727]: I1121 20:28:36.791002 4727 scope.go:117] "RemoveContainer" containerID="3767758981b1f2c3c6ac8581ed85fc6c4495d1743787552535da663d1d931824" Nov 21 20:28:36 crc kubenswrapper[4727]: E1121 20:28:36.791327 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-6c7db4b498-hshbj_openstack(1d8a9a27-c894-4a70-9a9b-bd1089684c8f)\"" pod="openstack/heat-api-6c7db4b498-hshbj" podUID="1d8a9a27-c894-4a70-9a9b-bd1089684c8f" Nov 21 20:28:36 crc kubenswrapper[4727]: I1121 20:28:36.818326 4727 scope.go:117] "RemoveContainer" containerID="4a603c0fa8cc3c9131ba9ed4a536b15484cb7945546e61bc46b106bee070b255" Nov 21 20:28:36 crc kubenswrapper[4727]: I1121 20:28:36.831813 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 21 20:28:36 crc kubenswrapper[4727]: I1121 20:28:36.853062 4727 scope.go:117] "RemoveContainer" containerID="a6bc975e9386bea7bf3b002da225cb31f8ad2a23845416069c2a2c70bf4f1fb0" Nov 21 20:28:36 crc kubenswrapper[4727]: I1121 20:28:36.857280 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 21 20:28:36 crc kubenswrapper[4727]: I1121 20:28:36.872347 4727 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 21 20:28:36 crc kubenswrapper[4727]: E1121 20:28:36.873487 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="413b8645-2f23-45ae-803d-bc0c140ad29f" containerName="proxy-httpd" Nov 21 20:28:36 crc kubenswrapper[4727]: I1121 20:28:36.873654 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="413b8645-2f23-45ae-803d-bc0c140ad29f" containerName="proxy-httpd" Nov 21 20:28:36 crc kubenswrapper[4727]: E1121 20:28:36.873747 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="413b8645-2f23-45ae-803d-bc0c140ad29f" containerName="ceilometer-notification-agent" Nov 21 20:28:36 crc kubenswrapper[4727]: I1121 20:28:36.873810 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="413b8645-2f23-45ae-803d-bc0c140ad29f" containerName="ceilometer-notification-agent" Nov 21 20:28:36 crc kubenswrapper[4727]: E1121 20:28:36.873871 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="413b8645-2f23-45ae-803d-bc0c140ad29f" containerName="ceilometer-central-agent" Nov 21 20:28:36 crc kubenswrapper[4727]: I1121 20:28:36.873944 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="413b8645-2f23-45ae-803d-bc0c140ad29f" containerName="ceilometer-central-agent" Nov 21 20:28:36 crc kubenswrapper[4727]: E1121 20:28:36.874072 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="413b8645-2f23-45ae-803d-bc0c140ad29f" containerName="sg-core" Nov 21 20:28:36 crc kubenswrapper[4727]: I1121 20:28:36.874149 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="413b8645-2f23-45ae-803d-bc0c140ad29f" containerName="sg-core" Nov 21 20:28:36 crc kubenswrapper[4727]: E1121 20:28:36.874237 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b697061-39b3-440d-bc9c-e31848742e7d" containerName="heat-api" Nov 21 20:28:36 crc kubenswrapper[4727]: I1121 20:28:36.874298 4727 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="2b697061-39b3-440d-bc9c-e31848742e7d" containerName="heat-api" Nov 21 20:28:36 crc kubenswrapper[4727]: I1121 20:28:36.874638 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="413b8645-2f23-45ae-803d-bc0c140ad29f" containerName="ceilometer-notification-agent" Nov 21 20:28:36 crc kubenswrapper[4727]: I1121 20:28:36.874757 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="413b8645-2f23-45ae-803d-bc0c140ad29f" containerName="ceilometer-central-agent" Nov 21 20:28:36 crc kubenswrapper[4727]: I1121 20:28:36.874863 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="413b8645-2f23-45ae-803d-bc0c140ad29f" containerName="proxy-httpd" Nov 21 20:28:36 crc kubenswrapper[4727]: I1121 20:28:36.874932 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b697061-39b3-440d-bc9c-e31848742e7d" containerName="heat-api" Nov 21 20:28:36 crc kubenswrapper[4727]: I1121 20:28:36.875035 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="413b8645-2f23-45ae-803d-bc0c140ad29f" containerName="sg-core" Nov 21 20:28:36 crc kubenswrapper[4727]: I1121 20:28:36.879345 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 21 20:28:36 crc kubenswrapper[4727]: I1121 20:28:36.885022 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 20:28:36 crc kubenswrapper[4727]: I1121 20:28:36.885083 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 21 20:28:36 crc kubenswrapper[4727]: I1121 20:28:36.885122 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 21 20:28:36 crc kubenswrapper[4727]: I1121 20:28:36.899691 4727 scope.go:117] "RemoveContainer" containerID="57a439357d6da7ce7f4900417efb34bceaa46c480206725ededd77693bfe6a23" Nov 21 20:28:36 crc kubenswrapper[4727]: I1121 20:28:36.997087 4727 scope.go:117] "RemoveContainer" containerID="0edc1c507031c90514586e18f1e1353ff338acc50b5f2e15f80cf4d491fd205d" Nov 21 20:28:36 crc kubenswrapper[4727]: E1121 20:28:36.997689 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0edc1c507031c90514586e18f1e1353ff338acc50b5f2e15f80cf4d491fd205d\": container with ID starting with 0edc1c507031c90514586e18f1e1353ff338acc50b5f2e15f80cf4d491fd205d not found: ID does not exist" containerID="0edc1c507031c90514586e18f1e1353ff338acc50b5f2e15f80cf4d491fd205d" Nov 21 20:28:36 crc kubenswrapper[4727]: I1121 20:28:36.997725 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0edc1c507031c90514586e18f1e1353ff338acc50b5f2e15f80cf4d491fd205d"} err="failed to get container status \"0edc1c507031c90514586e18f1e1353ff338acc50b5f2e15f80cf4d491fd205d\": rpc error: code = NotFound desc = could not find container \"0edc1c507031c90514586e18f1e1353ff338acc50b5f2e15f80cf4d491fd205d\": container with ID starting with 0edc1c507031c90514586e18f1e1353ff338acc50b5f2e15f80cf4d491fd205d not found: ID does not exist" Nov 21 20:28:36 crc kubenswrapper[4727]: I1121 
20:28:36.997749 4727 scope.go:117] "RemoveContainer" containerID="4a603c0fa8cc3c9131ba9ed4a536b15484cb7945546e61bc46b106bee070b255" Nov 21 20:28:36 crc kubenswrapper[4727]: E1121 20:28:36.997924 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a603c0fa8cc3c9131ba9ed4a536b15484cb7945546e61bc46b106bee070b255\": container with ID starting with 4a603c0fa8cc3c9131ba9ed4a536b15484cb7945546e61bc46b106bee070b255 not found: ID does not exist" containerID="4a603c0fa8cc3c9131ba9ed4a536b15484cb7945546e61bc46b106bee070b255" Nov 21 20:28:36 crc kubenswrapper[4727]: I1121 20:28:36.997952 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a603c0fa8cc3c9131ba9ed4a536b15484cb7945546e61bc46b106bee070b255"} err="failed to get container status \"4a603c0fa8cc3c9131ba9ed4a536b15484cb7945546e61bc46b106bee070b255\": rpc error: code = NotFound desc = could not find container \"4a603c0fa8cc3c9131ba9ed4a536b15484cb7945546e61bc46b106bee070b255\": container with ID starting with 4a603c0fa8cc3c9131ba9ed4a536b15484cb7945546e61bc46b106bee070b255 not found: ID does not exist" Nov 21 20:28:36 crc kubenswrapper[4727]: I1121 20:28:36.997980 4727 scope.go:117] "RemoveContainer" containerID="a6bc975e9386bea7bf3b002da225cb31f8ad2a23845416069c2a2c70bf4f1fb0" Nov 21 20:28:36 crc kubenswrapper[4727]: E1121 20:28:36.998198 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6bc975e9386bea7bf3b002da225cb31f8ad2a23845416069c2a2c70bf4f1fb0\": container with ID starting with a6bc975e9386bea7bf3b002da225cb31f8ad2a23845416069c2a2c70bf4f1fb0 not found: ID does not exist" containerID="a6bc975e9386bea7bf3b002da225cb31f8ad2a23845416069c2a2c70bf4f1fb0" Nov 21 20:28:36 crc kubenswrapper[4727]: I1121 20:28:36.998225 4727 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a6bc975e9386bea7bf3b002da225cb31f8ad2a23845416069c2a2c70bf4f1fb0"} err="failed to get container status \"a6bc975e9386bea7bf3b002da225cb31f8ad2a23845416069c2a2c70bf4f1fb0\": rpc error: code = NotFound desc = could not find container \"a6bc975e9386bea7bf3b002da225cb31f8ad2a23845416069c2a2c70bf4f1fb0\": container with ID starting with a6bc975e9386bea7bf3b002da225cb31f8ad2a23845416069c2a2c70bf4f1fb0 not found: ID does not exist" Nov 21 20:28:36 crc kubenswrapper[4727]: I1121 20:28:36.998243 4727 scope.go:117] "RemoveContainer" containerID="57a439357d6da7ce7f4900417efb34bceaa46c480206725ededd77693bfe6a23" Nov 21 20:28:36 crc kubenswrapper[4727]: E1121 20:28:36.998444 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57a439357d6da7ce7f4900417efb34bceaa46c480206725ededd77693bfe6a23\": container with ID starting with 57a439357d6da7ce7f4900417efb34bceaa46c480206725ededd77693bfe6a23 not found: ID does not exist" containerID="57a439357d6da7ce7f4900417efb34bceaa46c480206725ededd77693bfe6a23" Nov 21 20:28:36 crc kubenswrapper[4727]: I1121 20:28:36.998474 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57a439357d6da7ce7f4900417efb34bceaa46c480206725ededd77693bfe6a23"} err="failed to get container status \"57a439357d6da7ce7f4900417efb34bceaa46c480206725ededd77693bfe6a23\": rpc error: code = NotFound desc = could not find container \"57a439357d6da7ce7f4900417efb34bceaa46c480206725ededd77693bfe6a23\": container with ID starting with 57a439357d6da7ce7f4900417efb34bceaa46c480206725ededd77693bfe6a23 not found: ID does not exist" Nov 21 20:28:37 crc kubenswrapper[4727]: I1121 20:28:37.021189 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/410a7b85-5b1e-40b6-87b4-22af86338c90-sg-core-conf-yaml\") pod \"ceilometer-0\" 
(UID: \"410a7b85-5b1e-40b6-87b4-22af86338c90\") " pod="openstack/ceilometer-0" Nov 21 20:28:37 crc kubenswrapper[4727]: I1121 20:28:37.021293 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/410a7b85-5b1e-40b6-87b4-22af86338c90-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"410a7b85-5b1e-40b6-87b4-22af86338c90\") " pod="openstack/ceilometer-0" Nov 21 20:28:37 crc kubenswrapper[4727]: I1121 20:28:37.021328 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/410a7b85-5b1e-40b6-87b4-22af86338c90-log-httpd\") pod \"ceilometer-0\" (UID: \"410a7b85-5b1e-40b6-87b4-22af86338c90\") " pod="openstack/ceilometer-0" Nov 21 20:28:37 crc kubenswrapper[4727]: I1121 20:28:37.021478 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kpt2\" (UniqueName: \"kubernetes.io/projected/410a7b85-5b1e-40b6-87b4-22af86338c90-kube-api-access-6kpt2\") pod \"ceilometer-0\" (UID: \"410a7b85-5b1e-40b6-87b4-22af86338c90\") " pod="openstack/ceilometer-0" Nov 21 20:28:37 crc kubenswrapper[4727]: I1121 20:28:37.021531 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/410a7b85-5b1e-40b6-87b4-22af86338c90-config-data\") pod \"ceilometer-0\" (UID: \"410a7b85-5b1e-40b6-87b4-22af86338c90\") " pod="openstack/ceilometer-0" Nov 21 20:28:37 crc kubenswrapper[4727]: I1121 20:28:37.021716 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/410a7b85-5b1e-40b6-87b4-22af86338c90-run-httpd\") pod \"ceilometer-0\" (UID: \"410a7b85-5b1e-40b6-87b4-22af86338c90\") " pod="openstack/ceilometer-0" Nov 21 20:28:37 crc kubenswrapper[4727]: I1121 
20:28:37.021805 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/410a7b85-5b1e-40b6-87b4-22af86338c90-scripts\") pod \"ceilometer-0\" (UID: \"410a7b85-5b1e-40b6-87b4-22af86338c90\") " pod="openstack/ceilometer-0" Nov 21 20:28:37 crc kubenswrapper[4727]: I1121 20:28:37.124470 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kpt2\" (UniqueName: \"kubernetes.io/projected/410a7b85-5b1e-40b6-87b4-22af86338c90-kube-api-access-6kpt2\") pod \"ceilometer-0\" (UID: \"410a7b85-5b1e-40b6-87b4-22af86338c90\") " pod="openstack/ceilometer-0" Nov 21 20:28:37 crc kubenswrapper[4727]: I1121 20:28:37.124544 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/410a7b85-5b1e-40b6-87b4-22af86338c90-config-data\") pod \"ceilometer-0\" (UID: \"410a7b85-5b1e-40b6-87b4-22af86338c90\") " pod="openstack/ceilometer-0" Nov 21 20:28:37 crc kubenswrapper[4727]: I1121 20:28:37.124624 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/410a7b85-5b1e-40b6-87b4-22af86338c90-run-httpd\") pod \"ceilometer-0\" (UID: \"410a7b85-5b1e-40b6-87b4-22af86338c90\") " pod="openstack/ceilometer-0" Nov 21 20:28:37 crc kubenswrapper[4727]: I1121 20:28:37.124648 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/410a7b85-5b1e-40b6-87b4-22af86338c90-scripts\") pod \"ceilometer-0\" (UID: \"410a7b85-5b1e-40b6-87b4-22af86338c90\") " pod="openstack/ceilometer-0" Nov 21 20:28:37 crc kubenswrapper[4727]: I1121 20:28:37.124688 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/410a7b85-5b1e-40b6-87b4-22af86338c90-sg-core-conf-yaml\") pod \"ceilometer-0\" 
(UID: \"410a7b85-5b1e-40b6-87b4-22af86338c90\") " pod="openstack/ceilometer-0" Nov 21 20:28:37 crc kubenswrapper[4727]: I1121 20:28:37.124762 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/410a7b85-5b1e-40b6-87b4-22af86338c90-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"410a7b85-5b1e-40b6-87b4-22af86338c90\") " pod="openstack/ceilometer-0" Nov 21 20:28:37 crc kubenswrapper[4727]: I1121 20:28:37.124804 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/410a7b85-5b1e-40b6-87b4-22af86338c90-log-httpd\") pod \"ceilometer-0\" (UID: \"410a7b85-5b1e-40b6-87b4-22af86338c90\") " pod="openstack/ceilometer-0" Nov 21 20:28:37 crc kubenswrapper[4727]: I1121 20:28:37.125371 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/410a7b85-5b1e-40b6-87b4-22af86338c90-log-httpd\") pod \"ceilometer-0\" (UID: \"410a7b85-5b1e-40b6-87b4-22af86338c90\") " pod="openstack/ceilometer-0" Nov 21 20:28:37 crc kubenswrapper[4727]: I1121 20:28:37.125502 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/410a7b85-5b1e-40b6-87b4-22af86338c90-run-httpd\") pod \"ceilometer-0\" (UID: \"410a7b85-5b1e-40b6-87b4-22af86338c90\") " pod="openstack/ceilometer-0" Nov 21 20:28:37 crc kubenswrapper[4727]: I1121 20:28:37.128454 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/410a7b85-5b1e-40b6-87b4-22af86338c90-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"410a7b85-5b1e-40b6-87b4-22af86338c90\") " pod="openstack/ceilometer-0" Nov 21 20:28:37 crc kubenswrapper[4727]: I1121 20:28:37.129596 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/410a7b85-5b1e-40b6-87b4-22af86338c90-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"410a7b85-5b1e-40b6-87b4-22af86338c90\") " pod="openstack/ceilometer-0" Nov 21 20:28:37 crc kubenswrapper[4727]: I1121 20:28:37.130222 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/410a7b85-5b1e-40b6-87b4-22af86338c90-config-data\") pod \"ceilometer-0\" (UID: \"410a7b85-5b1e-40b6-87b4-22af86338c90\") " pod="openstack/ceilometer-0" Nov 21 20:28:37 crc kubenswrapper[4727]: I1121 20:28:37.141444 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/410a7b85-5b1e-40b6-87b4-22af86338c90-scripts\") pod \"ceilometer-0\" (UID: \"410a7b85-5b1e-40b6-87b4-22af86338c90\") " pod="openstack/ceilometer-0" Nov 21 20:28:37 crc kubenswrapper[4727]: I1121 20:28:37.143651 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kpt2\" (UniqueName: \"kubernetes.io/projected/410a7b85-5b1e-40b6-87b4-22af86338c90-kube-api-access-6kpt2\") pod \"ceilometer-0\" (UID: \"410a7b85-5b1e-40b6-87b4-22af86338c90\") " pod="openstack/ceilometer-0" Nov 21 20:28:37 crc kubenswrapper[4727]: I1121 20:28:37.233136 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 21 20:28:37 crc kubenswrapper[4727]: I1121 20:28:37.514611 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="413b8645-2f23-45ae-803d-bc0c140ad29f" path="/var/lib/kubelet/pods/413b8645-2f23-45ae-803d-bc0c140ad29f/volumes" Nov 21 20:28:37 crc kubenswrapper[4727]: I1121 20:28:37.751805 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 20:28:37 crc kubenswrapper[4727]: I1121 20:28:37.802286 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"410a7b85-5b1e-40b6-87b4-22af86338c90","Type":"ContainerStarted","Data":"585e5f39b22cca737309495608be310d09606676e4e2490c40d90fe3c43555d1"} Nov 21 20:28:37 crc kubenswrapper[4727]: I1121 20:28:37.992254 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 21 20:28:38 crc kubenswrapper[4727]: I1121 20:28:38.130887 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-66b5cc9484-l2wsd" podUID="5f22c01e-e191-42ab-8e0e-f678abc961b2" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.215:8000/healthcheck\": read tcp 10.217.0.2:34024->10.217.0.215:8000: read: connection reset by peer" Nov 21 20:28:38 crc kubenswrapper[4727]: E1121 20:28:38.437205 4727 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f22c01e_e191_42ab_8e0e_f678abc961b2.slice/crio-conmon-b43873a7b03f03aa9d59b39061f983e6c9338fd44b22a32d0bf7ac1a4ce5143c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f22c01e_e191_42ab_8e0e_f678abc961b2.slice/crio-b43873a7b03f03aa9d59b39061f983e6c9338fd44b22a32d0bf7ac1a4ce5143c.scope\": RecentStats: unable to find data in memory cache]" Nov 21 20:28:38 crc kubenswrapper[4727]: 
I1121 20:28:38.819238 4727 generic.go:334] "Generic (PLEG): container finished" podID="5f22c01e-e191-42ab-8e0e-f678abc961b2" containerID="b43873a7b03f03aa9d59b39061f983e6c9338fd44b22a32d0bf7ac1a4ce5143c" exitCode=0 Nov 21 20:28:38 crc kubenswrapper[4727]: I1121 20:28:38.819282 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-66b5cc9484-l2wsd" event={"ID":"5f22c01e-e191-42ab-8e0e-f678abc961b2","Type":"ContainerDied","Data":"b43873a7b03f03aa9d59b39061f983e6c9338fd44b22a32d0bf7ac1a4ce5143c"} Nov 21 20:28:40 crc kubenswrapper[4727]: I1121 20:28:40.990207 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-6d6ff87d5b-5qckr" Nov 21 20:28:41 crc kubenswrapper[4727]: I1121 20:28:41.391109 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-66b5cc9484-l2wsd" podUID="5f22c01e-e191-42ab-8e0e-f678abc961b2" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.215:8000/healthcheck\": dial tcp 10.217.0.215:8000: connect: connection refused" Nov 21 20:28:43 crc kubenswrapper[4727]: I1121 20:28:43.120519 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-66b5cc9484-l2wsd" Nov 21 20:28:43 crc kubenswrapper[4727]: I1121 20:28:43.309528 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f22c01e-e191-42ab-8e0e-f678abc961b2-config-data\") pod \"5f22c01e-e191-42ab-8e0e-f678abc961b2\" (UID: \"5f22c01e-e191-42ab-8e0e-f678abc961b2\") " Nov 21 20:28:43 crc kubenswrapper[4727]: I1121 20:28:43.310083 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5f22c01e-e191-42ab-8e0e-f678abc961b2-config-data-custom\") pod \"5f22c01e-e191-42ab-8e0e-f678abc961b2\" (UID: \"5f22c01e-e191-42ab-8e0e-f678abc961b2\") " Nov 21 20:28:43 crc kubenswrapper[4727]: I1121 20:28:43.310168 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dw98b\" (UniqueName: \"kubernetes.io/projected/5f22c01e-e191-42ab-8e0e-f678abc961b2-kube-api-access-dw98b\") pod \"5f22c01e-e191-42ab-8e0e-f678abc961b2\" (UID: \"5f22c01e-e191-42ab-8e0e-f678abc961b2\") " Nov 21 20:28:43 crc kubenswrapper[4727]: I1121 20:28:43.310317 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f22c01e-e191-42ab-8e0e-f678abc961b2-combined-ca-bundle\") pod \"5f22c01e-e191-42ab-8e0e-f678abc961b2\" (UID: \"5f22c01e-e191-42ab-8e0e-f678abc961b2\") " Nov 21 20:28:43 crc kubenswrapper[4727]: I1121 20:28:43.318314 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f22c01e-e191-42ab-8e0e-f678abc961b2-kube-api-access-dw98b" (OuterVolumeSpecName: "kube-api-access-dw98b") pod "5f22c01e-e191-42ab-8e0e-f678abc961b2" (UID: "5f22c01e-e191-42ab-8e0e-f678abc961b2"). InnerVolumeSpecName "kube-api-access-dw98b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:28:43 crc kubenswrapper[4727]: I1121 20:28:43.325061 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f22c01e-e191-42ab-8e0e-f678abc961b2-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5f22c01e-e191-42ab-8e0e-f678abc961b2" (UID: "5f22c01e-e191-42ab-8e0e-f678abc961b2"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:28:43 crc kubenswrapper[4727]: I1121 20:28:43.351929 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f22c01e-e191-42ab-8e0e-f678abc961b2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5f22c01e-e191-42ab-8e0e-f678abc961b2" (UID: "5f22c01e-e191-42ab-8e0e-f678abc961b2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:28:43 crc kubenswrapper[4727]: I1121 20:28:43.396811 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f22c01e-e191-42ab-8e0e-f678abc961b2-config-data" (OuterVolumeSpecName: "config-data") pod "5f22c01e-e191-42ab-8e0e-f678abc961b2" (UID: "5f22c01e-e191-42ab-8e0e-f678abc961b2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:28:43 crc kubenswrapper[4727]: I1121 20:28:43.414136 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f22c01e-e191-42ab-8e0e-f678abc961b2-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:43 crc kubenswrapper[4727]: I1121 20:28:43.414190 4727 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5f22c01e-e191-42ab-8e0e-f678abc961b2-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:43 crc kubenswrapper[4727]: I1121 20:28:43.414211 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dw98b\" (UniqueName: \"kubernetes.io/projected/5f22c01e-e191-42ab-8e0e-f678abc961b2-kube-api-access-dw98b\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:43 crc kubenswrapper[4727]: I1121 20:28:43.414222 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f22c01e-e191-42ab-8e0e-f678abc961b2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:43 crc kubenswrapper[4727]: I1121 20:28:43.897772 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-66b5cc9484-l2wsd" Nov 21 20:28:43 crc kubenswrapper[4727]: I1121 20:28:43.897766 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-66b5cc9484-l2wsd" event={"ID":"5f22c01e-e191-42ab-8e0e-f678abc961b2","Type":"ContainerDied","Data":"f39998150717c1e79134ea75723e0ee569dea797b5a524973b84543541870714"} Nov 21 20:28:43 crc kubenswrapper[4727]: I1121 20:28:43.898232 4727 scope.go:117] "RemoveContainer" containerID="b43873a7b03f03aa9d59b39061f983e6c9338fd44b22a32d0bf7ac1a4ce5143c" Nov 21 20:28:43 crc kubenswrapper[4727]: I1121 20:28:43.900127 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-qh8s2" event={"ID":"7ff94a3a-a5ae-42b5-a316-e536cf0d3eda","Type":"ContainerStarted","Data":"cebacfb63f66e9a6c0a415ccde1c9322911f3be1572378c0801d63bbe3e284a4"} Nov 21 20:28:43 crc kubenswrapper[4727]: I1121 20:28:43.903311 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"410a7b85-5b1e-40b6-87b4-22af86338c90","Type":"ContainerStarted","Data":"346e6ca505dcfef358a1415ce01e976aaa495d423df0a2aadc76cb082ee5a493"} Nov 21 20:28:43 crc kubenswrapper[4727]: I1121 20:28:43.928719 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-qh8s2" podStartSLOduration=3.041018503 podStartE2EDuration="12.928701196s" podCreationTimestamp="2025-11-21 20:28:31 +0000 UTC" firstStartedPulling="2025-11-21 20:28:32.939503208 +0000 UTC m=+1318.125688252" lastFinishedPulling="2025-11-21 20:28:42.827185911 +0000 UTC m=+1328.013370945" observedRunningTime="2025-11-21 20:28:43.926347309 +0000 UTC m=+1329.112532363" watchObservedRunningTime="2025-11-21 20:28:43.928701196 +0000 UTC m=+1329.114886240" Nov 21 20:28:43 crc kubenswrapper[4727]: I1121 20:28:43.954931 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-66b5cc9484-l2wsd"] Nov 21 20:28:43 crc 
kubenswrapper[4727]: I1121 20:28:43.968769 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-66b5cc9484-l2wsd"] Nov 21 20:28:44 crc kubenswrapper[4727]: I1121 20:28:44.446579 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-7d97974c6c-c2ppt" Nov 21 20:28:44 crc kubenswrapper[4727]: I1121 20:28:44.592228 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-847b7c5b5b-jfkv4"] Nov 21 20:28:44 crc kubenswrapper[4727]: I1121 20:28:44.773163 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-5f6b556667-ldkfv" Nov 21 20:28:44 crc kubenswrapper[4727]: I1121 20:28:44.891017 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6c7db4b498-hshbj"] Nov 21 20:28:44 crc kubenswrapper[4727]: I1121 20:28:44.939414 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"410a7b85-5b1e-40b6-87b4-22af86338c90","Type":"ContainerStarted","Data":"26a7707016903bac495bb9720a0030211dbb0dea6f90951ca8c5a3c20d7f2796"} Nov 21 20:28:44 crc kubenswrapper[4727]: I1121 20:28:44.939451 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"410a7b85-5b1e-40b6-87b4-22af86338c90","Type":"ContainerStarted","Data":"dafc1bc2d86fb4b96cafb014c321c5e86c507a081527c5ddbe8a948a42bc0dad"} Nov 21 20:28:45 crc kubenswrapper[4727]: I1121 20:28:45.101170 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-847b7c5b5b-jfkv4" Nov 21 20:28:45 crc kubenswrapper[4727]: I1121 20:28:45.262667 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxz4x\" (UniqueName: \"kubernetes.io/projected/99f58971-6e29-4870-b6b4-4768e2676799-kube-api-access-gxz4x\") pod \"99f58971-6e29-4870-b6b4-4768e2676799\" (UID: \"99f58971-6e29-4870-b6b4-4768e2676799\") " Nov 21 20:28:45 crc kubenswrapper[4727]: I1121 20:28:45.262937 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99f58971-6e29-4870-b6b4-4768e2676799-combined-ca-bundle\") pod \"99f58971-6e29-4870-b6b4-4768e2676799\" (UID: \"99f58971-6e29-4870-b6b4-4768e2676799\") " Nov 21 20:28:45 crc kubenswrapper[4727]: I1121 20:28:45.262980 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/99f58971-6e29-4870-b6b4-4768e2676799-config-data-custom\") pod \"99f58971-6e29-4870-b6b4-4768e2676799\" (UID: \"99f58971-6e29-4870-b6b4-4768e2676799\") " Nov 21 20:28:45 crc kubenswrapper[4727]: I1121 20:28:45.263041 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99f58971-6e29-4870-b6b4-4768e2676799-config-data\") pod \"99f58971-6e29-4870-b6b4-4768e2676799\" (UID: \"99f58971-6e29-4870-b6b4-4768e2676799\") " Nov 21 20:28:45 crc kubenswrapper[4727]: I1121 20:28:45.267704 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99f58971-6e29-4870-b6b4-4768e2676799-kube-api-access-gxz4x" (OuterVolumeSpecName: "kube-api-access-gxz4x") pod "99f58971-6e29-4870-b6b4-4768e2676799" (UID: "99f58971-6e29-4870-b6b4-4768e2676799"). InnerVolumeSpecName "kube-api-access-gxz4x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:28:45 crc kubenswrapper[4727]: I1121 20:28:45.268222 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99f58971-6e29-4870-b6b4-4768e2676799-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "99f58971-6e29-4870-b6b4-4768e2676799" (UID: "99f58971-6e29-4870-b6b4-4768e2676799"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:28:45 crc kubenswrapper[4727]: I1121 20:28:45.302412 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99f58971-6e29-4870-b6b4-4768e2676799-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "99f58971-6e29-4870-b6b4-4768e2676799" (UID: "99f58971-6e29-4870-b6b4-4768e2676799"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:28:45 crc kubenswrapper[4727]: I1121 20:28:45.326751 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99f58971-6e29-4870-b6b4-4768e2676799-config-data" (OuterVolumeSpecName: "config-data") pod "99f58971-6e29-4870-b6b4-4768e2676799" (UID: "99f58971-6e29-4870-b6b4-4768e2676799"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:28:45 crc kubenswrapper[4727]: I1121 20:28:45.353869 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6c7db4b498-hshbj" Nov 21 20:28:45 crc kubenswrapper[4727]: I1121 20:28:45.365801 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxz4x\" (UniqueName: \"kubernetes.io/projected/99f58971-6e29-4870-b6b4-4768e2676799-kube-api-access-gxz4x\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:45 crc kubenswrapper[4727]: I1121 20:28:45.365839 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99f58971-6e29-4870-b6b4-4768e2676799-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:45 crc kubenswrapper[4727]: I1121 20:28:45.365848 4727 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/99f58971-6e29-4870-b6b4-4768e2676799-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:45 crc kubenswrapper[4727]: I1121 20:28:45.365859 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99f58971-6e29-4870-b6b4-4768e2676799-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:45 crc kubenswrapper[4727]: I1121 20:28:45.467217 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzw5l\" (UniqueName: \"kubernetes.io/projected/1d8a9a27-c894-4a70-9a9b-bd1089684c8f-kube-api-access-bzw5l\") pod \"1d8a9a27-c894-4a70-9a9b-bd1089684c8f\" (UID: \"1d8a9a27-c894-4a70-9a9b-bd1089684c8f\") " Nov 21 20:28:45 crc kubenswrapper[4727]: I1121 20:28:45.467377 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d8a9a27-c894-4a70-9a9b-bd1089684c8f-combined-ca-bundle\") pod \"1d8a9a27-c894-4a70-9a9b-bd1089684c8f\" (UID: \"1d8a9a27-c894-4a70-9a9b-bd1089684c8f\") " Nov 21 20:28:45 crc kubenswrapper[4727]: I1121 20:28:45.467836 4727 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d8a9a27-c894-4a70-9a9b-bd1089684c8f-config-data\") pod \"1d8a9a27-c894-4a70-9a9b-bd1089684c8f\" (UID: \"1d8a9a27-c894-4a70-9a9b-bd1089684c8f\") " Nov 21 20:28:45 crc kubenswrapper[4727]: I1121 20:28:45.468224 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1d8a9a27-c894-4a70-9a9b-bd1089684c8f-config-data-custom\") pod \"1d8a9a27-c894-4a70-9a9b-bd1089684c8f\" (UID: \"1d8a9a27-c894-4a70-9a9b-bd1089684c8f\") " Nov 21 20:28:45 crc kubenswrapper[4727]: I1121 20:28:45.470157 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d8a9a27-c894-4a70-9a9b-bd1089684c8f-kube-api-access-bzw5l" (OuterVolumeSpecName: "kube-api-access-bzw5l") pod "1d8a9a27-c894-4a70-9a9b-bd1089684c8f" (UID: "1d8a9a27-c894-4a70-9a9b-bd1089684c8f"). InnerVolumeSpecName "kube-api-access-bzw5l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:28:45 crc kubenswrapper[4727]: I1121 20:28:45.476162 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d8a9a27-c894-4a70-9a9b-bd1089684c8f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1d8a9a27-c894-4a70-9a9b-bd1089684c8f" (UID: "1d8a9a27-c894-4a70-9a9b-bd1089684c8f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:28:45 crc kubenswrapper[4727]: I1121 20:28:45.494174 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d8a9a27-c894-4a70-9a9b-bd1089684c8f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1d8a9a27-c894-4a70-9a9b-bd1089684c8f" (UID: "1d8a9a27-c894-4a70-9a9b-bd1089684c8f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:28:45 crc kubenswrapper[4727]: I1121 20:28:45.516288 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f22c01e-e191-42ab-8e0e-f678abc961b2" path="/var/lib/kubelet/pods/5f22c01e-e191-42ab-8e0e-f678abc961b2/volumes" Nov 21 20:28:45 crc kubenswrapper[4727]: I1121 20:28:45.567532 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d8a9a27-c894-4a70-9a9b-bd1089684c8f-config-data" (OuterVolumeSpecName: "config-data") pod "1d8a9a27-c894-4a70-9a9b-bd1089684c8f" (UID: "1d8a9a27-c894-4a70-9a9b-bd1089684c8f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:28:45 crc kubenswrapper[4727]: I1121 20:28:45.571657 4727 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1d8a9a27-c894-4a70-9a9b-bd1089684c8f-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:45 crc kubenswrapper[4727]: I1121 20:28:45.578135 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bzw5l\" (UniqueName: \"kubernetes.io/projected/1d8a9a27-c894-4a70-9a9b-bd1089684c8f-kube-api-access-bzw5l\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:45 crc kubenswrapper[4727]: I1121 20:28:45.578153 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d8a9a27-c894-4a70-9a9b-bd1089684c8f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:45 crc kubenswrapper[4727]: I1121 20:28:45.578165 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d8a9a27-c894-4a70-9a9b-bd1089684c8f-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:45 crc kubenswrapper[4727]: I1121 20:28:45.963471 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6c7db4b498-hshbj" 
event={"ID":"1d8a9a27-c894-4a70-9a9b-bd1089684c8f","Type":"ContainerDied","Data":"2cdc07fa3e791a8367ffce04417b92aedcde12d37b56713ec2fe7e91ea072a9b"} Nov 21 20:28:45 crc kubenswrapper[4727]: I1121 20:28:45.963789 4727 scope.go:117] "RemoveContainer" containerID="3767758981b1f2c3c6ac8581ed85fc6c4495d1743787552535da663d1d931824" Nov 21 20:28:45 crc kubenswrapper[4727]: I1121 20:28:45.963488 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6c7db4b498-hshbj" Nov 21 20:28:45 crc kubenswrapper[4727]: I1121 20:28:45.970036 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-847b7c5b5b-jfkv4" event={"ID":"99f58971-6e29-4870-b6b4-4768e2676799","Type":"ContainerDied","Data":"ebf2705fb13ad8fea372f817918d2ae2ca6c180c34ce340d8c604fedae7f2b0c"} Nov 21 20:28:45 crc kubenswrapper[4727]: I1121 20:28:45.970091 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-847b7c5b5b-jfkv4" Nov 21 20:28:46 crc kubenswrapper[4727]: I1121 20:28:46.004610 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6c7db4b498-hshbj"] Nov 21 20:28:46 crc kubenswrapper[4727]: I1121 20:28:46.016052 4727 scope.go:117] "RemoveContainer" containerID="cad46f15935d60f3a09d6e195e6a51d044f778bfd2cc3780661418ad8982adeb" Nov 21 20:28:46 crc kubenswrapper[4727]: I1121 20:28:46.017671 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-6c7db4b498-hshbj"] Nov 21 20:28:46 crc kubenswrapper[4727]: I1121 20:28:46.032493 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-847b7c5b5b-jfkv4"] Nov 21 20:28:46 crc kubenswrapper[4727]: I1121 20:28:46.042784 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-847b7c5b5b-jfkv4"] Nov 21 20:28:46 crc kubenswrapper[4727]: I1121 20:28:46.987551 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"410a7b85-5b1e-40b6-87b4-22af86338c90","Type":"ContainerStarted","Data":"49ed8abb3dc6e496fa2541e5f60e3ba1a38f44b4ab47e891a9fcfd49d3590916"} Nov 21 20:28:46 crc kubenswrapper[4727]: I1121 20:28:46.987914 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="410a7b85-5b1e-40b6-87b4-22af86338c90" containerName="ceilometer-central-agent" containerID="cri-o://346e6ca505dcfef358a1415ce01e976aaa495d423df0a2aadc76cb082ee5a493" gracePeriod=30 Nov 21 20:28:46 crc kubenswrapper[4727]: I1121 20:28:46.988080 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="410a7b85-5b1e-40b6-87b4-22af86338c90" containerName="proxy-httpd" containerID="cri-o://49ed8abb3dc6e496fa2541e5f60e3ba1a38f44b4ab47e891a9fcfd49d3590916" gracePeriod=30 Nov 21 20:28:46 crc kubenswrapper[4727]: I1121 20:28:46.988142 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="410a7b85-5b1e-40b6-87b4-22af86338c90" containerName="sg-core" containerID="cri-o://26a7707016903bac495bb9720a0030211dbb0dea6f90951ca8c5a3c20d7f2796" gracePeriod=30 Nov 21 20:28:46 crc kubenswrapper[4727]: I1121 20:28:46.988185 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="410a7b85-5b1e-40b6-87b4-22af86338c90" containerName="ceilometer-notification-agent" containerID="cri-o://dafc1bc2d86fb4b96cafb014c321c5e86c507a081527c5ddbe8a948a42bc0dad" gracePeriod=30 Nov 21 20:28:46 crc kubenswrapper[4727]: I1121 20:28:46.987930 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 21 20:28:47 crc kubenswrapper[4727]: I1121 20:28:47.025896 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.926209281 podStartE2EDuration="11.025874043s" podCreationTimestamp="2025-11-21 20:28:36 +0000 UTC" 
firstStartedPulling="2025-11-21 20:28:37.774248716 +0000 UTC m=+1322.960433760" lastFinishedPulling="2025-11-21 20:28:45.873913488 +0000 UTC m=+1331.060098522" observedRunningTime="2025-11-21 20:28:47.024084679 +0000 UTC m=+1332.210269733" watchObservedRunningTime="2025-11-21 20:28:47.025874043 +0000 UTC m=+1332.212059087" Nov 21 20:28:47 crc kubenswrapper[4727]: I1121 20:28:47.516777 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d8a9a27-c894-4a70-9a9b-bd1089684c8f" path="/var/lib/kubelet/pods/1d8a9a27-c894-4a70-9a9b-bd1089684c8f/volumes" Nov 21 20:28:47 crc kubenswrapper[4727]: I1121 20:28:47.517883 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99f58971-6e29-4870-b6b4-4768e2676799" path="/var/lib/kubelet/pods/99f58971-6e29-4870-b6b4-4768e2676799/volumes" Nov 21 20:28:48 crc kubenswrapper[4727]: I1121 20:28:48.000208 4727 generic.go:334] "Generic (PLEG): container finished" podID="410a7b85-5b1e-40b6-87b4-22af86338c90" containerID="49ed8abb3dc6e496fa2541e5f60e3ba1a38f44b4ab47e891a9fcfd49d3590916" exitCode=0 Nov 21 20:28:48 crc kubenswrapper[4727]: I1121 20:28:48.000237 4727 generic.go:334] "Generic (PLEG): container finished" podID="410a7b85-5b1e-40b6-87b4-22af86338c90" containerID="26a7707016903bac495bb9720a0030211dbb0dea6f90951ca8c5a3c20d7f2796" exitCode=2 Nov 21 20:28:48 crc kubenswrapper[4727]: I1121 20:28:48.000247 4727 generic.go:334] "Generic (PLEG): container finished" podID="410a7b85-5b1e-40b6-87b4-22af86338c90" containerID="dafc1bc2d86fb4b96cafb014c321c5e86c507a081527c5ddbe8a948a42bc0dad" exitCode=0 Nov 21 20:28:48 crc kubenswrapper[4727]: I1121 20:28:48.000280 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"410a7b85-5b1e-40b6-87b4-22af86338c90","Type":"ContainerDied","Data":"49ed8abb3dc6e496fa2541e5f60e3ba1a38f44b4ab47e891a9fcfd49d3590916"} Nov 21 20:28:48 crc kubenswrapper[4727]: I1121 20:28:48.000306 4727 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/ceilometer-0" event={"ID":"410a7b85-5b1e-40b6-87b4-22af86338c90","Type":"ContainerDied","Data":"26a7707016903bac495bb9720a0030211dbb0dea6f90951ca8c5a3c20d7f2796"} Nov 21 20:28:48 crc kubenswrapper[4727]: I1121 20:28:48.000317 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"410a7b85-5b1e-40b6-87b4-22af86338c90","Type":"ContainerDied","Data":"dafc1bc2d86fb4b96cafb014c321c5e86c507a081527c5ddbe8a948a42bc0dad"} Nov 21 20:28:49 crc kubenswrapper[4727]: I1121 20:28:49.839144 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-6974c7d5d8-thpd4" Nov 21 20:28:49 crc kubenswrapper[4727]: I1121 20:28:49.899420 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-6d6ff87d5b-5qckr"] Nov 21 20:28:49 crc kubenswrapper[4727]: I1121 20:28:49.900723 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-6d6ff87d5b-5qckr" podUID="d85cee99-5eae-4395-b2a7-733aa041212f" containerName="heat-engine" containerID="cri-o://d5fc8a3bf4c44deaa2172e31e0de9eedd3c94b4a91d6d07922fea7a09b302371" gracePeriod=60 Nov 21 20:28:50 crc kubenswrapper[4727]: E1121 20:28:50.944745 4727 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d5fc8a3bf4c44deaa2172e31e0de9eedd3c94b4a91d6d07922fea7a09b302371" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 21 20:28:50 crc kubenswrapper[4727]: E1121 20:28:50.946005 4727 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d5fc8a3bf4c44deaa2172e31e0de9eedd3c94b4a91d6d07922fea7a09b302371" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 21 20:28:50 crc 
kubenswrapper[4727]: E1121 20:28:50.947485 4727 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d5fc8a3bf4c44deaa2172e31e0de9eedd3c94b4a91d6d07922fea7a09b302371" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 21 20:28:50 crc kubenswrapper[4727]: E1121 20:28:50.947541 4727 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-6d6ff87d5b-5qckr" podUID="d85cee99-5eae-4395-b2a7-733aa041212f" containerName="heat-engine" Nov 21 20:28:55 crc kubenswrapper[4727]: I1121 20:28:55.087368 4727 generic.go:334] "Generic (PLEG): container finished" podID="7ff94a3a-a5ae-42b5-a316-e536cf0d3eda" containerID="cebacfb63f66e9a6c0a415ccde1c9322911f3be1572378c0801d63bbe3e284a4" exitCode=0 Nov 21 20:28:55 crc kubenswrapper[4727]: I1121 20:28:55.087464 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-qh8s2" event={"ID":"7ff94a3a-a5ae-42b5-a316-e536cf0d3eda","Type":"ContainerDied","Data":"cebacfb63f66e9a6c0a415ccde1c9322911f3be1572378c0801d63bbe3e284a4"} Nov 21 20:28:55 crc kubenswrapper[4727]: I1121 20:28:55.091324 4727 generic.go:334] "Generic (PLEG): container finished" podID="410a7b85-5b1e-40b6-87b4-22af86338c90" containerID="346e6ca505dcfef358a1415ce01e976aaa495d423df0a2aadc76cb082ee5a493" exitCode=0 Nov 21 20:28:55 crc kubenswrapper[4727]: I1121 20:28:55.091376 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"410a7b85-5b1e-40b6-87b4-22af86338c90","Type":"ContainerDied","Data":"346e6ca505dcfef358a1415ce01e976aaa495d423df0a2aadc76cb082ee5a493"} Nov 21 20:28:55 crc kubenswrapper[4727]: I1121 20:28:55.394402 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 21 20:28:55 crc kubenswrapper[4727]: I1121 20:28:55.524875 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6kpt2\" (UniqueName: \"kubernetes.io/projected/410a7b85-5b1e-40b6-87b4-22af86338c90-kube-api-access-6kpt2\") pod \"410a7b85-5b1e-40b6-87b4-22af86338c90\" (UID: \"410a7b85-5b1e-40b6-87b4-22af86338c90\") " Nov 21 20:28:55 crc kubenswrapper[4727]: I1121 20:28:55.525941 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/410a7b85-5b1e-40b6-87b4-22af86338c90-config-data\") pod \"410a7b85-5b1e-40b6-87b4-22af86338c90\" (UID: \"410a7b85-5b1e-40b6-87b4-22af86338c90\") " Nov 21 20:28:55 crc kubenswrapper[4727]: I1121 20:28:55.526118 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/410a7b85-5b1e-40b6-87b4-22af86338c90-scripts\") pod \"410a7b85-5b1e-40b6-87b4-22af86338c90\" (UID: \"410a7b85-5b1e-40b6-87b4-22af86338c90\") " Nov 21 20:28:55 crc kubenswrapper[4727]: I1121 20:28:55.526140 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/410a7b85-5b1e-40b6-87b4-22af86338c90-log-httpd\") pod \"410a7b85-5b1e-40b6-87b4-22af86338c90\" (UID: \"410a7b85-5b1e-40b6-87b4-22af86338c90\") " Nov 21 20:28:55 crc kubenswrapper[4727]: I1121 20:28:55.526591 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/410a7b85-5b1e-40b6-87b4-22af86338c90-combined-ca-bundle\") pod \"410a7b85-5b1e-40b6-87b4-22af86338c90\" (UID: \"410a7b85-5b1e-40b6-87b4-22af86338c90\") " Nov 21 20:28:55 crc kubenswrapper[4727]: I1121 20:28:55.526601 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/410a7b85-5b1e-40b6-87b4-22af86338c90-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "410a7b85-5b1e-40b6-87b4-22af86338c90" (UID: "410a7b85-5b1e-40b6-87b4-22af86338c90"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:28:55 crc kubenswrapper[4727]: I1121 20:28:55.526645 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/410a7b85-5b1e-40b6-87b4-22af86338c90-sg-core-conf-yaml\") pod \"410a7b85-5b1e-40b6-87b4-22af86338c90\" (UID: \"410a7b85-5b1e-40b6-87b4-22af86338c90\") " Nov 21 20:28:55 crc kubenswrapper[4727]: I1121 20:28:55.526726 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/410a7b85-5b1e-40b6-87b4-22af86338c90-run-httpd\") pod \"410a7b85-5b1e-40b6-87b4-22af86338c90\" (UID: \"410a7b85-5b1e-40b6-87b4-22af86338c90\") " Nov 21 20:28:55 crc kubenswrapper[4727]: I1121 20:28:55.527184 4727 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/410a7b85-5b1e-40b6-87b4-22af86338c90-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:55 crc kubenswrapper[4727]: I1121 20:28:55.527376 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/410a7b85-5b1e-40b6-87b4-22af86338c90-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "410a7b85-5b1e-40b6-87b4-22af86338c90" (UID: "410a7b85-5b1e-40b6-87b4-22af86338c90"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:28:55 crc kubenswrapper[4727]: I1121 20:28:55.539172 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/410a7b85-5b1e-40b6-87b4-22af86338c90-scripts" (OuterVolumeSpecName: "scripts") pod "410a7b85-5b1e-40b6-87b4-22af86338c90" (UID: "410a7b85-5b1e-40b6-87b4-22af86338c90"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:28:55 crc kubenswrapper[4727]: I1121 20:28:55.551617 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/410a7b85-5b1e-40b6-87b4-22af86338c90-kube-api-access-6kpt2" (OuterVolumeSpecName: "kube-api-access-6kpt2") pod "410a7b85-5b1e-40b6-87b4-22af86338c90" (UID: "410a7b85-5b1e-40b6-87b4-22af86338c90"). InnerVolumeSpecName "kube-api-access-6kpt2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:28:55 crc kubenswrapper[4727]: I1121 20:28:55.564246 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/410a7b85-5b1e-40b6-87b4-22af86338c90-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "410a7b85-5b1e-40b6-87b4-22af86338c90" (UID: "410a7b85-5b1e-40b6-87b4-22af86338c90"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:28:55 crc kubenswrapper[4727]: I1121 20:28:55.626981 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/410a7b85-5b1e-40b6-87b4-22af86338c90-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "410a7b85-5b1e-40b6-87b4-22af86338c90" (UID: "410a7b85-5b1e-40b6-87b4-22af86338c90"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:28:55 crc kubenswrapper[4727]: I1121 20:28:55.629647 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/410a7b85-5b1e-40b6-87b4-22af86338c90-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:55 crc kubenswrapper[4727]: I1121 20:28:55.629684 4727 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/410a7b85-5b1e-40b6-87b4-22af86338c90-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:55 crc kubenswrapper[4727]: I1121 20:28:55.629699 4727 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/410a7b85-5b1e-40b6-87b4-22af86338c90-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:55 crc kubenswrapper[4727]: I1121 20:28:55.629711 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6kpt2\" (UniqueName: \"kubernetes.io/projected/410a7b85-5b1e-40b6-87b4-22af86338c90-kube-api-access-6kpt2\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:55 crc kubenswrapper[4727]: I1121 20:28:55.629723 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/410a7b85-5b1e-40b6-87b4-22af86338c90-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:55 crc kubenswrapper[4727]: I1121 20:28:55.657540 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/410a7b85-5b1e-40b6-87b4-22af86338c90-config-data" (OuterVolumeSpecName: "config-data") pod "410a7b85-5b1e-40b6-87b4-22af86338c90" (UID: "410a7b85-5b1e-40b6-87b4-22af86338c90"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:28:55 crc kubenswrapper[4727]: I1121 20:28:55.731939 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/410a7b85-5b1e-40b6-87b4-22af86338c90-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:56 crc kubenswrapper[4727]: I1121 20:28:56.102968 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"410a7b85-5b1e-40b6-87b4-22af86338c90","Type":"ContainerDied","Data":"585e5f39b22cca737309495608be310d09606676e4e2490c40d90fe3c43555d1"} Nov 21 20:28:56 crc kubenswrapper[4727]: I1121 20:28:56.103002 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 20:28:56 crc kubenswrapper[4727]: I1121 20:28:56.103036 4727 scope.go:117] "RemoveContainer" containerID="49ed8abb3dc6e496fa2541e5f60e3ba1a38f44b4ab47e891a9fcfd49d3590916" Nov 21 20:28:56 crc kubenswrapper[4727]: I1121 20:28:56.152877 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 21 20:28:56 crc kubenswrapper[4727]: I1121 20:28:56.161822 4727 scope.go:117] "RemoveContainer" containerID="26a7707016903bac495bb9720a0030211dbb0dea6f90951ca8c5a3c20d7f2796" Nov 21 20:28:56 crc kubenswrapper[4727]: I1121 20:28:56.164482 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 21 20:28:56 crc kubenswrapper[4727]: I1121 20:28:56.196384 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 21 20:28:56 crc kubenswrapper[4727]: E1121 20:28:56.196888 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d8a9a27-c894-4a70-9a9b-bd1089684c8f" containerName="heat-api" Nov 21 20:28:56 crc kubenswrapper[4727]: I1121 20:28:56.196911 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d8a9a27-c894-4a70-9a9b-bd1089684c8f" containerName="heat-api" Nov 21 20:28:56 crc 
kubenswrapper[4727]: E1121 20:28:56.196929 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="410a7b85-5b1e-40b6-87b4-22af86338c90" containerName="ceilometer-notification-agent" Nov 21 20:28:56 crc kubenswrapper[4727]: I1121 20:28:56.196938 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="410a7b85-5b1e-40b6-87b4-22af86338c90" containerName="ceilometer-notification-agent" Nov 21 20:28:56 crc kubenswrapper[4727]: E1121 20:28:56.196974 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="410a7b85-5b1e-40b6-87b4-22af86338c90" containerName="proxy-httpd" Nov 21 20:28:56 crc kubenswrapper[4727]: I1121 20:28:56.196984 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="410a7b85-5b1e-40b6-87b4-22af86338c90" containerName="proxy-httpd" Nov 21 20:28:56 crc kubenswrapper[4727]: E1121 20:28:56.197001 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="410a7b85-5b1e-40b6-87b4-22af86338c90" containerName="sg-core" Nov 21 20:28:56 crc kubenswrapper[4727]: I1121 20:28:56.197008 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="410a7b85-5b1e-40b6-87b4-22af86338c90" containerName="sg-core" Nov 21 20:28:56 crc kubenswrapper[4727]: E1121 20:28:56.197031 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="410a7b85-5b1e-40b6-87b4-22af86338c90" containerName="ceilometer-central-agent" Nov 21 20:28:56 crc kubenswrapper[4727]: I1121 20:28:56.197038 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="410a7b85-5b1e-40b6-87b4-22af86338c90" containerName="ceilometer-central-agent" Nov 21 20:28:56 crc kubenswrapper[4727]: E1121 20:28:56.197047 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f22c01e-e191-42ab-8e0e-f678abc961b2" containerName="heat-cfnapi" Nov 21 20:28:56 crc kubenswrapper[4727]: I1121 20:28:56.197057 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f22c01e-e191-42ab-8e0e-f678abc961b2" containerName="heat-cfnapi" Nov 21 20:28:56 crc 
kubenswrapper[4727]: E1121 20:28:56.197065 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d8a9a27-c894-4a70-9a9b-bd1089684c8f" containerName="heat-api" Nov 21 20:28:56 crc kubenswrapper[4727]: I1121 20:28:56.197071 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d8a9a27-c894-4a70-9a9b-bd1089684c8f" containerName="heat-api" Nov 21 20:28:56 crc kubenswrapper[4727]: E1121 20:28:56.197082 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99f58971-6e29-4870-b6b4-4768e2676799" containerName="heat-cfnapi" Nov 21 20:28:56 crc kubenswrapper[4727]: I1121 20:28:56.197088 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="99f58971-6e29-4870-b6b4-4768e2676799" containerName="heat-cfnapi" Nov 21 20:28:56 crc kubenswrapper[4727]: E1121 20:28:56.197123 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99f58971-6e29-4870-b6b4-4768e2676799" containerName="heat-cfnapi" Nov 21 20:28:56 crc kubenswrapper[4727]: I1121 20:28:56.197130 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="99f58971-6e29-4870-b6b4-4768e2676799" containerName="heat-cfnapi" Nov 21 20:28:56 crc kubenswrapper[4727]: I1121 20:28:56.197369 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="410a7b85-5b1e-40b6-87b4-22af86338c90" containerName="sg-core" Nov 21 20:28:56 crc kubenswrapper[4727]: I1121 20:28:56.197384 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d8a9a27-c894-4a70-9a9b-bd1089684c8f" containerName="heat-api" Nov 21 20:28:56 crc kubenswrapper[4727]: I1121 20:28:56.197394 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d8a9a27-c894-4a70-9a9b-bd1089684c8f" containerName="heat-api" Nov 21 20:28:56 crc kubenswrapper[4727]: I1121 20:28:56.197402 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="410a7b85-5b1e-40b6-87b4-22af86338c90" containerName="ceilometer-central-agent" Nov 21 20:28:56 crc kubenswrapper[4727]: I1121 
20:28:56.197415 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="99f58971-6e29-4870-b6b4-4768e2676799" containerName="heat-cfnapi" Nov 21 20:28:56 crc kubenswrapper[4727]: I1121 20:28:56.197423 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="410a7b85-5b1e-40b6-87b4-22af86338c90" containerName="ceilometer-notification-agent" Nov 21 20:28:56 crc kubenswrapper[4727]: I1121 20:28:56.197435 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="410a7b85-5b1e-40b6-87b4-22af86338c90" containerName="proxy-httpd" Nov 21 20:28:56 crc kubenswrapper[4727]: I1121 20:28:56.197448 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="99f58971-6e29-4870-b6b4-4768e2676799" containerName="heat-cfnapi" Nov 21 20:28:56 crc kubenswrapper[4727]: I1121 20:28:56.197459 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f22c01e-e191-42ab-8e0e-f678abc961b2" containerName="heat-cfnapi" Nov 21 20:28:56 crc kubenswrapper[4727]: I1121 20:28:56.199679 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 21 20:28:56 crc kubenswrapper[4727]: I1121 20:28:56.203207 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 21 20:28:56 crc kubenswrapper[4727]: I1121 20:28:56.203227 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 21 20:28:56 crc kubenswrapper[4727]: I1121 20:28:56.208907 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 20:28:56 crc kubenswrapper[4727]: I1121 20:28:56.210519 4727 scope.go:117] "RemoveContainer" containerID="dafc1bc2d86fb4b96cafb014c321c5e86c507a081527c5ddbe8a948a42bc0dad" Nov 21 20:28:56 crc kubenswrapper[4727]: I1121 20:28:56.254152 4727 scope.go:117] "RemoveContainer" containerID="346e6ca505dcfef358a1415ce01e976aaa495d423df0a2aadc76cb082ee5a493" Nov 21 20:28:56 crc kubenswrapper[4727]: I1121 20:28:56.349811 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a715021-194c-4f1e-9de8-00113677ca48-scripts\") pod \"ceilometer-0\" (UID: \"2a715021-194c-4f1e-9de8-00113677ca48\") " pod="openstack/ceilometer-0" Nov 21 20:28:56 crc kubenswrapper[4727]: I1121 20:28:56.349865 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2a715021-194c-4f1e-9de8-00113677ca48-run-httpd\") pod \"ceilometer-0\" (UID: \"2a715021-194c-4f1e-9de8-00113677ca48\") " pod="openstack/ceilometer-0" Nov 21 20:28:56 crc kubenswrapper[4727]: I1121 20:28:56.350033 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a715021-194c-4f1e-9de8-00113677ca48-config-data\") pod \"ceilometer-0\" (UID: \"2a715021-194c-4f1e-9de8-00113677ca48\") " pod="openstack/ceilometer-0" Nov 21 20:28:56 
crc kubenswrapper[4727]: I1121 20:28:56.350084 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a715021-194c-4f1e-9de8-00113677ca48-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2a715021-194c-4f1e-9de8-00113677ca48\") " pod="openstack/ceilometer-0" Nov 21 20:28:56 crc kubenswrapper[4727]: I1121 20:28:56.350168 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2a715021-194c-4f1e-9de8-00113677ca48-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2a715021-194c-4f1e-9de8-00113677ca48\") " pod="openstack/ceilometer-0" Nov 21 20:28:56 crc kubenswrapper[4727]: I1121 20:28:56.350287 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h877t\" (UniqueName: \"kubernetes.io/projected/2a715021-194c-4f1e-9de8-00113677ca48-kube-api-access-h877t\") pod \"ceilometer-0\" (UID: \"2a715021-194c-4f1e-9de8-00113677ca48\") " pod="openstack/ceilometer-0" Nov 21 20:28:56 crc kubenswrapper[4727]: I1121 20:28:56.350442 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2a715021-194c-4f1e-9de8-00113677ca48-log-httpd\") pod \"ceilometer-0\" (UID: \"2a715021-194c-4f1e-9de8-00113677ca48\") " pod="openstack/ceilometer-0" Nov 21 20:28:56 crc kubenswrapper[4727]: I1121 20:28:56.452487 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a715021-194c-4f1e-9de8-00113677ca48-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2a715021-194c-4f1e-9de8-00113677ca48\") " pod="openstack/ceilometer-0" Nov 21 20:28:56 crc kubenswrapper[4727]: I1121 20:28:56.452795 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2a715021-194c-4f1e-9de8-00113677ca48-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2a715021-194c-4f1e-9de8-00113677ca48\") " pod="openstack/ceilometer-0" Nov 21 20:28:56 crc kubenswrapper[4727]: I1121 20:28:56.452861 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h877t\" (UniqueName: \"kubernetes.io/projected/2a715021-194c-4f1e-9de8-00113677ca48-kube-api-access-h877t\") pod \"ceilometer-0\" (UID: \"2a715021-194c-4f1e-9de8-00113677ca48\") " pod="openstack/ceilometer-0" Nov 21 20:28:56 crc kubenswrapper[4727]: I1121 20:28:56.452901 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2a715021-194c-4f1e-9de8-00113677ca48-log-httpd\") pod \"ceilometer-0\" (UID: \"2a715021-194c-4f1e-9de8-00113677ca48\") " pod="openstack/ceilometer-0" Nov 21 20:28:56 crc kubenswrapper[4727]: I1121 20:28:56.453002 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a715021-194c-4f1e-9de8-00113677ca48-scripts\") pod \"ceilometer-0\" (UID: \"2a715021-194c-4f1e-9de8-00113677ca48\") " pod="openstack/ceilometer-0" Nov 21 20:28:56 crc kubenswrapper[4727]: I1121 20:28:56.453028 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2a715021-194c-4f1e-9de8-00113677ca48-run-httpd\") pod \"ceilometer-0\" (UID: \"2a715021-194c-4f1e-9de8-00113677ca48\") " pod="openstack/ceilometer-0" Nov 21 20:28:56 crc kubenswrapper[4727]: I1121 20:28:56.453099 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a715021-194c-4f1e-9de8-00113677ca48-config-data\") pod \"ceilometer-0\" (UID: \"2a715021-194c-4f1e-9de8-00113677ca48\") " pod="openstack/ceilometer-0" Nov 21 20:28:56 crc 
kubenswrapper[4727]: I1121 20:28:56.453721 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2a715021-194c-4f1e-9de8-00113677ca48-log-httpd\") pod \"ceilometer-0\" (UID: \"2a715021-194c-4f1e-9de8-00113677ca48\") " pod="openstack/ceilometer-0" Nov 21 20:28:56 crc kubenswrapper[4727]: I1121 20:28:56.454323 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2a715021-194c-4f1e-9de8-00113677ca48-run-httpd\") pod \"ceilometer-0\" (UID: \"2a715021-194c-4f1e-9de8-00113677ca48\") " pod="openstack/ceilometer-0" Nov 21 20:28:56 crc kubenswrapper[4727]: I1121 20:28:56.458515 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a715021-194c-4f1e-9de8-00113677ca48-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2a715021-194c-4f1e-9de8-00113677ca48\") " pod="openstack/ceilometer-0" Nov 21 20:28:56 crc kubenswrapper[4727]: I1121 20:28:56.459203 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2a715021-194c-4f1e-9de8-00113677ca48-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2a715021-194c-4f1e-9de8-00113677ca48\") " pod="openstack/ceilometer-0" Nov 21 20:28:56 crc kubenswrapper[4727]: I1121 20:28:56.459218 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a715021-194c-4f1e-9de8-00113677ca48-scripts\") pod \"ceilometer-0\" (UID: \"2a715021-194c-4f1e-9de8-00113677ca48\") " pod="openstack/ceilometer-0" Nov 21 20:28:56 crc kubenswrapper[4727]: I1121 20:28:56.459738 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a715021-194c-4f1e-9de8-00113677ca48-config-data\") pod \"ceilometer-0\" (UID: 
\"2a715021-194c-4f1e-9de8-00113677ca48\") " pod="openstack/ceilometer-0" Nov 21 20:28:56 crc kubenswrapper[4727]: I1121 20:28:56.471798 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h877t\" (UniqueName: \"kubernetes.io/projected/2a715021-194c-4f1e-9de8-00113677ca48-kube-api-access-h877t\") pod \"ceilometer-0\" (UID: \"2a715021-194c-4f1e-9de8-00113677ca48\") " pod="openstack/ceilometer-0" Nov 21 20:28:56 crc kubenswrapper[4727]: I1121 20:28:56.532367 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 20:28:56 crc kubenswrapper[4727]: I1121 20:28:56.654473 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-qh8s2" Nov 21 20:28:56 crc kubenswrapper[4727]: I1121 20:28:56.760736 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ff94a3a-a5ae-42b5-a316-e536cf0d3eda-combined-ca-bundle\") pod \"7ff94a3a-a5ae-42b5-a316-e536cf0d3eda\" (UID: \"7ff94a3a-a5ae-42b5-a316-e536cf0d3eda\") " Nov 21 20:28:56 crc kubenswrapper[4727]: I1121 20:28:56.761009 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ff94a3a-a5ae-42b5-a316-e536cf0d3eda-scripts\") pod \"7ff94a3a-a5ae-42b5-a316-e536cf0d3eda\" (UID: \"7ff94a3a-a5ae-42b5-a316-e536cf0d3eda\") " Nov 21 20:28:56 crc kubenswrapper[4727]: I1121 20:28:56.761146 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ff94a3a-a5ae-42b5-a316-e536cf0d3eda-config-data\") pod \"7ff94a3a-a5ae-42b5-a316-e536cf0d3eda\" (UID: \"7ff94a3a-a5ae-42b5-a316-e536cf0d3eda\") " Nov 21 20:28:56 crc kubenswrapper[4727]: I1121 20:28:56.761277 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-x2vvz\" (UniqueName: \"kubernetes.io/projected/7ff94a3a-a5ae-42b5-a316-e536cf0d3eda-kube-api-access-x2vvz\") pod \"7ff94a3a-a5ae-42b5-a316-e536cf0d3eda\" (UID: \"7ff94a3a-a5ae-42b5-a316-e536cf0d3eda\") " Nov 21 20:28:56 crc kubenswrapper[4727]: I1121 20:28:56.782295 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ff94a3a-a5ae-42b5-a316-e536cf0d3eda-kube-api-access-x2vvz" (OuterVolumeSpecName: "kube-api-access-x2vvz") pod "7ff94a3a-a5ae-42b5-a316-e536cf0d3eda" (UID: "7ff94a3a-a5ae-42b5-a316-e536cf0d3eda"). InnerVolumeSpecName "kube-api-access-x2vvz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:28:56 crc kubenswrapper[4727]: I1121 20:28:56.787117 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ff94a3a-a5ae-42b5-a316-e536cf0d3eda-scripts" (OuterVolumeSpecName: "scripts") pod "7ff94a3a-a5ae-42b5-a316-e536cf0d3eda" (UID: "7ff94a3a-a5ae-42b5-a316-e536cf0d3eda"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:28:56 crc kubenswrapper[4727]: I1121 20:28:56.820179 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ff94a3a-a5ae-42b5-a316-e536cf0d3eda-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7ff94a3a-a5ae-42b5-a316-e536cf0d3eda" (UID: "7ff94a3a-a5ae-42b5-a316-e536cf0d3eda"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:28:56 crc kubenswrapper[4727]: I1121 20:28:56.842050 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ff94a3a-a5ae-42b5-a316-e536cf0d3eda-config-data" (OuterVolumeSpecName: "config-data") pod "7ff94a3a-a5ae-42b5-a316-e536cf0d3eda" (UID: "7ff94a3a-a5ae-42b5-a316-e536cf0d3eda"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:28:56 crc kubenswrapper[4727]: I1121 20:28:56.863734 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2vvz\" (UniqueName: \"kubernetes.io/projected/7ff94a3a-a5ae-42b5-a316-e536cf0d3eda-kube-api-access-x2vvz\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:56 crc kubenswrapper[4727]: I1121 20:28:56.863763 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ff94a3a-a5ae-42b5-a316-e536cf0d3eda-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:56 crc kubenswrapper[4727]: I1121 20:28:56.863775 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ff94a3a-a5ae-42b5-a316-e536cf0d3eda-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:56 crc kubenswrapper[4727]: I1121 20:28:56.863784 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ff94a3a-a5ae-42b5-a316-e536cf0d3eda-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:57 crc kubenswrapper[4727]: I1121 20:28:57.108274 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 20:28:57 crc kubenswrapper[4727]: I1121 20:28:57.136536 4727 generic.go:334] "Generic (PLEG): container finished" podID="d85cee99-5eae-4395-b2a7-733aa041212f" containerID="d5fc8a3bf4c44deaa2172e31e0de9eedd3c94b4a91d6d07922fea7a09b302371" exitCode=0 Nov 21 20:28:57 crc kubenswrapper[4727]: I1121 20:28:57.136635 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6d6ff87d5b-5qckr" event={"ID":"d85cee99-5eae-4395-b2a7-733aa041212f","Type":"ContainerDied","Data":"d5fc8a3bf4c44deaa2172e31e0de9eedd3c94b4a91d6d07922fea7a09b302371"} Nov 21 20:28:57 crc kubenswrapper[4727]: I1121 20:28:57.138150 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-conductor-db-sync-qh8s2" event={"ID":"7ff94a3a-a5ae-42b5-a316-e536cf0d3eda","Type":"ContainerDied","Data":"629e02fe93762310fd73cb80a0a3114ee3d91612d5cba5ab98e15c0acb26ed03"} Nov 21 20:28:57 crc kubenswrapper[4727]: I1121 20:28:57.141197 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="629e02fe93762310fd73cb80a0a3114ee3d91612d5cba5ab98e15c0acb26ed03" Nov 21 20:28:57 crc kubenswrapper[4727]: I1121 20:28:57.141422 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-qh8s2" Nov 21 20:28:57 crc kubenswrapper[4727]: I1121 20:28:57.239421 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 21 20:28:57 crc kubenswrapper[4727]: E1121 20:28:57.240025 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ff94a3a-a5ae-42b5-a316-e536cf0d3eda" containerName="nova-cell0-conductor-db-sync" Nov 21 20:28:57 crc kubenswrapper[4727]: I1121 20:28:57.240049 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ff94a3a-a5ae-42b5-a316-e536cf0d3eda" containerName="nova-cell0-conductor-db-sync" Nov 21 20:28:57 crc kubenswrapper[4727]: I1121 20:28:57.240301 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ff94a3a-a5ae-42b5-a316-e536cf0d3eda" containerName="nova-cell0-conductor-db-sync" Nov 21 20:28:57 crc kubenswrapper[4727]: I1121 20:28:57.241256 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 21 20:28:57 crc kubenswrapper[4727]: I1121 20:28:57.243029 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 21 20:28:57 crc kubenswrapper[4727]: I1121 20:28:57.244126 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-nnnnw" Nov 21 20:28:57 crc kubenswrapper[4727]: I1121 20:28:57.269181 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 21 20:28:57 crc kubenswrapper[4727]: I1121 20:28:57.295270 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6d6ff87d5b-5qckr" Nov 21 20:28:57 crc kubenswrapper[4727]: I1121 20:28:57.373876 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d85cee99-5eae-4395-b2a7-733aa041212f-config-data\") pod \"d85cee99-5eae-4395-b2a7-733aa041212f\" (UID: \"d85cee99-5eae-4395-b2a7-733aa041212f\") " Nov 21 20:28:57 crc kubenswrapper[4727]: I1121 20:28:57.373991 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d85cee99-5eae-4395-b2a7-733aa041212f-config-data-custom\") pod \"d85cee99-5eae-4395-b2a7-733aa041212f\" (UID: \"d85cee99-5eae-4395-b2a7-733aa041212f\") " Nov 21 20:28:57 crc kubenswrapper[4727]: I1121 20:28:57.374034 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4v66\" (UniqueName: \"kubernetes.io/projected/d85cee99-5eae-4395-b2a7-733aa041212f-kube-api-access-v4v66\") pod \"d85cee99-5eae-4395-b2a7-733aa041212f\" (UID: \"d85cee99-5eae-4395-b2a7-733aa041212f\") " Nov 21 20:28:57 crc kubenswrapper[4727]: I1121 20:28:57.374147 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/d85cee99-5eae-4395-b2a7-733aa041212f-combined-ca-bundle\") pod \"d85cee99-5eae-4395-b2a7-733aa041212f\" (UID: \"d85cee99-5eae-4395-b2a7-733aa041212f\") " Nov 21 20:28:57 crc kubenswrapper[4727]: I1121 20:28:57.374568 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zb656\" (UniqueName: \"kubernetes.io/projected/5d4eb186-2b59-4b16-bd67-0b9f64c233a6-kube-api-access-zb656\") pod \"nova-cell0-conductor-0\" (UID: \"5d4eb186-2b59-4b16-bd67-0b9f64c233a6\") " pod="openstack/nova-cell0-conductor-0" Nov 21 20:28:57 crc kubenswrapper[4727]: I1121 20:28:57.374604 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d4eb186-2b59-4b16-bd67-0b9f64c233a6-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"5d4eb186-2b59-4b16-bd67-0b9f64c233a6\") " pod="openstack/nova-cell0-conductor-0" Nov 21 20:28:57 crc kubenswrapper[4727]: I1121 20:28:57.374671 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d4eb186-2b59-4b16-bd67-0b9f64c233a6-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"5d4eb186-2b59-4b16-bd67-0b9f64c233a6\") " pod="openstack/nova-cell0-conductor-0" Nov 21 20:28:57 crc kubenswrapper[4727]: I1121 20:28:57.379240 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d85cee99-5eae-4395-b2a7-733aa041212f-kube-api-access-v4v66" (OuterVolumeSpecName: "kube-api-access-v4v66") pod "d85cee99-5eae-4395-b2a7-733aa041212f" (UID: "d85cee99-5eae-4395-b2a7-733aa041212f"). InnerVolumeSpecName "kube-api-access-v4v66". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:28:57 crc kubenswrapper[4727]: I1121 20:28:57.387247 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d85cee99-5eae-4395-b2a7-733aa041212f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d85cee99-5eae-4395-b2a7-733aa041212f" (UID: "d85cee99-5eae-4395-b2a7-733aa041212f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:28:57 crc kubenswrapper[4727]: I1121 20:28:57.419774 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d85cee99-5eae-4395-b2a7-733aa041212f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d85cee99-5eae-4395-b2a7-733aa041212f" (UID: "d85cee99-5eae-4395-b2a7-733aa041212f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:28:57 crc kubenswrapper[4727]: I1121 20:28:57.450267 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d85cee99-5eae-4395-b2a7-733aa041212f-config-data" (OuterVolumeSpecName: "config-data") pod "d85cee99-5eae-4395-b2a7-733aa041212f" (UID: "d85cee99-5eae-4395-b2a7-733aa041212f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:28:57 crc kubenswrapper[4727]: I1121 20:28:57.477114 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zb656\" (UniqueName: \"kubernetes.io/projected/5d4eb186-2b59-4b16-bd67-0b9f64c233a6-kube-api-access-zb656\") pod \"nova-cell0-conductor-0\" (UID: \"5d4eb186-2b59-4b16-bd67-0b9f64c233a6\") " pod="openstack/nova-cell0-conductor-0" Nov 21 20:28:57 crc kubenswrapper[4727]: I1121 20:28:57.477182 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d4eb186-2b59-4b16-bd67-0b9f64c233a6-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"5d4eb186-2b59-4b16-bd67-0b9f64c233a6\") " pod="openstack/nova-cell0-conductor-0" Nov 21 20:28:57 crc kubenswrapper[4727]: I1121 20:28:57.477274 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d4eb186-2b59-4b16-bd67-0b9f64c233a6-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"5d4eb186-2b59-4b16-bd67-0b9f64c233a6\") " pod="openstack/nova-cell0-conductor-0" Nov 21 20:28:57 crc kubenswrapper[4727]: I1121 20:28:57.477356 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d85cee99-5eae-4395-b2a7-733aa041212f-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:57 crc kubenswrapper[4727]: I1121 20:28:57.477374 4727 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d85cee99-5eae-4395-b2a7-733aa041212f-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:57 crc kubenswrapper[4727]: I1121 20:28:57.477388 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4v66\" (UniqueName: \"kubernetes.io/projected/d85cee99-5eae-4395-b2a7-733aa041212f-kube-api-access-v4v66\") on node \"crc\" 
DevicePath \"\"" Nov 21 20:28:57 crc kubenswrapper[4727]: I1121 20:28:57.477400 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d85cee99-5eae-4395-b2a7-733aa041212f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:28:57 crc kubenswrapper[4727]: I1121 20:28:57.481287 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d4eb186-2b59-4b16-bd67-0b9f64c233a6-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"5d4eb186-2b59-4b16-bd67-0b9f64c233a6\") " pod="openstack/nova-cell0-conductor-0" Nov 21 20:28:57 crc kubenswrapper[4727]: I1121 20:28:57.483498 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d4eb186-2b59-4b16-bd67-0b9f64c233a6-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"5d4eb186-2b59-4b16-bd67-0b9f64c233a6\") " pod="openstack/nova-cell0-conductor-0" Nov 21 20:28:57 crc kubenswrapper[4727]: I1121 20:28:57.500538 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zb656\" (UniqueName: \"kubernetes.io/projected/5d4eb186-2b59-4b16-bd67-0b9f64c233a6-kube-api-access-zb656\") pod \"nova-cell0-conductor-0\" (UID: \"5d4eb186-2b59-4b16-bd67-0b9f64c233a6\") " pod="openstack/nova-cell0-conductor-0" Nov 21 20:28:57 crc kubenswrapper[4727]: I1121 20:28:57.512814 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="410a7b85-5b1e-40b6-87b4-22af86338c90" path="/var/lib/kubelet/pods/410a7b85-5b1e-40b6-87b4-22af86338c90/volumes" Nov 21 20:28:57 crc kubenswrapper[4727]: I1121 20:28:57.610792 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 21 20:28:57 crc kubenswrapper[4727]: I1121 20:28:57.800635 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-4chdx"] Nov 21 20:28:57 crc kubenswrapper[4727]: E1121 20:28:57.801571 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d85cee99-5eae-4395-b2a7-733aa041212f" containerName="heat-engine" Nov 21 20:28:57 crc kubenswrapper[4727]: I1121 20:28:57.801590 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="d85cee99-5eae-4395-b2a7-733aa041212f" containerName="heat-engine" Nov 21 20:28:57 crc kubenswrapper[4727]: I1121 20:28:57.801846 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="d85cee99-5eae-4395-b2a7-733aa041212f" containerName="heat-engine" Nov 21 20:28:57 crc kubenswrapper[4727]: I1121 20:28:57.802873 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-4chdx" Nov 21 20:28:57 crc kubenswrapper[4727]: I1121 20:28:57.823783 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-4chdx"] Nov 21 20:28:57 crc kubenswrapper[4727]: I1121 20:28:57.835375 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-93ab-account-create-j9jlk"] Nov 21 20:28:57 crc kubenswrapper[4727]: I1121 20:28:57.837344 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-93ab-account-create-j9jlk" Nov 21 20:28:57 crc kubenswrapper[4727]: I1121 20:28:57.847323 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Nov 21 20:28:57 crc kubenswrapper[4727]: I1121 20:28:57.846549 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-93ab-account-create-j9jlk"] Nov 21 20:28:57 crc kubenswrapper[4727]: I1121 20:28:57.887971 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0721f70-0de7-476d-8c01-6add6d0767e6-operator-scripts\") pod \"aodh-93ab-account-create-j9jlk\" (UID: \"b0721f70-0de7-476d-8c01-6add6d0767e6\") " pod="openstack/aodh-93ab-account-create-j9jlk" Nov 21 20:28:57 crc kubenswrapper[4727]: I1121 20:28:57.889168 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zwxl\" (UniqueName: \"kubernetes.io/projected/da25dd6f-c2ed-4b21-b03b-89652deba65e-kube-api-access-4zwxl\") pod \"aodh-db-create-4chdx\" (UID: \"da25dd6f-c2ed-4b21-b03b-89652deba65e\") " pod="openstack/aodh-db-create-4chdx" Nov 21 20:28:57 crc kubenswrapper[4727]: I1121 20:28:57.889248 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da25dd6f-c2ed-4b21-b03b-89652deba65e-operator-scripts\") pod \"aodh-db-create-4chdx\" (UID: \"da25dd6f-c2ed-4b21-b03b-89652deba65e\") " pod="openstack/aodh-db-create-4chdx" Nov 21 20:28:57 crc kubenswrapper[4727]: I1121 20:28:57.889316 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7vgm\" (UniqueName: \"kubernetes.io/projected/b0721f70-0de7-476d-8c01-6add6d0767e6-kube-api-access-t7vgm\") pod \"aodh-93ab-account-create-j9jlk\" (UID: \"b0721f70-0de7-476d-8c01-6add6d0767e6\") " 
pod="openstack/aodh-93ab-account-create-j9jlk" Nov 21 20:28:57 crc kubenswrapper[4727]: I1121 20:28:57.991700 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zwxl\" (UniqueName: \"kubernetes.io/projected/da25dd6f-c2ed-4b21-b03b-89652deba65e-kube-api-access-4zwxl\") pod \"aodh-db-create-4chdx\" (UID: \"da25dd6f-c2ed-4b21-b03b-89652deba65e\") " pod="openstack/aodh-db-create-4chdx" Nov 21 20:28:57 crc kubenswrapper[4727]: I1121 20:28:57.991875 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da25dd6f-c2ed-4b21-b03b-89652deba65e-operator-scripts\") pod \"aodh-db-create-4chdx\" (UID: \"da25dd6f-c2ed-4b21-b03b-89652deba65e\") " pod="openstack/aodh-db-create-4chdx" Nov 21 20:28:57 crc kubenswrapper[4727]: I1121 20:28:57.991966 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7vgm\" (UniqueName: \"kubernetes.io/projected/b0721f70-0de7-476d-8c01-6add6d0767e6-kube-api-access-t7vgm\") pod \"aodh-93ab-account-create-j9jlk\" (UID: \"b0721f70-0de7-476d-8c01-6add6d0767e6\") " pod="openstack/aodh-93ab-account-create-j9jlk" Nov 21 20:28:57 crc kubenswrapper[4727]: I1121 20:28:57.992322 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0721f70-0de7-476d-8c01-6add6d0767e6-operator-scripts\") pod \"aodh-93ab-account-create-j9jlk\" (UID: \"b0721f70-0de7-476d-8c01-6add6d0767e6\") " pod="openstack/aodh-93ab-account-create-j9jlk" Nov 21 20:28:57 crc kubenswrapper[4727]: I1121 20:28:57.993554 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0721f70-0de7-476d-8c01-6add6d0767e6-operator-scripts\") pod \"aodh-93ab-account-create-j9jlk\" (UID: \"b0721f70-0de7-476d-8c01-6add6d0767e6\") " 
pod="openstack/aodh-93ab-account-create-j9jlk" Nov 21 20:28:57 crc kubenswrapper[4727]: I1121 20:28:57.993600 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da25dd6f-c2ed-4b21-b03b-89652deba65e-operator-scripts\") pod \"aodh-db-create-4chdx\" (UID: \"da25dd6f-c2ed-4b21-b03b-89652deba65e\") " pod="openstack/aodh-db-create-4chdx" Nov 21 20:28:58 crc kubenswrapper[4727]: I1121 20:28:58.016403 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7vgm\" (UniqueName: \"kubernetes.io/projected/b0721f70-0de7-476d-8c01-6add6d0767e6-kube-api-access-t7vgm\") pod \"aodh-93ab-account-create-j9jlk\" (UID: \"b0721f70-0de7-476d-8c01-6add6d0767e6\") " pod="openstack/aodh-93ab-account-create-j9jlk" Nov 21 20:28:58 crc kubenswrapper[4727]: I1121 20:28:58.022057 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zwxl\" (UniqueName: \"kubernetes.io/projected/da25dd6f-c2ed-4b21-b03b-89652deba65e-kube-api-access-4zwxl\") pod \"aodh-db-create-4chdx\" (UID: \"da25dd6f-c2ed-4b21-b03b-89652deba65e\") " pod="openstack/aodh-db-create-4chdx" Nov 21 20:28:58 crc kubenswrapper[4727]: I1121 20:28:58.158946 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6d6ff87d5b-5qckr" event={"ID":"d85cee99-5eae-4395-b2a7-733aa041212f","Type":"ContainerDied","Data":"f265d32556b97cf944aced070ade8d94a2f821aaa7d9c21beb91b5560edd1b43"} Nov 21 20:28:58 crc kubenswrapper[4727]: I1121 20:28:58.159317 4727 scope.go:117] "RemoveContainer" containerID="d5fc8a3bf4c44deaa2172e31e0de9eedd3c94b4a91d6d07922fea7a09b302371" Nov 21 20:28:58 crc kubenswrapper[4727]: I1121 20:28:58.159003 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-6d6ff87d5b-5qckr" Nov 21 20:28:58 crc kubenswrapper[4727]: I1121 20:28:58.161786 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2a715021-194c-4f1e-9de8-00113677ca48","Type":"ContainerStarted","Data":"b8d294518064649c865d12ff17f905e598aea9d34e0c4efe78c8dd552d04e184"} Nov 21 20:28:58 crc kubenswrapper[4727]: I1121 20:28:58.161855 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2a715021-194c-4f1e-9de8-00113677ca48","Type":"ContainerStarted","Data":"dc0bec0d2463912a07b37eee52037ddc6a8f1e1bf7146d106a4de87680e2813e"} Nov 21 20:28:58 crc kubenswrapper[4727]: I1121 20:28:58.167544 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-4chdx" Nov 21 20:28:58 crc kubenswrapper[4727]: I1121 20:28:58.176208 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-93ab-account-create-j9jlk" Nov 21 20:28:58 crc kubenswrapper[4727]: I1121 20:28:58.201891 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-6d6ff87d5b-5qckr"] Nov 21 20:28:58 crc kubenswrapper[4727]: I1121 20:28:58.226042 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-6d6ff87d5b-5qckr"] Nov 21 20:28:58 crc kubenswrapper[4727]: I1121 20:28:58.244363 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 21 20:28:58 crc kubenswrapper[4727]: I1121 20:28:58.937128 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-93ab-account-create-j9jlk"] Nov 21 20:28:59 crc kubenswrapper[4727]: I1121 20:28:59.172777 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-4chdx"] Nov 21 20:28:59 crc kubenswrapper[4727]: I1121 20:28:59.201918 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" 
event={"ID":"5d4eb186-2b59-4b16-bd67-0b9f64c233a6","Type":"ContainerStarted","Data":"d9973161cffeb95a7e7909521d7c73a874e16139ded65929e5ff96194dc563e6"} Nov 21 20:28:59 crc kubenswrapper[4727]: I1121 20:28:59.202155 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"5d4eb186-2b59-4b16-bd67-0b9f64c233a6","Type":"ContainerStarted","Data":"1adf415f8f9cf2331b409ed05580c173a2baeb07aed84b9058f3558466dd8fb7"} Nov 21 20:28:59 crc kubenswrapper[4727]: I1121 20:28:59.203597 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Nov 21 20:28:59 crc kubenswrapper[4727]: I1121 20:28:59.221137 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2a715021-194c-4f1e-9de8-00113677ca48","Type":"ContainerStarted","Data":"ea3151f57b4f88c80bb5c1098d42f306a926d97ff8b0eeb1f46c7ea59d3e9018"} Nov 21 20:28:59 crc kubenswrapper[4727]: I1121 20:28:59.225632 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-4chdx" event={"ID":"da25dd6f-c2ed-4b21-b03b-89652deba65e","Type":"ContainerStarted","Data":"4c89abf5ef49083084f08478d66f1f4dd53ef3e6067d182369683b2394801389"} Nov 21 20:28:59 crc kubenswrapper[4727]: I1121 20:28:59.227589 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.22757506 podStartE2EDuration="2.22757506s" podCreationTimestamp="2025-11-21 20:28:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:28:59.223280406 +0000 UTC m=+1344.409465450" watchObservedRunningTime="2025-11-21 20:28:59.22757506 +0000 UTC m=+1344.413760104" Nov 21 20:28:59 crc kubenswrapper[4727]: I1121 20:28:59.235382 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-93ab-account-create-j9jlk" 
event={"ID":"b0721f70-0de7-476d-8c01-6add6d0767e6","Type":"ContainerStarted","Data":"358334f33926a9f5fd9ed32aacd1999f5f8372939cdcc5d69c1e6e131bb0a94c"} Nov 21 20:28:59 crc kubenswrapper[4727]: I1121 20:28:59.529075 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d85cee99-5eae-4395-b2a7-733aa041212f" path="/var/lib/kubelet/pods/d85cee99-5eae-4395-b2a7-733aa041212f/volumes" Nov 21 20:29:00 crc kubenswrapper[4727]: I1121 20:29:00.247849 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2a715021-194c-4f1e-9de8-00113677ca48","Type":"ContainerStarted","Data":"b4f393f31e7579df6443a05ef618c5e3d9593ef7438e33d562c7a6d5a6a5ccad"} Nov 21 20:29:00 crc kubenswrapper[4727]: I1121 20:29:00.250120 4727 generic.go:334] "Generic (PLEG): container finished" podID="b0721f70-0de7-476d-8c01-6add6d0767e6" containerID="fffad2d770048435343fd82a808677fb96ffe494032222d67fd5a582d6ad8bbd" exitCode=0 Nov 21 20:29:00 crc kubenswrapper[4727]: I1121 20:29:00.250181 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-93ab-account-create-j9jlk" event={"ID":"b0721f70-0de7-476d-8c01-6add6d0767e6","Type":"ContainerDied","Data":"fffad2d770048435343fd82a808677fb96ffe494032222d67fd5a582d6ad8bbd"} Nov 21 20:29:00 crc kubenswrapper[4727]: I1121 20:29:00.252153 4727 generic.go:334] "Generic (PLEG): container finished" podID="da25dd6f-c2ed-4b21-b03b-89652deba65e" containerID="7c7a8e4daeeae261705248452ccd92d646faeb2c2ee62f5ac40d365ae6b50dc9" exitCode=0 Nov 21 20:29:00 crc kubenswrapper[4727]: I1121 20:29:00.253149 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-4chdx" event={"ID":"da25dd6f-c2ed-4b21-b03b-89652deba65e","Type":"ContainerDied","Data":"7c7a8e4daeeae261705248452ccd92d646faeb2c2ee62f5ac40d365ae6b50dc9"} Nov 21 20:29:01 crc kubenswrapper[4727]: I1121 20:29:01.265733 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"2a715021-194c-4f1e-9de8-00113677ca48","Type":"ContainerStarted","Data":"c96da29292fff77404118763744ff55a140910b4a01e39746a9902e952d7366b"} Nov 21 20:29:01 crc kubenswrapper[4727]: I1121 20:29:01.290469 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.4957195890000001 podStartE2EDuration="5.29045364s" podCreationTimestamp="2025-11-21 20:28:56 +0000 UTC" firstStartedPulling="2025-11-21 20:28:57.11956074 +0000 UTC m=+1342.305745784" lastFinishedPulling="2025-11-21 20:29:00.914294781 +0000 UTC m=+1346.100479835" observedRunningTime="2025-11-21 20:29:01.288538843 +0000 UTC m=+1346.474723887" watchObservedRunningTime="2025-11-21 20:29:01.29045364 +0000 UTC m=+1346.476638684" Nov 21 20:29:01 crc kubenswrapper[4727]: I1121 20:29:01.887297 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-4chdx" Nov 21 20:29:01 crc kubenswrapper[4727]: I1121 20:29:01.888559 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-93ab-account-create-j9jlk" Nov 21 20:29:01 crc kubenswrapper[4727]: I1121 20:29:01.927249 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zwxl\" (UniqueName: \"kubernetes.io/projected/da25dd6f-c2ed-4b21-b03b-89652deba65e-kube-api-access-4zwxl\") pod \"da25dd6f-c2ed-4b21-b03b-89652deba65e\" (UID: \"da25dd6f-c2ed-4b21-b03b-89652deba65e\") " Nov 21 20:29:01 crc kubenswrapper[4727]: I1121 20:29:01.927767 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0721f70-0de7-476d-8c01-6add6d0767e6-operator-scripts\") pod \"b0721f70-0de7-476d-8c01-6add6d0767e6\" (UID: \"b0721f70-0de7-476d-8c01-6add6d0767e6\") " Nov 21 20:29:01 crc kubenswrapper[4727]: I1121 20:29:01.927907 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da25dd6f-c2ed-4b21-b03b-89652deba65e-operator-scripts\") pod \"da25dd6f-c2ed-4b21-b03b-89652deba65e\" (UID: \"da25dd6f-c2ed-4b21-b03b-89652deba65e\") " Nov 21 20:29:01 crc kubenswrapper[4727]: I1121 20:29:01.928109 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7vgm\" (UniqueName: \"kubernetes.io/projected/b0721f70-0de7-476d-8c01-6add6d0767e6-kube-api-access-t7vgm\") pod \"b0721f70-0de7-476d-8c01-6add6d0767e6\" (UID: \"b0721f70-0de7-476d-8c01-6add6d0767e6\") " Nov 21 20:29:01 crc kubenswrapper[4727]: I1121 20:29:01.928723 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0721f70-0de7-476d-8c01-6add6d0767e6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b0721f70-0de7-476d-8c01-6add6d0767e6" (UID: "b0721f70-0de7-476d-8c01-6add6d0767e6"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:29:01 crc kubenswrapper[4727]: I1121 20:29:01.929014 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da25dd6f-c2ed-4b21-b03b-89652deba65e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "da25dd6f-c2ed-4b21-b03b-89652deba65e" (UID: "da25dd6f-c2ed-4b21-b03b-89652deba65e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:29:01 crc kubenswrapper[4727]: I1121 20:29:01.929226 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0721f70-0de7-476d-8c01-6add6d0767e6-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 20:29:01 crc kubenswrapper[4727]: I1121 20:29:01.929310 4727 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da25dd6f-c2ed-4b21-b03b-89652deba65e-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 20:29:01 crc kubenswrapper[4727]: I1121 20:29:01.938084 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da25dd6f-c2ed-4b21-b03b-89652deba65e-kube-api-access-4zwxl" (OuterVolumeSpecName: "kube-api-access-4zwxl") pod "da25dd6f-c2ed-4b21-b03b-89652deba65e" (UID: "da25dd6f-c2ed-4b21-b03b-89652deba65e"). InnerVolumeSpecName "kube-api-access-4zwxl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:29:01 crc kubenswrapper[4727]: I1121 20:29:01.952455 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0721f70-0de7-476d-8c01-6add6d0767e6-kube-api-access-t7vgm" (OuterVolumeSpecName: "kube-api-access-t7vgm") pod "b0721f70-0de7-476d-8c01-6add6d0767e6" (UID: "b0721f70-0de7-476d-8c01-6add6d0767e6"). InnerVolumeSpecName "kube-api-access-t7vgm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:29:02 crc kubenswrapper[4727]: I1121 20:29:02.031892 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zwxl\" (UniqueName: \"kubernetes.io/projected/da25dd6f-c2ed-4b21-b03b-89652deba65e-kube-api-access-4zwxl\") on node \"crc\" DevicePath \"\"" Nov 21 20:29:02 crc kubenswrapper[4727]: I1121 20:29:02.031934 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7vgm\" (UniqueName: \"kubernetes.io/projected/b0721f70-0de7-476d-8c01-6add6d0767e6-kube-api-access-t7vgm\") on node \"crc\" DevicePath \"\"" Nov 21 20:29:02 crc kubenswrapper[4727]: I1121 20:29:02.276698 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-93ab-account-create-j9jlk" Nov 21 20:29:02 crc kubenswrapper[4727]: I1121 20:29:02.276834 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-93ab-account-create-j9jlk" event={"ID":"b0721f70-0de7-476d-8c01-6add6d0767e6","Type":"ContainerDied","Data":"358334f33926a9f5fd9ed32aacd1999f5f8372939cdcc5d69c1e6e131bb0a94c"} Nov 21 20:29:02 crc kubenswrapper[4727]: I1121 20:29:02.276877 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="358334f33926a9f5fd9ed32aacd1999f5f8372939cdcc5d69c1e6e131bb0a94c" Nov 21 20:29:02 crc kubenswrapper[4727]: I1121 20:29:02.280929 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-4chdx" event={"ID":"da25dd6f-c2ed-4b21-b03b-89652deba65e","Type":"ContainerDied","Data":"4c89abf5ef49083084f08478d66f1f4dd53ef3e6067d182369683b2394801389"} Nov 21 20:29:02 crc kubenswrapper[4727]: I1121 20:29:02.280977 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c89abf5ef49083084f08478d66f1f4dd53ef3e6067d182369683b2394801389" Nov 21 20:29:02 crc kubenswrapper[4727]: I1121 20:29:02.281036 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-4chdx" Nov 21 20:29:02 crc kubenswrapper[4727]: I1121 20:29:02.281216 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 21 20:29:07 crc kubenswrapper[4727]: I1121 20:29:07.637929 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.243255 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-scscp"] Nov 21 20:29:08 crc kubenswrapper[4727]: E1121 20:29:08.244047 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0721f70-0de7-476d-8c01-6add6d0767e6" containerName="mariadb-account-create" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.244062 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0721f70-0de7-476d-8c01-6add6d0767e6" containerName="mariadb-account-create" Nov 21 20:29:08 crc kubenswrapper[4727]: E1121 20:29:08.244095 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da25dd6f-c2ed-4b21-b03b-89652deba65e" containerName="mariadb-database-create" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.244103 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="da25dd6f-c2ed-4b21-b03b-89652deba65e" containerName="mariadb-database-create" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.244333 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0721f70-0de7-476d-8c01-6add6d0767e6" containerName="mariadb-account-create" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.244347 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="da25dd6f-c2ed-4b21-b03b-89652deba65e" containerName="mariadb-database-create" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.245177 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-scscp" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.250491 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.250505 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-5w2vq" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.250691 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.250981 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.258297 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-scscp"] Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.295277 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-ncgt2"] Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.295296 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/218dc4f4-1ad8-4106-a954-73e6b2e7359f-config-data\") pod \"aodh-db-sync-scscp\" (UID: \"218dc4f4-1ad8-4106-a954-73e6b2e7359f\") " pod="openstack/aodh-db-sync-scscp" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.295347 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/218dc4f4-1ad8-4106-a954-73e6b2e7359f-combined-ca-bundle\") pod \"aodh-db-sync-scscp\" (UID: \"218dc4f4-1ad8-4106-a954-73e6b2e7359f\") " pod="openstack/aodh-db-sync-scscp" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.295406 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/218dc4f4-1ad8-4106-a954-73e6b2e7359f-scripts\") pod \"aodh-db-sync-scscp\" (UID: \"218dc4f4-1ad8-4106-a954-73e6b2e7359f\") " pod="openstack/aodh-db-sync-scscp" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.295609 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms79z\" (UniqueName: \"kubernetes.io/projected/218dc4f4-1ad8-4106-a954-73e6b2e7359f-kube-api-access-ms79z\") pod \"aodh-db-sync-scscp\" (UID: \"218dc4f4-1ad8-4106-a954-73e6b2e7359f\") " pod="openstack/aodh-db-sync-scscp" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.296922 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-ncgt2" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.299469 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.299658 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.310268 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-ncgt2"] Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.397994 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms79z\" (UniqueName: \"kubernetes.io/projected/218dc4f4-1ad8-4106-a954-73e6b2e7359f-kube-api-access-ms79z\") pod \"aodh-db-sync-scscp\" (UID: \"218dc4f4-1ad8-4106-a954-73e6b2e7359f\") " pod="openstack/aodh-db-sync-scscp" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.398194 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/218dc4f4-1ad8-4106-a954-73e6b2e7359f-config-data\") pod \"aodh-db-sync-scscp\" (UID: \"218dc4f4-1ad8-4106-a954-73e6b2e7359f\") " 
pod="openstack/aodh-db-sync-scscp" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.398235 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/515c1bfb-ed4e-4848-837d-aed1c1e5fd53-scripts\") pod \"nova-cell0-cell-mapping-ncgt2\" (UID: \"515c1bfb-ed4e-4848-837d-aed1c1e5fd53\") " pod="openstack/nova-cell0-cell-mapping-ncgt2" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.398283 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/218dc4f4-1ad8-4106-a954-73e6b2e7359f-combined-ca-bundle\") pod \"aodh-db-sync-scscp\" (UID: \"218dc4f4-1ad8-4106-a954-73e6b2e7359f\") " pod="openstack/aodh-db-sync-scscp" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.398375 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7l4vt\" (UniqueName: \"kubernetes.io/projected/515c1bfb-ed4e-4848-837d-aed1c1e5fd53-kube-api-access-7l4vt\") pod \"nova-cell0-cell-mapping-ncgt2\" (UID: \"515c1bfb-ed4e-4848-837d-aed1c1e5fd53\") " pod="openstack/nova-cell0-cell-mapping-ncgt2" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.398417 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/218dc4f4-1ad8-4106-a954-73e6b2e7359f-scripts\") pod \"aodh-db-sync-scscp\" (UID: \"218dc4f4-1ad8-4106-a954-73e6b2e7359f\") " pod="openstack/aodh-db-sync-scscp" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.398440 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/515c1bfb-ed4e-4848-837d-aed1c1e5fd53-config-data\") pod \"nova-cell0-cell-mapping-ncgt2\" (UID: \"515c1bfb-ed4e-4848-837d-aed1c1e5fd53\") " pod="openstack/nova-cell0-cell-mapping-ncgt2" Nov 21 20:29:08 crc 
kubenswrapper[4727]: I1121 20:29:08.398533 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/515c1bfb-ed4e-4848-837d-aed1c1e5fd53-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-ncgt2\" (UID: \"515c1bfb-ed4e-4848-837d-aed1c1e5fd53\") " pod="openstack/nova-cell0-cell-mapping-ncgt2" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.406318 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/218dc4f4-1ad8-4106-a954-73e6b2e7359f-config-data\") pod \"aodh-db-sync-scscp\" (UID: \"218dc4f4-1ad8-4106-a954-73e6b2e7359f\") " pod="openstack/aodh-db-sync-scscp" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.411565 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/218dc4f4-1ad8-4106-a954-73e6b2e7359f-combined-ca-bundle\") pod \"aodh-db-sync-scscp\" (UID: \"218dc4f4-1ad8-4106-a954-73e6b2e7359f\") " pod="openstack/aodh-db-sync-scscp" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.423804 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/218dc4f4-1ad8-4106-a954-73e6b2e7359f-scripts\") pod \"aodh-db-sync-scscp\" (UID: \"218dc4f4-1ad8-4106-a954-73e6b2e7359f\") " pod="openstack/aodh-db-sync-scscp" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.435585 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms79z\" (UniqueName: \"kubernetes.io/projected/218dc4f4-1ad8-4106-a954-73e6b2e7359f-kube-api-access-ms79z\") pod \"aodh-db-sync-scscp\" (UID: \"218dc4f4-1ad8-4106-a954-73e6b2e7359f\") " pod="openstack/aodh-db-sync-scscp" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.470991 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 20:29:08 
crc kubenswrapper[4727]: I1121 20:29:08.472569 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.478119 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.501321 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00455947-f2da-4823-8bcc-8a589f1f475b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"00455947-f2da-4823-8bcc-8a589f1f475b\") " pod="openstack/nova-scheduler-0" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.501388 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00455947-f2da-4823-8bcc-8a589f1f475b-config-data\") pod \"nova-scheduler-0\" (UID: \"00455947-f2da-4823-8bcc-8a589f1f475b\") " pod="openstack/nova-scheduler-0" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.501593 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/515c1bfb-ed4e-4848-837d-aed1c1e5fd53-scripts\") pod \"nova-cell0-cell-mapping-ncgt2\" (UID: \"515c1bfb-ed4e-4848-837d-aed1c1e5fd53\") " pod="openstack/nova-cell0-cell-mapping-ncgt2" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.501657 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7l4vt\" (UniqueName: \"kubernetes.io/projected/515c1bfb-ed4e-4848-837d-aed1c1e5fd53-kube-api-access-7l4vt\") pod \"nova-cell0-cell-mapping-ncgt2\" (UID: \"515c1bfb-ed4e-4848-837d-aed1c1e5fd53\") " pod="openstack/nova-cell0-cell-mapping-ncgt2" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.501681 4727 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/515c1bfb-ed4e-4848-837d-aed1c1e5fd53-config-data\") pod \"nova-cell0-cell-mapping-ncgt2\" (UID: \"515c1bfb-ed4e-4848-837d-aed1c1e5fd53\") " pod="openstack/nova-cell0-cell-mapping-ncgt2" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.501735 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/515c1bfb-ed4e-4848-837d-aed1c1e5fd53-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-ncgt2\" (UID: \"515c1bfb-ed4e-4848-837d-aed1c1e5fd53\") " pod="openstack/nova-cell0-cell-mapping-ncgt2" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.501795 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmrwg\" (UniqueName: \"kubernetes.io/projected/00455947-f2da-4823-8bcc-8a589f1f475b-kube-api-access-wmrwg\") pod \"nova-scheduler-0\" (UID: \"00455947-f2da-4823-8bcc-8a589f1f475b\") " pod="openstack/nova-scheduler-0" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.519617 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/515c1bfb-ed4e-4848-837d-aed1c1e5fd53-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-ncgt2\" (UID: \"515c1bfb-ed4e-4848-837d-aed1c1e5fd53\") " pod="openstack/nova-cell0-cell-mapping-ncgt2" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.520458 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/515c1bfb-ed4e-4848-837d-aed1c1e5fd53-config-data\") pod \"nova-cell0-cell-mapping-ncgt2\" (UID: \"515c1bfb-ed4e-4848-837d-aed1c1e5fd53\") " pod="openstack/nova-cell0-cell-mapping-ncgt2" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.520582 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/515c1bfb-ed4e-4848-837d-aed1c1e5fd53-scripts\") pod \"nova-cell0-cell-mapping-ncgt2\" (UID: \"515c1bfb-ed4e-4848-837d-aed1c1e5fd53\") " pod="openstack/nova-cell0-cell-mapping-ncgt2" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.547811 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7l4vt\" (UniqueName: \"kubernetes.io/projected/515c1bfb-ed4e-4848-837d-aed1c1e5fd53-kube-api-access-7l4vt\") pod \"nova-cell0-cell-mapping-ncgt2\" (UID: \"515c1bfb-ed4e-4848-837d-aed1c1e5fd53\") " pod="openstack/nova-cell0-cell-mapping-ncgt2" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.566330 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-scscp" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.573602 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.604236 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmrwg\" (UniqueName: \"kubernetes.io/projected/00455947-f2da-4823-8bcc-8a589f1f475b-kube-api-access-wmrwg\") pod \"nova-scheduler-0\" (UID: \"00455947-f2da-4823-8bcc-8a589f1f475b\") " pod="openstack/nova-scheduler-0" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.604305 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00455947-f2da-4823-8bcc-8a589f1f475b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"00455947-f2da-4823-8bcc-8a589f1f475b\") " pod="openstack/nova-scheduler-0" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.604335 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00455947-f2da-4823-8bcc-8a589f1f475b-config-data\") pod \"nova-scheduler-0\" (UID: \"00455947-f2da-4823-8bcc-8a589f1f475b\") " 
pod="openstack/nova-scheduler-0" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.614487 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-ncgt2" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.620100 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00455947-f2da-4823-8bcc-8a589f1f475b-config-data\") pod \"nova-scheduler-0\" (UID: \"00455947-f2da-4823-8bcc-8a589f1f475b\") " pod="openstack/nova-scheduler-0" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.632675 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00455947-f2da-4823-8bcc-8a589f1f475b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"00455947-f2da-4823-8bcc-8a589f1f475b\") " pod="openstack/nova-scheduler-0" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.634882 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.637319 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.654483 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.666903 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.669570 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.677283 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.680620 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.702642 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmrwg\" (UniqueName: \"kubernetes.io/projected/00455947-f2da-4823-8bcc-8a589f1f475b-kube-api-access-wmrwg\") pod \"nova-scheduler-0\" (UID: \"00455947-f2da-4823-8bcc-8a589f1f475b\") " pod="openstack/nova-scheduler-0" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.708588 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8205686-4928-4b1b-a38c-7c532510b82a-config-data\") pod \"nova-metadata-0\" (UID: \"b8205686-4928-4b1b-a38c-7c532510b82a\") " pod="openstack/nova-metadata-0" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.708667 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b19bf42d-1611-4cff-a0a0-ea1e69019397-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b19bf42d-1611-4cff-a0a0-ea1e69019397\") " pod="openstack/nova-api-0" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.708705 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8205686-4928-4b1b-a38c-7c532510b82a-logs\") pod \"nova-metadata-0\" (UID: \"b8205686-4928-4b1b-a38c-7c532510b82a\") " pod="openstack/nova-metadata-0" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.708765 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-nhqfv\" (UniqueName: \"kubernetes.io/projected/b8205686-4928-4b1b-a38c-7c532510b82a-kube-api-access-nhqfv\") pod \"nova-metadata-0\" (UID: \"b8205686-4928-4b1b-a38c-7c532510b82a\") " pod="openstack/nova-metadata-0" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.708850 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8205686-4928-4b1b-a38c-7c532510b82a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b8205686-4928-4b1b-a38c-7c532510b82a\") " pod="openstack/nova-metadata-0" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.708901 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b19bf42d-1611-4cff-a0a0-ea1e69019397-config-data\") pod \"nova-api-0\" (UID: \"b19bf42d-1611-4cff-a0a0-ea1e69019397\") " pod="openstack/nova-api-0" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.708979 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b19bf42d-1611-4cff-a0a0-ea1e69019397-logs\") pod \"nova-api-0\" (UID: \"b19bf42d-1611-4cff-a0a0-ea1e69019397\") " pod="openstack/nova-api-0" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.709042 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsjph\" (UniqueName: \"kubernetes.io/projected/b19bf42d-1611-4cff-a0a0-ea1e69019397-kube-api-access-dsjph\") pod \"nova-api-0\" (UID: \"b19bf42d-1611-4cff-a0a0-ea1e69019397\") " pod="openstack/nova-api-0" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.711466 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.804305 4727 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.805859 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.812035 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.813767 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b19bf42d-1611-4cff-a0a0-ea1e69019397-logs\") pod \"nova-api-0\" (UID: \"b19bf42d-1611-4cff-a0a0-ea1e69019397\") " pod="openstack/nova-api-0" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.813836 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsjph\" (UniqueName: \"kubernetes.io/projected/b19bf42d-1611-4cff-a0a0-ea1e69019397-kube-api-access-dsjph\") pod \"nova-api-0\" (UID: \"b19bf42d-1611-4cff-a0a0-ea1e69019397\") " pod="openstack/nova-api-0" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.813870 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8205686-4928-4b1b-a38c-7c532510b82a-config-data\") pod \"nova-metadata-0\" (UID: \"b8205686-4928-4b1b-a38c-7c532510b82a\") " pod="openstack/nova-metadata-0" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.813913 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b19bf42d-1611-4cff-a0a0-ea1e69019397-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b19bf42d-1611-4cff-a0a0-ea1e69019397\") " pod="openstack/nova-api-0" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.813951 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/b8205686-4928-4b1b-a38c-7c532510b82a-logs\") pod \"nova-metadata-0\" (UID: \"b8205686-4928-4b1b-a38c-7c532510b82a\") " pod="openstack/nova-metadata-0" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.814015 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhqfv\" (UniqueName: \"kubernetes.io/projected/b8205686-4928-4b1b-a38c-7c532510b82a-kube-api-access-nhqfv\") pod \"nova-metadata-0\" (UID: \"b8205686-4928-4b1b-a38c-7c532510b82a\") " pod="openstack/nova-metadata-0" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.814075 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8205686-4928-4b1b-a38c-7c532510b82a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b8205686-4928-4b1b-a38c-7c532510b82a\") " pod="openstack/nova-metadata-0" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.814112 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b19bf42d-1611-4cff-a0a0-ea1e69019397-config-data\") pod \"nova-api-0\" (UID: \"b19bf42d-1611-4cff-a0a0-ea1e69019397\") " pod="openstack/nova-api-0" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.815471 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b19bf42d-1611-4cff-a0a0-ea1e69019397-logs\") pod \"nova-api-0\" (UID: \"b19bf42d-1611-4cff-a0a0-ea1e69019397\") " pod="openstack/nova-api-0" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.817082 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8205686-4928-4b1b-a38c-7c532510b82a-logs\") pod \"nova-metadata-0\" (UID: \"b8205686-4928-4b1b-a38c-7c532510b82a\") " pod="openstack/nova-metadata-0" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.819611 4727 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b19bf42d-1611-4cff-a0a0-ea1e69019397-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b19bf42d-1611-4cff-a0a0-ea1e69019397\") " pod="openstack/nova-api-0" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.823827 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8205686-4928-4b1b-a38c-7c532510b82a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b8205686-4928-4b1b-a38c-7c532510b82a\") " pod="openstack/nova-metadata-0" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.831629 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b19bf42d-1611-4cff-a0a0-ea1e69019397-config-data\") pod \"nova-api-0\" (UID: \"b19bf42d-1611-4cff-a0a0-ea1e69019397\") " pod="openstack/nova-api-0" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.832171 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8205686-4928-4b1b-a38c-7c532510b82a-config-data\") pod \"nova-metadata-0\" (UID: \"b8205686-4928-4b1b-a38c-7c532510b82a\") " pod="openstack/nova-metadata-0" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.860514 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-6gwj6"] Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.866853 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhqfv\" (UniqueName: \"kubernetes.io/projected/b8205686-4928-4b1b-a38c-7c532510b82a-kube-api-access-nhqfv\") pod \"nova-metadata-0\" (UID: \"b8205686-4928-4b1b-a38c-7c532510b82a\") " pod="openstack/nova-metadata-0" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.883384 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-dsjph\" (UniqueName: \"kubernetes.io/projected/b19bf42d-1611-4cff-a0a0-ea1e69019397-kube-api-access-dsjph\") pod \"nova-api-0\" (UID: \"b19bf42d-1611-4cff-a0a0-ea1e69019397\") " pod="openstack/nova-api-0" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.883401 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-6gwj6" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.895886 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.918062 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4ca9df53-c075-43ff-9f60-c3533e94e265-ovsdbserver-sb\") pod \"dnsmasq-dns-568d7fd7cf-6gwj6\" (UID: \"4ca9df53-c075-43ff-9f60-c3533e94e265\") " pod="openstack/dnsmasq-dns-568d7fd7cf-6gwj6" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.918317 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60ea202f-6cf5-4e18-b267-1ea18cd187fd-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"60ea202f-6cf5-4e18-b267-1ea18cd187fd\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.918348 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xm5z7\" (UniqueName: \"kubernetes.io/projected/60ea202f-6cf5-4e18-b267-1ea18cd187fd-kube-api-access-xm5z7\") pod \"nova-cell1-novncproxy-0\" (UID: \"60ea202f-6cf5-4e18-b267-1ea18cd187fd\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.918369 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/4ca9df53-c075-43ff-9f60-c3533e94e265-dns-svc\") pod \"dnsmasq-dns-568d7fd7cf-6gwj6\" (UID: \"4ca9df53-c075-43ff-9f60-c3533e94e265\") " pod="openstack/dnsmasq-dns-568d7fd7cf-6gwj6" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.918415 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4ca9df53-c075-43ff-9f60-c3533e94e265-dns-swift-storage-0\") pod \"dnsmasq-dns-568d7fd7cf-6gwj6\" (UID: \"4ca9df53-c075-43ff-9f60-c3533e94e265\") " pod="openstack/dnsmasq-dns-568d7fd7cf-6gwj6" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.918477 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60ea202f-6cf5-4e18-b267-1ea18cd187fd-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"60ea202f-6cf5-4e18-b267-1ea18cd187fd\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.918514 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ca9df53-c075-43ff-9f60-c3533e94e265-config\") pod \"dnsmasq-dns-568d7fd7cf-6gwj6\" (UID: \"4ca9df53-c075-43ff-9f60-c3533e94e265\") " pod="openstack/dnsmasq-dns-568d7fd7cf-6gwj6" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.918639 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pz26m\" (UniqueName: \"kubernetes.io/projected/4ca9df53-c075-43ff-9f60-c3533e94e265-kube-api-access-pz26m\") pod \"dnsmasq-dns-568d7fd7cf-6gwj6\" (UID: \"4ca9df53-c075-43ff-9f60-c3533e94e265\") " pod="openstack/dnsmasq-dns-568d7fd7cf-6gwj6" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.918797 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4ca9df53-c075-43ff-9f60-c3533e94e265-ovsdbserver-nb\") pod \"dnsmasq-dns-568d7fd7cf-6gwj6\" (UID: \"4ca9df53-c075-43ff-9f60-c3533e94e265\") " pod="openstack/dnsmasq-dns-568d7fd7cf-6gwj6" Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.977044 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-6gwj6"] Nov 21 20:29:08 crc kubenswrapper[4727]: I1121 20:29:08.998969 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 21 20:29:09 crc kubenswrapper[4727]: I1121 20:29:09.024623 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4ca9df53-c075-43ff-9f60-c3533e94e265-ovsdbserver-nb\") pod \"dnsmasq-dns-568d7fd7cf-6gwj6\" (UID: \"4ca9df53-c075-43ff-9f60-c3533e94e265\") " pod="openstack/dnsmasq-dns-568d7fd7cf-6gwj6" Nov 21 20:29:09 crc kubenswrapper[4727]: I1121 20:29:09.024685 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4ca9df53-c075-43ff-9f60-c3533e94e265-ovsdbserver-sb\") pod \"dnsmasq-dns-568d7fd7cf-6gwj6\" (UID: \"4ca9df53-c075-43ff-9f60-c3533e94e265\") " pod="openstack/dnsmasq-dns-568d7fd7cf-6gwj6" Nov 21 20:29:09 crc kubenswrapper[4727]: I1121 20:29:09.024806 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60ea202f-6cf5-4e18-b267-1ea18cd187fd-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"60ea202f-6cf5-4e18-b267-1ea18cd187fd\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 20:29:09 crc kubenswrapper[4727]: I1121 20:29:09.024835 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4ca9df53-c075-43ff-9f60-c3533e94e265-dns-svc\") pod 
\"dnsmasq-dns-568d7fd7cf-6gwj6\" (UID: \"4ca9df53-c075-43ff-9f60-c3533e94e265\") " pod="openstack/dnsmasq-dns-568d7fd7cf-6gwj6" Nov 21 20:29:09 crc kubenswrapper[4727]: I1121 20:29:09.024860 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xm5z7\" (UniqueName: \"kubernetes.io/projected/60ea202f-6cf5-4e18-b267-1ea18cd187fd-kube-api-access-xm5z7\") pod \"nova-cell1-novncproxy-0\" (UID: \"60ea202f-6cf5-4e18-b267-1ea18cd187fd\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 20:29:09 crc kubenswrapper[4727]: I1121 20:29:09.024897 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4ca9df53-c075-43ff-9f60-c3533e94e265-dns-swift-storage-0\") pod \"dnsmasq-dns-568d7fd7cf-6gwj6\" (UID: \"4ca9df53-c075-43ff-9f60-c3533e94e265\") " pod="openstack/dnsmasq-dns-568d7fd7cf-6gwj6" Nov 21 20:29:09 crc kubenswrapper[4727]: I1121 20:29:09.024945 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60ea202f-6cf5-4e18-b267-1ea18cd187fd-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"60ea202f-6cf5-4e18-b267-1ea18cd187fd\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 20:29:09 crc kubenswrapper[4727]: I1121 20:29:09.024996 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ca9df53-c075-43ff-9f60-c3533e94e265-config\") pod \"dnsmasq-dns-568d7fd7cf-6gwj6\" (UID: \"4ca9df53-c075-43ff-9f60-c3533e94e265\") " pod="openstack/dnsmasq-dns-568d7fd7cf-6gwj6" Nov 21 20:29:09 crc kubenswrapper[4727]: I1121 20:29:09.025082 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pz26m\" (UniqueName: \"kubernetes.io/projected/4ca9df53-c075-43ff-9f60-c3533e94e265-kube-api-access-pz26m\") pod \"dnsmasq-dns-568d7fd7cf-6gwj6\" (UID: 
\"4ca9df53-c075-43ff-9f60-c3533e94e265\") " pod="openstack/dnsmasq-dns-568d7fd7cf-6gwj6" Nov 21 20:29:09 crc kubenswrapper[4727]: I1121 20:29:09.027046 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4ca9df53-c075-43ff-9f60-c3533e94e265-dns-swift-storage-0\") pod \"dnsmasq-dns-568d7fd7cf-6gwj6\" (UID: \"4ca9df53-c075-43ff-9f60-c3533e94e265\") " pod="openstack/dnsmasq-dns-568d7fd7cf-6gwj6" Nov 21 20:29:09 crc kubenswrapper[4727]: I1121 20:29:09.027711 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4ca9df53-c075-43ff-9f60-c3533e94e265-dns-svc\") pod \"dnsmasq-dns-568d7fd7cf-6gwj6\" (UID: \"4ca9df53-c075-43ff-9f60-c3533e94e265\") " pod="openstack/dnsmasq-dns-568d7fd7cf-6gwj6" Nov 21 20:29:09 crc kubenswrapper[4727]: I1121 20:29:09.029069 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4ca9df53-c075-43ff-9f60-c3533e94e265-ovsdbserver-nb\") pod \"dnsmasq-dns-568d7fd7cf-6gwj6\" (UID: \"4ca9df53-c075-43ff-9f60-c3533e94e265\") " pod="openstack/dnsmasq-dns-568d7fd7cf-6gwj6" Nov 21 20:29:09 crc kubenswrapper[4727]: I1121 20:29:09.030701 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 21 20:29:09 crc kubenswrapper[4727]: I1121 20:29:09.031888 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4ca9df53-c075-43ff-9f60-c3533e94e265-ovsdbserver-sb\") pod \"dnsmasq-dns-568d7fd7cf-6gwj6\" (UID: \"4ca9df53-c075-43ff-9f60-c3533e94e265\") " pod="openstack/dnsmasq-dns-568d7fd7cf-6gwj6" Nov 21 20:29:09 crc kubenswrapper[4727]: I1121 20:29:09.032451 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ca9df53-c075-43ff-9f60-c3533e94e265-config\") pod \"dnsmasq-dns-568d7fd7cf-6gwj6\" (UID: \"4ca9df53-c075-43ff-9f60-c3533e94e265\") " pod="openstack/dnsmasq-dns-568d7fd7cf-6gwj6" Nov 21 20:29:09 crc kubenswrapper[4727]: I1121 20:29:09.034174 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60ea202f-6cf5-4e18-b267-1ea18cd187fd-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"60ea202f-6cf5-4e18-b267-1ea18cd187fd\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 20:29:09 crc kubenswrapper[4727]: I1121 20:29:09.039490 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 21 20:29:09 crc kubenswrapper[4727]: I1121 20:29:09.045943 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60ea202f-6cf5-4e18-b267-1ea18cd187fd-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"60ea202f-6cf5-4e18-b267-1ea18cd187fd\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 20:29:09 crc kubenswrapper[4727]: I1121 20:29:09.058486 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xm5z7\" (UniqueName: \"kubernetes.io/projected/60ea202f-6cf5-4e18-b267-1ea18cd187fd-kube-api-access-xm5z7\") pod \"nova-cell1-novncproxy-0\" (UID: \"60ea202f-6cf5-4e18-b267-1ea18cd187fd\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 20:29:09 crc kubenswrapper[4727]: I1121 20:29:09.072924 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pz26m\" (UniqueName: \"kubernetes.io/projected/4ca9df53-c075-43ff-9f60-c3533e94e265-kube-api-access-pz26m\") pod \"dnsmasq-dns-568d7fd7cf-6gwj6\" (UID: \"4ca9df53-c075-43ff-9f60-c3533e94e265\") " pod="openstack/dnsmasq-dns-568d7fd7cf-6gwj6" Nov 21 20:29:09 crc kubenswrapper[4727]: I1121 20:29:09.163765 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 21 20:29:09 crc kubenswrapper[4727]: I1121 20:29:09.254157 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-6gwj6" Nov 21 20:29:09 crc kubenswrapper[4727]: I1121 20:29:09.363914 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-ncgt2"] Nov 21 20:29:09 crc kubenswrapper[4727]: I1121 20:29:09.647274 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-scscp"] Nov 21 20:29:09 crc kubenswrapper[4727]: I1121 20:29:09.667376 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 21 20:29:09 crc kubenswrapper[4727]: I1121 20:29:09.774022 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-4257c"] Nov 21 20:29:09 crc kubenswrapper[4727]: I1121 20:29:09.780141 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-4257c" Nov 21 20:29:09 crc kubenswrapper[4727]: I1121 20:29:09.785188 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Nov 21 20:29:09 crc kubenswrapper[4727]: I1121 20:29:09.785780 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 21 20:29:09 crc kubenswrapper[4727]: I1121 20:29:09.787761 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-4257c"] Nov 21 20:29:09 crc kubenswrapper[4727]: I1121 20:29:09.890163 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6b71f3e-eabc-4628-ad77-e1e9144d0cda-scripts\") pod \"nova-cell1-conductor-db-sync-4257c\" (UID: \"a6b71f3e-eabc-4628-ad77-e1e9144d0cda\") " pod="openstack/nova-cell1-conductor-db-sync-4257c" Nov 21 20:29:09 crc kubenswrapper[4727]: I1121 20:29:09.890771 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/a6b71f3e-eabc-4628-ad77-e1e9144d0cda-config-data\") pod \"nova-cell1-conductor-db-sync-4257c\" (UID: \"a6b71f3e-eabc-4628-ad77-e1e9144d0cda\") " pod="openstack/nova-cell1-conductor-db-sync-4257c" Nov 21 20:29:09 crc kubenswrapper[4727]: I1121 20:29:09.890862 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6b71f3e-eabc-4628-ad77-e1e9144d0cda-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-4257c\" (UID: \"a6b71f3e-eabc-4628-ad77-e1e9144d0cda\") " pod="openstack/nova-cell1-conductor-db-sync-4257c" Nov 21 20:29:09 crc kubenswrapper[4727]: I1121 20:29:09.890982 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9x8k\" (UniqueName: \"kubernetes.io/projected/a6b71f3e-eabc-4628-ad77-e1e9144d0cda-kube-api-access-g9x8k\") pod \"nova-cell1-conductor-db-sync-4257c\" (UID: \"a6b71f3e-eabc-4628-ad77-e1e9144d0cda\") " pod="openstack/nova-cell1-conductor-db-sync-4257c" Nov 21 20:29:09 crc kubenswrapper[4727]: I1121 20:29:09.937784 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 20:29:09 crc kubenswrapper[4727]: I1121 20:29:09.993137 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9x8k\" (UniqueName: \"kubernetes.io/projected/a6b71f3e-eabc-4628-ad77-e1e9144d0cda-kube-api-access-g9x8k\") pod \"nova-cell1-conductor-db-sync-4257c\" (UID: \"a6b71f3e-eabc-4628-ad77-e1e9144d0cda\") " pod="openstack/nova-cell1-conductor-db-sync-4257c" Nov 21 20:29:09 crc kubenswrapper[4727]: I1121 20:29:09.993182 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6b71f3e-eabc-4628-ad77-e1e9144d0cda-scripts\") pod \"nova-cell1-conductor-db-sync-4257c\" (UID: \"a6b71f3e-eabc-4628-ad77-e1e9144d0cda\") " 
pod="openstack/nova-cell1-conductor-db-sync-4257c" Nov 21 20:29:09 crc kubenswrapper[4727]: I1121 20:29:09.993310 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6b71f3e-eabc-4628-ad77-e1e9144d0cda-config-data\") pod \"nova-cell1-conductor-db-sync-4257c\" (UID: \"a6b71f3e-eabc-4628-ad77-e1e9144d0cda\") " pod="openstack/nova-cell1-conductor-db-sync-4257c" Nov 21 20:29:09 crc kubenswrapper[4727]: I1121 20:29:09.993364 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6b71f3e-eabc-4628-ad77-e1e9144d0cda-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-4257c\" (UID: \"a6b71f3e-eabc-4628-ad77-e1e9144d0cda\") " pod="openstack/nova-cell1-conductor-db-sync-4257c" Nov 21 20:29:10 crc kubenswrapper[4727]: I1121 20:29:10.005019 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6b71f3e-eabc-4628-ad77-e1e9144d0cda-config-data\") pod \"nova-cell1-conductor-db-sync-4257c\" (UID: \"a6b71f3e-eabc-4628-ad77-e1e9144d0cda\") " pod="openstack/nova-cell1-conductor-db-sync-4257c" Nov 21 20:29:10 crc kubenswrapper[4727]: I1121 20:29:10.010235 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6b71f3e-eabc-4628-ad77-e1e9144d0cda-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-4257c\" (UID: \"a6b71f3e-eabc-4628-ad77-e1e9144d0cda\") " pod="openstack/nova-cell1-conductor-db-sync-4257c" Nov 21 20:29:10 crc kubenswrapper[4727]: I1121 20:29:10.010740 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6b71f3e-eabc-4628-ad77-e1e9144d0cda-scripts\") pod \"nova-cell1-conductor-db-sync-4257c\" (UID: \"a6b71f3e-eabc-4628-ad77-e1e9144d0cda\") " 
pod="openstack/nova-cell1-conductor-db-sync-4257c" Nov 21 20:29:10 crc kubenswrapper[4727]: I1121 20:29:10.012071 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9x8k\" (UniqueName: \"kubernetes.io/projected/a6b71f3e-eabc-4628-ad77-e1e9144d0cda-kube-api-access-g9x8k\") pod \"nova-cell1-conductor-db-sync-4257c\" (UID: \"a6b71f3e-eabc-4628-ad77-e1e9144d0cda\") " pod="openstack/nova-cell1-conductor-db-sync-4257c" Nov 21 20:29:10 crc kubenswrapper[4727]: I1121 20:29:10.178260 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-4257c" Nov 21 20:29:10 crc kubenswrapper[4727]: I1121 20:29:10.384037 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-6gwj6"] Nov 21 20:29:10 crc kubenswrapper[4727]: I1121 20:29:10.441078 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 21 20:29:10 crc kubenswrapper[4727]: I1121 20:29:10.471011 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 20:29:10 crc kubenswrapper[4727]: I1121 20:29:10.546434 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b19bf42d-1611-4cff-a0a0-ea1e69019397","Type":"ContainerStarted","Data":"31337c0420908ebc1fac854bb277afc17efd490fac96635c1ce27894eb8de331"} Nov 21 20:29:10 crc kubenswrapper[4727]: I1121 20:29:10.561387 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-ncgt2" event={"ID":"515c1bfb-ed4e-4848-837d-aed1c1e5fd53","Type":"ContainerStarted","Data":"2e7233c23536b4233184ebbf682296735ecda51cfbd1316779174fbdfcd8bada"} Nov 21 20:29:10 crc kubenswrapper[4727]: I1121 20:29:10.561432 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-ncgt2" 
event={"ID":"515c1bfb-ed4e-4848-837d-aed1c1e5fd53","Type":"ContainerStarted","Data":"213f1de2e30ca70a9a8c285a0570a9bd3a25bdc7b8c42a20b45e0c11f3125c21"} Nov 21 20:29:10 crc kubenswrapper[4727]: I1121 20:29:10.620176 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"00455947-f2da-4823-8bcc-8a589f1f475b","Type":"ContainerStarted","Data":"c114eefd20da283b14c4820dd51872415d0806c1f7c9899668ec6c6e093f73b6"} Nov 21 20:29:10 crc kubenswrapper[4727]: I1121 20:29:10.637826 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-scscp" event={"ID":"218dc4f4-1ad8-4106-a954-73e6b2e7359f","Type":"ContainerStarted","Data":"72375b38a2ea891d8b2d1b1b68edab1745dba210f80b4ec41fc25e6bcd78a596"} Nov 21 20:29:10 crc kubenswrapper[4727]: I1121 20:29:10.646612 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-ncgt2" podStartSLOduration=2.6465870860000003 podStartE2EDuration="2.646587086s" podCreationTimestamp="2025-11-21 20:29:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:29:10.587500597 +0000 UTC m=+1355.773685641" watchObservedRunningTime="2025-11-21 20:29:10.646587086 +0000 UTC m=+1355.832772130" Nov 21 20:29:10 crc kubenswrapper[4727]: I1121 20:29:10.881003 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-4257c"] Nov 21 20:29:10 crc kubenswrapper[4727]: W1121 20:29:10.920128 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda6b71f3e_eabc_4628_ad77_e1e9144d0cda.slice/crio-9cc91e2ddbce98d87ca1981d740ee66a87929bea47b3ca513fbdcb8e8ca00f4a WatchSource:0}: Error finding container 9cc91e2ddbce98d87ca1981d740ee66a87929bea47b3ca513fbdcb8e8ca00f4a: Status 404 returned error can't find the container with id 
9cc91e2ddbce98d87ca1981d740ee66a87929bea47b3ca513fbdcb8e8ca00f4a Nov 21 20:29:11 crc kubenswrapper[4727]: I1121 20:29:11.685133 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-4257c" event={"ID":"a6b71f3e-eabc-4628-ad77-e1e9144d0cda","Type":"ContainerStarted","Data":"cf27b6bb31ea6c822f6a5c2c91e7939a02cbebe6e8c1067510284817a6373251"} Nov 21 20:29:11 crc kubenswrapper[4727]: I1121 20:29:11.685676 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-4257c" event={"ID":"a6b71f3e-eabc-4628-ad77-e1e9144d0cda","Type":"ContainerStarted","Data":"9cc91e2ddbce98d87ca1981d740ee66a87929bea47b3ca513fbdcb8e8ca00f4a"} Nov 21 20:29:11 crc kubenswrapper[4727]: I1121 20:29:11.687770 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b8205686-4928-4b1b-a38c-7c532510b82a","Type":"ContainerStarted","Data":"d2d4d927052e75e3fc43554760f26b807eeb59f0a8ea1461ac94b200911c397d"} Nov 21 20:29:11 crc kubenswrapper[4727]: I1121 20:29:11.690139 4727 generic.go:334] "Generic (PLEG): container finished" podID="4ca9df53-c075-43ff-9f60-c3533e94e265" containerID="001061385a6b4431ad3e3a3a1145b7d7205746a7c1998328be0c09f0f442e3af" exitCode=0 Nov 21 20:29:11 crc kubenswrapper[4727]: I1121 20:29:11.690369 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-6gwj6" event={"ID":"4ca9df53-c075-43ff-9f60-c3533e94e265","Type":"ContainerDied","Data":"001061385a6b4431ad3e3a3a1145b7d7205746a7c1998328be0c09f0f442e3af"} Nov 21 20:29:11 crc kubenswrapper[4727]: I1121 20:29:11.690402 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-6gwj6" event={"ID":"4ca9df53-c075-43ff-9f60-c3533e94e265","Type":"ContainerStarted","Data":"96c8ab71d2f7b3b35350d0288a683eca80516f26d335381652428b627fdbe9ea"} Nov 21 20:29:11 crc kubenswrapper[4727]: I1121 20:29:11.695118 4727 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"60ea202f-6cf5-4e18-b267-1ea18cd187fd","Type":"ContainerStarted","Data":"d63e833131c02877d666ea245caac0748c4038961a6f249d4300f793e5ae5953"} Nov 21 20:29:11 crc kubenswrapper[4727]: I1121 20:29:11.709409 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-4257c" podStartSLOduration=2.709387294 podStartE2EDuration="2.709387294s" podCreationTimestamp="2025-11-21 20:29:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:29:11.703145973 +0000 UTC m=+1356.889331017" watchObservedRunningTime="2025-11-21 20:29:11.709387294 +0000 UTC m=+1356.895572338" Nov 21 20:29:12 crc kubenswrapper[4727]: I1121 20:29:12.344833 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 20:29:12 crc kubenswrapper[4727]: I1121 20:29:12.366459 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 21 20:29:13 crc kubenswrapper[4727]: I1121 20:29:13.335367 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 20:29:13 crc kubenswrapper[4727]: I1121 20:29:13.335669 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 20:29:16 crc kubenswrapper[4727]: I1121 20:29:16.771433 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-568d7fd7cf-6gwj6" event={"ID":"4ca9df53-c075-43ff-9f60-c3533e94e265","Type":"ContainerStarted","Data":"74afa0eb8d12a86508fdca6fb3e03955b33e1b580505b20dc062a2bc24c4af84"} Nov 21 20:29:16 crc kubenswrapper[4727]: I1121 20:29:16.772014 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-568d7fd7cf-6gwj6" Nov 21 20:29:16 crc kubenswrapper[4727]: I1121 20:29:16.784349 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-scscp" event={"ID":"218dc4f4-1ad8-4106-a954-73e6b2e7359f","Type":"ContainerStarted","Data":"104a5a09a5deab1acf7fa3079e6784225a5aef1f98855e12798972f9483ca623"} Nov 21 20:29:16 crc kubenswrapper[4727]: I1121 20:29:16.793935 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"60ea202f-6cf5-4e18-b267-1ea18cd187fd","Type":"ContainerStarted","Data":"156b5251b04842a68bbacfb3c504ed5c1cacec8f94d32933c5ed7c99693f518a"} Nov 21 20:29:16 crc kubenswrapper[4727]: I1121 20:29:16.794179 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="60ea202f-6cf5-4e18-b267-1ea18cd187fd" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://156b5251b04842a68bbacfb3c504ed5c1cacec8f94d32933c5ed7c99693f518a" gracePeriod=30 Nov 21 20:29:16 crc kubenswrapper[4727]: I1121 20:29:16.796560 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-568d7fd7cf-6gwj6" podStartSLOduration=8.796543057000001 podStartE2EDuration="8.796543057s" podCreationTimestamp="2025-11-21 20:29:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:29:16.792232423 +0000 UTC m=+1361.978417467" watchObservedRunningTime="2025-11-21 20:29:16.796543057 +0000 UTC m=+1361.982728101" Nov 21 20:29:16 crc kubenswrapper[4727]: I1121 
20:29:16.802579 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b19bf42d-1611-4cff-a0a0-ea1e69019397","Type":"ContainerStarted","Data":"f86e385323ca51ceab49cb0ed745e83cc7ee136de2c589af0d1a6f5ee807b8c9"} Nov 21 20:29:16 crc kubenswrapper[4727]: I1121 20:29:16.816646 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-scscp" podStartSLOduration=2.141304013 podStartE2EDuration="8.816625303s" podCreationTimestamp="2025-11-21 20:29:08 +0000 UTC" firstStartedPulling="2025-11-21 20:29:09.716740814 +0000 UTC m=+1354.902925868" lastFinishedPulling="2025-11-21 20:29:16.392062114 +0000 UTC m=+1361.578247158" observedRunningTime="2025-11-21 20:29:16.807686867 +0000 UTC m=+1361.993871901" watchObservedRunningTime="2025-11-21 20:29:16.816625303 +0000 UTC m=+1362.002810347" Nov 21 20:29:16 crc kubenswrapper[4727]: I1121 20:29:16.816939 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"00455947-f2da-4823-8bcc-8a589f1f475b","Type":"ContainerStarted","Data":"0a5a870ab4d78e46fa4e62a4ab20143baa18e2d2877ac29ff07d18942bf76c83"} Nov 21 20:29:16 crc kubenswrapper[4727]: I1121 20:29:16.820581 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b8205686-4928-4b1b-a38c-7c532510b82a","Type":"ContainerStarted","Data":"5b92da622ec59e3c4baa6cf866bb2ced7ae448854b8df618800495427e8624de"} Nov 21 20:29:16 crc kubenswrapper[4727]: I1121 20:29:16.830513 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.0055782 podStartE2EDuration="8.830494569s" podCreationTimestamp="2025-11-21 20:29:08 +0000 UTC" firstStartedPulling="2025-11-21 20:29:10.459206673 +0000 UTC m=+1355.645391717" lastFinishedPulling="2025-11-21 20:29:16.284123042 +0000 UTC m=+1361.470308086" observedRunningTime="2025-11-21 20:29:16.829649378 +0000 UTC 
m=+1362.015834422" watchObservedRunningTime="2025-11-21 20:29:16.830494569 +0000 UTC m=+1362.016679603" Nov 21 20:29:16 crc kubenswrapper[4727]: I1121 20:29:16.857999 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.554847736 podStartE2EDuration="8.857980003s" podCreationTimestamp="2025-11-21 20:29:08 +0000 UTC" firstStartedPulling="2025-11-21 20:29:09.931174571 +0000 UTC m=+1355.117359615" lastFinishedPulling="2025-11-21 20:29:16.234306838 +0000 UTC m=+1361.420491882" observedRunningTime="2025-11-21 20:29:16.856335134 +0000 UTC m=+1362.042520178" watchObservedRunningTime="2025-11-21 20:29:16.857980003 +0000 UTC m=+1362.044165047" Nov 21 20:29:17 crc kubenswrapper[4727]: I1121 20:29:17.842152 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b8205686-4928-4b1b-a38c-7c532510b82a","Type":"ContainerStarted","Data":"8dbd12d6c6a4445cfbeb29c7ef2c35317985b0a95adda3218ae831e281d7f9d3"} Nov 21 20:29:17 crc kubenswrapper[4727]: I1121 20:29:17.842354 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b8205686-4928-4b1b-a38c-7c532510b82a" containerName="nova-metadata-log" containerID="cri-o://5b92da622ec59e3c4baa6cf866bb2ced7ae448854b8df618800495427e8624de" gracePeriod=30 Nov 21 20:29:17 crc kubenswrapper[4727]: I1121 20:29:17.842610 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b8205686-4928-4b1b-a38c-7c532510b82a" containerName="nova-metadata-metadata" containerID="cri-o://8dbd12d6c6a4445cfbeb29c7ef2c35317985b0a95adda3218ae831e281d7f9d3" gracePeriod=30 Nov 21 20:29:17 crc kubenswrapper[4727]: I1121 20:29:17.858499 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"b19bf42d-1611-4cff-a0a0-ea1e69019397","Type":"ContainerStarted","Data":"a2198cdf3dfb20d7909cc257b0f64410d87d5706e93512f7ade49536ab08c340"} Nov 21 20:29:17 crc kubenswrapper[4727]: I1121 20:29:17.875477 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=4.28679244 podStartE2EDuration="9.875452315s" podCreationTimestamp="2025-11-21 20:29:08 +0000 UTC" firstStartedPulling="2025-11-21 20:29:10.647156859 +0000 UTC m=+1355.833341903" lastFinishedPulling="2025-11-21 20:29:16.235816734 +0000 UTC m=+1361.422001778" observedRunningTime="2025-11-21 20:29:17.87194074 +0000 UTC m=+1363.058125794" watchObservedRunningTime="2025-11-21 20:29:17.875452315 +0000 UTC m=+1363.061637359" Nov 21 20:29:17 crc kubenswrapper[4727]: I1121 20:29:17.904006 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.385472908 podStartE2EDuration="9.903985775s" podCreationTimestamp="2025-11-21 20:29:08 +0000 UTC" firstStartedPulling="2025-11-21 20:29:09.717139233 +0000 UTC m=+1354.903324277" lastFinishedPulling="2025-11-21 20:29:16.2356521 +0000 UTC m=+1361.421837144" observedRunningTime="2025-11-21 20:29:17.896451533 +0000 UTC m=+1363.082636587" watchObservedRunningTime="2025-11-21 20:29:17.903985775 +0000 UTC m=+1363.090170829" Nov 21 20:29:18 crc kubenswrapper[4727]: I1121 20:29:18.892446 4727 generic.go:334] "Generic (PLEG): container finished" podID="b8205686-4928-4b1b-a38c-7c532510b82a" containerID="8dbd12d6c6a4445cfbeb29c7ef2c35317985b0a95adda3218ae831e281d7f9d3" exitCode=0 Nov 21 20:29:18 crc kubenswrapper[4727]: I1121 20:29:18.892768 4727 generic.go:334] "Generic (PLEG): container finished" podID="b8205686-4928-4b1b-a38c-7c532510b82a" containerID="5b92da622ec59e3c4baa6cf866bb2ced7ae448854b8df618800495427e8624de" exitCode=143 Nov 21 20:29:18 crc kubenswrapper[4727]: I1121 20:29:18.892544 4727 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/nova-metadata-0" event={"ID":"b8205686-4928-4b1b-a38c-7c532510b82a","Type":"ContainerDied","Data":"8dbd12d6c6a4445cfbeb29c7ef2c35317985b0a95adda3218ae831e281d7f9d3"} Nov 21 20:29:18 crc kubenswrapper[4727]: I1121 20:29:18.892816 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b8205686-4928-4b1b-a38c-7c532510b82a","Type":"ContainerDied","Data":"5b92da622ec59e3c4baa6cf866bb2ced7ae448854b8df618800495427e8624de"} Nov 21 20:29:18 crc kubenswrapper[4727]: I1121 20:29:18.895742 4727 generic.go:334] "Generic (PLEG): container finished" podID="515c1bfb-ed4e-4848-837d-aed1c1e5fd53" containerID="2e7233c23536b4233184ebbf682296735ecda51cfbd1316779174fbdfcd8bada" exitCode=0 Nov 21 20:29:18 crc kubenswrapper[4727]: I1121 20:29:18.896002 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-ncgt2" event={"ID":"515c1bfb-ed4e-4848-837d-aed1c1e5fd53","Type":"ContainerDied","Data":"2e7233c23536b4233184ebbf682296735ecda51cfbd1316779174fbdfcd8bada"} Nov 21 20:29:19 crc kubenswrapper[4727]: I1121 20:29:19.001128 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 21 20:29:19 crc kubenswrapper[4727]: I1121 20:29:19.001846 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 21 20:29:19 crc kubenswrapper[4727]: I1121 20:29:19.031800 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 21 20:29:19 crc kubenswrapper[4727]: I1121 20:29:19.032339 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 21 20:29:19 crc kubenswrapper[4727]: I1121 20:29:19.036669 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 21 20:29:19 crc kubenswrapper[4727]: I1121 20:29:19.040528 4727 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 21 20:29:19 crc kubenswrapper[4727]: I1121 20:29:19.040687 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 21 20:29:19 crc kubenswrapper[4727]: I1121 20:29:19.070183 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 21 20:29:19 crc kubenswrapper[4727]: I1121 20:29:19.159839 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8205686-4928-4b1b-a38c-7c532510b82a-combined-ca-bundle\") pod \"b8205686-4928-4b1b-a38c-7c532510b82a\" (UID: \"b8205686-4928-4b1b-a38c-7c532510b82a\") " Nov 21 20:29:19 crc kubenswrapper[4727]: I1121 20:29:19.160108 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhqfv\" (UniqueName: \"kubernetes.io/projected/b8205686-4928-4b1b-a38c-7c532510b82a-kube-api-access-nhqfv\") pod \"b8205686-4928-4b1b-a38c-7c532510b82a\" (UID: \"b8205686-4928-4b1b-a38c-7c532510b82a\") " Nov 21 20:29:19 crc kubenswrapper[4727]: I1121 20:29:19.160272 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8205686-4928-4b1b-a38c-7c532510b82a-config-data\") pod \"b8205686-4928-4b1b-a38c-7c532510b82a\" (UID: \"b8205686-4928-4b1b-a38c-7c532510b82a\") " Nov 21 20:29:19 crc kubenswrapper[4727]: I1121 20:29:19.160314 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8205686-4928-4b1b-a38c-7c532510b82a-logs\") pod \"b8205686-4928-4b1b-a38c-7c532510b82a\" (UID: \"b8205686-4928-4b1b-a38c-7c532510b82a\") " Nov 21 20:29:19 crc kubenswrapper[4727]: I1121 20:29:19.161525 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/b8205686-4928-4b1b-a38c-7c532510b82a-logs" (OuterVolumeSpecName: "logs") pod "b8205686-4928-4b1b-a38c-7c532510b82a" (UID: "b8205686-4928-4b1b-a38c-7c532510b82a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:29:19 crc kubenswrapper[4727]: I1121 20:29:19.164036 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 21 20:29:19 crc kubenswrapper[4727]: I1121 20:29:19.177926 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8205686-4928-4b1b-a38c-7c532510b82a-kube-api-access-nhqfv" (OuterVolumeSpecName: "kube-api-access-nhqfv") pod "b8205686-4928-4b1b-a38c-7c532510b82a" (UID: "b8205686-4928-4b1b-a38c-7c532510b82a"). InnerVolumeSpecName "kube-api-access-nhqfv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:29:19 crc kubenswrapper[4727]: I1121 20:29:19.216255 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8205686-4928-4b1b-a38c-7c532510b82a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b8205686-4928-4b1b-a38c-7c532510b82a" (UID: "b8205686-4928-4b1b-a38c-7c532510b82a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:29:19 crc kubenswrapper[4727]: I1121 20:29:19.236152 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8205686-4928-4b1b-a38c-7c532510b82a-config-data" (OuterVolumeSpecName: "config-data") pod "b8205686-4928-4b1b-a38c-7c532510b82a" (UID: "b8205686-4928-4b1b-a38c-7c532510b82a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:29:19 crc kubenswrapper[4727]: I1121 20:29:19.262524 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8205686-4928-4b1b-a38c-7c532510b82a-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 20:29:19 crc kubenswrapper[4727]: I1121 20:29:19.262564 4727 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8205686-4928-4b1b-a38c-7c532510b82a-logs\") on node \"crc\" DevicePath \"\"" Nov 21 20:29:19 crc kubenswrapper[4727]: I1121 20:29:19.262572 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8205686-4928-4b1b-a38c-7c532510b82a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:29:19 crc kubenswrapper[4727]: I1121 20:29:19.262583 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhqfv\" (UniqueName: \"kubernetes.io/projected/b8205686-4928-4b1b-a38c-7c532510b82a-kube-api-access-nhqfv\") on node \"crc\" DevicePath \"\"" Nov 21 20:29:19 crc kubenswrapper[4727]: I1121 20:29:19.913067 4727 generic.go:334] "Generic (PLEG): container finished" podID="218dc4f4-1ad8-4106-a954-73e6b2e7359f" containerID="104a5a09a5deab1acf7fa3079e6784225a5aef1f98855e12798972f9483ca623" exitCode=0 Nov 21 20:29:19 crc kubenswrapper[4727]: I1121 20:29:19.913472 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-scscp" event={"ID":"218dc4f4-1ad8-4106-a954-73e6b2e7359f","Type":"ContainerDied","Data":"104a5a09a5deab1acf7fa3079e6784225a5aef1f98855e12798972f9483ca623"} Nov 21 20:29:19 crc kubenswrapper[4727]: I1121 20:29:19.915315 4727 generic.go:334] "Generic (PLEG): container finished" podID="a6b71f3e-eabc-4628-ad77-e1e9144d0cda" containerID="cf27b6bb31ea6c822f6a5c2c91e7939a02cbebe6e8c1067510284817a6373251" exitCode=0 Nov 21 20:29:19 crc kubenswrapper[4727]: I1121 20:29:19.915395 4727 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-4257c" event={"ID":"a6b71f3e-eabc-4628-ad77-e1e9144d0cda","Type":"ContainerDied","Data":"cf27b6bb31ea6c822f6a5c2c91e7939a02cbebe6e8c1067510284817a6373251"} Nov 21 20:29:19 crc kubenswrapper[4727]: I1121 20:29:19.919457 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b8205686-4928-4b1b-a38c-7c532510b82a","Type":"ContainerDied","Data":"d2d4d927052e75e3fc43554760f26b807eeb59f0a8ea1461ac94b200911c397d"} Nov 21 20:29:19 crc kubenswrapper[4727]: I1121 20:29:19.919529 4727 scope.go:117] "RemoveContainer" containerID="8dbd12d6c6a4445cfbeb29c7ef2c35317985b0a95adda3218ae831e281d7f9d3" Nov 21 20:29:19 crc kubenswrapper[4727]: I1121 20:29:19.919840 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 21 20:29:19 crc kubenswrapper[4727]: I1121 20:29:19.967714 4727 scope.go:117] "RemoveContainer" containerID="5b92da622ec59e3c4baa6cf866bb2ced7ae448854b8df618800495427e8624de" Nov 21 20:29:19 crc kubenswrapper[4727]: I1121 20:29:19.983338 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 20:29:19 crc kubenswrapper[4727]: I1121 20:29:19.997324 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 20:29:20 crc kubenswrapper[4727]: I1121 20:29:20.021532 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 21 20:29:20 crc kubenswrapper[4727]: I1121 20:29:20.045874 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 21 20:29:20 crc kubenswrapper[4727]: E1121 20:29:20.058817 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8205686-4928-4b1b-a38c-7c532510b82a" containerName="nova-metadata-metadata" Nov 21 20:29:20 crc kubenswrapper[4727]: I1121 20:29:20.058859 4727 
state_mem.go:107] "Deleted CPUSet assignment" podUID="b8205686-4928-4b1b-a38c-7c532510b82a" containerName="nova-metadata-metadata" Nov 21 20:29:20 crc kubenswrapper[4727]: E1121 20:29:20.058991 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8205686-4928-4b1b-a38c-7c532510b82a" containerName="nova-metadata-log" Nov 21 20:29:20 crc kubenswrapper[4727]: I1121 20:29:20.059002 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8205686-4928-4b1b-a38c-7c532510b82a" containerName="nova-metadata-log" Nov 21 20:29:20 crc kubenswrapper[4727]: I1121 20:29:20.066081 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8205686-4928-4b1b-a38c-7c532510b82a" containerName="nova-metadata-metadata" Nov 21 20:29:20 crc kubenswrapper[4727]: I1121 20:29:20.066149 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8205686-4928-4b1b-a38c-7c532510b82a" containerName="nova-metadata-log" Nov 21 20:29:20 crc kubenswrapper[4727]: I1121 20:29:20.070197 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 21 20:29:20 crc kubenswrapper[4727]: I1121 20:29:20.077267 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 21 20:29:20 crc kubenswrapper[4727]: I1121 20:29:20.077568 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 21 20:29:20 crc kubenswrapper[4727]: I1121 20:29:20.111202 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 20:29:20 crc kubenswrapper[4727]: I1121 20:29:20.149198 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b19bf42d-1611-4cff-a0a0-ea1e69019397" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.237:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 21 20:29:20 crc kubenswrapper[4727]: I1121 20:29:20.149198 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b19bf42d-1611-4cff-a0a0-ea1e69019397" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.237:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 21 20:29:20 crc kubenswrapper[4727]: I1121 20:29:20.213217 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b646880-1d41-4505-9657-7f7f4973d3a8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4b646880-1d41-4505-9657-7f7f4973d3a8\") " pod="openstack/nova-metadata-0" Nov 21 20:29:20 crc kubenswrapper[4727]: I1121 20:29:20.213633 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b646880-1d41-4505-9657-7f7f4973d3a8-logs\") pod \"nova-metadata-0\" (UID: \"4b646880-1d41-4505-9657-7f7f4973d3a8\") " 
pod="openstack/nova-metadata-0" Nov 21 20:29:20 crc kubenswrapper[4727]: I1121 20:29:20.213655 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b646880-1d41-4505-9657-7f7f4973d3a8-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4b646880-1d41-4505-9657-7f7f4973d3a8\") " pod="openstack/nova-metadata-0" Nov 21 20:29:20 crc kubenswrapper[4727]: I1121 20:29:20.213676 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b646880-1d41-4505-9657-7f7f4973d3a8-config-data\") pod \"nova-metadata-0\" (UID: \"4b646880-1d41-4505-9657-7f7f4973d3a8\") " pod="openstack/nova-metadata-0" Nov 21 20:29:20 crc kubenswrapper[4727]: I1121 20:29:20.213766 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jftcv\" (UniqueName: \"kubernetes.io/projected/4b646880-1d41-4505-9657-7f7f4973d3a8-kube-api-access-jftcv\") pod \"nova-metadata-0\" (UID: \"4b646880-1d41-4505-9657-7f7f4973d3a8\") " pod="openstack/nova-metadata-0" Nov 21 20:29:20 crc kubenswrapper[4727]: I1121 20:29:20.315740 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b646880-1d41-4505-9657-7f7f4973d3a8-logs\") pod \"nova-metadata-0\" (UID: \"4b646880-1d41-4505-9657-7f7f4973d3a8\") " pod="openstack/nova-metadata-0" Nov 21 20:29:20 crc kubenswrapper[4727]: I1121 20:29:20.315787 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b646880-1d41-4505-9657-7f7f4973d3a8-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4b646880-1d41-4505-9657-7f7f4973d3a8\") " pod="openstack/nova-metadata-0" Nov 21 20:29:20 crc kubenswrapper[4727]: I1121 20:29:20.315811 4727 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b646880-1d41-4505-9657-7f7f4973d3a8-config-data\") pod \"nova-metadata-0\" (UID: \"4b646880-1d41-4505-9657-7f7f4973d3a8\") " pod="openstack/nova-metadata-0" Nov 21 20:29:20 crc kubenswrapper[4727]: I1121 20:29:20.315880 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jftcv\" (UniqueName: \"kubernetes.io/projected/4b646880-1d41-4505-9657-7f7f4973d3a8-kube-api-access-jftcv\") pod \"nova-metadata-0\" (UID: \"4b646880-1d41-4505-9657-7f7f4973d3a8\") " pod="openstack/nova-metadata-0" Nov 21 20:29:20 crc kubenswrapper[4727]: I1121 20:29:20.315934 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b646880-1d41-4505-9657-7f7f4973d3a8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4b646880-1d41-4505-9657-7f7f4973d3a8\") " pod="openstack/nova-metadata-0" Nov 21 20:29:20 crc kubenswrapper[4727]: I1121 20:29:20.316970 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b646880-1d41-4505-9657-7f7f4973d3a8-logs\") pod \"nova-metadata-0\" (UID: \"4b646880-1d41-4505-9657-7f7f4973d3a8\") " pod="openstack/nova-metadata-0" Nov 21 20:29:20 crc kubenswrapper[4727]: I1121 20:29:20.322357 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b646880-1d41-4505-9657-7f7f4973d3a8-config-data\") pod \"nova-metadata-0\" (UID: \"4b646880-1d41-4505-9657-7f7f4973d3a8\") " pod="openstack/nova-metadata-0" Nov 21 20:29:20 crc kubenswrapper[4727]: I1121 20:29:20.323052 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b646880-1d41-4505-9657-7f7f4973d3a8-nova-metadata-tls-certs\") pod \"nova-metadata-0\" 
(UID: \"4b646880-1d41-4505-9657-7f7f4973d3a8\") " pod="openstack/nova-metadata-0" Nov 21 20:29:20 crc kubenswrapper[4727]: I1121 20:29:20.333777 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jftcv\" (UniqueName: \"kubernetes.io/projected/4b646880-1d41-4505-9657-7f7f4973d3a8-kube-api-access-jftcv\") pod \"nova-metadata-0\" (UID: \"4b646880-1d41-4505-9657-7f7f4973d3a8\") " pod="openstack/nova-metadata-0" Nov 21 20:29:20 crc kubenswrapper[4727]: I1121 20:29:20.338112 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b646880-1d41-4505-9657-7f7f4973d3a8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4b646880-1d41-4505-9657-7f7f4973d3a8\") " pod="openstack/nova-metadata-0" Nov 21 20:29:20 crc kubenswrapper[4727]: I1121 20:29:20.413268 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 21 20:29:20 crc kubenswrapper[4727]: I1121 20:29:20.577282 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-ncgt2" Nov 21 20:29:20 crc kubenswrapper[4727]: I1121 20:29:20.620588 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/515c1bfb-ed4e-4848-837d-aed1c1e5fd53-scripts\") pod \"515c1bfb-ed4e-4848-837d-aed1c1e5fd53\" (UID: \"515c1bfb-ed4e-4848-837d-aed1c1e5fd53\") " Nov 21 20:29:20 crc kubenswrapper[4727]: I1121 20:29:20.620632 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7l4vt\" (UniqueName: \"kubernetes.io/projected/515c1bfb-ed4e-4848-837d-aed1c1e5fd53-kube-api-access-7l4vt\") pod \"515c1bfb-ed4e-4848-837d-aed1c1e5fd53\" (UID: \"515c1bfb-ed4e-4848-837d-aed1c1e5fd53\") " Nov 21 20:29:20 crc kubenswrapper[4727]: I1121 20:29:20.620695 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/515c1bfb-ed4e-4848-837d-aed1c1e5fd53-combined-ca-bundle\") pod \"515c1bfb-ed4e-4848-837d-aed1c1e5fd53\" (UID: \"515c1bfb-ed4e-4848-837d-aed1c1e5fd53\") " Nov 21 20:29:20 crc kubenswrapper[4727]: I1121 20:29:20.620814 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/515c1bfb-ed4e-4848-837d-aed1c1e5fd53-config-data\") pod \"515c1bfb-ed4e-4848-837d-aed1c1e5fd53\" (UID: \"515c1bfb-ed4e-4848-837d-aed1c1e5fd53\") " Nov 21 20:29:20 crc kubenswrapper[4727]: I1121 20:29:20.625381 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/515c1bfb-ed4e-4848-837d-aed1c1e5fd53-scripts" (OuterVolumeSpecName: "scripts") pod "515c1bfb-ed4e-4848-837d-aed1c1e5fd53" (UID: "515c1bfb-ed4e-4848-837d-aed1c1e5fd53"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:29:20 crc kubenswrapper[4727]: I1121 20:29:20.629547 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/515c1bfb-ed4e-4848-837d-aed1c1e5fd53-kube-api-access-7l4vt" (OuterVolumeSpecName: "kube-api-access-7l4vt") pod "515c1bfb-ed4e-4848-837d-aed1c1e5fd53" (UID: "515c1bfb-ed4e-4848-837d-aed1c1e5fd53"). InnerVolumeSpecName "kube-api-access-7l4vt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:29:20 crc kubenswrapper[4727]: I1121 20:29:20.663312 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/515c1bfb-ed4e-4848-837d-aed1c1e5fd53-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "515c1bfb-ed4e-4848-837d-aed1c1e5fd53" (UID: "515c1bfb-ed4e-4848-837d-aed1c1e5fd53"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:29:20 crc kubenswrapper[4727]: I1121 20:29:20.682269 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/515c1bfb-ed4e-4848-837d-aed1c1e5fd53-config-data" (OuterVolumeSpecName: "config-data") pod "515c1bfb-ed4e-4848-837d-aed1c1e5fd53" (UID: "515c1bfb-ed4e-4848-837d-aed1c1e5fd53"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:29:20 crc kubenswrapper[4727]: I1121 20:29:20.723916 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/515c1bfb-ed4e-4848-837d-aed1c1e5fd53-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:29:20 crc kubenswrapper[4727]: I1121 20:29:20.723950 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/515c1bfb-ed4e-4848-837d-aed1c1e5fd53-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 20:29:20 crc kubenswrapper[4727]: I1121 20:29:20.723983 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/515c1bfb-ed4e-4848-837d-aed1c1e5fd53-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 20:29:20 crc kubenswrapper[4727]: I1121 20:29:20.723994 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7l4vt\" (UniqueName: \"kubernetes.io/projected/515c1bfb-ed4e-4848-837d-aed1c1e5fd53-kube-api-access-7l4vt\") on node \"crc\" DevicePath \"\"" Nov 21 20:29:20 crc kubenswrapper[4727]: I1121 20:29:20.933969 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-ncgt2" event={"ID":"515c1bfb-ed4e-4848-837d-aed1c1e5fd53","Type":"ContainerDied","Data":"213f1de2e30ca70a9a8c285a0570a9bd3a25bdc7b8c42a20b45e0c11f3125c21"} Nov 21 20:29:20 crc kubenswrapper[4727]: I1121 20:29:20.933999 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-ncgt2" Nov 21 20:29:20 crc kubenswrapper[4727]: I1121 20:29:20.934008 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="213f1de2e30ca70a9a8c285a0570a9bd3a25bdc7b8c42a20b45e0c11f3125c21" Nov 21 20:29:20 crc kubenswrapper[4727]: W1121 20:29:20.993152 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b646880_1d41_4505_9657_7f7f4973d3a8.slice/crio-760b8bbdf956b2a6ffb012ed39a5372461bc5c7e0285b9e409027259ca1cb870 WatchSource:0}: Error finding container 760b8bbdf956b2a6ffb012ed39a5372461bc5c7e0285b9e409027259ca1cb870: Status 404 returned error can't find the container with id 760b8bbdf956b2a6ffb012ed39a5372461bc5c7e0285b9e409027259ca1cb870 Nov 21 20:29:20 crc kubenswrapper[4727]: I1121 20:29:20.994688 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 20:29:21 crc kubenswrapper[4727]: I1121 20:29:21.109012 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 21 20:29:21 crc kubenswrapper[4727]: I1121 20:29:21.149747 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 20:29:21 crc kubenswrapper[4727]: I1121 20:29:21.171056 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 20:29:21 crc kubenswrapper[4727]: I1121 20:29:21.514275 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8205686-4928-4b1b-a38c-7c532510b82a" path="/var/lib/kubelet/pods/b8205686-4928-4b1b-a38c-7c532510b82a/volumes" Nov 21 20:29:21 crc kubenswrapper[4727]: I1121 20:29:21.559124 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-4257c" Nov 21 20:29:21 crc kubenswrapper[4727]: I1121 20:29:21.565180 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-scscp" Nov 21 20:29:21 crc kubenswrapper[4727]: I1121 20:29:21.653134 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ms79z\" (UniqueName: \"kubernetes.io/projected/218dc4f4-1ad8-4106-a954-73e6b2e7359f-kube-api-access-ms79z\") pod \"218dc4f4-1ad8-4106-a954-73e6b2e7359f\" (UID: \"218dc4f4-1ad8-4106-a954-73e6b2e7359f\") " Nov 21 20:29:21 crc kubenswrapper[4727]: I1121 20:29:21.653507 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6b71f3e-eabc-4628-ad77-e1e9144d0cda-combined-ca-bundle\") pod \"a6b71f3e-eabc-4628-ad77-e1e9144d0cda\" (UID: \"a6b71f3e-eabc-4628-ad77-e1e9144d0cda\") " Nov 21 20:29:21 crc kubenswrapper[4727]: I1121 20:29:21.653681 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/218dc4f4-1ad8-4106-a954-73e6b2e7359f-scripts\") pod \"218dc4f4-1ad8-4106-a954-73e6b2e7359f\" (UID: \"218dc4f4-1ad8-4106-a954-73e6b2e7359f\") " Nov 21 20:29:21 crc kubenswrapper[4727]: I1121 20:29:21.653748 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/218dc4f4-1ad8-4106-a954-73e6b2e7359f-config-data\") pod \"218dc4f4-1ad8-4106-a954-73e6b2e7359f\" (UID: \"218dc4f4-1ad8-4106-a954-73e6b2e7359f\") " Nov 21 20:29:21 crc kubenswrapper[4727]: I1121 20:29:21.653936 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6b71f3e-eabc-4628-ad77-e1e9144d0cda-config-data\") pod \"a6b71f3e-eabc-4628-ad77-e1e9144d0cda\" (UID: \"a6b71f3e-eabc-4628-ad77-e1e9144d0cda\") " Nov 21 20:29:21 crc kubenswrapper[4727]: I1121 20:29:21.653979 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/a6b71f3e-eabc-4628-ad77-e1e9144d0cda-scripts\") pod \"a6b71f3e-eabc-4628-ad77-e1e9144d0cda\" (UID: \"a6b71f3e-eabc-4628-ad77-e1e9144d0cda\") " Nov 21 20:29:21 crc kubenswrapper[4727]: I1121 20:29:21.654001 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9x8k\" (UniqueName: \"kubernetes.io/projected/a6b71f3e-eabc-4628-ad77-e1e9144d0cda-kube-api-access-g9x8k\") pod \"a6b71f3e-eabc-4628-ad77-e1e9144d0cda\" (UID: \"a6b71f3e-eabc-4628-ad77-e1e9144d0cda\") " Nov 21 20:29:21 crc kubenswrapper[4727]: I1121 20:29:21.654083 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/218dc4f4-1ad8-4106-a954-73e6b2e7359f-combined-ca-bundle\") pod \"218dc4f4-1ad8-4106-a954-73e6b2e7359f\" (UID: \"218dc4f4-1ad8-4106-a954-73e6b2e7359f\") " Nov 21 20:29:21 crc kubenswrapper[4727]: I1121 20:29:21.661594 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/218dc4f4-1ad8-4106-a954-73e6b2e7359f-kube-api-access-ms79z" (OuterVolumeSpecName: "kube-api-access-ms79z") pod "218dc4f4-1ad8-4106-a954-73e6b2e7359f" (UID: "218dc4f4-1ad8-4106-a954-73e6b2e7359f"). InnerVolumeSpecName "kube-api-access-ms79z". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:29:21 crc kubenswrapper[4727]: I1121 20:29:21.663498 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6b71f3e-eabc-4628-ad77-e1e9144d0cda-scripts" (OuterVolumeSpecName: "scripts") pod "a6b71f3e-eabc-4628-ad77-e1e9144d0cda" (UID: "a6b71f3e-eabc-4628-ad77-e1e9144d0cda"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:29:21 crc kubenswrapper[4727]: I1121 20:29:21.666080 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6b71f3e-eabc-4628-ad77-e1e9144d0cda-kube-api-access-g9x8k" (OuterVolumeSpecName: "kube-api-access-g9x8k") pod "a6b71f3e-eabc-4628-ad77-e1e9144d0cda" (UID: "a6b71f3e-eabc-4628-ad77-e1e9144d0cda"). InnerVolumeSpecName "kube-api-access-g9x8k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:29:21 crc kubenswrapper[4727]: I1121 20:29:21.666644 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/218dc4f4-1ad8-4106-a954-73e6b2e7359f-scripts" (OuterVolumeSpecName: "scripts") pod "218dc4f4-1ad8-4106-a954-73e6b2e7359f" (UID: "218dc4f4-1ad8-4106-a954-73e6b2e7359f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:29:21 crc kubenswrapper[4727]: I1121 20:29:21.695209 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/218dc4f4-1ad8-4106-a954-73e6b2e7359f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "218dc4f4-1ad8-4106-a954-73e6b2e7359f" (UID: "218dc4f4-1ad8-4106-a954-73e6b2e7359f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:29:21 crc kubenswrapper[4727]: I1121 20:29:21.701295 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6b71f3e-eabc-4628-ad77-e1e9144d0cda-config-data" (OuterVolumeSpecName: "config-data") pod "a6b71f3e-eabc-4628-ad77-e1e9144d0cda" (UID: "a6b71f3e-eabc-4628-ad77-e1e9144d0cda"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:29:21 crc kubenswrapper[4727]: I1121 20:29:21.709269 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6b71f3e-eabc-4628-ad77-e1e9144d0cda-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a6b71f3e-eabc-4628-ad77-e1e9144d0cda" (UID: "a6b71f3e-eabc-4628-ad77-e1e9144d0cda"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:29:21 crc kubenswrapper[4727]: I1121 20:29:21.709720 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/218dc4f4-1ad8-4106-a954-73e6b2e7359f-config-data" (OuterVolumeSpecName: "config-data") pod "218dc4f4-1ad8-4106-a954-73e6b2e7359f" (UID: "218dc4f4-1ad8-4106-a954-73e6b2e7359f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:29:21 crc kubenswrapper[4727]: I1121 20:29:21.756509 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/218dc4f4-1ad8-4106-a954-73e6b2e7359f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:29:21 crc kubenswrapper[4727]: I1121 20:29:21.756542 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ms79z\" (UniqueName: \"kubernetes.io/projected/218dc4f4-1ad8-4106-a954-73e6b2e7359f-kube-api-access-ms79z\") on node \"crc\" DevicePath \"\"" Nov 21 20:29:21 crc kubenswrapper[4727]: I1121 20:29:21.756553 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6b71f3e-eabc-4628-ad77-e1e9144d0cda-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:29:21 crc kubenswrapper[4727]: I1121 20:29:21.756562 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/218dc4f4-1ad8-4106-a954-73e6b2e7359f-scripts\") on node \"crc\" 
DevicePath \"\"" Nov 21 20:29:21 crc kubenswrapper[4727]: I1121 20:29:21.756572 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/218dc4f4-1ad8-4106-a954-73e6b2e7359f-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 20:29:21 crc kubenswrapper[4727]: I1121 20:29:21.756583 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6b71f3e-eabc-4628-ad77-e1e9144d0cda-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 20:29:21 crc kubenswrapper[4727]: I1121 20:29:21.756595 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6b71f3e-eabc-4628-ad77-e1e9144d0cda-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 20:29:21 crc kubenswrapper[4727]: I1121 20:29:21.756605 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9x8k\" (UniqueName: \"kubernetes.io/projected/a6b71f3e-eabc-4628-ad77-e1e9144d0cda-kube-api-access-g9x8k\") on node \"crc\" DevicePath \"\"" Nov 21 20:29:21 crc kubenswrapper[4727]: I1121 20:29:21.962526 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4b646880-1d41-4505-9657-7f7f4973d3a8","Type":"ContainerStarted","Data":"2dad4f4cbd53eece37ff51f1ae81443e4b8ba61708868ffb93db64381101d367"} Nov 21 20:29:21 crc kubenswrapper[4727]: I1121 20:29:21.962574 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4b646880-1d41-4505-9657-7f7f4973d3a8","Type":"ContainerStarted","Data":"2391e6863860ef5a057a1bd8c6c2c805fbe353b2961b826a8746777ba4cca569"} Nov 21 20:29:21 crc kubenswrapper[4727]: I1121 20:29:21.962584 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4b646880-1d41-4505-9657-7f7f4973d3a8","Type":"ContainerStarted","Data":"760b8bbdf956b2a6ffb012ed39a5372461bc5c7e0285b9e409027259ca1cb870"} Nov 21 20:29:21 crc 
kubenswrapper[4727]: I1121 20:29:21.962589 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4b646880-1d41-4505-9657-7f7f4973d3a8" containerName="nova-metadata-log" containerID="cri-o://2391e6863860ef5a057a1bd8c6c2c805fbe353b2961b826a8746777ba4cca569" gracePeriod=30 Nov 21 20:29:21 crc kubenswrapper[4727]: I1121 20:29:21.962698 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4b646880-1d41-4505-9657-7f7f4973d3a8" containerName="nova-metadata-metadata" containerID="cri-o://2dad4f4cbd53eece37ff51f1ae81443e4b8ba61708868ffb93db64381101d367" gracePeriod=30 Nov 21 20:29:21 crc kubenswrapper[4727]: I1121 20:29:21.971595 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-4257c" event={"ID":"a6b71f3e-eabc-4628-ad77-e1e9144d0cda","Type":"ContainerDied","Data":"9cc91e2ddbce98d87ca1981d740ee66a87929bea47b3ca513fbdcb8e8ca00f4a"} Nov 21 20:29:21 crc kubenswrapper[4727]: I1121 20:29:21.971729 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9cc91e2ddbce98d87ca1981d740ee66a87929bea47b3ca513fbdcb8e8ca00f4a" Nov 21 20:29:21 crc kubenswrapper[4727]: I1121 20:29:21.971824 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-4257c" Nov 21 20:29:21 crc kubenswrapper[4727]: I1121 20:29:21.986594 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-scscp" Nov 21 20:29:21 crc kubenswrapper[4727]: I1121 20:29:21.987066 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-scscp" event={"ID":"218dc4f4-1ad8-4106-a954-73e6b2e7359f","Type":"ContainerDied","Data":"72375b38a2ea891d8b2d1b1b68edab1745dba210f80b4ec41fc25e6bcd78a596"} Nov 21 20:29:21 crc kubenswrapper[4727]: I1121 20:29:21.987091 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72375b38a2ea891d8b2d1b1b68edab1745dba210f80b4ec41fc25e6bcd78a596" Nov 21 20:29:21 crc kubenswrapper[4727]: I1121 20:29:21.987330 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b19bf42d-1611-4cff-a0a0-ea1e69019397" containerName="nova-api-log" containerID="cri-o://f86e385323ca51ceab49cb0ed745e83cc7ee136de2c589af0d1a6f5ee807b8c9" gracePeriod=30 Nov 21 20:29:21 crc kubenswrapper[4727]: I1121 20:29:21.987541 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b19bf42d-1611-4cff-a0a0-ea1e69019397" containerName="nova-api-api" containerID="cri-o://a2198cdf3dfb20d7909cc257b0f64410d87d5706e93512f7ade49536ab08c340" gracePeriod=30 Nov 21 20:29:22 crc kubenswrapper[4727]: I1121 20:29:22.005799 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.005753473 podStartE2EDuration="3.005753473s" podCreationTimestamp="2025-11-21 20:29:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:29:21.9944564 +0000 UTC m=+1367.180641444" watchObservedRunningTime="2025-11-21 20:29:22.005753473 +0000 UTC m=+1367.191938517" Nov 21 20:29:22 crc kubenswrapper[4727]: I1121 20:29:22.076343 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 21 20:29:22 crc 
kubenswrapper[4727]: E1121 20:29:22.076898 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="218dc4f4-1ad8-4106-a954-73e6b2e7359f" containerName="aodh-db-sync" Nov 21 20:29:22 crc kubenswrapper[4727]: I1121 20:29:22.076917 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="218dc4f4-1ad8-4106-a954-73e6b2e7359f" containerName="aodh-db-sync" Nov 21 20:29:22 crc kubenswrapper[4727]: E1121 20:29:22.076947 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="515c1bfb-ed4e-4848-837d-aed1c1e5fd53" containerName="nova-manage" Nov 21 20:29:22 crc kubenswrapper[4727]: I1121 20:29:22.076967 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="515c1bfb-ed4e-4848-837d-aed1c1e5fd53" containerName="nova-manage" Nov 21 20:29:22 crc kubenswrapper[4727]: E1121 20:29:22.077005 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6b71f3e-eabc-4628-ad77-e1e9144d0cda" containerName="nova-cell1-conductor-db-sync" Nov 21 20:29:22 crc kubenswrapper[4727]: I1121 20:29:22.077014 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6b71f3e-eabc-4628-ad77-e1e9144d0cda" containerName="nova-cell1-conductor-db-sync" Nov 21 20:29:22 crc kubenswrapper[4727]: I1121 20:29:22.077258 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="515c1bfb-ed4e-4848-837d-aed1c1e5fd53" containerName="nova-manage" Nov 21 20:29:22 crc kubenswrapper[4727]: I1121 20:29:22.077289 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6b71f3e-eabc-4628-ad77-e1e9144d0cda" containerName="nova-cell1-conductor-db-sync" Nov 21 20:29:22 crc kubenswrapper[4727]: I1121 20:29:22.077301 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="218dc4f4-1ad8-4106-a954-73e6b2e7359f" containerName="aodh-db-sync" Nov 21 20:29:22 crc kubenswrapper[4727]: I1121 20:29:22.083532 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 21 20:29:22 crc kubenswrapper[4727]: I1121 20:29:22.087950 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 21 20:29:22 crc kubenswrapper[4727]: I1121 20:29:22.144628 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 21 20:29:22 crc kubenswrapper[4727]: I1121 20:29:22.165649 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d634e409-018c-4c23-a01b-8abbfc218164-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"d634e409-018c-4c23-a01b-8abbfc218164\") " pod="openstack/nova-cell1-conductor-0" Nov 21 20:29:22 crc kubenswrapper[4727]: I1121 20:29:22.165716 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d634e409-018c-4c23-a01b-8abbfc218164-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"d634e409-018c-4c23-a01b-8abbfc218164\") " pod="openstack/nova-cell1-conductor-0" Nov 21 20:29:22 crc kubenswrapper[4727]: I1121 20:29:22.165752 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4vs2\" (UniqueName: \"kubernetes.io/projected/d634e409-018c-4c23-a01b-8abbfc218164-kube-api-access-t4vs2\") pod \"nova-cell1-conductor-0\" (UID: \"d634e409-018c-4c23-a01b-8abbfc218164\") " pod="openstack/nova-cell1-conductor-0" Nov 21 20:29:22 crc kubenswrapper[4727]: I1121 20:29:22.268501 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d634e409-018c-4c23-a01b-8abbfc218164-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"d634e409-018c-4c23-a01b-8abbfc218164\") " pod="openstack/nova-cell1-conductor-0" Nov 21 20:29:22 crc 
kubenswrapper[4727]: I1121 20:29:22.268586 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d634e409-018c-4c23-a01b-8abbfc218164-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"d634e409-018c-4c23-a01b-8abbfc218164\") " pod="openstack/nova-cell1-conductor-0" Nov 21 20:29:22 crc kubenswrapper[4727]: I1121 20:29:22.268622 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4vs2\" (UniqueName: \"kubernetes.io/projected/d634e409-018c-4c23-a01b-8abbfc218164-kube-api-access-t4vs2\") pod \"nova-cell1-conductor-0\" (UID: \"d634e409-018c-4c23-a01b-8abbfc218164\") " pod="openstack/nova-cell1-conductor-0" Nov 21 20:29:22 crc kubenswrapper[4727]: I1121 20:29:22.278861 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d634e409-018c-4c23-a01b-8abbfc218164-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"d634e409-018c-4c23-a01b-8abbfc218164\") " pod="openstack/nova-cell1-conductor-0" Nov 21 20:29:22 crc kubenswrapper[4727]: I1121 20:29:22.279024 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d634e409-018c-4c23-a01b-8abbfc218164-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"d634e409-018c-4c23-a01b-8abbfc218164\") " pod="openstack/nova-cell1-conductor-0" Nov 21 20:29:22 crc kubenswrapper[4727]: I1121 20:29:22.286884 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4vs2\" (UniqueName: \"kubernetes.io/projected/d634e409-018c-4c23-a01b-8abbfc218164-kube-api-access-t4vs2\") pod \"nova-cell1-conductor-0\" (UID: \"d634e409-018c-4c23-a01b-8abbfc218164\") " pod="openstack/nova-cell1-conductor-0" Nov 21 20:29:22 crc kubenswrapper[4727]: I1121 20:29:22.411440 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 21 20:29:23 crc kubenswrapper[4727]: I1121 20:29:23.005484 4727 generic.go:334] "Generic (PLEG): container finished" podID="b19bf42d-1611-4cff-a0a0-ea1e69019397" containerID="f86e385323ca51ceab49cb0ed745e83cc7ee136de2c589af0d1a6f5ee807b8c9" exitCode=143 Nov 21 20:29:23 crc kubenswrapper[4727]: I1121 20:29:23.006184 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b19bf42d-1611-4cff-a0a0-ea1e69019397","Type":"ContainerDied","Data":"f86e385323ca51ceab49cb0ed745e83cc7ee136de2c589af0d1a6f5ee807b8c9"} Nov 21 20:29:23 crc kubenswrapper[4727]: I1121 20:29:23.009861 4727 generic.go:334] "Generic (PLEG): container finished" podID="4b646880-1d41-4505-9657-7f7f4973d3a8" containerID="2dad4f4cbd53eece37ff51f1ae81443e4b8ba61708868ffb93db64381101d367" exitCode=0 Nov 21 20:29:23 crc kubenswrapper[4727]: I1121 20:29:23.009885 4727 generic.go:334] "Generic (PLEG): container finished" podID="4b646880-1d41-4505-9657-7f7f4973d3a8" containerID="2391e6863860ef5a057a1bd8c6c2c805fbe353b2961b826a8746777ba4cca569" exitCode=143 Nov 21 20:29:23 crc kubenswrapper[4727]: I1121 20:29:23.009936 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4b646880-1d41-4505-9657-7f7f4973d3a8","Type":"ContainerDied","Data":"2dad4f4cbd53eece37ff51f1ae81443e4b8ba61708868ffb93db64381101d367"} Nov 21 20:29:23 crc kubenswrapper[4727]: I1121 20:29:23.009997 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4b646880-1d41-4505-9657-7f7f4973d3a8","Type":"ContainerDied","Data":"2391e6863860ef5a057a1bd8c6c2c805fbe353b2961b826a8746777ba4cca569"} Nov 21 20:29:23 crc kubenswrapper[4727]: I1121 20:29:23.010012 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"4b646880-1d41-4505-9657-7f7f4973d3a8","Type":"ContainerDied","Data":"760b8bbdf956b2a6ffb012ed39a5372461bc5c7e0285b9e409027259ca1cb870"} Nov 21 20:29:23 crc kubenswrapper[4727]: I1121 20:29:23.010021 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="760b8bbdf956b2a6ffb012ed39a5372461bc5c7e0285b9e409027259ca1cb870" Nov 21 20:29:23 crc kubenswrapper[4727]: I1121 20:29:23.010150 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="00455947-f2da-4823-8bcc-8a589f1f475b" containerName="nova-scheduler-scheduler" containerID="cri-o://0a5a870ab4d78e46fa4e62a4ab20143baa18e2d2877ac29ff07d18942bf76c83" gracePeriod=30 Nov 21 20:29:23 crc kubenswrapper[4727]: W1121 20:29:23.017211 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd634e409_018c_4c23_a01b_8abbfc218164.slice/crio-5a479ed565ee9cadb191052d993951a1a526fb09640a10d0ddde5ec61eadafcb WatchSource:0}: Error finding container 5a479ed565ee9cadb191052d993951a1a526fb09640a10d0ddde5ec61eadafcb: Status 404 returned error can't find the container with id 5a479ed565ee9cadb191052d993951a1a526fb09640a10d0ddde5ec61eadafcb Nov 21 20:29:23 crc kubenswrapper[4727]: I1121 20:29:23.020631 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 21 20:29:23 crc kubenswrapper[4727]: I1121 20:29:23.090025 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 21 20:29:23 crc kubenswrapper[4727]: I1121 20:29:23.191768 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b646880-1d41-4505-9657-7f7f4973d3a8-nova-metadata-tls-certs\") pod \"4b646880-1d41-4505-9657-7f7f4973d3a8\" (UID: \"4b646880-1d41-4505-9657-7f7f4973d3a8\") " Nov 21 20:29:23 crc kubenswrapper[4727]: I1121 20:29:23.192289 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b646880-1d41-4505-9657-7f7f4973d3a8-combined-ca-bundle\") pod \"4b646880-1d41-4505-9657-7f7f4973d3a8\" (UID: \"4b646880-1d41-4505-9657-7f7f4973d3a8\") " Nov 21 20:29:23 crc kubenswrapper[4727]: I1121 20:29:23.192349 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jftcv\" (UniqueName: \"kubernetes.io/projected/4b646880-1d41-4505-9657-7f7f4973d3a8-kube-api-access-jftcv\") pod \"4b646880-1d41-4505-9657-7f7f4973d3a8\" (UID: \"4b646880-1d41-4505-9657-7f7f4973d3a8\") " Nov 21 20:29:23 crc kubenswrapper[4727]: I1121 20:29:23.192380 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b646880-1d41-4505-9657-7f7f4973d3a8-logs\") pod \"4b646880-1d41-4505-9657-7f7f4973d3a8\" (UID: \"4b646880-1d41-4505-9657-7f7f4973d3a8\") " Nov 21 20:29:23 crc kubenswrapper[4727]: I1121 20:29:23.192402 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b646880-1d41-4505-9657-7f7f4973d3a8-config-data\") pod \"4b646880-1d41-4505-9657-7f7f4973d3a8\" (UID: \"4b646880-1d41-4505-9657-7f7f4973d3a8\") " Nov 21 20:29:23 crc kubenswrapper[4727]: I1121 20:29:23.192809 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/4b646880-1d41-4505-9657-7f7f4973d3a8-logs" (OuterVolumeSpecName: "logs") pod "4b646880-1d41-4505-9657-7f7f4973d3a8" (UID: "4b646880-1d41-4505-9657-7f7f4973d3a8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:29:23 crc kubenswrapper[4727]: I1121 20:29:23.193333 4727 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b646880-1d41-4505-9657-7f7f4973d3a8-logs\") on node \"crc\" DevicePath \"\"" Nov 21 20:29:23 crc kubenswrapper[4727]: I1121 20:29:23.198104 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b646880-1d41-4505-9657-7f7f4973d3a8-kube-api-access-jftcv" (OuterVolumeSpecName: "kube-api-access-jftcv") pod "4b646880-1d41-4505-9657-7f7f4973d3a8" (UID: "4b646880-1d41-4505-9657-7f7f4973d3a8"). InnerVolumeSpecName "kube-api-access-jftcv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:29:23 crc kubenswrapper[4727]: I1121 20:29:23.234440 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b646880-1d41-4505-9657-7f7f4973d3a8-config-data" (OuterVolumeSpecName: "config-data") pod "4b646880-1d41-4505-9657-7f7f4973d3a8" (UID: "4b646880-1d41-4505-9657-7f7f4973d3a8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:29:23 crc kubenswrapper[4727]: I1121 20:29:23.236204 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b646880-1d41-4505-9657-7f7f4973d3a8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4b646880-1d41-4505-9657-7f7f4973d3a8" (UID: "4b646880-1d41-4505-9657-7f7f4973d3a8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:29:23 crc kubenswrapper[4727]: I1121 20:29:23.266547 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b646880-1d41-4505-9657-7f7f4973d3a8-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "4b646880-1d41-4505-9657-7f7f4973d3a8" (UID: "4b646880-1d41-4505-9657-7f7f4973d3a8"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:29:23 crc kubenswrapper[4727]: I1121 20:29:23.296113 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b646880-1d41-4505-9657-7f7f4973d3a8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:29:23 crc kubenswrapper[4727]: I1121 20:29:23.296158 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jftcv\" (UniqueName: \"kubernetes.io/projected/4b646880-1d41-4505-9657-7f7f4973d3a8-kube-api-access-jftcv\") on node \"crc\" DevicePath \"\"" Nov 21 20:29:23 crc kubenswrapper[4727]: I1121 20:29:23.296171 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b646880-1d41-4505-9657-7f7f4973d3a8-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 20:29:23 crc kubenswrapper[4727]: I1121 20:29:23.296179 4727 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b646880-1d41-4505-9657-7f7f4973d3a8-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 20:29:23 crc kubenswrapper[4727]: I1121 20:29:23.438195 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Nov 21 20:29:23 crc kubenswrapper[4727]: E1121 20:29:23.438664 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b646880-1d41-4505-9657-7f7f4973d3a8" containerName="nova-metadata-log" Nov 21 20:29:23 crc kubenswrapper[4727]: 
I1121 20:29:23.438681 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b646880-1d41-4505-9657-7f7f4973d3a8" containerName="nova-metadata-log" Nov 21 20:29:23 crc kubenswrapper[4727]: E1121 20:29:23.438729 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b646880-1d41-4505-9657-7f7f4973d3a8" containerName="nova-metadata-metadata" Nov 21 20:29:23 crc kubenswrapper[4727]: I1121 20:29:23.438736 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b646880-1d41-4505-9657-7f7f4973d3a8" containerName="nova-metadata-metadata" Nov 21 20:29:23 crc kubenswrapper[4727]: I1121 20:29:23.438999 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b646880-1d41-4505-9657-7f7f4973d3a8" containerName="nova-metadata-metadata" Nov 21 20:29:23 crc kubenswrapper[4727]: I1121 20:29:23.439019 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b646880-1d41-4505-9657-7f7f4973d3a8" containerName="nova-metadata-log" Nov 21 20:29:23 crc kubenswrapper[4727]: I1121 20:29:23.448639 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 21 20:29:23 crc kubenswrapper[4727]: I1121 20:29:23.453944 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Nov 21 20:29:23 crc kubenswrapper[4727]: I1121 20:29:23.453971 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-5w2vq" Nov 21 20:29:23 crc kubenswrapper[4727]: I1121 20:29:23.454004 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Nov 21 20:29:23 crc kubenswrapper[4727]: I1121 20:29:23.480675 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 21 20:29:23 crc kubenswrapper[4727]: I1121 20:29:23.501825 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e586ddd-f8e8-4589-8b0f-bea576194a57-scripts\") pod \"aodh-0\" (UID: \"2e586ddd-f8e8-4589-8b0f-bea576194a57\") " pod="openstack/aodh-0" Nov 21 20:29:23 crc kubenswrapper[4727]: I1121 20:29:23.501931 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7f48b\" (UniqueName: \"kubernetes.io/projected/2e586ddd-f8e8-4589-8b0f-bea576194a57-kube-api-access-7f48b\") pod \"aodh-0\" (UID: \"2e586ddd-f8e8-4589-8b0f-bea576194a57\") " pod="openstack/aodh-0" Nov 21 20:29:23 crc kubenswrapper[4727]: I1121 20:29:23.501984 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e586ddd-f8e8-4589-8b0f-bea576194a57-combined-ca-bundle\") pod \"aodh-0\" (UID: \"2e586ddd-f8e8-4589-8b0f-bea576194a57\") " pod="openstack/aodh-0" Nov 21 20:29:23 crc kubenswrapper[4727]: I1121 20:29:23.502013 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2e586ddd-f8e8-4589-8b0f-bea576194a57-config-data\") pod \"aodh-0\" (UID: \"2e586ddd-f8e8-4589-8b0f-bea576194a57\") " pod="openstack/aodh-0" Nov 21 20:29:23 crc kubenswrapper[4727]: I1121 20:29:23.603724 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7f48b\" (UniqueName: \"kubernetes.io/projected/2e586ddd-f8e8-4589-8b0f-bea576194a57-kube-api-access-7f48b\") pod \"aodh-0\" (UID: \"2e586ddd-f8e8-4589-8b0f-bea576194a57\") " pod="openstack/aodh-0" Nov 21 20:29:23 crc kubenswrapper[4727]: I1121 20:29:23.603803 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e586ddd-f8e8-4589-8b0f-bea576194a57-combined-ca-bundle\") pod \"aodh-0\" (UID: \"2e586ddd-f8e8-4589-8b0f-bea576194a57\") " pod="openstack/aodh-0" Nov 21 20:29:23 crc kubenswrapper[4727]: I1121 20:29:23.603851 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e586ddd-f8e8-4589-8b0f-bea576194a57-config-data\") pod \"aodh-0\" (UID: \"2e586ddd-f8e8-4589-8b0f-bea576194a57\") " pod="openstack/aodh-0" Nov 21 20:29:23 crc kubenswrapper[4727]: I1121 20:29:23.604058 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e586ddd-f8e8-4589-8b0f-bea576194a57-scripts\") pod \"aodh-0\" (UID: \"2e586ddd-f8e8-4589-8b0f-bea576194a57\") " pod="openstack/aodh-0" Nov 21 20:29:23 crc kubenswrapper[4727]: I1121 20:29:23.623993 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e586ddd-f8e8-4589-8b0f-bea576194a57-config-data\") pod \"aodh-0\" (UID: \"2e586ddd-f8e8-4589-8b0f-bea576194a57\") " pod="openstack/aodh-0" Nov 21 20:29:23 crc kubenswrapper[4727]: I1121 20:29:23.625133 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e586ddd-f8e8-4589-8b0f-bea576194a57-combined-ca-bundle\") pod \"aodh-0\" (UID: \"2e586ddd-f8e8-4589-8b0f-bea576194a57\") " pod="openstack/aodh-0" Nov 21 20:29:23 crc kubenswrapper[4727]: I1121 20:29:23.625488 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e586ddd-f8e8-4589-8b0f-bea576194a57-scripts\") pod \"aodh-0\" (UID: \"2e586ddd-f8e8-4589-8b0f-bea576194a57\") " pod="openstack/aodh-0" Nov 21 20:29:23 crc kubenswrapper[4727]: I1121 20:29:23.632514 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7f48b\" (UniqueName: \"kubernetes.io/projected/2e586ddd-f8e8-4589-8b0f-bea576194a57-kube-api-access-7f48b\") pod \"aodh-0\" (UID: \"2e586ddd-f8e8-4589-8b0f-bea576194a57\") " pod="openstack/aodh-0" Nov 21 20:29:23 crc kubenswrapper[4727]: I1121 20:29:23.765197 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 21 20:29:24 crc kubenswrapper[4727]: E1121 20:29:24.004506 4727 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0a5a870ab4d78e46fa4e62a4ab20143baa18e2d2877ac29ff07d18942bf76c83" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 21 20:29:24 crc kubenswrapper[4727]: E1121 20:29:24.015301 4727 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0a5a870ab4d78e46fa4e62a4ab20143baa18e2d2877ac29ff07d18942bf76c83" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 21 20:29:24 crc kubenswrapper[4727]: E1121 20:29:24.024046 4727 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: 
cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0a5a870ab4d78e46fa4e62a4ab20143baa18e2d2877ac29ff07d18942bf76c83" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 21 20:29:24 crc kubenswrapper[4727]: E1121 20:29:24.024366 4727 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="00455947-f2da-4823-8bcc-8a589f1f475b" containerName="nova-scheduler-scheduler" Nov 21 20:29:24 crc kubenswrapper[4727]: I1121 20:29:24.040436 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 21 20:29:24 crc kubenswrapper[4727]: I1121 20:29:24.041909 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"d634e409-018c-4c23-a01b-8abbfc218164","Type":"ContainerStarted","Data":"c2dedb36883836d57b1f1301569eec009693a885e494e942c5cd449f7c964ee1"} Nov 21 20:29:24 crc kubenswrapper[4727]: I1121 20:29:24.041983 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"d634e409-018c-4c23-a01b-8abbfc218164","Type":"ContainerStarted","Data":"5a479ed565ee9cadb191052d993951a1a526fb09640a10d0ddde5ec61eadafcb"} Nov 21 20:29:24 crc kubenswrapper[4727]: I1121 20:29:24.042312 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Nov 21 20:29:24 crc kubenswrapper[4727]: I1121 20:29:24.072433 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.072408594 podStartE2EDuration="2.072408594s" podCreationTimestamp="2025-11-21 20:29:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 
20:29:24.065921506 +0000 UTC m=+1369.252106560" watchObservedRunningTime="2025-11-21 20:29:24.072408594 +0000 UTC m=+1369.258593628" Nov 21 20:29:24 crc kubenswrapper[4727]: I1121 20:29:24.108939 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 20:29:24 crc kubenswrapper[4727]: I1121 20:29:24.125793 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 20:29:24 crc kubenswrapper[4727]: I1121 20:29:24.137127 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 21 20:29:24 crc kubenswrapper[4727]: I1121 20:29:24.142221 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 21 20:29:24 crc kubenswrapper[4727]: I1121 20:29:24.151045 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 21 20:29:24 crc kubenswrapper[4727]: I1121 20:29:24.152326 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 21 20:29:24 crc kubenswrapper[4727]: I1121 20:29:24.159939 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 20:29:24 crc kubenswrapper[4727]: I1121 20:29:24.257271 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-568d7fd7cf-6gwj6" Nov 21 20:29:24 crc kubenswrapper[4727]: I1121 20:29:24.276965 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26527e17-f1bb-4a28-a8ef-4b7fc0654e91-logs\") pod \"nova-metadata-0\" (UID: \"26527e17-f1bb-4a28-a8ef-4b7fc0654e91\") " pod="openstack/nova-metadata-0" Nov 21 20:29:24 crc kubenswrapper[4727]: I1121 20:29:24.277022 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/26527e17-f1bb-4a28-a8ef-4b7fc0654e91-config-data\") pod \"nova-metadata-0\" (UID: \"26527e17-f1bb-4a28-a8ef-4b7fc0654e91\") " pod="openstack/nova-metadata-0" Nov 21 20:29:24 crc kubenswrapper[4727]: I1121 20:29:24.277092 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcbhb\" (UniqueName: \"kubernetes.io/projected/26527e17-f1bb-4a28-a8ef-4b7fc0654e91-kube-api-access-bcbhb\") pod \"nova-metadata-0\" (UID: \"26527e17-f1bb-4a28-a8ef-4b7fc0654e91\") " pod="openstack/nova-metadata-0" Nov 21 20:29:24 crc kubenswrapper[4727]: I1121 20:29:24.277114 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26527e17-f1bb-4a28-a8ef-4b7fc0654e91-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"26527e17-f1bb-4a28-a8ef-4b7fc0654e91\") " pod="openstack/nova-metadata-0" Nov 21 20:29:24 crc kubenswrapper[4727]: I1121 20:29:24.277141 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/26527e17-f1bb-4a28-a8ef-4b7fc0654e91-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"26527e17-f1bb-4a28-a8ef-4b7fc0654e91\") " pod="openstack/nova-metadata-0" Nov 21 20:29:24 crc kubenswrapper[4727]: I1121 20:29:24.343888 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-g49lf"] Nov 21 20:29:24 crc kubenswrapper[4727]: I1121 20:29:24.344611 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-688b9f5b49-g49lf" podUID="4da2f3f3-a035-4d39-8756-6eab26cef3dc" containerName="dnsmasq-dns" containerID="cri-o://b8f0c9890b1c660f63eeb03ee2bb94cb4dfdaf16705e410a2644ae48b11d3fec" gracePeriod=10 Nov 21 20:29:24 crc kubenswrapper[4727]: I1121 20:29:24.379308 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26527e17-f1bb-4a28-a8ef-4b7fc0654e91-logs\") pod \"nova-metadata-0\" (UID: \"26527e17-f1bb-4a28-a8ef-4b7fc0654e91\") " pod="openstack/nova-metadata-0" Nov 21 20:29:24 crc kubenswrapper[4727]: I1121 20:29:24.379372 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26527e17-f1bb-4a28-a8ef-4b7fc0654e91-config-data\") pod \"nova-metadata-0\" (UID: \"26527e17-f1bb-4a28-a8ef-4b7fc0654e91\") " pod="openstack/nova-metadata-0" Nov 21 20:29:24 crc kubenswrapper[4727]: I1121 20:29:24.379449 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcbhb\" (UniqueName: \"kubernetes.io/projected/26527e17-f1bb-4a28-a8ef-4b7fc0654e91-kube-api-access-bcbhb\") pod \"nova-metadata-0\" (UID: \"26527e17-f1bb-4a28-a8ef-4b7fc0654e91\") " pod="openstack/nova-metadata-0" Nov 21 20:29:24 crc kubenswrapper[4727]: I1121 20:29:24.379474 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26527e17-f1bb-4a28-a8ef-4b7fc0654e91-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"26527e17-f1bb-4a28-a8ef-4b7fc0654e91\") " pod="openstack/nova-metadata-0" Nov 21 20:29:24 crc kubenswrapper[4727]: I1121 20:29:24.379504 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/26527e17-f1bb-4a28-a8ef-4b7fc0654e91-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"26527e17-f1bb-4a28-a8ef-4b7fc0654e91\") " pod="openstack/nova-metadata-0" Nov 21 20:29:24 crc kubenswrapper[4727]: I1121 20:29:24.380252 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26527e17-f1bb-4a28-a8ef-4b7fc0654e91-logs\") pod \"nova-metadata-0\" (UID: 
\"26527e17-f1bb-4a28-a8ef-4b7fc0654e91\") " pod="openstack/nova-metadata-0" Nov 21 20:29:24 crc kubenswrapper[4727]: I1121 20:29:24.387751 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26527e17-f1bb-4a28-a8ef-4b7fc0654e91-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"26527e17-f1bb-4a28-a8ef-4b7fc0654e91\") " pod="openstack/nova-metadata-0" Nov 21 20:29:24 crc kubenswrapper[4727]: I1121 20:29:24.392111 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/26527e17-f1bb-4a28-a8ef-4b7fc0654e91-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"26527e17-f1bb-4a28-a8ef-4b7fc0654e91\") " pod="openstack/nova-metadata-0" Nov 21 20:29:24 crc kubenswrapper[4727]: I1121 20:29:24.401395 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26527e17-f1bb-4a28-a8ef-4b7fc0654e91-config-data\") pod \"nova-metadata-0\" (UID: \"26527e17-f1bb-4a28-a8ef-4b7fc0654e91\") " pod="openstack/nova-metadata-0" Nov 21 20:29:24 crc kubenswrapper[4727]: I1121 20:29:24.411625 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcbhb\" (UniqueName: \"kubernetes.io/projected/26527e17-f1bb-4a28-a8ef-4b7fc0654e91-kube-api-access-bcbhb\") pod \"nova-metadata-0\" (UID: \"26527e17-f1bb-4a28-a8ef-4b7fc0654e91\") " pod="openstack/nova-metadata-0" Nov 21 20:29:24 crc kubenswrapper[4727]: I1121 20:29:24.424522 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 21 20:29:24 crc kubenswrapper[4727]: I1121 20:29:24.476769 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 21 20:29:25 crc kubenswrapper[4727]: I1121 20:29:25.522045 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b646880-1d41-4505-9657-7f7f4973d3a8" path="/var/lib/kubelet/pods/4b646880-1d41-4505-9657-7f7f4973d3a8/volumes" Nov 21 20:29:26 crc kubenswrapper[4727]: I1121 20:29:26.075706 4727 generic.go:334] "Generic (PLEG): container finished" podID="00455947-f2da-4823-8bcc-8a589f1f475b" containerID="0a5a870ab4d78e46fa4e62a4ab20143baa18e2d2877ac29ff07d18942bf76c83" exitCode=0 Nov 21 20:29:26 crc kubenswrapper[4727]: I1121 20:29:26.076566 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"00455947-f2da-4823-8bcc-8a589f1f475b","Type":"ContainerDied","Data":"0a5a870ab4d78e46fa4e62a4ab20143baa18e2d2877ac29ff07d18942bf76c83"} Nov 21 20:29:26 crc kubenswrapper[4727]: I1121 20:29:26.078530 4727 generic.go:334] "Generic (PLEG): container finished" podID="4da2f3f3-a035-4d39-8756-6eab26cef3dc" containerID="b8f0c9890b1c660f63eeb03ee2bb94cb4dfdaf16705e410a2644ae48b11d3fec" exitCode=0 Nov 21 20:29:26 crc kubenswrapper[4727]: I1121 20:29:26.078645 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-g49lf" event={"ID":"4da2f3f3-a035-4d39-8756-6eab26cef3dc","Type":"ContainerDied","Data":"b8f0c9890b1c660f63eeb03ee2bb94cb4dfdaf16705e410a2644ae48b11d3fec"} Nov 21 20:29:26 crc kubenswrapper[4727]: I1121 20:29:26.079785 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2e586ddd-f8e8-4589-8b0f-bea576194a57","Type":"ContainerStarted","Data":"80c935f797385e3fc9c22f46f202a846b5b8063c0d35d0c99dc0b62c6ef91188"} Nov 21 20:29:26 crc kubenswrapper[4727]: I1121 20:29:26.081435 4727 generic.go:334] "Generic (PLEG): container finished" podID="b19bf42d-1611-4cff-a0a0-ea1e69019397" containerID="a2198cdf3dfb20d7909cc257b0f64410d87d5706e93512f7ade49536ab08c340" exitCode=0 Nov 21 20:29:26 crc 
kubenswrapper[4727]: I1121 20:29:26.081470 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b19bf42d-1611-4cff-a0a0-ea1e69019397","Type":"ContainerDied","Data":"a2198cdf3dfb20d7909cc257b0f64410d87d5706e93512f7ade49536ab08c340"} Nov 21 20:29:26 crc kubenswrapper[4727]: I1121 20:29:26.320745 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 21 20:29:26 crc kubenswrapper[4727]: I1121 20:29:26.340031 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 21 20:29:26 crc kubenswrapper[4727]: I1121 20:29:26.345256 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 20:29:26 crc kubenswrapper[4727]: I1121 20:29:26.353439 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-g49lf" Nov 21 20:29:26 crc kubenswrapper[4727]: I1121 20:29:26.460167 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b19bf42d-1611-4cff-a0a0-ea1e69019397-combined-ca-bundle\") pod \"b19bf42d-1611-4cff-a0a0-ea1e69019397\" (UID: \"b19bf42d-1611-4cff-a0a0-ea1e69019397\") " Nov 21 20:29:26 crc kubenswrapper[4727]: I1121 20:29:26.460492 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00455947-f2da-4823-8bcc-8a589f1f475b-config-data\") pod \"00455947-f2da-4823-8bcc-8a589f1f475b\" (UID: \"00455947-f2da-4823-8bcc-8a589f1f475b\") " Nov 21 20:29:26 crc kubenswrapper[4727]: I1121 20:29:26.460624 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dsjph\" (UniqueName: \"kubernetes.io/projected/b19bf42d-1611-4cff-a0a0-ea1e69019397-kube-api-access-dsjph\") pod \"b19bf42d-1611-4cff-a0a0-ea1e69019397\" (UID: 
\"b19bf42d-1611-4cff-a0a0-ea1e69019397\") " Nov 21 20:29:26 crc kubenswrapper[4727]: I1121 20:29:26.460654 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00455947-f2da-4823-8bcc-8a589f1f475b-combined-ca-bundle\") pod \"00455947-f2da-4823-8bcc-8a589f1f475b\" (UID: \"00455947-f2da-4823-8bcc-8a589f1f475b\") " Nov 21 20:29:26 crc kubenswrapper[4727]: I1121 20:29:26.460711 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4da2f3f3-a035-4d39-8756-6eab26cef3dc-ovsdbserver-sb\") pod \"4da2f3f3-a035-4d39-8756-6eab26cef3dc\" (UID: \"4da2f3f3-a035-4d39-8756-6eab26cef3dc\") " Nov 21 20:29:26 crc kubenswrapper[4727]: I1121 20:29:26.460745 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b19bf42d-1611-4cff-a0a0-ea1e69019397-logs\") pod \"b19bf42d-1611-4cff-a0a0-ea1e69019397\" (UID: \"b19bf42d-1611-4cff-a0a0-ea1e69019397\") " Nov 21 20:29:26 crc kubenswrapper[4727]: I1121 20:29:26.460777 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4da2f3f3-a035-4d39-8756-6eab26cef3dc-dns-svc\") pod \"4da2f3f3-a035-4d39-8756-6eab26cef3dc\" (UID: \"4da2f3f3-a035-4d39-8756-6eab26cef3dc\") " Nov 21 20:29:26 crc kubenswrapper[4727]: I1121 20:29:26.460847 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4da2f3f3-a035-4d39-8756-6eab26cef3dc-dns-swift-storage-0\") pod \"4da2f3f3-a035-4d39-8756-6eab26cef3dc\" (UID: \"4da2f3f3-a035-4d39-8756-6eab26cef3dc\") " Nov 21 20:29:26 crc kubenswrapper[4727]: I1121 20:29:26.460866 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hfhd7\" (UniqueName: 
\"kubernetes.io/projected/4da2f3f3-a035-4d39-8756-6eab26cef3dc-kube-api-access-hfhd7\") pod \"4da2f3f3-a035-4d39-8756-6eab26cef3dc\" (UID: \"4da2f3f3-a035-4d39-8756-6eab26cef3dc\") " Nov 21 20:29:26 crc kubenswrapper[4727]: I1121 20:29:26.460917 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b19bf42d-1611-4cff-a0a0-ea1e69019397-config-data\") pod \"b19bf42d-1611-4cff-a0a0-ea1e69019397\" (UID: \"b19bf42d-1611-4cff-a0a0-ea1e69019397\") " Nov 21 20:29:26 crc kubenswrapper[4727]: I1121 20:29:26.461008 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmrwg\" (UniqueName: \"kubernetes.io/projected/00455947-f2da-4823-8bcc-8a589f1f475b-kube-api-access-wmrwg\") pod \"00455947-f2da-4823-8bcc-8a589f1f475b\" (UID: \"00455947-f2da-4823-8bcc-8a589f1f475b\") " Nov 21 20:29:26 crc kubenswrapper[4727]: I1121 20:29:26.461057 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4da2f3f3-a035-4d39-8756-6eab26cef3dc-config\") pod \"4da2f3f3-a035-4d39-8756-6eab26cef3dc\" (UID: \"4da2f3f3-a035-4d39-8756-6eab26cef3dc\") " Nov 21 20:29:26 crc kubenswrapper[4727]: I1121 20:29:26.461110 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4da2f3f3-a035-4d39-8756-6eab26cef3dc-ovsdbserver-nb\") pod \"4da2f3f3-a035-4d39-8756-6eab26cef3dc\" (UID: \"4da2f3f3-a035-4d39-8756-6eab26cef3dc\") " Nov 21 20:29:26 crc kubenswrapper[4727]: I1121 20:29:26.463768 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b19bf42d-1611-4cff-a0a0-ea1e69019397-logs" (OuterVolumeSpecName: "logs") pod "b19bf42d-1611-4cff-a0a0-ea1e69019397" (UID: "b19bf42d-1611-4cff-a0a0-ea1e69019397"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:29:26 crc kubenswrapper[4727]: I1121 20:29:26.492057 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b19bf42d-1611-4cff-a0a0-ea1e69019397-kube-api-access-dsjph" (OuterVolumeSpecName: "kube-api-access-dsjph") pod "b19bf42d-1611-4cff-a0a0-ea1e69019397" (UID: "b19bf42d-1611-4cff-a0a0-ea1e69019397"). InnerVolumeSpecName "kube-api-access-dsjph". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:29:26 crc kubenswrapper[4727]: I1121 20:29:26.509295 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00455947-f2da-4823-8bcc-8a589f1f475b-kube-api-access-wmrwg" (OuterVolumeSpecName: "kube-api-access-wmrwg") pod "00455947-f2da-4823-8bcc-8a589f1f475b" (UID: "00455947-f2da-4823-8bcc-8a589f1f475b"). InnerVolumeSpecName "kube-api-access-wmrwg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:29:26 crc kubenswrapper[4727]: I1121 20:29:26.516398 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4da2f3f3-a035-4d39-8756-6eab26cef3dc-kube-api-access-hfhd7" (OuterVolumeSpecName: "kube-api-access-hfhd7") pod "4da2f3f3-a035-4d39-8756-6eab26cef3dc" (UID: "4da2f3f3-a035-4d39-8756-6eab26cef3dc"). InnerVolumeSpecName "kube-api-access-hfhd7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:29:26 crc kubenswrapper[4727]: I1121 20:29:26.547384 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 21 20:29:26 crc kubenswrapper[4727]: I1121 20:29:26.569508 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dsjph\" (UniqueName: \"kubernetes.io/projected/b19bf42d-1611-4cff-a0a0-ea1e69019397-kube-api-access-dsjph\") on node \"crc\" DevicePath \"\"" Nov 21 20:29:26 crc kubenswrapper[4727]: I1121 20:29:26.569673 4727 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b19bf42d-1611-4cff-a0a0-ea1e69019397-logs\") on node \"crc\" DevicePath \"\"" Nov 21 20:29:26 crc kubenswrapper[4727]: I1121 20:29:26.569685 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hfhd7\" (UniqueName: \"kubernetes.io/projected/4da2f3f3-a035-4d39-8756-6eab26cef3dc-kube-api-access-hfhd7\") on node \"crc\" DevicePath \"\"" Nov 21 20:29:26 crc kubenswrapper[4727]: I1121 20:29:26.569694 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmrwg\" (UniqueName: \"kubernetes.io/projected/00455947-f2da-4823-8bcc-8a589f1f475b-kube-api-access-wmrwg\") on node \"crc\" DevicePath \"\"" Nov 21 20:29:26 crc kubenswrapper[4727]: I1121 20:29:26.787377 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00455947-f2da-4823-8bcc-8a589f1f475b-config-data" (OuterVolumeSpecName: "config-data") pod "00455947-f2da-4823-8bcc-8a589f1f475b" (UID: "00455947-f2da-4823-8bcc-8a589f1f475b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:29:26 crc kubenswrapper[4727]: I1121 20:29:26.789029 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4da2f3f3-a035-4d39-8756-6eab26cef3dc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4da2f3f3-a035-4d39-8756-6eab26cef3dc" (UID: "4da2f3f3-a035-4d39-8756-6eab26cef3dc"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:29:26 crc kubenswrapper[4727]: I1121 20:29:26.799169 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b19bf42d-1611-4cff-a0a0-ea1e69019397-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b19bf42d-1611-4cff-a0a0-ea1e69019397" (UID: "b19bf42d-1611-4cff-a0a0-ea1e69019397"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:29:26 crc kubenswrapper[4727]: I1121 20:29:26.802592 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4da2f3f3-a035-4d39-8756-6eab26cef3dc-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4da2f3f3-a035-4d39-8756-6eab26cef3dc" (UID: "4da2f3f3-a035-4d39-8756-6eab26cef3dc"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:29:26 crc kubenswrapper[4727]: I1121 20:29:26.828567 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00455947-f2da-4823-8bcc-8a589f1f475b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "00455947-f2da-4823-8bcc-8a589f1f475b" (UID: "00455947-f2da-4823-8bcc-8a589f1f475b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:29:26 crc kubenswrapper[4727]: I1121 20:29:26.829629 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4da2f3f3-a035-4d39-8756-6eab26cef3dc-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4da2f3f3-a035-4d39-8756-6eab26cef3dc" (UID: "4da2f3f3-a035-4d39-8756-6eab26cef3dc"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:29:26 crc kubenswrapper[4727]: I1121 20:29:26.832131 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b19bf42d-1611-4cff-a0a0-ea1e69019397-config-data" (OuterVolumeSpecName: "config-data") pod "b19bf42d-1611-4cff-a0a0-ea1e69019397" (UID: "b19bf42d-1611-4cff-a0a0-ea1e69019397"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:29:26 crc kubenswrapper[4727]: I1121 20:29:26.833270 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4da2f3f3-a035-4d39-8756-6eab26cef3dc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4da2f3f3-a035-4d39-8756-6eab26cef3dc" (UID: "4da2f3f3-a035-4d39-8756-6eab26cef3dc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:29:26 crc kubenswrapper[4727]: I1121 20:29:26.858770 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4da2f3f3-a035-4d39-8756-6eab26cef3dc-config" (OuterVolumeSpecName: "config") pod "4da2f3f3-a035-4d39-8756-6eab26cef3dc" (UID: "4da2f3f3-a035-4d39-8756-6eab26cef3dc"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:29:26 crc kubenswrapper[4727]: I1121 20:29:26.877323 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00455947-f2da-4823-8bcc-8a589f1f475b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:29:26 crc kubenswrapper[4727]: I1121 20:29:26.877351 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4da2f3f3-a035-4d39-8756-6eab26cef3dc-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 21 20:29:26 crc kubenswrapper[4727]: I1121 20:29:26.877361 4727 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4da2f3f3-a035-4d39-8756-6eab26cef3dc-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 20:29:26 crc kubenswrapper[4727]: I1121 20:29:26.877372 4727 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4da2f3f3-a035-4d39-8756-6eab26cef3dc-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 21 20:29:26 crc kubenswrapper[4727]: I1121 20:29:26.877381 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b19bf42d-1611-4cff-a0a0-ea1e69019397-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 20:29:26 crc kubenswrapper[4727]: I1121 20:29:26.877389 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4da2f3f3-a035-4d39-8756-6eab26cef3dc-config\") on node \"crc\" DevicePath \"\"" Nov 21 20:29:26 crc kubenswrapper[4727]: I1121 20:29:26.877397 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4da2f3f3-a035-4d39-8756-6eab26cef3dc-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 21 20:29:26 crc kubenswrapper[4727]: I1121 20:29:26.877405 4727 
reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b19bf42d-1611-4cff-a0a0-ea1e69019397-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:29:26 crc kubenswrapper[4727]: I1121 20:29:26.877413 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00455947-f2da-4823-8bcc-8a589f1f475b-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 20:29:27 crc kubenswrapper[4727]: I1121 20:29:27.093886 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 21 20:29:27 crc kubenswrapper[4727]: I1121 20:29:27.093917 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b19bf42d-1611-4cff-a0a0-ea1e69019397","Type":"ContainerDied","Data":"31337c0420908ebc1fac854bb277afc17efd490fac96635c1ce27894eb8de331"} Nov 21 20:29:27 crc kubenswrapper[4727]: I1121 20:29:27.094000 4727 scope.go:117] "RemoveContainer" containerID="a2198cdf3dfb20d7909cc257b0f64410d87d5706e93512f7ade49536ab08c340" Nov 21 20:29:27 crc kubenswrapper[4727]: I1121 20:29:27.096481 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"00455947-f2da-4823-8bcc-8a589f1f475b","Type":"ContainerDied","Data":"c114eefd20da283b14c4820dd51872415d0806c1f7c9899668ec6c6e093f73b6"} Nov 21 20:29:27 crc kubenswrapper[4727]: I1121 20:29:27.096560 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 21 20:29:27 crc kubenswrapper[4727]: I1121 20:29:27.100984 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"26527e17-f1bb-4a28-a8ef-4b7fc0654e91","Type":"ContainerStarted","Data":"6d0f722eb658f10f93a039737243d0e003b8fd7d6be9b14b562443d186f552c2"} Nov 21 20:29:27 crc kubenswrapper[4727]: I1121 20:29:27.101023 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"26527e17-f1bb-4a28-a8ef-4b7fc0654e91","Type":"ContainerStarted","Data":"6760f104b530e3472e88fb4ec50c73674e06d206f00b6bbe34bb8fe44b69b114"} Nov 21 20:29:27 crc kubenswrapper[4727]: I1121 20:29:27.101035 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"26527e17-f1bb-4a28-a8ef-4b7fc0654e91","Type":"ContainerStarted","Data":"f5094b787f50cc67ca20b983c2235374a1d918fae5e39e649fe8869f9c820bf1"} Nov 21 20:29:27 crc kubenswrapper[4727]: I1121 20:29:27.104346 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-g49lf" event={"ID":"4da2f3f3-a035-4d39-8756-6eab26cef3dc","Type":"ContainerDied","Data":"166f7647111e2e5306a02b3fff884c3b1911d1ef3a4404afae62f1faee0f97f9"} Nov 21 20:29:27 crc kubenswrapper[4727]: I1121 20:29:27.104420 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-g49lf" Nov 21 20:29:27 crc kubenswrapper[4727]: I1121 20:29:27.110086 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2e586ddd-f8e8-4589-8b0f-bea576194a57","Type":"ContainerStarted","Data":"9e28fce8206f14b48d75e951137186c76848b79f17b989ec2244f9965a404276"} Nov 21 20:29:27 crc kubenswrapper[4727]: I1121 20:29:27.117657 4727 scope.go:117] "RemoveContainer" containerID="f86e385323ca51ceab49cb0ed745e83cc7ee136de2c589af0d1a6f5ee807b8c9" Nov 21 20:29:27 crc kubenswrapper[4727]: I1121 20:29:27.136890 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.136869441 podStartE2EDuration="3.136869441s" podCreationTimestamp="2025-11-21 20:29:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:29:27.124727637 +0000 UTC m=+1372.310912681" watchObservedRunningTime="2025-11-21 20:29:27.136869441 +0000 UTC m=+1372.323054485" Nov 21 20:29:27 crc kubenswrapper[4727]: I1121 20:29:27.156368 4727 scope.go:117] "RemoveContainer" containerID="0a5a870ab4d78e46fa4e62a4ab20143baa18e2d2877ac29ff07d18942bf76c83" Nov 21 20:29:27 crc kubenswrapper[4727]: I1121 20:29:27.157423 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 20:29:27 crc kubenswrapper[4727]: I1121 20:29:27.173182 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 20:29:27 crc kubenswrapper[4727]: I1121 20:29:27.189657 4727 scope.go:117] "RemoveContainer" containerID="b8f0c9890b1c660f63eeb03ee2bb94cb4dfdaf16705e410a2644ae48b11d3fec" Nov 21 20:29:27 crc kubenswrapper[4727]: I1121 20:29:27.192622 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-g49lf"] Nov 21 20:29:27 crc kubenswrapper[4727]: I1121 20:29:27.214328 4727 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-g49lf"] Nov 21 20:29:27 crc kubenswrapper[4727]: I1121 20:29:27.238399 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 20:29:27 crc kubenswrapper[4727]: E1121 20:29:27.238931 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b19bf42d-1611-4cff-a0a0-ea1e69019397" containerName="nova-api-log" Nov 21 20:29:27 crc kubenswrapper[4727]: I1121 20:29:27.238945 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b19bf42d-1611-4cff-a0a0-ea1e69019397" containerName="nova-api-log" Nov 21 20:29:27 crc kubenswrapper[4727]: E1121 20:29:27.238982 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4da2f3f3-a035-4d39-8756-6eab26cef3dc" containerName="dnsmasq-dns" Nov 21 20:29:27 crc kubenswrapper[4727]: I1121 20:29:27.238989 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="4da2f3f3-a035-4d39-8756-6eab26cef3dc" containerName="dnsmasq-dns" Nov 21 20:29:27 crc kubenswrapper[4727]: E1121 20:29:27.239022 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b19bf42d-1611-4cff-a0a0-ea1e69019397" containerName="nova-api-api" Nov 21 20:29:27 crc kubenswrapper[4727]: I1121 20:29:27.239030 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b19bf42d-1611-4cff-a0a0-ea1e69019397" containerName="nova-api-api" Nov 21 20:29:27 crc kubenswrapper[4727]: E1121 20:29:27.239044 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00455947-f2da-4823-8bcc-8a589f1f475b" containerName="nova-scheduler-scheduler" Nov 21 20:29:27 crc kubenswrapper[4727]: I1121 20:29:27.239050 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="00455947-f2da-4823-8bcc-8a589f1f475b" containerName="nova-scheduler-scheduler" Nov 21 20:29:27 crc kubenswrapper[4727]: E1121 20:29:27.239060 4727 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4da2f3f3-a035-4d39-8756-6eab26cef3dc" containerName="init" Nov 21 20:29:27 crc kubenswrapper[4727]: I1121 20:29:27.239065 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="4da2f3f3-a035-4d39-8756-6eab26cef3dc" containerName="init" Nov 21 20:29:27 crc kubenswrapper[4727]: I1121 20:29:27.239289 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="00455947-f2da-4823-8bcc-8a589f1f475b" containerName="nova-scheduler-scheduler" Nov 21 20:29:27 crc kubenswrapper[4727]: I1121 20:29:27.239305 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="b19bf42d-1611-4cff-a0a0-ea1e69019397" containerName="nova-api-api" Nov 21 20:29:27 crc kubenswrapper[4727]: I1121 20:29:27.239323 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="4da2f3f3-a035-4d39-8756-6eab26cef3dc" containerName="dnsmasq-dns" Nov 21 20:29:27 crc kubenswrapper[4727]: I1121 20:29:27.239331 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="b19bf42d-1611-4cff-a0a0-ea1e69019397" containerName="nova-api-log" Nov 21 20:29:27 crc kubenswrapper[4727]: I1121 20:29:27.240141 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 21 20:29:27 crc kubenswrapper[4727]: I1121 20:29:27.243598 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 21 20:29:27 crc kubenswrapper[4727]: I1121 20:29:27.266026 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 21 20:29:27 crc kubenswrapper[4727]: I1121 20:29:27.283922 4727 scope.go:117] "RemoveContainer" containerID="873311e1f2d89f86612be9947cce3cb1d9b15a3f9281ac6ee5aa7010cb805877" Nov 21 20:29:27 crc kubenswrapper[4727]: I1121 20:29:27.286786 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 21 20:29:27 crc kubenswrapper[4727]: I1121 20:29:27.298279 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 20:29:27 crc kubenswrapper[4727]: I1121 20:29:27.320913 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 21 20:29:27 crc kubenswrapper[4727]: I1121 20:29:27.328652 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 21 20:29:27 crc kubenswrapper[4727]: I1121 20:29:27.333385 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 21 20:29:27 crc kubenswrapper[4727]: I1121 20:29:27.373122 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 21 20:29:27 crc kubenswrapper[4727]: I1121 20:29:27.389409 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4932d865-5b5e-481d-bf79-68f449c97745-config-data\") pod \"nova-scheduler-0\" (UID: \"4932d865-5b5e-481d-bf79-68f449c97745\") " pod="openstack/nova-scheduler-0" Nov 21 20:29:27 crc kubenswrapper[4727]: I1121 20:29:27.389676 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsrbt\" (UniqueName: \"kubernetes.io/projected/4932d865-5b5e-481d-bf79-68f449c97745-kube-api-access-xsrbt\") pod \"nova-scheduler-0\" (UID: \"4932d865-5b5e-481d-bf79-68f449c97745\") " pod="openstack/nova-scheduler-0" Nov 21 20:29:27 crc kubenswrapper[4727]: I1121 20:29:27.392026 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4932d865-5b5e-481d-bf79-68f449c97745-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4932d865-5b5e-481d-bf79-68f449c97745\") " pod="openstack/nova-scheduler-0" Nov 21 20:29:27 crc kubenswrapper[4727]: I1121 20:29:27.498220 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfbv8\" (UniqueName: \"kubernetes.io/projected/84b9f247-2a55-4a6d-9b15-6919248aac8d-kube-api-access-qfbv8\") pod \"nova-api-0\" (UID: \"84b9f247-2a55-4a6d-9b15-6919248aac8d\") " pod="openstack/nova-api-0" Nov 21 20:29:27 crc kubenswrapper[4727]: I1121 20:29:27.498295 4727 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84b9f247-2a55-4a6d-9b15-6919248aac8d-config-data\") pod \"nova-api-0\" (UID: \"84b9f247-2a55-4a6d-9b15-6919248aac8d\") " pod="openstack/nova-api-0" Nov 21 20:29:27 crc kubenswrapper[4727]: I1121 20:29:27.498340 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4932d865-5b5e-481d-bf79-68f449c97745-config-data\") pod \"nova-scheduler-0\" (UID: \"4932d865-5b5e-481d-bf79-68f449c97745\") " pod="openstack/nova-scheduler-0" Nov 21 20:29:27 crc kubenswrapper[4727]: I1121 20:29:27.498380 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsrbt\" (UniqueName: \"kubernetes.io/projected/4932d865-5b5e-481d-bf79-68f449c97745-kube-api-access-xsrbt\") pod \"nova-scheduler-0\" (UID: \"4932d865-5b5e-481d-bf79-68f449c97745\") " pod="openstack/nova-scheduler-0" Nov 21 20:29:27 crc kubenswrapper[4727]: I1121 20:29:27.498443 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84b9f247-2a55-4a6d-9b15-6919248aac8d-logs\") pod \"nova-api-0\" (UID: \"84b9f247-2a55-4a6d-9b15-6919248aac8d\") " pod="openstack/nova-api-0" Nov 21 20:29:27 crc kubenswrapper[4727]: I1121 20:29:27.498519 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4932d865-5b5e-481d-bf79-68f449c97745-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4932d865-5b5e-481d-bf79-68f449c97745\") " pod="openstack/nova-scheduler-0" Nov 21 20:29:27 crc kubenswrapper[4727]: I1121 20:29:27.498542 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/84b9f247-2a55-4a6d-9b15-6919248aac8d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"84b9f247-2a55-4a6d-9b15-6919248aac8d\") " pod="openstack/nova-api-0" Nov 21 20:29:27 crc kubenswrapper[4727]: I1121 20:29:27.507496 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4932d865-5b5e-481d-bf79-68f449c97745-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4932d865-5b5e-481d-bf79-68f449c97745\") " pod="openstack/nova-scheduler-0" Nov 21 20:29:27 crc kubenswrapper[4727]: I1121 20:29:27.507942 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4932d865-5b5e-481d-bf79-68f449c97745-config-data\") pod \"nova-scheduler-0\" (UID: \"4932d865-5b5e-481d-bf79-68f449c97745\") " pod="openstack/nova-scheduler-0" Nov 21 20:29:27 crc kubenswrapper[4727]: I1121 20:29:27.527243 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00455947-f2da-4823-8bcc-8a589f1f475b" path="/var/lib/kubelet/pods/00455947-f2da-4823-8bcc-8a589f1f475b/volumes" Nov 21 20:29:27 crc kubenswrapper[4727]: I1121 20:29:27.528404 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4da2f3f3-a035-4d39-8756-6eab26cef3dc" path="/var/lib/kubelet/pods/4da2f3f3-a035-4d39-8756-6eab26cef3dc/volumes" Nov 21 20:29:27 crc kubenswrapper[4727]: I1121 20:29:27.529090 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b19bf42d-1611-4cff-a0a0-ea1e69019397" path="/var/lib/kubelet/pods/b19bf42d-1611-4cff-a0a0-ea1e69019397/volumes" Nov 21 20:29:27 crc kubenswrapper[4727]: I1121 20:29:27.550157 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsrbt\" (UniqueName: \"kubernetes.io/projected/4932d865-5b5e-481d-bf79-68f449c97745-kube-api-access-xsrbt\") pod \"nova-scheduler-0\" (UID: \"4932d865-5b5e-481d-bf79-68f449c97745\") " 
pod="openstack/nova-scheduler-0" Nov 21 20:29:27 crc kubenswrapper[4727]: I1121 20:29:27.555767 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Nov 21 20:29:27 crc kubenswrapper[4727]: I1121 20:29:27.582650 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 21 20:29:27 crc kubenswrapper[4727]: I1121 20:29:27.605274 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84b9f247-2a55-4a6d-9b15-6919248aac8d-config-data\") pod \"nova-api-0\" (UID: \"84b9f247-2a55-4a6d-9b15-6919248aac8d\") " pod="openstack/nova-api-0" Nov 21 20:29:27 crc kubenswrapper[4727]: I1121 20:29:27.605459 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84b9f247-2a55-4a6d-9b15-6919248aac8d-logs\") pod \"nova-api-0\" (UID: \"84b9f247-2a55-4a6d-9b15-6919248aac8d\") " pod="openstack/nova-api-0" Nov 21 20:29:27 crc kubenswrapper[4727]: I1121 20:29:27.605633 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84b9f247-2a55-4a6d-9b15-6919248aac8d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"84b9f247-2a55-4a6d-9b15-6919248aac8d\") " pod="openstack/nova-api-0" Nov 21 20:29:27 crc kubenswrapper[4727]: I1121 20:29:27.605811 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfbv8\" (UniqueName: \"kubernetes.io/projected/84b9f247-2a55-4a6d-9b15-6919248aac8d-kube-api-access-qfbv8\") pod \"nova-api-0\" (UID: \"84b9f247-2a55-4a6d-9b15-6919248aac8d\") " pod="openstack/nova-api-0" Nov 21 20:29:27 crc kubenswrapper[4727]: I1121 20:29:27.607675 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84b9f247-2a55-4a6d-9b15-6919248aac8d-logs\") pod \"nova-api-0\" 
(UID: \"84b9f247-2a55-4a6d-9b15-6919248aac8d\") " pod="openstack/nova-api-0" Nov 21 20:29:27 crc kubenswrapper[4727]: I1121 20:29:27.611156 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84b9f247-2a55-4a6d-9b15-6919248aac8d-config-data\") pod \"nova-api-0\" (UID: \"84b9f247-2a55-4a6d-9b15-6919248aac8d\") " pod="openstack/nova-api-0" Nov 21 20:29:27 crc kubenswrapper[4727]: I1121 20:29:27.635664 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfbv8\" (UniqueName: \"kubernetes.io/projected/84b9f247-2a55-4a6d-9b15-6919248aac8d-kube-api-access-qfbv8\") pod \"nova-api-0\" (UID: \"84b9f247-2a55-4a6d-9b15-6919248aac8d\") " pod="openstack/nova-api-0" Nov 21 20:29:27 crc kubenswrapper[4727]: I1121 20:29:27.653558 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84b9f247-2a55-4a6d-9b15-6919248aac8d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"84b9f247-2a55-4a6d-9b15-6919248aac8d\") " pod="openstack/nova-api-0" Nov 21 20:29:27 crc kubenswrapper[4727]: I1121 20:29:27.698548 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 21 20:29:28 crc kubenswrapper[4727]: I1121 20:29:28.159271 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 21 20:29:28 crc kubenswrapper[4727]: I1121 20:29:28.160026 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2a715021-194c-4f1e-9de8-00113677ca48" containerName="ceilometer-central-agent" containerID="cri-o://b8d294518064649c865d12ff17f905e598aea9d34e0c4efe78c8dd552d04e184" gracePeriod=30 Nov 21 20:29:28 crc kubenswrapper[4727]: I1121 20:29:28.160189 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2a715021-194c-4f1e-9de8-00113677ca48" containerName="proxy-httpd" containerID="cri-o://c96da29292fff77404118763744ff55a140910b4a01e39746a9902e952d7366b" gracePeriod=30 Nov 21 20:29:28 crc kubenswrapper[4727]: I1121 20:29:28.160239 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2a715021-194c-4f1e-9de8-00113677ca48" containerName="sg-core" containerID="cri-o://b4f393f31e7579df6443a05ef618c5e3d9593ef7438e33d562c7a6d5a6a5ccad" gracePeriod=30 Nov 21 20:29:28 crc kubenswrapper[4727]: I1121 20:29:28.160283 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2a715021-194c-4f1e-9de8-00113677ca48" containerName="ceilometer-notification-agent" containerID="cri-o://ea3151f57b4f88c80bb5c1098d42f306a926d97ff8b0eeb1f46c7ea59d3e9018" gracePeriod=30 Nov 21 20:29:28 crc kubenswrapper[4727]: I1121 20:29:28.251746 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 20:29:28 crc kubenswrapper[4727]: I1121 20:29:28.264830 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 21 20:29:28 crc kubenswrapper[4727]: W1121 20:29:28.613680 4727 manager.go:1169] Failed to process 
watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod84b9f247_2a55_4a6d_9b15_6919248aac8d.slice/crio-023788f693efa5f6765b548abc4b5d572b5739df6fcd7ac105433e1f56a6b111 WatchSource:0}: Error finding container 023788f693efa5f6765b548abc4b5d572b5739df6fcd7ac105433e1f56a6b111: Status 404 returned error can't find the container with id 023788f693efa5f6765b548abc4b5d572b5739df6fcd7ac105433e1f56a6b111 Nov 21 20:29:28 crc kubenswrapper[4727]: W1121 20:29:28.623925 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4932d865_5b5e_481d_bf79_68f449c97745.slice/crio-7057cb6ee20265ab8a0c871356984ed521c7450726cb8543a4b6c6a2907e86c4 WatchSource:0}: Error finding container 7057cb6ee20265ab8a0c871356984ed521c7450726cb8543a4b6c6a2907e86c4: Status 404 returned error can't find the container with id 7057cb6ee20265ab8a0c871356984ed521c7450726cb8543a4b6c6a2907e86c4 Nov 21 20:29:29 crc kubenswrapper[4727]: I1121 20:29:29.187018 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"84b9f247-2a55-4a6d-9b15-6919248aac8d","Type":"ContainerStarted","Data":"08aef35b9fe94772ab1b331f12520da4e3b8687d0bd927769924f9ab3ce36ebe"} Nov 21 20:29:29 crc kubenswrapper[4727]: I1121 20:29:29.187323 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"84b9f247-2a55-4a6d-9b15-6919248aac8d","Type":"ContainerStarted","Data":"a53d29bc083364fd9a67ceb97c0631e64487116260554eda3a8f95fcf8d73e5e"} Nov 21 20:29:29 crc kubenswrapper[4727]: I1121 20:29:29.187334 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"84b9f247-2a55-4a6d-9b15-6919248aac8d","Type":"ContainerStarted","Data":"023788f693efa5f6765b548abc4b5d572b5739df6fcd7ac105433e1f56a6b111"} Nov 21 20:29:29 crc kubenswrapper[4727]: I1121 20:29:29.192301 4727 generic.go:334] "Generic (PLEG): container finished" 
podID="2a715021-194c-4f1e-9de8-00113677ca48" containerID="c96da29292fff77404118763744ff55a140910b4a01e39746a9902e952d7366b" exitCode=0 Nov 21 20:29:29 crc kubenswrapper[4727]: I1121 20:29:29.192328 4727 generic.go:334] "Generic (PLEG): container finished" podID="2a715021-194c-4f1e-9de8-00113677ca48" containerID="b4f393f31e7579df6443a05ef618c5e3d9593ef7438e33d562c7a6d5a6a5ccad" exitCode=2 Nov 21 20:29:29 crc kubenswrapper[4727]: I1121 20:29:29.192335 4727 generic.go:334] "Generic (PLEG): container finished" podID="2a715021-194c-4f1e-9de8-00113677ca48" containerID="b8d294518064649c865d12ff17f905e598aea9d34e0c4efe78c8dd552d04e184" exitCode=0 Nov 21 20:29:29 crc kubenswrapper[4727]: I1121 20:29:29.192397 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2a715021-194c-4f1e-9de8-00113677ca48","Type":"ContainerDied","Data":"c96da29292fff77404118763744ff55a140910b4a01e39746a9902e952d7366b"} Nov 21 20:29:29 crc kubenswrapper[4727]: I1121 20:29:29.192423 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2a715021-194c-4f1e-9de8-00113677ca48","Type":"ContainerDied","Data":"b4f393f31e7579df6443a05ef618c5e3d9593ef7438e33d562c7a6d5a6a5ccad"} Nov 21 20:29:29 crc kubenswrapper[4727]: I1121 20:29:29.192434 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2a715021-194c-4f1e-9de8-00113677ca48","Type":"ContainerDied","Data":"b8d294518064649c865d12ff17f905e598aea9d34e0c4efe78c8dd552d04e184"} Nov 21 20:29:29 crc kubenswrapper[4727]: I1121 20:29:29.195023 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2e586ddd-f8e8-4589-8b0f-bea576194a57","Type":"ContainerStarted","Data":"ab579a0f97dbc547529d223501fab1c20835d6ca06683d5c23a8a278aa15d242"} Nov 21 20:29:29 crc kubenswrapper[4727]: I1121 20:29:29.197097 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"4932d865-5b5e-481d-bf79-68f449c97745","Type":"ContainerStarted","Data":"d91f76eca9edd9bdc7eab2fafc1206947068587eb3e15c3d03c924fe4c43da4a"} Nov 21 20:29:29 crc kubenswrapper[4727]: I1121 20:29:29.197120 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4932d865-5b5e-481d-bf79-68f449c97745","Type":"ContainerStarted","Data":"7057cb6ee20265ab8a0c871356984ed521c7450726cb8543a4b6c6a2907e86c4"} Nov 21 20:29:29 crc kubenswrapper[4727]: I1121 20:29:29.218813 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.21879261 podStartE2EDuration="2.21879261s" podCreationTimestamp="2025-11-21 20:29:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:29:29.210328836 +0000 UTC m=+1374.396513880" watchObservedRunningTime="2025-11-21 20:29:29.21879261 +0000 UTC m=+1374.404977654" Nov 21 20:29:29 crc kubenswrapper[4727]: I1121 20:29:29.237438 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.237417351 podStartE2EDuration="2.237417351s" podCreationTimestamp="2025-11-21 20:29:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:29:29.223469233 +0000 UTC m=+1374.409654277" watchObservedRunningTime="2025-11-21 20:29:29.237417351 +0000 UTC m=+1374.423602395" Nov 21 20:29:29 crc kubenswrapper[4727]: I1121 20:29:29.477489 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 21 20:29:29 crc kubenswrapper[4727]: I1121 20:29:29.477543 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 21 20:29:30 crc kubenswrapper[4727]: I1121 20:29:30.914498 4727 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 20:29:30 crc kubenswrapper[4727]: I1121 20:29:30.988306 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h877t\" (UniqueName: \"kubernetes.io/projected/2a715021-194c-4f1e-9de8-00113677ca48-kube-api-access-h877t\") pod \"2a715021-194c-4f1e-9de8-00113677ca48\" (UID: \"2a715021-194c-4f1e-9de8-00113677ca48\") " Nov 21 20:29:30 crc kubenswrapper[4727]: I1121 20:29:30.988366 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a715021-194c-4f1e-9de8-00113677ca48-scripts\") pod \"2a715021-194c-4f1e-9de8-00113677ca48\" (UID: \"2a715021-194c-4f1e-9de8-00113677ca48\") " Nov 21 20:29:30 crc kubenswrapper[4727]: I1121 20:29:30.988482 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2a715021-194c-4f1e-9de8-00113677ca48-run-httpd\") pod \"2a715021-194c-4f1e-9de8-00113677ca48\" (UID: \"2a715021-194c-4f1e-9de8-00113677ca48\") " Nov 21 20:29:30 crc kubenswrapper[4727]: I1121 20:29:30.988556 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2a715021-194c-4f1e-9de8-00113677ca48-log-httpd\") pod \"2a715021-194c-4f1e-9de8-00113677ca48\" (UID: \"2a715021-194c-4f1e-9de8-00113677ca48\") " Nov 21 20:29:30 crc kubenswrapper[4727]: I1121 20:29:30.988655 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a715021-194c-4f1e-9de8-00113677ca48-config-data\") pod \"2a715021-194c-4f1e-9de8-00113677ca48\" (UID: \"2a715021-194c-4f1e-9de8-00113677ca48\") " Nov 21 20:29:30 crc kubenswrapper[4727]: I1121 20:29:30.988778 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/2a715021-194c-4f1e-9de8-00113677ca48-sg-core-conf-yaml\") pod \"2a715021-194c-4f1e-9de8-00113677ca48\" (UID: \"2a715021-194c-4f1e-9de8-00113677ca48\") " Nov 21 20:29:30 crc kubenswrapper[4727]: I1121 20:29:30.988856 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a715021-194c-4f1e-9de8-00113677ca48-combined-ca-bundle\") pod \"2a715021-194c-4f1e-9de8-00113677ca48\" (UID: \"2a715021-194c-4f1e-9de8-00113677ca48\") " Nov 21 20:29:30 crc kubenswrapper[4727]: I1121 20:29:30.989266 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a715021-194c-4f1e-9de8-00113677ca48-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2a715021-194c-4f1e-9de8-00113677ca48" (UID: "2a715021-194c-4f1e-9de8-00113677ca48"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:29:30 crc kubenswrapper[4727]: I1121 20:29:30.989559 4727 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2a715021-194c-4f1e-9de8-00113677ca48-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 21 20:29:30 crc kubenswrapper[4727]: I1121 20:29:30.989800 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a715021-194c-4f1e-9de8-00113677ca48-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2a715021-194c-4f1e-9de8-00113677ca48" (UID: "2a715021-194c-4f1e-9de8-00113677ca48"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:29:31 crc kubenswrapper[4727]: I1121 20:29:31.011634 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a715021-194c-4f1e-9de8-00113677ca48-scripts" (OuterVolumeSpecName: "scripts") pod "2a715021-194c-4f1e-9de8-00113677ca48" (UID: "2a715021-194c-4f1e-9de8-00113677ca48"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:29:31 crc kubenswrapper[4727]: I1121 20:29:31.011832 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a715021-194c-4f1e-9de8-00113677ca48-kube-api-access-h877t" (OuterVolumeSpecName: "kube-api-access-h877t") pod "2a715021-194c-4f1e-9de8-00113677ca48" (UID: "2a715021-194c-4f1e-9de8-00113677ca48"). InnerVolumeSpecName "kube-api-access-h877t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:29:31 crc kubenswrapper[4727]: I1121 20:29:31.047536 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a715021-194c-4f1e-9de8-00113677ca48-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2a715021-194c-4f1e-9de8-00113677ca48" (UID: "2a715021-194c-4f1e-9de8-00113677ca48"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:29:31 crc kubenswrapper[4727]: I1121 20:29:31.096374 4727 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2a715021-194c-4f1e-9de8-00113677ca48-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 21 20:29:31 crc kubenswrapper[4727]: I1121 20:29:31.096412 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h877t\" (UniqueName: \"kubernetes.io/projected/2a715021-194c-4f1e-9de8-00113677ca48-kube-api-access-h877t\") on node \"crc\" DevicePath \"\"" Nov 21 20:29:31 crc kubenswrapper[4727]: I1121 20:29:31.096430 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a715021-194c-4f1e-9de8-00113677ca48-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 20:29:31 crc kubenswrapper[4727]: I1121 20:29:31.096440 4727 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/2a715021-194c-4f1e-9de8-00113677ca48-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 21 20:29:31 crc kubenswrapper[4727]: I1121 20:29:31.172774 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a715021-194c-4f1e-9de8-00113677ca48-config-data" (OuterVolumeSpecName: "config-data") pod "2a715021-194c-4f1e-9de8-00113677ca48" (UID: "2a715021-194c-4f1e-9de8-00113677ca48"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:29:31 crc kubenswrapper[4727]: I1121 20:29:31.173544 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a715021-194c-4f1e-9de8-00113677ca48-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2a715021-194c-4f1e-9de8-00113677ca48" (UID: "2a715021-194c-4f1e-9de8-00113677ca48"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:29:31 crc kubenswrapper[4727]: I1121 20:29:31.199368 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a715021-194c-4f1e-9de8-00113677ca48-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:29:31 crc kubenswrapper[4727]: I1121 20:29:31.199405 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a715021-194c-4f1e-9de8-00113677ca48-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 20:29:31 crc kubenswrapper[4727]: I1121 20:29:31.220018 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2e586ddd-f8e8-4589-8b0f-bea576194a57","Type":"ContainerStarted","Data":"6d0b879e49b72410a5eb0004bf3536e83a9a664df74861c622fdc096b2978515"} Nov 21 20:29:31 crc kubenswrapper[4727]: I1121 20:29:31.222672 4727 generic.go:334] "Generic (PLEG): container finished" podID="2a715021-194c-4f1e-9de8-00113677ca48" 
containerID="ea3151f57b4f88c80bb5c1098d42f306a926d97ff8b0eeb1f46c7ea59d3e9018" exitCode=0 Nov 21 20:29:31 crc kubenswrapper[4727]: I1121 20:29:31.222816 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2a715021-194c-4f1e-9de8-00113677ca48","Type":"ContainerDied","Data":"ea3151f57b4f88c80bb5c1098d42f306a926d97ff8b0eeb1f46c7ea59d3e9018"} Nov 21 20:29:31 crc kubenswrapper[4727]: I1121 20:29:31.222943 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2a715021-194c-4f1e-9de8-00113677ca48","Type":"ContainerDied","Data":"dc0bec0d2463912a07b37eee52037ddc6a8f1e1bf7146d106a4de87680e2813e"} Nov 21 20:29:31 crc kubenswrapper[4727]: I1121 20:29:31.223090 4727 scope.go:117] "RemoveContainer" containerID="c96da29292fff77404118763744ff55a140910b4a01e39746a9902e952d7366b" Nov 21 20:29:31 crc kubenswrapper[4727]: I1121 20:29:31.223351 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 20:29:31 crc kubenswrapper[4727]: I1121 20:29:31.261779 4727 scope.go:117] "RemoveContainer" containerID="b4f393f31e7579df6443a05ef618c5e3d9593ef7438e33d562c7a6d5a6a5ccad" Nov 21 20:29:31 crc kubenswrapper[4727]: I1121 20:29:31.285488 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-688b9f5b49-g49lf" podUID="4da2f3f3-a035-4d39-8756-6eab26cef3dc" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.214:5353: i/o timeout" Nov 21 20:29:31 crc kubenswrapper[4727]: I1121 20:29:31.287951 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 21 20:29:31 crc kubenswrapper[4727]: I1121 20:29:31.305906 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 21 20:29:31 crc kubenswrapper[4727]: I1121 20:29:31.357018 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 21 20:29:31 crc 
kubenswrapper[4727]: E1121 20:29:31.357720 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a715021-194c-4f1e-9de8-00113677ca48" containerName="sg-core" Nov 21 20:29:31 crc kubenswrapper[4727]: I1121 20:29:31.357738 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a715021-194c-4f1e-9de8-00113677ca48" containerName="sg-core" Nov 21 20:29:31 crc kubenswrapper[4727]: E1121 20:29:31.357755 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a715021-194c-4f1e-9de8-00113677ca48" containerName="proxy-httpd" Nov 21 20:29:31 crc kubenswrapper[4727]: I1121 20:29:31.357764 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a715021-194c-4f1e-9de8-00113677ca48" containerName="proxy-httpd" Nov 21 20:29:31 crc kubenswrapper[4727]: E1121 20:29:31.357794 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a715021-194c-4f1e-9de8-00113677ca48" containerName="ceilometer-central-agent" Nov 21 20:29:31 crc kubenswrapper[4727]: I1121 20:29:31.357802 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a715021-194c-4f1e-9de8-00113677ca48" containerName="ceilometer-central-agent" Nov 21 20:29:31 crc kubenswrapper[4727]: E1121 20:29:31.357822 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a715021-194c-4f1e-9de8-00113677ca48" containerName="ceilometer-notification-agent" Nov 21 20:29:31 crc kubenswrapper[4727]: I1121 20:29:31.357830 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a715021-194c-4f1e-9de8-00113677ca48" containerName="ceilometer-notification-agent" Nov 21 20:29:31 crc kubenswrapper[4727]: I1121 20:29:31.358120 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a715021-194c-4f1e-9de8-00113677ca48" containerName="ceilometer-notification-agent" Nov 21 20:29:31 crc kubenswrapper[4727]: I1121 20:29:31.358137 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a715021-194c-4f1e-9de8-00113677ca48" 
containerName="ceilometer-central-agent" Nov 21 20:29:31 crc kubenswrapper[4727]: I1121 20:29:31.358151 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a715021-194c-4f1e-9de8-00113677ca48" containerName="proxy-httpd" Nov 21 20:29:31 crc kubenswrapper[4727]: I1121 20:29:31.358183 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a715021-194c-4f1e-9de8-00113677ca48" containerName="sg-core" Nov 21 20:29:31 crc kubenswrapper[4727]: I1121 20:29:31.366749 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 20:29:31 crc kubenswrapper[4727]: I1121 20:29:31.371230 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 20:29:31 crc kubenswrapper[4727]: I1121 20:29:31.371873 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 21 20:29:31 crc kubenswrapper[4727]: I1121 20:29:31.371949 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 21 20:29:31 crc kubenswrapper[4727]: I1121 20:29:31.510528 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e29c839-bc8b-40cf-94be-7e9053e9ede8-run-httpd\") pod \"ceilometer-0\" (UID: \"0e29c839-bc8b-40cf-94be-7e9053e9ede8\") " pod="openstack/ceilometer-0" Nov 21 20:29:31 crc kubenswrapper[4727]: I1121 20:29:31.510777 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg749\" (UniqueName: \"kubernetes.io/projected/0e29c839-bc8b-40cf-94be-7e9053e9ede8-kube-api-access-tg749\") pod \"ceilometer-0\" (UID: \"0e29c839-bc8b-40cf-94be-7e9053e9ede8\") " pod="openstack/ceilometer-0" Nov 21 20:29:31 crc kubenswrapper[4727]: I1121 20:29:31.510806 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/0e29c839-bc8b-40cf-94be-7e9053e9ede8-scripts\") pod \"ceilometer-0\" (UID: \"0e29c839-bc8b-40cf-94be-7e9053e9ede8\") " pod="openstack/ceilometer-0" Nov 21 20:29:31 crc kubenswrapper[4727]: I1121 20:29:31.510860 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e29c839-bc8b-40cf-94be-7e9053e9ede8-log-httpd\") pod \"ceilometer-0\" (UID: \"0e29c839-bc8b-40cf-94be-7e9053e9ede8\") " pod="openstack/ceilometer-0" Nov 21 20:29:31 crc kubenswrapper[4727]: I1121 20:29:31.510906 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0e29c839-bc8b-40cf-94be-7e9053e9ede8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0e29c839-bc8b-40cf-94be-7e9053e9ede8\") " pod="openstack/ceilometer-0" Nov 21 20:29:31 crc kubenswrapper[4727]: I1121 20:29:31.510922 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e29c839-bc8b-40cf-94be-7e9053e9ede8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0e29c839-bc8b-40cf-94be-7e9053e9ede8\") " pod="openstack/ceilometer-0" Nov 21 20:29:31 crc kubenswrapper[4727]: I1121 20:29:31.510946 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e29c839-bc8b-40cf-94be-7e9053e9ede8-config-data\") pod \"ceilometer-0\" (UID: \"0e29c839-bc8b-40cf-94be-7e9053e9ede8\") " pod="openstack/ceilometer-0" Nov 21 20:29:31 crc kubenswrapper[4727]: I1121 20:29:31.523973 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a715021-194c-4f1e-9de8-00113677ca48" path="/var/lib/kubelet/pods/2a715021-194c-4f1e-9de8-00113677ca48/volumes" Nov 21 20:29:31 crc kubenswrapper[4727]: I1121 
20:29:31.613244 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e29c839-bc8b-40cf-94be-7e9053e9ede8-run-httpd\") pod \"ceilometer-0\" (UID: \"0e29c839-bc8b-40cf-94be-7e9053e9ede8\") " pod="openstack/ceilometer-0" Nov 21 20:29:31 crc kubenswrapper[4727]: I1121 20:29:31.613291 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tg749\" (UniqueName: \"kubernetes.io/projected/0e29c839-bc8b-40cf-94be-7e9053e9ede8-kube-api-access-tg749\") pod \"ceilometer-0\" (UID: \"0e29c839-bc8b-40cf-94be-7e9053e9ede8\") " pod="openstack/ceilometer-0" Nov 21 20:29:31 crc kubenswrapper[4727]: I1121 20:29:31.613324 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e29c839-bc8b-40cf-94be-7e9053e9ede8-scripts\") pod \"ceilometer-0\" (UID: \"0e29c839-bc8b-40cf-94be-7e9053e9ede8\") " pod="openstack/ceilometer-0" Nov 21 20:29:31 crc kubenswrapper[4727]: I1121 20:29:31.613412 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e29c839-bc8b-40cf-94be-7e9053e9ede8-log-httpd\") pod \"ceilometer-0\" (UID: \"0e29c839-bc8b-40cf-94be-7e9053e9ede8\") " pod="openstack/ceilometer-0" Nov 21 20:29:31 crc kubenswrapper[4727]: I1121 20:29:31.613505 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0e29c839-bc8b-40cf-94be-7e9053e9ede8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0e29c839-bc8b-40cf-94be-7e9053e9ede8\") " pod="openstack/ceilometer-0" Nov 21 20:29:31 crc kubenswrapper[4727]: I1121 20:29:31.613526 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e29c839-bc8b-40cf-94be-7e9053e9ede8-combined-ca-bundle\") pod 
\"ceilometer-0\" (UID: \"0e29c839-bc8b-40cf-94be-7e9053e9ede8\") " pod="openstack/ceilometer-0" Nov 21 20:29:31 crc kubenswrapper[4727]: I1121 20:29:31.613558 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e29c839-bc8b-40cf-94be-7e9053e9ede8-config-data\") pod \"ceilometer-0\" (UID: \"0e29c839-bc8b-40cf-94be-7e9053e9ede8\") " pod="openstack/ceilometer-0" Nov 21 20:29:31 crc kubenswrapper[4727]: I1121 20:29:31.614623 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e29c839-bc8b-40cf-94be-7e9053e9ede8-log-httpd\") pod \"ceilometer-0\" (UID: \"0e29c839-bc8b-40cf-94be-7e9053e9ede8\") " pod="openstack/ceilometer-0" Nov 21 20:29:31 crc kubenswrapper[4727]: I1121 20:29:31.615255 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e29c839-bc8b-40cf-94be-7e9053e9ede8-run-httpd\") pod \"ceilometer-0\" (UID: \"0e29c839-bc8b-40cf-94be-7e9053e9ede8\") " pod="openstack/ceilometer-0" Nov 21 20:29:31 crc kubenswrapper[4727]: I1121 20:29:31.619147 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e29c839-bc8b-40cf-94be-7e9053e9ede8-scripts\") pod \"ceilometer-0\" (UID: \"0e29c839-bc8b-40cf-94be-7e9053e9ede8\") " pod="openstack/ceilometer-0" Nov 21 20:29:31 crc kubenswrapper[4727]: I1121 20:29:31.619566 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e29c839-bc8b-40cf-94be-7e9053e9ede8-config-data\") pod \"ceilometer-0\" (UID: \"0e29c839-bc8b-40cf-94be-7e9053e9ede8\") " pod="openstack/ceilometer-0" Nov 21 20:29:31 crc kubenswrapper[4727]: I1121 20:29:31.620191 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0e29c839-bc8b-40cf-94be-7e9053e9ede8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0e29c839-bc8b-40cf-94be-7e9053e9ede8\") " pod="openstack/ceilometer-0" Nov 21 20:29:31 crc kubenswrapper[4727]: I1121 20:29:31.620637 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0e29c839-bc8b-40cf-94be-7e9053e9ede8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0e29c839-bc8b-40cf-94be-7e9053e9ede8\") " pod="openstack/ceilometer-0" Nov 21 20:29:31 crc kubenswrapper[4727]: I1121 20:29:31.629877 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tg749\" (UniqueName: \"kubernetes.io/projected/0e29c839-bc8b-40cf-94be-7e9053e9ede8-kube-api-access-tg749\") pod \"ceilometer-0\" (UID: \"0e29c839-bc8b-40cf-94be-7e9053e9ede8\") " pod="openstack/ceilometer-0" Nov 21 20:29:31 crc kubenswrapper[4727]: I1121 20:29:31.719700 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 21 20:29:31 crc kubenswrapper[4727]: I1121 20:29:31.727403 4727 scope.go:117] "RemoveContainer" containerID="ea3151f57b4f88c80bb5c1098d42f306a926d97ff8b0eeb1f46c7ea59d3e9018" Nov 21 20:29:31 crc kubenswrapper[4727]: I1121 20:29:31.779435 4727 scope.go:117] "RemoveContainer" containerID="b8d294518064649c865d12ff17f905e598aea9d34e0c4efe78c8dd552d04e184" Nov 21 20:29:31 crc kubenswrapper[4727]: I1121 20:29:31.804297 4727 scope.go:117] "RemoveContainer" containerID="c96da29292fff77404118763744ff55a140910b4a01e39746a9902e952d7366b" Nov 21 20:29:31 crc kubenswrapper[4727]: E1121 20:29:31.804874 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c96da29292fff77404118763744ff55a140910b4a01e39746a9902e952d7366b\": container with ID starting with c96da29292fff77404118763744ff55a140910b4a01e39746a9902e952d7366b not found: ID does not exist" containerID="c96da29292fff77404118763744ff55a140910b4a01e39746a9902e952d7366b" Nov 21 20:29:31 crc kubenswrapper[4727]: I1121 20:29:31.804912 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c96da29292fff77404118763744ff55a140910b4a01e39746a9902e952d7366b"} err="failed to get container status \"c96da29292fff77404118763744ff55a140910b4a01e39746a9902e952d7366b\": rpc error: code = NotFound desc = could not find container \"c96da29292fff77404118763744ff55a140910b4a01e39746a9902e952d7366b\": container with ID starting with c96da29292fff77404118763744ff55a140910b4a01e39746a9902e952d7366b not found: ID does not exist" Nov 21 20:29:31 crc kubenswrapper[4727]: I1121 20:29:31.804941 4727 scope.go:117] "RemoveContainer" containerID="b4f393f31e7579df6443a05ef618c5e3d9593ef7438e33d562c7a6d5a6a5ccad" Nov 21 20:29:31 crc kubenswrapper[4727]: E1121 20:29:31.805339 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could 
not find container \"b4f393f31e7579df6443a05ef618c5e3d9593ef7438e33d562c7a6d5a6a5ccad\": container with ID starting with b4f393f31e7579df6443a05ef618c5e3d9593ef7438e33d562c7a6d5a6a5ccad not found: ID does not exist" containerID="b4f393f31e7579df6443a05ef618c5e3d9593ef7438e33d562c7a6d5a6a5ccad" Nov 21 20:29:31 crc kubenswrapper[4727]: I1121 20:29:31.805369 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4f393f31e7579df6443a05ef618c5e3d9593ef7438e33d562c7a6d5a6a5ccad"} err="failed to get container status \"b4f393f31e7579df6443a05ef618c5e3d9593ef7438e33d562c7a6d5a6a5ccad\": rpc error: code = NotFound desc = could not find container \"b4f393f31e7579df6443a05ef618c5e3d9593ef7438e33d562c7a6d5a6a5ccad\": container with ID starting with b4f393f31e7579df6443a05ef618c5e3d9593ef7438e33d562c7a6d5a6a5ccad not found: ID does not exist" Nov 21 20:29:31 crc kubenswrapper[4727]: I1121 20:29:31.805389 4727 scope.go:117] "RemoveContainer" containerID="ea3151f57b4f88c80bb5c1098d42f306a926d97ff8b0eeb1f46c7ea59d3e9018" Nov 21 20:29:31 crc kubenswrapper[4727]: E1121 20:29:31.805809 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea3151f57b4f88c80bb5c1098d42f306a926d97ff8b0eeb1f46c7ea59d3e9018\": container with ID starting with ea3151f57b4f88c80bb5c1098d42f306a926d97ff8b0eeb1f46c7ea59d3e9018 not found: ID does not exist" containerID="ea3151f57b4f88c80bb5c1098d42f306a926d97ff8b0eeb1f46c7ea59d3e9018" Nov 21 20:29:31 crc kubenswrapper[4727]: I1121 20:29:31.805886 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea3151f57b4f88c80bb5c1098d42f306a926d97ff8b0eeb1f46c7ea59d3e9018"} err="failed to get container status \"ea3151f57b4f88c80bb5c1098d42f306a926d97ff8b0eeb1f46c7ea59d3e9018\": rpc error: code = NotFound desc = could not find container \"ea3151f57b4f88c80bb5c1098d42f306a926d97ff8b0eeb1f46c7ea59d3e9018\": 
container with ID starting with ea3151f57b4f88c80bb5c1098d42f306a926d97ff8b0eeb1f46c7ea59d3e9018 not found: ID does not exist" Nov 21 20:29:31 crc kubenswrapper[4727]: I1121 20:29:31.805922 4727 scope.go:117] "RemoveContainer" containerID="b8d294518064649c865d12ff17f905e598aea9d34e0c4efe78c8dd552d04e184" Nov 21 20:29:31 crc kubenswrapper[4727]: E1121 20:29:31.806419 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8d294518064649c865d12ff17f905e598aea9d34e0c4efe78c8dd552d04e184\": container with ID starting with b8d294518064649c865d12ff17f905e598aea9d34e0c4efe78c8dd552d04e184 not found: ID does not exist" containerID="b8d294518064649c865d12ff17f905e598aea9d34e0c4efe78c8dd552d04e184" Nov 21 20:29:31 crc kubenswrapper[4727]: I1121 20:29:31.806468 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8d294518064649c865d12ff17f905e598aea9d34e0c4efe78c8dd552d04e184"} err="failed to get container status \"b8d294518064649c865d12ff17f905e598aea9d34e0c4efe78c8dd552d04e184\": rpc error: code = NotFound desc = could not find container \"b8d294518064649c865d12ff17f905e598aea9d34e0c4efe78c8dd552d04e184\": container with ID starting with b8d294518064649c865d12ff17f905e598aea9d34e0c4efe78c8dd552d04e184 not found: ID does not exist" Nov 21 20:29:32 crc kubenswrapper[4727]: I1121 20:29:32.206454 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 20:29:32 crc kubenswrapper[4727]: W1121 20:29:32.210590 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0e29c839_bc8b_40cf_94be_7e9053e9ede8.slice/crio-47650541053c5fbd9807b22dd4faae27107af9cd63370fac702baeab019d8d3d WatchSource:0}: Error finding container 47650541053c5fbd9807b22dd4faae27107af9cd63370fac702baeab019d8d3d: Status 404 returned error can't find the container with id 
47650541053c5fbd9807b22dd4faae27107af9cd63370fac702baeab019d8d3d Nov 21 20:29:32 crc kubenswrapper[4727]: I1121 20:29:32.237620 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0e29c839-bc8b-40cf-94be-7e9053e9ede8","Type":"ContainerStarted","Data":"47650541053c5fbd9807b22dd4faae27107af9cd63370fac702baeab019d8d3d"} Nov 21 20:29:32 crc kubenswrapper[4727]: I1121 20:29:32.240324 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2e586ddd-f8e8-4589-8b0f-bea576194a57","Type":"ContainerStarted","Data":"513dc3359b81e6d0a6b81cf2364f1d77ff09ab082a1bc558d745dbf4886bc4d8"} Nov 21 20:29:32 crc kubenswrapper[4727]: I1121 20:29:32.240470 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="2e586ddd-f8e8-4589-8b0f-bea576194a57" containerName="aodh-notifier" containerID="cri-o://6d0b879e49b72410a5eb0004bf3536e83a9a664df74861c622fdc096b2978515" gracePeriod=30 Nov 21 20:29:32 crc kubenswrapper[4727]: I1121 20:29:32.240493 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="2e586ddd-f8e8-4589-8b0f-bea576194a57" containerName="aodh-evaluator" containerID="cri-o://ab579a0f97dbc547529d223501fab1c20835d6ca06683d5c23a8a278aa15d242" gracePeriod=30 Nov 21 20:29:32 crc kubenswrapper[4727]: I1121 20:29:32.240476 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="2e586ddd-f8e8-4589-8b0f-bea576194a57" containerName="aodh-listener" containerID="cri-o://513dc3359b81e6d0a6b81cf2364f1d77ff09ab082a1bc558d745dbf4886bc4d8" gracePeriod=30 Nov 21 20:29:32 crc kubenswrapper[4727]: I1121 20:29:32.240454 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="2e586ddd-f8e8-4589-8b0f-bea576194a57" containerName="aodh-api" containerID="cri-o://9e28fce8206f14b48d75e951137186c76848b79f17b989ec2244f9965a404276" 
gracePeriod=30 Nov 21 20:29:32 crc kubenswrapper[4727]: I1121 20:29:32.273913 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=3.012273508 podStartE2EDuration="9.27387919s" podCreationTimestamp="2025-11-21 20:29:23 +0000 UTC" firstStartedPulling="2025-11-21 20:29:25.544438601 +0000 UTC m=+1370.730623645" lastFinishedPulling="2025-11-21 20:29:31.806044273 +0000 UTC m=+1376.992229327" observedRunningTime="2025-11-21 20:29:32.272664061 +0000 UTC m=+1377.458849115" watchObservedRunningTime="2025-11-21 20:29:32.27387919 +0000 UTC m=+1377.460064234" Nov 21 20:29:32 crc kubenswrapper[4727]: I1121 20:29:32.452178 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Nov 21 20:29:32 crc kubenswrapper[4727]: I1121 20:29:32.586205 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 21 20:29:33 crc kubenswrapper[4727]: I1121 20:29:33.268314 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0e29c839-bc8b-40cf-94be-7e9053e9ede8","Type":"ContainerStarted","Data":"3b619352ea8ace0cea4088e26bc6509462456c535c633b4c2f8905093437c825"} Nov 21 20:29:33 crc kubenswrapper[4727]: I1121 20:29:33.270989 4727 generic.go:334] "Generic (PLEG): container finished" podID="2e586ddd-f8e8-4589-8b0f-bea576194a57" containerID="6d0b879e49b72410a5eb0004bf3536e83a9a664df74861c622fdc096b2978515" exitCode=0 Nov 21 20:29:33 crc kubenswrapper[4727]: I1121 20:29:33.271027 4727 generic.go:334] "Generic (PLEG): container finished" podID="2e586ddd-f8e8-4589-8b0f-bea576194a57" containerID="ab579a0f97dbc547529d223501fab1c20835d6ca06683d5c23a8a278aa15d242" exitCode=0 Nov 21 20:29:33 crc kubenswrapper[4727]: I1121 20:29:33.271036 4727 generic.go:334] "Generic (PLEG): container finished" podID="2e586ddd-f8e8-4589-8b0f-bea576194a57" 
containerID="9e28fce8206f14b48d75e951137186c76848b79f17b989ec2244f9965a404276" exitCode=0 Nov 21 20:29:33 crc kubenswrapper[4727]: I1121 20:29:33.271062 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2e586ddd-f8e8-4589-8b0f-bea576194a57","Type":"ContainerDied","Data":"6d0b879e49b72410a5eb0004bf3536e83a9a664df74861c622fdc096b2978515"} Nov 21 20:29:33 crc kubenswrapper[4727]: I1121 20:29:33.271092 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2e586ddd-f8e8-4589-8b0f-bea576194a57","Type":"ContainerDied","Data":"ab579a0f97dbc547529d223501fab1c20835d6ca06683d5c23a8a278aa15d242"} Nov 21 20:29:33 crc kubenswrapper[4727]: I1121 20:29:33.271110 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2e586ddd-f8e8-4589-8b0f-bea576194a57","Type":"ContainerDied","Data":"9e28fce8206f14b48d75e951137186c76848b79f17b989ec2244f9965a404276"} Nov 21 20:29:34 crc kubenswrapper[4727]: I1121 20:29:34.289864 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0e29c839-bc8b-40cf-94be-7e9053e9ede8","Type":"ContainerStarted","Data":"48889b1dbdf1a3a99d4cde4a1f5c2f575e75f7c42e9fe45555183c45603914c1"} Nov 21 20:29:34 crc kubenswrapper[4727]: I1121 20:29:34.478276 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 21 20:29:34 crc kubenswrapper[4727]: I1121 20:29:34.478307 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 21 20:29:35 crc kubenswrapper[4727]: I1121 20:29:35.307938 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0e29c839-bc8b-40cf-94be-7e9053e9ede8","Type":"ContainerStarted","Data":"fd05f7eb1f6eed8cf7f3017994748be06c786ab6fb51d158bd79e29ea5581570"} Nov 21 20:29:35 crc kubenswrapper[4727]: I1121 20:29:35.492117 4727 prober.go:107] "Probe failed" 
probeType="Startup" pod="openstack/nova-metadata-0" podUID="26527e17-f1bb-4a28-a8ef-4b7fc0654e91" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.245:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 21 20:29:35 crc kubenswrapper[4727]: I1121 20:29:35.492153 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="26527e17-f1bb-4a28-a8ef-4b7fc0654e91" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.245:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 21 20:29:36 crc kubenswrapper[4727]: I1121 20:29:36.321462 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0e29c839-bc8b-40cf-94be-7e9053e9ede8","Type":"ContainerStarted","Data":"74df78d7f489350151159d9aa5f757eb3590739f130066707581b3810f391bed"} Nov 21 20:29:36 crc kubenswrapper[4727]: I1121 20:29:36.321812 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 21 20:29:36 crc kubenswrapper[4727]: I1121 20:29:36.365498 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.004655716 podStartE2EDuration="5.365482922s" podCreationTimestamp="2025-11-21 20:29:31 +0000 UTC" firstStartedPulling="2025-11-21 20:29:32.214591025 +0000 UTC m=+1377.400776069" lastFinishedPulling="2025-11-21 20:29:35.575418231 +0000 UTC m=+1380.761603275" observedRunningTime="2025-11-21 20:29:36.361801642 +0000 UTC m=+1381.547986706" watchObservedRunningTime="2025-11-21 20:29:36.365482922 +0000 UTC m=+1381.551667966" Nov 21 20:29:37 crc kubenswrapper[4727]: I1121 20:29:37.586012 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 21 20:29:37 crc kubenswrapper[4727]: I1121 20:29:37.634170 4727 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 21 20:29:37 crc kubenswrapper[4727]: I1121 20:29:37.699813 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 21 20:29:37 crc kubenswrapper[4727]: I1121 20:29:37.699860 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 21 20:29:38 crc kubenswrapper[4727]: I1121 20:29:38.374343 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 21 20:29:38 crc kubenswrapper[4727]: I1121 20:29:38.741237 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="84b9f247-2a55-4a6d-9b15-6919248aac8d" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.247:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 21 20:29:38 crc kubenswrapper[4727]: I1121 20:29:38.741292 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="84b9f247-2a55-4a6d-9b15-6919248aac8d" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.247:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 21 20:29:42 crc kubenswrapper[4727]: I1121 20:29:42.382839 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8bx8g"] Nov 21 20:29:42 crc kubenswrapper[4727]: I1121 20:29:42.386594 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8bx8g" Nov 21 20:29:42 crc kubenswrapper[4727]: I1121 20:29:42.403705 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8bx8g"] Nov 21 20:29:42 crc kubenswrapper[4727]: I1121 20:29:42.504882 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5nnx\" (UniqueName: \"kubernetes.io/projected/531d4889-3af4-4aca-b3ec-8bfa16484899-kube-api-access-m5nnx\") pod \"redhat-marketplace-8bx8g\" (UID: \"531d4889-3af4-4aca-b3ec-8bfa16484899\") " pod="openshift-marketplace/redhat-marketplace-8bx8g" Nov 21 20:29:42 crc kubenswrapper[4727]: I1121 20:29:42.505139 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/531d4889-3af4-4aca-b3ec-8bfa16484899-utilities\") pod \"redhat-marketplace-8bx8g\" (UID: \"531d4889-3af4-4aca-b3ec-8bfa16484899\") " pod="openshift-marketplace/redhat-marketplace-8bx8g" Nov 21 20:29:42 crc kubenswrapper[4727]: I1121 20:29:42.505202 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/531d4889-3af4-4aca-b3ec-8bfa16484899-catalog-content\") pod \"redhat-marketplace-8bx8g\" (UID: \"531d4889-3af4-4aca-b3ec-8bfa16484899\") " pod="openshift-marketplace/redhat-marketplace-8bx8g" Nov 21 20:29:42 crc kubenswrapper[4727]: I1121 20:29:42.608664 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/531d4889-3af4-4aca-b3ec-8bfa16484899-utilities\") pod \"redhat-marketplace-8bx8g\" (UID: \"531d4889-3af4-4aca-b3ec-8bfa16484899\") " pod="openshift-marketplace/redhat-marketplace-8bx8g" Nov 21 20:29:42 crc kubenswrapper[4727]: I1121 20:29:42.608783 4727 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/531d4889-3af4-4aca-b3ec-8bfa16484899-catalog-content\") pod \"redhat-marketplace-8bx8g\" (UID: \"531d4889-3af4-4aca-b3ec-8bfa16484899\") " pod="openshift-marketplace/redhat-marketplace-8bx8g" Nov 21 20:29:42 crc kubenswrapper[4727]: I1121 20:29:42.609121 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5nnx\" (UniqueName: \"kubernetes.io/projected/531d4889-3af4-4aca-b3ec-8bfa16484899-kube-api-access-m5nnx\") pod \"redhat-marketplace-8bx8g\" (UID: \"531d4889-3af4-4aca-b3ec-8bfa16484899\") " pod="openshift-marketplace/redhat-marketplace-8bx8g" Nov 21 20:29:42 crc kubenswrapper[4727]: I1121 20:29:42.609468 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/531d4889-3af4-4aca-b3ec-8bfa16484899-utilities\") pod \"redhat-marketplace-8bx8g\" (UID: \"531d4889-3af4-4aca-b3ec-8bfa16484899\") " pod="openshift-marketplace/redhat-marketplace-8bx8g" Nov 21 20:29:42 crc kubenswrapper[4727]: I1121 20:29:42.610574 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/531d4889-3af4-4aca-b3ec-8bfa16484899-catalog-content\") pod \"redhat-marketplace-8bx8g\" (UID: \"531d4889-3af4-4aca-b3ec-8bfa16484899\") " pod="openshift-marketplace/redhat-marketplace-8bx8g" Nov 21 20:29:42 crc kubenswrapper[4727]: I1121 20:29:42.637894 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5nnx\" (UniqueName: \"kubernetes.io/projected/531d4889-3af4-4aca-b3ec-8bfa16484899-kube-api-access-m5nnx\") pod \"redhat-marketplace-8bx8g\" (UID: \"531d4889-3af4-4aca-b3ec-8bfa16484899\") " pod="openshift-marketplace/redhat-marketplace-8bx8g" Nov 21 20:29:42 crc kubenswrapper[4727]: I1121 20:29:42.709774 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8bx8g" Nov 21 20:29:43 crc kubenswrapper[4727]: I1121 20:29:43.173724 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8bx8g"] Nov 21 20:29:43 crc kubenswrapper[4727]: I1121 20:29:43.337148 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 20:29:43 crc kubenswrapper[4727]: I1121 20:29:43.337221 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 20:29:43 crc kubenswrapper[4727]: I1121 20:29:43.396895 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8bx8g" event={"ID":"531d4889-3af4-4aca-b3ec-8bfa16484899","Type":"ContainerStarted","Data":"4b8df119d59cb4d48e8e245090d733872921ec9b0c1ca6d03698d6224b585604"} Nov 21 20:29:44 crc kubenswrapper[4727]: I1121 20:29:44.411184 4727 generic.go:334] "Generic (PLEG): container finished" podID="531d4889-3af4-4aca-b3ec-8bfa16484899" containerID="b8469e747c9780df3a0862cb41bdc8809f9e137f47f6ba91e771a718329b3bb9" exitCode=0 Nov 21 20:29:44 crc kubenswrapper[4727]: I1121 20:29:44.411229 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8bx8g" event={"ID":"531d4889-3af4-4aca-b3ec-8bfa16484899","Type":"ContainerDied","Data":"b8469e747c9780df3a0862cb41bdc8809f9e137f47f6ba91e771a718329b3bb9"} Nov 21 20:29:44 crc kubenswrapper[4727]: I1121 20:29:44.483784 4727 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 21 20:29:44 crc kubenswrapper[4727]: I1121 20:29:44.487749 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 21 20:29:44 crc kubenswrapper[4727]: I1121 20:29:44.490173 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 21 20:29:45 crc kubenswrapper[4727]: I1121 20:29:45.422798 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8bx8g" event={"ID":"531d4889-3af4-4aca-b3ec-8bfa16484899","Type":"ContainerStarted","Data":"8021bb932caaa09116bc21c9fa880824710dcebe7394a6b820a9b3692efbcad4"} Nov 21 20:29:45 crc kubenswrapper[4727]: I1121 20:29:45.430671 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 21 20:29:46 crc kubenswrapper[4727]: I1121 20:29:46.436482 4727 generic.go:334] "Generic (PLEG): container finished" podID="531d4889-3af4-4aca-b3ec-8bfa16484899" containerID="8021bb932caaa09116bc21c9fa880824710dcebe7394a6b820a9b3692efbcad4" exitCode=0 Nov 21 20:29:46 crc kubenswrapper[4727]: I1121 20:29:46.436569 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8bx8g" event={"ID":"531d4889-3af4-4aca-b3ec-8bfa16484899","Type":"ContainerDied","Data":"8021bb932caaa09116bc21c9fa880824710dcebe7394a6b820a9b3692efbcad4"} Nov 21 20:29:47 crc kubenswrapper[4727]: I1121 20:29:47.338330 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 21 20:29:47 crc kubenswrapper[4727]: I1121 20:29:47.447351 4727 generic.go:334] "Generic (PLEG): container finished" podID="60ea202f-6cf5-4e18-b267-1ea18cd187fd" containerID="156b5251b04842a68bbacfb3c504ed5c1cacec8f94d32933c5ed7c99693f518a" exitCode=137 Nov 21 20:29:47 crc kubenswrapper[4727]: I1121 20:29:47.447413 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"60ea202f-6cf5-4e18-b267-1ea18cd187fd","Type":"ContainerDied","Data":"156b5251b04842a68bbacfb3c504ed5c1cacec8f94d32933c5ed7c99693f518a"} Nov 21 20:29:47 crc kubenswrapper[4727]: I1121 20:29:47.447485 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"60ea202f-6cf5-4e18-b267-1ea18cd187fd","Type":"ContainerDied","Data":"d63e833131c02877d666ea245caac0748c4038961a6f249d4300f793e5ae5953"} Nov 21 20:29:47 crc kubenswrapper[4727]: I1121 20:29:47.447505 4727 scope.go:117] "RemoveContainer" containerID="156b5251b04842a68bbacfb3c504ed5c1cacec8f94d32933c5ed7c99693f518a" Nov 21 20:29:47 crc kubenswrapper[4727]: I1121 20:29:47.447638 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 21 20:29:47 crc kubenswrapper[4727]: I1121 20:29:47.448802 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xm5z7\" (UniqueName: \"kubernetes.io/projected/60ea202f-6cf5-4e18-b267-1ea18cd187fd-kube-api-access-xm5z7\") pod \"60ea202f-6cf5-4e18-b267-1ea18cd187fd\" (UID: \"60ea202f-6cf5-4e18-b267-1ea18cd187fd\") " Nov 21 20:29:47 crc kubenswrapper[4727]: I1121 20:29:47.449118 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60ea202f-6cf5-4e18-b267-1ea18cd187fd-config-data\") pod \"60ea202f-6cf5-4e18-b267-1ea18cd187fd\" (UID: \"60ea202f-6cf5-4e18-b267-1ea18cd187fd\") " Nov 21 20:29:47 crc kubenswrapper[4727]: I1121 20:29:47.449270 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60ea202f-6cf5-4e18-b267-1ea18cd187fd-combined-ca-bundle\") pod \"60ea202f-6cf5-4e18-b267-1ea18cd187fd\" (UID: \"60ea202f-6cf5-4e18-b267-1ea18cd187fd\") " Nov 21 20:29:47 crc kubenswrapper[4727]: I1121 20:29:47.456992 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60ea202f-6cf5-4e18-b267-1ea18cd187fd-kube-api-access-xm5z7" (OuterVolumeSpecName: "kube-api-access-xm5z7") pod "60ea202f-6cf5-4e18-b267-1ea18cd187fd" (UID: "60ea202f-6cf5-4e18-b267-1ea18cd187fd"). InnerVolumeSpecName "kube-api-access-xm5z7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:29:47 crc kubenswrapper[4727]: I1121 20:29:47.459163 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8bx8g" event={"ID":"531d4889-3af4-4aca-b3ec-8bfa16484899","Type":"ContainerStarted","Data":"23391f8e25aaf1a710d4f755b1baac872767584cb3c7e1f714ddcee098a950b6"} Nov 21 20:29:47 crc kubenswrapper[4727]: I1121 20:29:47.483453 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8bx8g" podStartSLOduration=2.8462068609999998 podStartE2EDuration="5.483434394s" podCreationTimestamp="2025-11-21 20:29:42 +0000 UTC" firstStartedPulling="2025-11-21 20:29:44.414288944 +0000 UTC m=+1389.600474008" lastFinishedPulling="2025-11-21 20:29:47.051516487 +0000 UTC m=+1392.237701541" observedRunningTime="2025-11-21 20:29:47.474693933 +0000 UTC m=+1392.660878987" watchObservedRunningTime="2025-11-21 20:29:47.483434394 +0000 UTC m=+1392.669619438" Nov 21 20:29:47 crc kubenswrapper[4727]: I1121 20:29:47.495554 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60ea202f-6cf5-4e18-b267-1ea18cd187fd-config-data" (OuterVolumeSpecName: "config-data") pod "60ea202f-6cf5-4e18-b267-1ea18cd187fd" (UID: "60ea202f-6cf5-4e18-b267-1ea18cd187fd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:29:47 crc kubenswrapper[4727]: I1121 20:29:47.495989 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60ea202f-6cf5-4e18-b267-1ea18cd187fd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "60ea202f-6cf5-4e18-b267-1ea18cd187fd" (UID: "60ea202f-6cf5-4e18-b267-1ea18cd187fd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:29:47 crc kubenswrapper[4727]: I1121 20:29:47.540852 4727 scope.go:117] "RemoveContainer" containerID="156b5251b04842a68bbacfb3c504ed5c1cacec8f94d32933c5ed7c99693f518a" Nov 21 20:29:47 crc kubenswrapper[4727]: E1121 20:29:47.541545 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"156b5251b04842a68bbacfb3c504ed5c1cacec8f94d32933c5ed7c99693f518a\": container with ID starting with 156b5251b04842a68bbacfb3c504ed5c1cacec8f94d32933c5ed7c99693f518a not found: ID does not exist" containerID="156b5251b04842a68bbacfb3c504ed5c1cacec8f94d32933c5ed7c99693f518a" Nov 21 20:29:47 crc kubenswrapper[4727]: I1121 20:29:47.541578 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"156b5251b04842a68bbacfb3c504ed5c1cacec8f94d32933c5ed7c99693f518a"} err="failed to get container status \"156b5251b04842a68bbacfb3c504ed5c1cacec8f94d32933c5ed7c99693f518a\": rpc error: code = NotFound desc = could not find container \"156b5251b04842a68bbacfb3c504ed5c1cacec8f94d32933c5ed7c99693f518a\": container with ID starting with 156b5251b04842a68bbacfb3c504ed5c1cacec8f94d32933c5ed7c99693f518a not found: ID does not exist" Nov 21 20:29:47 crc kubenswrapper[4727]: I1121 20:29:47.552677 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xm5z7\" (UniqueName: \"kubernetes.io/projected/60ea202f-6cf5-4e18-b267-1ea18cd187fd-kube-api-access-xm5z7\") on node \"crc\" DevicePath \"\"" Nov 21 20:29:47 crc kubenswrapper[4727]: I1121 20:29:47.552703 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60ea202f-6cf5-4e18-b267-1ea18cd187fd-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 20:29:47 crc kubenswrapper[4727]: I1121 20:29:47.552713 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/60ea202f-6cf5-4e18-b267-1ea18cd187fd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:29:47 crc kubenswrapper[4727]: I1121 20:29:47.703817 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 21 20:29:47 crc kubenswrapper[4727]: I1121 20:29:47.703911 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 21 20:29:47 crc kubenswrapper[4727]: I1121 20:29:47.704399 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 21 20:29:47 crc kubenswrapper[4727]: I1121 20:29:47.704448 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 21 20:29:47 crc kubenswrapper[4727]: I1121 20:29:47.706546 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 21 20:29:47 crc kubenswrapper[4727]: I1121 20:29:47.707105 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 21 20:29:47 crc kubenswrapper[4727]: I1121 20:29:47.830672 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 21 20:29:47 crc kubenswrapper[4727]: I1121 20:29:47.904024 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 21 20:29:47 crc kubenswrapper[4727]: I1121 20:29:47.921119 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 21 20:29:47 crc kubenswrapper[4727]: E1121 20:29:47.922723 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60ea202f-6cf5-4e18-b267-1ea18cd187fd" containerName="nova-cell1-novncproxy-novncproxy" Nov 21 20:29:47 crc kubenswrapper[4727]: I1121 20:29:47.922762 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="60ea202f-6cf5-4e18-b267-1ea18cd187fd" containerName="nova-cell1-novncproxy-novncproxy" Nov 
21 20:29:47 crc kubenswrapper[4727]: I1121 20:29:47.923209 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="60ea202f-6cf5-4e18-b267-1ea18cd187fd" containerName="nova-cell1-novncproxy-novncproxy" Nov 21 20:29:47 crc kubenswrapper[4727]: I1121 20:29:47.924376 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 21 20:29:47 crc kubenswrapper[4727]: I1121 20:29:47.928992 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Nov 21 20:29:47 crc kubenswrapper[4727]: I1121 20:29:47.934377 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 21 20:29:47 crc kubenswrapper[4727]: I1121 20:29:47.934624 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Nov 21 20:29:47 crc kubenswrapper[4727]: I1121 20:29:47.935161 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 21 20:29:47 crc kubenswrapper[4727]: I1121 20:29:47.947725 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-dmspm"] Nov 21 20:29:47 crc kubenswrapper[4727]: I1121 20:29:47.949859 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-dmspm" Nov 21 20:29:47 crc kubenswrapper[4727]: I1121 20:29:47.962379 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-dmspm"] Nov 21 20:29:48 crc kubenswrapper[4727]: I1121 20:29:48.081614 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ae1103e-a174-4ed8-9f4b-726eb2198bb7-config\") pod \"dnsmasq-dns-f84f9ccf-dmspm\" (UID: \"8ae1103e-a174-4ed8-9f4b-726eb2198bb7\") " pod="openstack/dnsmasq-dns-f84f9ccf-dmspm" Nov 21 20:29:48 crc kubenswrapper[4727]: I1121 20:29:48.081710 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a519050-c5ec-4a64-9280-65e6b3f299b8-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4a519050-c5ec-4a64-9280-65e6b3f299b8\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 20:29:48 crc kubenswrapper[4727]: I1121 20:29:48.081757 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8ae1103e-a174-4ed8-9f4b-726eb2198bb7-ovsdbserver-sb\") pod \"dnsmasq-dns-f84f9ccf-dmspm\" (UID: \"8ae1103e-a174-4ed8-9f4b-726eb2198bb7\") " pod="openstack/dnsmasq-dns-f84f9ccf-dmspm" Nov 21 20:29:48 crc kubenswrapper[4727]: I1121 20:29:48.081817 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8ae1103e-a174-4ed8-9f4b-726eb2198bb7-dns-swift-storage-0\") pod \"dnsmasq-dns-f84f9ccf-dmspm\" (UID: \"8ae1103e-a174-4ed8-9f4b-726eb2198bb7\") " pod="openstack/dnsmasq-dns-f84f9ccf-dmspm" Nov 21 20:29:48 crc kubenswrapper[4727]: I1121 20:29:48.081854 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-77gpz\" (UniqueName: \"kubernetes.io/projected/4a519050-c5ec-4a64-9280-65e6b3f299b8-kube-api-access-77gpz\") pod \"nova-cell1-novncproxy-0\" (UID: \"4a519050-c5ec-4a64-9280-65e6b3f299b8\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 20:29:48 crc kubenswrapper[4727]: I1121 20:29:48.081880 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8ae1103e-a174-4ed8-9f4b-726eb2198bb7-ovsdbserver-nb\") pod \"dnsmasq-dns-f84f9ccf-dmspm\" (UID: \"8ae1103e-a174-4ed8-9f4b-726eb2198bb7\") " pod="openstack/dnsmasq-dns-f84f9ccf-dmspm" Nov 21 20:29:48 crc kubenswrapper[4727]: I1121 20:29:48.081905 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a519050-c5ec-4a64-9280-65e6b3f299b8-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4a519050-c5ec-4a64-9280-65e6b3f299b8\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 20:29:48 crc kubenswrapper[4727]: I1121 20:29:48.081934 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a519050-c5ec-4a64-9280-65e6b3f299b8-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4a519050-c5ec-4a64-9280-65e6b3f299b8\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 20:29:48 crc kubenswrapper[4727]: I1121 20:29:48.081981 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rk9r\" (UniqueName: \"kubernetes.io/projected/8ae1103e-a174-4ed8-9f4b-726eb2198bb7-kube-api-access-7rk9r\") pod \"dnsmasq-dns-f84f9ccf-dmspm\" (UID: \"8ae1103e-a174-4ed8-9f4b-726eb2198bb7\") " pod="openstack/dnsmasq-dns-f84f9ccf-dmspm" Nov 21 20:29:48 crc kubenswrapper[4727]: I1121 20:29:48.082007 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ae1103e-a174-4ed8-9f4b-726eb2198bb7-dns-svc\") pod \"dnsmasq-dns-f84f9ccf-dmspm\" (UID: \"8ae1103e-a174-4ed8-9f4b-726eb2198bb7\") " pod="openstack/dnsmasq-dns-f84f9ccf-dmspm" Nov 21 20:29:48 crc kubenswrapper[4727]: I1121 20:29:48.082027 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a519050-c5ec-4a64-9280-65e6b3f299b8-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"4a519050-c5ec-4a64-9280-65e6b3f299b8\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 20:29:48 crc kubenswrapper[4727]: I1121 20:29:48.185014 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ae1103e-a174-4ed8-9f4b-726eb2198bb7-config\") pod \"dnsmasq-dns-f84f9ccf-dmspm\" (UID: \"8ae1103e-a174-4ed8-9f4b-726eb2198bb7\") " pod="openstack/dnsmasq-dns-f84f9ccf-dmspm" Nov 21 20:29:48 crc kubenswrapper[4727]: I1121 20:29:48.185332 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a519050-c5ec-4a64-9280-65e6b3f299b8-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4a519050-c5ec-4a64-9280-65e6b3f299b8\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 20:29:48 crc kubenswrapper[4727]: I1121 20:29:48.185375 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8ae1103e-a174-4ed8-9f4b-726eb2198bb7-ovsdbserver-sb\") pod \"dnsmasq-dns-f84f9ccf-dmspm\" (UID: \"8ae1103e-a174-4ed8-9f4b-726eb2198bb7\") " pod="openstack/dnsmasq-dns-f84f9ccf-dmspm" Nov 21 20:29:48 crc kubenswrapper[4727]: I1121 20:29:48.185436 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8ae1103e-a174-4ed8-9f4b-726eb2198bb7-dns-swift-storage-0\") pod \"dnsmasq-dns-f84f9ccf-dmspm\" (UID: \"8ae1103e-a174-4ed8-9f4b-726eb2198bb7\") " pod="openstack/dnsmasq-dns-f84f9ccf-dmspm" Nov 21 20:29:48 crc kubenswrapper[4727]: I1121 20:29:48.185463 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77gpz\" (UniqueName: \"kubernetes.io/projected/4a519050-c5ec-4a64-9280-65e6b3f299b8-kube-api-access-77gpz\") pod \"nova-cell1-novncproxy-0\" (UID: \"4a519050-c5ec-4a64-9280-65e6b3f299b8\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 20:29:48 crc kubenswrapper[4727]: I1121 20:29:48.185487 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8ae1103e-a174-4ed8-9f4b-726eb2198bb7-ovsdbserver-nb\") pod \"dnsmasq-dns-f84f9ccf-dmspm\" (UID: \"8ae1103e-a174-4ed8-9f4b-726eb2198bb7\") " pod="openstack/dnsmasq-dns-f84f9ccf-dmspm" Nov 21 20:29:48 crc kubenswrapper[4727]: I1121 20:29:48.185515 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a519050-c5ec-4a64-9280-65e6b3f299b8-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4a519050-c5ec-4a64-9280-65e6b3f299b8\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 20:29:48 crc kubenswrapper[4727]: I1121 20:29:48.185549 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a519050-c5ec-4a64-9280-65e6b3f299b8-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4a519050-c5ec-4a64-9280-65e6b3f299b8\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 20:29:48 crc kubenswrapper[4727]: I1121 20:29:48.185585 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rk9r\" (UniqueName: 
\"kubernetes.io/projected/8ae1103e-a174-4ed8-9f4b-726eb2198bb7-kube-api-access-7rk9r\") pod \"dnsmasq-dns-f84f9ccf-dmspm\" (UID: \"8ae1103e-a174-4ed8-9f4b-726eb2198bb7\") " pod="openstack/dnsmasq-dns-f84f9ccf-dmspm" Nov 21 20:29:48 crc kubenswrapper[4727]: I1121 20:29:48.185621 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ae1103e-a174-4ed8-9f4b-726eb2198bb7-dns-svc\") pod \"dnsmasq-dns-f84f9ccf-dmspm\" (UID: \"8ae1103e-a174-4ed8-9f4b-726eb2198bb7\") " pod="openstack/dnsmasq-dns-f84f9ccf-dmspm" Nov 21 20:29:48 crc kubenswrapper[4727]: I1121 20:29:48.185651 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a519050-c5ec-4a64-9280-65e6b3f299b8-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"4a519050-c5ec-4a64-9280-65e6b3f299b8\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 20:29:48 crc kubenswrapper[4727]: I1121 20:29:48.186407 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ae1103e-a174-4ed8-9f4b-726eb2198bb7-config\") pod \"dnsmasq-dns-f84f9ccf-dmspm\" (UID: \"8ae1103e-a174-4ed8-9f4b-726eb2198bb7\") " pod="openstack/dnsmasq-dns-f84f9ccf-dmspm" Nov 21 20:29:48 crc kubenswrapper[4727]: I1121 20:29:48.186669 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8ae1103e-a174-4ed8-9f4b-726eb2198bb7-dns-swift-storage-0\") pod \"dnsmasq-dns-f84f9ccf-dmspm\" (UID: \"8ae1103e-a174-4ed8-9f4b-726eb2198bb7\") " pod="openstack/dnsmasq-dns-f84f9ccf-dmspm" Nov 21 20:29:48 crc kubenswrapper[4727]: I1121 20:29:48.186843 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8ae1103e-a174-4ed8-9f4b-726eb2198bb7-ovsdbserver-sb\") pod 
\"dnsmasq-dns-f84f9ccf-dmspm\" (UID: \"8ae1103e-a174-4ed8-9f4b-726eb2198bb7\") " pod="openstack/dnsmasq-dns-f84f9ccf-dmspm" Nov 21 20:29:48 crc kubenswrapper[4727]: I1121 20:29:48.187571 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8ae1103e-a174-4ed8-9f4b-726eb2198bb7-ovsdbserver-nb\") pod \"dnsmasq-dns-f84f9ccf-dmspm\" (UID: \"8ae1103e-a174-4ed8-9f4b-726eb2198bb7\") " pod="openstack/dnsmasq-dns-f84f9ccf-dmspm" Nov 21 20:29:48 crc kubenswrapper[4727]: I1121 20:29:48.191706 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a519050-c5ec-4a64-9280-65e6b3f299b8-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4a519050-c5ec-4a64-9280-65e6b3f299b8\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 20:29:48 crc kubenswrapper[4727]: I1121 20:29:48.191745 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a519050-c5ec-4a64-9280-65e6b3f299b8-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4a519050-c5ec-4a64-9280-65e6b3f299b8\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 20:29:48 crc kubenswrapper[4727]: I1121 20:29:48.195269 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ae1103e-a174-4ed8-9f4b-726eb2198bb7-dns-svc\") pod \"dnsmasq-dns-f84f9ccf-dmspm\" (UID: \"8ae1103e-a174-4ed8-9f4b-726eb2198bb7\") " pod="openstack/dnsmasq-dns-f84f9ccf-dmspm" Nov 21 20:29:48 crc kubenswrapper[4727]: I1121 20:29:48.202636 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a519050-c5ec-4a64-9280-65e6b3f299b8-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4a519050-c5ec-4a64-9280-65e6b3f299b8\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 
20:29:48 crc kubenswrapper[4727]: I1121 20:29:48.203472 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77gpz\" (UniqueName: \"kubernetes.io/projected/4a519050-c5ec-4a64-9280-65e6b3f299b8-kube-api-access-77gpz\") pod \"nova-cell1-novncproxy-0\" (UID: \"4a519050-c5ec-4a64-9280-65e6b3f299b8\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 20:29:48 crc kubenswrapper[4727]: I1121 20:29:48.203683 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a519050-c5ec-4a64-9280-65e6b3f299b8-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"4a519050-c5ec-4a64-9280-65e6b3f299b8\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 20:29:48 crc kubenswrapper[4727]: I1121 20:29:48.206326 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rk9r\" (UniqueName: \"kubernetes.io/projected/8ae1103e-a174-4ed8-9f4b-726eb2198bb7-kube-api-access-7rk9r\") pod \"dnsmasq-dns-f84f9ccf-dmspm\" (UID: \"8ae1103e-a174-4ed8-9f4b-726eb2198bb7\") " pod="openstack/dnsmasq-dns-f84f9ccf-dmspm" Nov 21 20:29:48 crc kubenswrapper[4727]: I1121 20:29:48.284870 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 21 20:29:48 crc kubenswrapper[4727]: I1121 20:29:48.301128 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-dmspm" Nov 21 20:29:48 crc kubenswrapper[4727]: W1121 20:29:48.933307 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ae1103e_a174_4ed8_9f4b_726eb2198bb7.slice/crio-ab1442bf8ee47f6bed6f9f9f613f3c8d83a7f25d74b43d81b2123047fe06dd3b WatchSource:0}: Error finding container ab1442bf8ee47f6bed6f9f9f613f3c8d83a7f25d74b43d81b2123047fe06dd3b: Status 404 returned error can't find the container with id ab1442bf8ee47f6bed6f9f9f613f3c8d83a7f25d74b43d81b2123047fe06dd3b Nov 21 20:29:48 crc kubenswrapper[4727]: I1121 20:29:48.951642 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-dmspm"] Nov 21 20:29:48 crc kubenswrapper[4727]: I1121 20:29:48.975893 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 21 20:29:49 crc kubenswrapper[4727]: I1121 20:29:49.534328 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60ea202f-6cf5-4e18-b267-1ea18cd187fd" path="/var/lib/kubelet/pods/60ea202f-6cf5-4e18-b267-1ea18cd187fd/volumes" Nov 21 20:29:49 crc kubenswrapper[4727]: I1121 20:29:49.535826 4727 generic.go:334] "Generic (PLEG): container finished" podID="8ae1103e-a174-4ed8-9f4b-726eb2198bb7" containerID="97ad06ae799108662d96447ce658da9250c0e6a78b7c4718b0110f9b5785ed53" exitCode=0 Nov 21 20:29:49 crc kubenswrapper[4727]: I1121 20:29:49.535930 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-dmspm" event={"ID":"8ae1103e-a174-4ed8-9f4b-726eb2198bb7","Type":"ContainerDied","Data":"97ad06ae799108662d96447ce658da9250c0e6a78b7c4718b0110f9b5785ed53"} Nov 21 20:29:49 crc kubenswrapper[4727]: I1121 20:29:49.536014 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-dmspm" 
event={"ID":"8ae1103e-a174-4ed8-9f4b-726eb2198bb7","Type":"ContainerStarted","Data":"ab1442bf8ee47f6bed6f9f9f613f3c8d83a7f25d74b43d81b2123047fe06dd3b"} Nov 21 20:29:49 crc kubenswrapper[4727]: I1121 20:29:49.540384 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4a519050-c5ec-4a64-9280-65e6b3f299b8","Type":"ContainerStarted","Data":"1bc43d211b8a3bb2984cae0c201b17d4c402784034040d3df32b1e3f71096102"} Nov 21 20:29:49 crc kubenswrapper[4727]: I1121 20:29:49.540531 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4a519050-c5ec-4a64-9280-65e6b3f299b8","Type":"ContainerStarted","Data":"8e5866bb4ac76c99c6c72a6dce77d5c34bc7f7fa0c0cf8d521197d79869ac889"} Nov 21 20:29:49 crc kubenswrapper[4727]: I1121 20:29:49.598496 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.598476245 podStartE2EDuration="2.598476245s" podCreationTimestamp="2025-11-21 20:29:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:29:49.592337137 +0000 UTC m=+1394.778522181" watchObservedRunningTime="2025-11-21 20:29:49.598476245 +0000 UTC m=+1394.784661289" Nov 21 20:29:50 crc kubenswrapper[4727]: I1121 20:29:50.580053 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-dmspm" event={"ID":"8ae1103e-a174-4ed8-9f4b-726eb2198bb7","Type":"ContainerStarted","Data":"fe9a61ebed3ed86566df9afed4b9fc9b75c3c6e801f108da3b00dac2c1550a72"} Nov 21 20:29:50 crc kubenswrapper[4727]: I1121 20:29:50.581478 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-f84f9ccf-dmspm" Nov 21 20:29:50 crc kubenswrapper[4727]: I1121 20:29:50.584053 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 21 20:29:50 crc 
kubenswrapper[4727]: I1121 20:29:50.584392 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="84b9f247-2a55-4a6d-9b15-6919248aac8d" containerName="nova-api-log" containerID="cri-o://a53d29bc083364fd9a67ceb97c0631e64487116260554eda3a8f95fcf8d73e5e" gracePeriod=30 Nov 21 20:29:50 crc kubenswrapper[4727]: I1121 20:29:50.584629 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="84b9f247-2a55-4a6d-9b15-6919248aac8d" containerName="nova-api-api" containerID="cri-o://08aef35b9fe94772ab1b331f12520da4e3b8687d0bd927769924f9ab3ce36ebe" gracePeriod=30 Nov 21 20:29:50 crc kubenswrapper[4727]: I1121 20:29:50.622238 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-f84f9ccf-dmspm" podStartSLOduration=3.6222156180000002 podStartE2EDuration="3.622215618s" podCreationTimestamp="2025-11-21 20:29:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:29:50.612472152 +0000 UTC m=+1395.798657186" watchObservedRunningTime="2025-11-21 20:29:50.622215618 +0000 UTC m=+1395.808400652" Nov 21 20:29:50 crc kubenswrapper[4727]: I1121 20:29:50.723211 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 21 20:29:50 crc kubenswrapper[4727]: I1121 20:29:50.723689 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0e29c839-bc8b-40cf-94be-7e9053e9ede8" containerName="ceilometer-central-agent" containerID="cri-o://3b619352ea8ace0cea4088e26bc6509462456c535c633b4c2f8905093437c825" gracePeriod=30 Nov 21 20:29:50 crc kubenswrapper[4727]: I1121 20:29:50.723850 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0e29c839-bc8b-40cf-94be-7e9053e9ede8" 
containerName="ceilometer-notification-agent" containerID="cri-o://48889b1dbdf1a3a99d4cde4a1f5c2f575e75f7c42e9fe45555183c45603914c1" gracePeriod=30 Nov 21 20:29:50 crc kubenswrapper[4727]: I1121 20:29:50.723917 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0e29c839-bc8b-40cf-94be-7e9053e9ede8" containerName="proxy-httpd" containerID="cri-o://74df78d7f489350151159d9aa5f757eb3590739f130066707581b3810f391bed" gracePeriod=30 Nov 21 20:29:50 crc kubenswrapper[4727]: I1121 20:29:50.723809 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0e29c839-bc8b-40cf-94be-7e9053e9ede8" containerName="sg-core" containerID="cri-o://fd05f7eb1f6eed8cf7f3017994748be06c786ab6fb51d158bd79e29ea5581570" gracePeriod=30 Nov 21 20:29:50 crc kubenswrapper[4727]: I1121 20:29:50.741003 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="0e29c839-bc8b-40cf-94be-7e9053e9ede8" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.248:3000/\": read tcp 10.217.0.2:53756->10.217.0.248:3000: read: connection reset by peer" Nov 21 20:29:51 crc kubenswrapper[4727]: I1121 20:29:51.594334 4727 generic.go:334] "Generic (PLEG): container finished" podID="0e29c839-bc8b-40cf-94be-7e9053e9ede8" containerID="74df78d7f489350151159d9aa5f757eb3590739f130066707581b3810f391bed" exitCode=0 Nov 21 20:29:51 crc kubenswrapper[4727]: I1121 20:29:51.594504 4727 generic.go:334] "Generic (PLEG): container finished" podID="0e29c839-bc8b-40cf-94be-7e9053e9ede8" containerID="fd05f7eb1f6eed8cf7f3017994748be06c786ab6fb51d158bd79e29ea5581570" exitCode=2 Nov 21 20:29:51 crc kubenswrapper[4727]: I1121 20:29:51.594514 4727 generic.go:334] "Generic (PLEG): container finished" podID="0e29c839-bc8b-40cf-94be-7e9053e9ede8" containerID="3b619352ea8ace0cea4088e26bc6509462456c535c633b4c2f8905093437c825" exitCode=0 Nov 21 20:29:51 crc 
kubenswrapper[4727]: I1121 20:29:51.594586 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0e29c839-bc8b-40cf-94be-7e9053e9ede8","Type":"ContainerDied","Data":"74df78d7f489350151159d9aa5f757eb3590739f130066707581b3810f391bed"} Nov 21 20:29:51 crc kubenswrapper[4727]: I1121 20:29:51.594615 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0e29c839-bc8b-40cf-94be-7e9053e9ede8","Type":"ContainerDied","Data":"fd05f7eb1f6eed8cf7f3017994748be06c786ab6fb51d158bd79e29ea5581570"} Nov 21 20:29:51 crc kubenswrapper[4727]: I1121 20:29:51.594625 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0e29c839-bc8b-40cf-94be-7e9053e9ede8","Type":"ContainerDied","Data":"3b619352ea8ace0cea4088e26bc6509462456c535c633b4c2f8905093437c825"} Nov 21 20:29:51 crc kubenswrapper[4727]: I1121 20:29:51.598605 4727 generic.go:334] "Generic (PLEG): container finished" podID="84b9f247-2a55-4a6d-9b15-6919248aac8d" containerID="a53d29bc083364fd9a67ceb97c0631e64487116260554eda3a8f95fcf8d73e5e" exitCode=143 Nov 21 20:29:51 crc kubenswrapper[4727]: I1121 20:29:51.598919 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"84b9f247-2a55-4a6d-9b15-6919248aac8d","Type":"ContainerDied","Data":"a53d29bc083364fd9a67ceb97c0631e64487116260554eda3a8f95fcf8d73e5e"} Nov 21 20:29:52 crc kubenswrapper[4727]: I1121 20:29:52.710463 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8bx8g" Nov 21 20:29:52 crc kubenswrapper[4727]: I1121 20:29:52.710802 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8bx8g" Nov 21 20:29:52 crc kubenswrapper[4727]: I1121 20:29:52.759092 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8bx8g" Nov 21 
20:29:53 crc kubenswrapper[4727]: I1121 20:29:53.302372 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 21 20:29:53 crc kubenswrapper[4727]: I1121 20:29:53.703683 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8bx8g" Nov 21 20:29:53 crc kubenswrapper[4727]: I1121 20:29:53.766369 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8bx8g"] Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.270833 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.462012 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84b9f247-2a55-4a6d-9b15-6919248aac8d-combined-ca-bundle\") pod \"84b9f247-2a55-4a6d-9b15-6919248aac8d\" (UID: \"84b9f247-2a55-4a6d-9b15-6919248aac8d\") " Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.462359 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfbv8\" (UniqueName: \"kubernetes.io/projected/84b9f247-2a55-4a6d-9b15-6919248aac8d-kube-api-access-qfbv8\") pod \"84b9f247-2a55-4a6d-9b15-6919248aac8d\" (UID: \"84b9f247-2a55-4a6d-9b15-6919248aac8d\") " Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.462385 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84b9f247-2a55-4a6d-9b15-6919248aac8d-config-data\") pod \"84b9f247-2a55-4a6d-9b15-6919248aac8d\" (UID: \"84b9f247-2a55-4a6d-9b15-6919248aac8d\") " Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.462471 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/84b9f247-2a55-4a6d-9b15-6919248aac8d-logs\") pod \"84b9f247-2a55-4a6d-9b15-6919248aac8d\" (UID: \"84b9f247-2a55-4a6d-9b15-6919248aac8d\") " Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.463306 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84b9f247-2a55-4a6d-9b15-6919248aac8d-logs" (OuterVolumeSpecName: "logs") pod "84b9f247-2a55-4a6d-9b15-6919248aac8d" (UID: "84b9f247-2a55-4a6d-9b15-6919248aac8d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.470208 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84b9f247-2a55-4a6d-9b15-6919248aac8d-kube-api-access-qfbv8" (OuterVolumeSpecName: "kube-api-access-qfbv8") pod "84b9f247-2a55-4a6d-9b15-6919248aac8d" (UID: "84b9f247-2a55-4a6d-9b15-6919248aac8d"). InnerVolumeSpecName "kube-api-access-qfbv8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.496575 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84b9f247-2a55-4a6d-9b15-6919248aac8d-config-data" (OuterVolumeSpecName: "config-data") pod "84b9f247-2a55-4a6d-9b15-6919248aac8d" (UID: "84b9f247-2a55-4a6d-9b15-6919248aac8d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.526403 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84b9f247-2a55-4a6d-9b15-6919248aac8d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "84b9f247-2a55-4a6d-9b15-6919248aac8d" (UID: "84b9f247-2a55-4a6d-9b15-6919248aac8d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.566370 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qfbv8\" (UniqueName: \"kubernetes.io/projected/84b9f247-2a55-4a6d-9b15-6919248aac8d-kube-api-access-qfbv8\") on node \"crc\" DevicePath \"\"" Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.566404 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84b9f247-2a55-4a6d-9b15-6919248aac8d-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.566423 4727 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84b9f247-2a55-4a6d-9b15-6919248aac8d-logs\") on node \"crc\" DevicePath \"\"" Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.566432 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84b9f247-2a55-4a6d-9b15-6919248aac8d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.643428 4727 generic.go:334] "Generic (PLEG): container finished" podID="0e29c839-bc8b-40cf-94be-7e9053e9ede8" containerID="48889b1dbdf1a3a99d4cde4a1f5c2f575e75f7c42e9fe45555183c45603914c1" exitCode=0 Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.643573 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0e29c839-bc8b-40cf-94be-7e9053e9ede8","Type":"ContainerDied","Data":"48889b1dbdf1a3a99d4cde4a1f5c2f575e75f7c42e9fe45555183c45603914c1"} Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.650498 4727 generic.go:334] "Generic (PLEG): container finished" podID="84b9f247-2a55-4a6d-9b15-6919248aac8d" containerID="08aef35b9fe94772ab1b331f12520da4e3b8687d0bd927769924f9ab3ce36ebe" exitCode=0 Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.650607 4727 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"84b9f247-2a55-4a6d-9b15-6919248aac8d","Type":"ContainerDied","Data":"08aef35b9fe94772ab1b331f12520da4e3b8687d0bd927769924f9ab3ce36ebe"} Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.650589 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.650678 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"84b9f247-2a55-4a6d-9b15-6919248aac8d","Type":"ContainerDied","Data":"023788f693efa5f6765b548abc4b5d572b5739df6fcd7ac105433e1f56a6b111"} Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.650704 4727 scope.go:117] "RemoveContainer" containerID="08aef35b9fe94772ab1b331f12520da4e3b8687d0bd927769924f9ab3ce36ebe" Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.681204 4727 scope.go:117] "RemoveContainer" containerID="a53d29bc083364fd9a67ceb97c0631e64487116260554eda3a8f95fcf8d73e5e" Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.697620 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.716867 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.723469 4727 scope.go:117] "RemoveContainer" containerID="08aef35b9fe94772ab1b331f12520da4e3b8687d0bd927769924f9ab3ce36ebe" Nov 21 20:29:54 crc kubenswrapper[4727]: E1121 20:29:54.724226 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08aef35b9fe94772ab1b331f12520da4e3b8687d0bd927769924f9ab3ce36ebe\": container with ID starting with 08aef35b9fe94772ab1b331f12520da4e3b8687d0bd927769924f9ab3ce36ebe not found: ID does not exist" containerID="08aef35b9fe94772ab1b331f12520da4e3b8687d0bd927769924f9ab3ce36ebe" Nov 21 
20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.724306 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08aef35b9fe94772ab1b331f12520da4e3b8687d0bd927769924f9ab3ce36ebe"} err="failed to get container status \"08aef35b9fe94772ab1b331f12520da4e3b8687d0bd927769924f9ab3ce36ebe\": rpc error: code = NotFound desc = could not find container \"08aef35b9fe94772ab1b331f12520da4e3b8687d0bd927769924f9ab3ce36ebe\": container with ID starting with 08aef35b9fe94772ab1b331f12520da4e3b8687d0bd927769924f9ab3ce36ebe not found: ID does not exist" Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.724407 4727 scope.go:117] "RemoveContainer" containerID="a53d29bc083364fd9a67ceb97c0631e64487116260554eda3a8f95fcf8d73e5e" Nov 21 20:29:54 crc kubenswrapper[4727]: E1121 20:29:54.725611 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a53d29bc083364fd9a67ceb97c0631e64487116260554eda3a8f95fcf8d73e5e\": container with ID starting with a53d29bc083364fd9a67ceb97c0631e64487116260554eda3a8f95fcf8d73e5e not found: ID does not exist" containerID="a53d29bc083364fd9a67ceb97c0631e64487116260554eda3a8f95fcf8d73e5e" Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.725862 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a53d29bc083364fd9a67ceb97c0631e64487116260554eda3a8f95fcf8d73e5e"} err="failed to get container status \"a53d29bc083364fd9a67ceb97c0631e64487116260554eda3a8f95fcf8d73e5e\": rpc error: code = NotFound desc = could not find container \"a53d29bc083364fd9a67ceb97c0631e64487116260554eda3a8f95fcf8d73e5e\": container with ID starting with a53d29bc083364fd9a67ceb97c0631e64487116260554eda3a8f95fcf8d73e5e not found: ID does not exist" Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.727101 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.728613 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 21 20:29:54 crc kubenswrapper[4727]: E1121 20:29:54.729177 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84b9f247-2a55-4a6d-9b15-6919248aac8d" containerName="nova-api-api" Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.729257 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="84b9f247-2a55-4a6d-9b15-6919248aac8d" containerName="nova-api-api" Nov 21 20:29:54 crc kubenswrapper[4727]: E1121 20:29:54.729319 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e29c839-bc8b-40cf-94be-7e9053e9ede8" containerName="proxy-httpd" Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.729364 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e29c839-bc8b-40cf-94be-7e9053e9ede8" containerName="proxy-httpd" Nov 21 20:29:54 crc kubenswrapper[4727]: E1121 20:29:54.729432 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84b9f247-2a55-4a6d-9b15-6919248aac8d" containerName="nova-api-log" Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.730227 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="84b9f247-2a55-4a6d-9b15-6919248aac8d" containerName="nova-api-log" Nov 21 20:29:54 crc kubenswrapper[4727]: E1121 20:29:54.730596 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e29c839-bc8b-40cf-94be-7e9053e9ede8" containerName="ceilometer-notification-agent" Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.730658 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e29c839-bc8b-40cf-94be-7e9053e9ede8" containerName="ceilometer-notification-agent" Nov 21 20:29:54 crc kubenswrapper[4727]: E1121 20:29:54.730717 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e29c839-bc8b-40cf-94be-7e9053e9ede8" 
containerName="ceilometer-central-agent" Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.730762 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e29c839-bc8b-40cf-94be-7e9053e9ede8" containerName="ceilometer-central-agent" Nov 21 20:29:54 crc kubenswrapper[4727]: E1121 20:29:54.730824 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e29c839-bc8b-40cf-94be-7e9053e9ede8" containerName="sg-core" Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.730870 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e29c839-bc8b-40cf-94be-7e9053e9ede8" containerName="sg-core" Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.731184 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e29c839-bc8b-40cf-94be-7e9053e9ede8" containerName="ceilometer-central-agent" Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.738385 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e29c839-bc8b-40cf-94be-7e9053e9ede8" containerName="sg-core" Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.738442 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e29c839-bc8b-40cf-94be-7e9053e9ede8" containerName="proxy-httpd" Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.738462 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="84b9f247-2a55-4a6d-9b15-6919248aac8d" containerName="nova-api-api" Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.738504 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="84b9f247-2a55-4a6d-9b15-6919248aac8d" containerName="nova-api-log" Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.738538 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e29c839-bc8b-40cf-94be-7e9053e9ede8" containerName="ceilometer-notification-agent" Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.740343 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.740556 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.743312 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.743526 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.744342 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.774291 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcf7k\" (UniqueName: \"kubernetes.io/projected/05dc461f-0e12-42c3-9cab-44bdb182c333-kube-api-access-qcf7k\") pod \"nova-api-0\" (UID: \"05dc461f-0e12-42c3-9cab-44bdb182c333\") " pod="openstack/nova-api-0" Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.774376 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/05dc461f-0e12-42c3-9cab-44bdb182c333-internal-tls-certs\") pod \"nova-api-0\" (UID: \"05dc461f-0e12-42c3-9cab-44bdb182c333\") " pod="openstack/nova-api-0" Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.774640 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05dc461f-0e12-42c3-9cab-44bdb182c333-config-data\") pod \"nova-api-0\" (UID: \"05dc461f-0e12-42c3-9cab-44bdb182c333\") " pod="openstack/nova-api-0" Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.774727 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/05dc461f-0e12-42c3-9cab-44bdb182c333-public-tls-certs\") pod \"nova-api-0\" (UID: \"05dc461f-0e12-42c3-9cab-44bdb182c333\") " pod="openstack/nova-api-0" Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.774896 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05dc461f-0e12-42c3-9cab-44bdb182c333-logs\") pod \"nova-api-0\" (UID: \"05dc461f-0e12-42c3-9cab-44bdb182c333\") " pod="openstack/nova-api-0" Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.774929 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05dc461f-0e12-42c3-9cab-44bdb182c333-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"05dc461f-0e12-42c3-9cab-44bdb182c333\") " pod="openstack/nova-api-0" Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.876713 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e29c839-bc8b-40cf-94be-7e9053e9ede8-combined-ca-bundle\") pod \"0e29c839-bc8b-40cf-94be-7e9053e9ede8\" (UID: \"0e29c839-bc8b-40cf-94be-7e9053e9ede8\") " Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.877059 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e29c839-bc8b-40cf-94be-7e9053e9ede8-config-data\") pod \"0e29c839-bc8b-40cf-94be-7e9053e9ede8\" (UID: \"0e29c839-bc8b-40cf-94be-7e9053e9ede8\") " Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.877085 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e29c839-bc8b-40cf-94be-7e9053e9ede8-run-httpd\") pod \"0e29c839-bc8b-40cf-94be-7e9053e9ede8\" (UID: \"0e29c839-bc8b-40cf-94be-7e9053e9ede8\") " Nov 21 
20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.877117 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0e29c839-bc8b-40cf-94be-7e9053e9ede8-sg-core-conf-yaml\") pod \"0e29c839-bc8b-40cf-94be-7e9053e9ede8\" (UID: \"0e29c839-bc8b-40cf-94be-7e9053e9ede8\") " Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.877169 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e29c839-bc8b-40cf-94be-7e9053e9ede8-scripts\") pod \"0e29c839-bc8b-40cf-94be-7e9053e9ede8\" (UID: \"0e29c839-bc8b-40cf-94be-7e9053e9ede8\") " Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.877339 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e29c839-bc8b-40cf-94be-7e9053e9ede8-log-httpd\") pod \"0e29c839-bc8b-40cf-94be-7e9053e9ede8\" (UID: \"0e29c839-bc8b-40cf-94be-7e9053e9ede8\") " Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.877574 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tg749\" (UniqueName: \"kubernetes.io/projected/0e29c839-bc8b-40cf-94be-7e9053e9ede8-kube-api-access-tg749\") pod \"0e29c839-bc8b-40cf-94be-7e9053e9ede8\" (UID: \"0e29c839-bc8b-40cf-94be-7e9053e9ede8\") " Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.877819 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e29c839-bc8b-40cf-94be-7e9053e9ede8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0e29c839-bc8b-40cf-94be-7e9053e9ede8" (UID: "0e29c839-bc8b-40cf-94be-7e9053e9ede8"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.877941 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcf7k\" (UniqueName: \"kubernetes.io/projected/05dc461f-0e12-42c3-9cab-44bdb182c333-kube-api-access-qcf7k\") pod \"nova-api-0\" (UID: \"05dc461f-0e12-42c3-9cab-44bdb182c333\") " pod="openstack/nova-api-0" Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.878066 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/05dc461f-0e12-42c3-9cab-44bdb182c333-internal-tls-certs\") pod \"nova-api-0\" (UID: \"05dc461f-0e12-42c3-9cab-44bdb182c333\") " pod="openstack/nova-api-0" Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.878117 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e29c839-bc8b-40cf-94be-7e9053e9ede8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0e29c839-bc8b-40cf-94be-7e9053e9ede8" (UID: "0e29c839-bc8b-40cf-94be-7e9053e9ede8"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.878299 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05dc461f-0e12-42c3-9cab-44bdb182c333-config-data\") pod \"nova-api-0\" (UID: \"05dc461f-0e12-42c3-9cab-44bdb182c333\") " pod="openstack/nova-api-0" Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.878599 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/05dc461f-0e12-42c3-9cab-44bdb182c333-public-tls-certs\") pod \"nova-api-0\" (UID: \"05dc461f-0e12-42c3-9cab-44bdb182c333\") " pod="openstack/nova-api-0" Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.878819 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05dc461f-0e12-42c3-9cab-44bdb182c333-logs\") pod \"nova-api-0\" (UID: \"05dc461f-0e12-42c3-9cab-44bdb182c333\") " pod="openstack/nova-api-0" Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.878848 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05dc461f-0e12-42c3-9cab-44bdb182c333-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"05dc461f-0e12-42c3-9cab-44bdb182c333\") " pod="openstack/nova-api-0" Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.878999 4727 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e29c839-bc8b-40cf-94be-7e9053e9ede8-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.879018 4727 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e29c839-bc8b-40cf-94be-7e9053e9ede8-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 21 20:29:54 crc 
kubenswrapper[4727]: I1121 20:29:54.879464 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05dc461f-0e12-42c3-9cab-44bdb182c333-logs\") pod \"nova-api-0\" (UID: \"05dc461f-0e12-42c3-9cab-44bdb182c333\") " pod="openstack/nova-api-0" Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.882785 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e29c839-bc8b-40cf-94be-7e9053e9ede8-scripts" (OuterVolumeSpecName: "scripts") pod "0e29c839-bc8b-40cf-94be-7e9053e9ede8" (UID: "0e29c839-bc8b-40cf-94be-7e9053e9ede8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.883674 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/05dc461f-0e12-42c3-9cab-44bdb182c333-internal-tls-certs\") pod \"nova-api-0\" (UID: \"05dc461f-0e12-42c3-9cab-44bdb182c333\") " pod="openstack/nova-api-0" Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.884052 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/05dc461f-0e12-42c3-9cab-44bdb182c333-public-tls-certs\") pod \"nova-api-0\" (UID: \"05dc461f-0e12-42c3-9cab-44bdb182c333\") " pod="openstack/nova-api-0" Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.884299 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05dc461f-0e12-42c3-9cab-44bdb182c333-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"05dc461f-0e12-42c3-9cab-44bdb182c333\") " pod="openstack/nova-api-0" Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.885476 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/05dc461f-0e12-42c3-9cab-44bdb182c333-config-data\") pod \"nova-api-0\" (UID: \"05dc461f-0e12-42c3-9cab-44bdb182c333\") " pod="openstack/nova-api-0" Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.885521 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e29c839-bc8b-40cf-94be-7e9053e9ede8-kube-api-access-tg749" (OuterVolumeSpecName: "kube-api-access-tg749") pod "0e29c839-bc8b-40cf-94be-7e9053e9ede8" (UID: "0e29c839-bc8b-40cf-94be-7e9053e9ede8"). InnerVolumeSpecName "kube-api-access-tg749". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.897987 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcf7k\" (UniqueName: \"kubernetes.io/projected/05dc461f-0e12-42c3-9cab-44bdb182c333-kube-api-access-qcf7k\") pod \"nova-api-0\" (UID: \"05dc461f-0e12-42c3-9cab-44bdb182c333\") " pod="openstack/nova-api-0" Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.917709 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e29c839-bc8b-40cf-94be-7e9053e9ede8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0e29c839-bc8b-40cf-94be-7e9053e9ede8" (UID: "0e29c839-bc8b-40cf-94be-7e9053e9ede8"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.980342 4727 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0e29c839-bc8b-40cf-94be-7e9053e9ede8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.980376 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e29c839-bc8b-40cf-94be-7e9053e9ede8-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.980389 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tg749\" (UniqueName: \"kubernetes.io/projected/0e29c839-bc8b-40cf-94be-7e9053e9ede8-kube-api-access-tg749\") on node \"crc\" DevicePath \"\"" Nov 21 20:29:54 crc kubenswrapper[4727]: I1121 20:29:54.984187 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e29c839-bc8b-40cf-94be-7e9053e9ede8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0e29c839-bc8b-40cf-94be-7e9053e9ede8" (UID: "0e29c839-bc8b-40cf-94be-7e9053e9ede8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:29:55 crc kubenswrapper[4727]: I1121 20:29:55.031717 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e29c839-bc8b-40cf-94be-7e9053e9ede8-config-data" (OuterVolumeSpecName: "config-data") pod "0e29c839-bc8b-40cf-94be-7e9053e9ede8" (UID: "0e29c839-bc8b-40cf-94be-7e9053e9ede8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:29:55 crc kubenswrapper[4727]: I1121 20:29:55.067829 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 21 20:29:55 crc kubenswrapper[4727]: I1121 20:29:55.082506 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e29c839-bc8b-40cf-94be-7e9053e9ede8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:29:55 crc kubenswrapper[4727]: I1121 20:29:55.082562 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e29c839-bc8b-40cf-94be-7e9053e9ede8-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 20:29:55 crc kubenswrapper[4727]: I1121 20:29:55.511457 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84b9f247-2a55-4a6d-9b15-6919248aac8d" path="/var/lib/kubelet/pods/84b9f247-2a55-4a6d-9b15-6919248aac8d/volumes" Nov 21 20:29:55 crc kubenswrapper[4727]: W1121 20:29:55.555688 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod05dc461f_0e12_42c3_9cab_44bdb182c333.slice/crio-85056f25e1739bef3006e5a189b0b10e7917b1760c31547b73903aec0a7e7551 WatchSource:0}: Error finding container 85056f25e1739bef3006e5a189b0b10e7917b1760c31547b73903aec0a7e7551: Status 404 returned error can't find the container with id 85056f25e1739bef3006e5a189b0b10e7917b1760c31547b73903aec0a7e7551 Nov 21 20:29:55 crc kubenswrapper[4727]: I1121 20:29:55.556938 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 21 20:29:55 crc kubenswrapper[4727]: I1121 20:29:55.664191 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0e29c839-bc8b-40cf-94be-7e9053e9ede8","Type":"ContainerDied","Data":"47650541053c5fbd9807b22dd4faae27107af9cd63370fac702baeab019d8d3d"} Nov 21 20:29:55 crc kubenswrapper[4727]: I1121 20:29:55.664250 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 21 20:29:55 crc kubenswrapper[4727]: I1121 20:29:55.664623 4727 scope.go:117] "RemoveContainer" containerID="74df78d7f489350151159d9aa5f757eb3590739f130066707581b3810f391bed" Nov 21 20:29:55 crc kubenswrapper[4727]: I1121 20:29:55.666662 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"05dc461f-0e12-42c3-9cab-44bdb182c333","Type":"ContainerStarted","Data":"85056f25e1739bef3006e5a189b0b10e7917b1760c31547b73903aec0a7e7551"} Nov 21 20:29:55 crc kubenswrapper[4727]: I1121 20:29:55.668623 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8bx8g" podUID="531d4889-3af4-4aca-b3ec-8bfa16484899" containerName="registry-server" containerID="cri-o://23391f8e25aaf1a710d4f755b1baac872767584cb3c7e1f714ddcee098a950b6" gracePeriod=2 Nov 21 20:29:55 crc kubenswrapper[4727]: I1121 20:29:55.700530 4727 scope.go:117] "RemoveContainer" containerID="fd05f7eb1f6eed8cf7f3017994748be06c786ab6fb51d158bd79e29ea5581570" Nov 21 20:29:55 crc kubenswrapper[4727]: I1121 20:29:55.705472 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 21 20:29:55 crc kubenswrapper[4727]: I1121 20:29:55.714388 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 21 20:29:55 crc kubenswrapper[4727]: I1121 20:29:55.742938 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 21 20:29:55 crc kubenswrapper[4727]: I1121 20:29:55.754217 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 21 20:29:55 crc kubenswrapper[4727]: I1121 20:29:55.757892 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 21 20:29:55 crc kubenswrapper[4727]: I1121 20:29:55.758208 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 21 20:29:55 crc kubenswrapper[4727]: I1121 20:29:55.774255 4727 scope.go:117] "RemoveContainer" containerID="48889b1dbdf1a3a99d4cde4a1f5c2f575e75f7c42e9fe45555183c45603914c1" Nov 21 20:29:55 crc kubenswrapper[4727]: I1121 20:29:55.811727 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 20:29:55 crc kubenswrapper[4727]: I1121 20:29:55.819058 4727 scope.go:117] "RemoveContainer" containerID="3b619352ea8ace0cea4088e26bc6509462456c535c633b4c2f8905093437c825" Nov 21 20:29:55 crc kubenswrapper[4727]: I1121 20:29:55.907933 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbmhw\" (UniqueName: \"kubernetes.io/projected/501fa221-2089-47bd-999a-510bc80b25d6-kube-api-access-mbmhw\") pod \"ceilometer-0\" (UID: \"501fa221-2089-47bd-999a-510bc80b25d6\") " pod="openstack/ceilometer-0" Nov 21 20:29:55 crc kubenswrapper[4727]: I1121 20:29:55.908016 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/501fa221-2089-47bd-999a-510bc80b25d6-log-httpd\") pod \"ceilometer-0\" (UID: \"501fa221-2089-47bd-999a-510bc80b25d6\") " pod="openstack/ceilometer-0" Nov 21 20:29:55 crc kubenswrapper[4727]: I1121 20:29:55.908162 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/501fa221-2089-47bd-999a-510bc80b25d6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"501fa221-2089-47bd-999a-510bc80b25d6\") " 
pod="openstack/ceilometer-0" Nov 21 20:29:55 crc kubenswrapper[4727]: I1121 20:29:55.908199 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/501fa221-2089-47bd-999a-510bc80b25d6-run-httpd\") pod \"ceilometer-0\" (UID: \"501fa221-2089-47bd-999a-510bc80b25d6\") " pod="openstack/ceilometer-0" Nov 21 20:29:55 crc kubenswrapper[4727]: I1121 20:29:55.908244 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/501fa221-2089-47bd-999a-510bc80b25d6-scripts\") pod \"ceilometer-0\" (UID: \"501fa221-2089-47bd-999a-510bc80b25d6\") " pod="openstack/ceilometer-0" Nov 21 20:29:55 crc kubenswrapper[4727]: I1121 20:29:55.908288 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/501fa221-2089-47bd-999a-510bc80b25d6-config-data\") pod \"ceilometer-0\" (UID: \"501fa221-2089-47bd-999a-510bc80b25d6\") " pod="openstack/ceilometer-0" Nov 21 20:29:55 crc kubenswrapper[4727]: I1121 20:29:55.908362 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/501fa221-2089-47bd-999a-510bc80b25d6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"501fa221-2089-47bd-999a-510bc80b25d6\") " pod="openstack/ceilometer-0" Nov 21 20:29:56 crc kubenswrapper[4727]: I1121 20:29:56.010668 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/501fa221-2089-47bd-999a-510bc80b25d6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"501fa221-2089-47bd-999a-510bc80b25d6\") " pod="openstack/ceilometer-0" Nov 21 20:29:56 crc kubenswrapper[4727]: I1121 20:29:56.011115 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/501fa221-2089-47bd-999a-510bc80b25d6-run-httpd\") pod \"ceilometer-0\" (UID: \"501fa221-2089-47bd-999a-510bc80b25d6\") " pod="openstack/ceilometer-0" Nov 21 20:29:56 crc kubenswrapper[4727]: I1121 20:29:56.011167 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/501fa221-2089-47bd-999a-510bc80b25d6-scripts\") pod \"ceilometer-0\" (UID: \"501fa221-2089-47bd-999a-510bc80b25d6\") " pod="openstack/ceilometer-0" Nov 21 20:29:56 crc kubenswrapper[4727]: I1121 20:29:56.011215 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/501fa221-2089-47bd-999a-510bc80b25d6-config-data\") pod \"ceilometer-0\" (UID: \"501fa221-2089-47bd-999a-510bc80b25d6\") " pod="openstack/ceilometer-0" Nov 21 20:29:56 crc kubenswrapper[4727]: I1121 20:29:56.011281 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/501fa221-2089-47bd-999a-510bc80b25d6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"501fa221-2089-47bd-999a-510bc80b25d6\") " pod="openstack/ceilometer-0" Nov 21 20:29:56 crc kubenswrapper[4727]: I1121 20:29:56.011372 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbmhw\" (UniqueName: \"kubernetes.io/projected/501fa221-2089-47bd-999a-510bc80b25d6-kube-api-access-mbmhw\") pod \"ceilometer-0\" (UID: \"501fa221-2089-47bd-999a-510bc80b25d6\") " pod="openstack/ceilometer-0" Nov 21 20:29:56 crc kubenswrapper[4727]: I1121 20:29:56.011404 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/501fa221-2089-47bd-999a-510bc80b25d6-log-httpd\") pod \"ceilometer-0\" (UID: \"501fa221-2089-47bd-999a-510bc80b25d6\") " pod="openstack/ceilometer-0" Nov 21 20:29:56 crc 
kubenswrapper[4727]: I1121 20:29:56.014820 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/501fa221-2089-47bd-999a-510bc80b25d6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"501fa221-2089-47bd-999a-510bc80b25d6\") " pod="openstack/ceilometer-0" Nov 21 20:29:56 crc kubenswrapper[4727]: I1121 20:29:56.015154 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/501fa221-2089-47bd-999a-510bc80b25d6-log-httpd\") pod \"ceilometer-0\" (UID: \"501fa221-2089-47bd-999a-510bc80b25d6\") " pod="openstack/ceilometer-0" Nov 21 20:29:56 crc kubenswrapper[4727]: I1121 20:29:56.015174 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/501fa221-2089-47bd-999a-510bc80b25d6-run-httpd\") pod \"ceilometer-0\" (UID: \"501fa221-2089-47bd-999a-510bc80b25d6\") " pod="openstack/ceilometer-0" Nov 21 20:29:56 crc kubenswrapper[4727]: I1121 20:29:56.027430 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/501fa221-2089-47bd-999a-510bc80b25d6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"501fa221-2089-47bd-999a-510bc80b25d6\") " pod="openstack/ceilometer-0" Nov 21 20:29:56 crc kubenswrapper[4727]: I1121 20:29:56.028301 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/501fa221-2089-47bd-999a-510bc80b25d6-config-data\") pod \"ceilometer-0\" (UID: \"501fa221-2089-47bd-999a-510bc80b25d6\") " pod="openstack/ceilometer-0" Nov 21 20:29:56 crc kubenswrapper[4727]: I1121 20:29:56.030155 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbmhw\" (UniqueName: \"kubernetes.io/projected/501fa221-2089-47bd-999a-510bc80b25d6-kube-api-access-mbmhw\") pod \"ceilometer-0\" (UID: 
\"501fa221-2089-47bd-999a-510bc80b25d6\") " pod="openstack/ceilometer-0" Nov 21 20:29:56 crc kubenswrapper[4727]: I1121 20:29:56.041870 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/501fa221-2089-47bd-999a-510bc80b25d6-scripts\") pod \"ceilometer-0\" (UID: \"501fa221-2089-47bd-999a-510bc80b25d6\") " pod="openstack/ceilometer-0" Nov 21 20:29:56 crc kubenswrapper[4727]: I1121 20:29:56.085287 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 20:29:56 crc kubenswrapper[4727]: I1121 20:29:56.153220 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8bx8g" Nov 21 20:29:56 crc kubenswrapper[4727]: I1121 20:29:56.319044 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/531d4889-3af4-4aca-b3ec-8bfa16484899-catalog-content\") pod \"531d4889-3af4-4aca-b3ec-8bfa16484899\" (UID: \"531d4889-3af4-4aca-b3ec-8bfa16484899\") " Nov 21 20:29:56 crc kubenswrapper[4727]: I1121 20:29:56.320950 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/531d4889-3af4-4aca-b3ec-8bfa16484899-utilities" (OuterVolumeSpecName: "utilities") pod "531d4889-3af4-4aca-b3ec-8bfa16484899" (UID: "531d4889-3af4-4aca-b3ec-8bfa16484899"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:29:56 crc kubenswrapper[4727]: I1121 20:29:56.321121 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/531d4889-3af4-4aca-b3ec-8bfa16484899-utilities\") pod \"531d4889-3af4-4aca-b3ec-8bfa16484899\" (UID: \"531d4889-3af4-4aca-b3ec-8bfa16484899\") " Nov 21 20:29:56 crc kubenswrapper[4727]: I1121 20:29:56.321160 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5nnx\" (UniqueName: \"kubernetes.io/projected/531d4889-3af4-4aca-b3ec-8bfa16484899-kube-api-access-m5nnx\") pod \"531d4889-3af4-4aca-b3ec-8bfa16484899\" (UID: \"531d4889-3af4-4aca-b3ec-8bfa16484899\") " Nov 21 20:29:56 crc kubenswrapper[4727]: I1121 20:29:56.324342 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/531d4889-3af4-4aca-b3ec-8bfa16484899-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 20:29:56 crc kubenswrapper[4727]: I1121 20:29:56.335838 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/531d4889-3af4-4aca-b3ec-8bfa16484899-kube-api-access-m5nnx" (OuterVolumeSpecName: "kube-api-access-m5nnx") pod "531d4889-3af4-4aca-b3ec-8bfa16484899" (UID: "531d4889-3af4-4aca-b3ec-8bfa16484899"). InnerVolumeSpecName "kube-api-access-m5nnx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:29:56 crc kubenswrapper[4727]: I1121 20:29:56.349152 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/531d4889-3af4-4aca-b3ec-8bfa16484899-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "531d4889-3af4-4aca-b3ec-8bfa16484899" (UID: "531d4889-3af4-4aca-b3ec-8bfa16484899"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:29:56 crc kubenswrapper[4727]: I1121 20:29:56.439443 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5nnx\" (UniqueName: \"kubernetes.io/projected/531d4889-3af4-4aca-b3ec-8bfa16484899-kube-api-access-m5nnx\") on node \"crc\" DevicePath \"\"" Nov 21 20:29:56 crc kubenswrapper[4727]: I1121 20:29:56.439476 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/531d4889-3af4-4aca-b3ec-8bfa16484899-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 20:29:56 crc kubenswrapper[4727]: I1121 20:29:56.611695 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 20:29:56 crc kubenswrapper[4727]: W1121 20:29:56.613421 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod501fa221_2089_47bd_999a_510bc80b25d6.slice/crio-0871e5e26f0d0cbbe6ab553b4b09b2111cf5786b46e48f9a2c4ef99b5df429ea WatchSource:0}: Error finding container 0871e5e26f0d0cbbe6ab553b4b09b2111cf5786b46e48f9a2c4ef99b5df429ea: Status 404 returned error can't find the container with id 0871e5e26f0d0cbbe6ab553b4b09b2111cf5786b46e48f9a2c4ef99b5df429ea Nov 21 20:29:56 crc kubenswrapper[4727]: I1121 20:29:56.681634 4727 generic.go:334] "Generic (PLEG): container finished" podID="531d4889-3af4-4aca-b3ec-8bfa16484899" containerID="23391f8e25aaf1a710d4f755b1baac872767584cb3c7e1f714ddcee098a950b6" exitCode=0 Nov 21 20:29:56 crc kubenswrapper[4727]: I1121 20:29:56.681707 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8bx8g" event={"ID":"531d4889-3af4-4aca-b3ec-8bfa16484899","Type":"ContainerDied","Data":"23391f8e25aaf1a710d4f755b1baac872767584cb3c7e1f714ddcee098a950b6"} Nov 21 20:29:56 crc kubenswrapper[4727]: I1121 20:29:56.681770 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8bx8g" Nov 21 20:29:56 crc kubenswrapper[4727]: I1121 20:29:56.681802 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8bx8g" event={"ID":"531d4889-3af4-4aca-b3ec-8bfa16484899","Type":"ContainerDied","Data":"4b8df119d59cb4d48e8e245090d733872921ec9b0c1ca6d03698d6224b585604"} Nov 21 20:29:56 crc kubenswrapper[4727]: I1121 20:29:56.681843 4727 scope.go:117] "RemoveContainer" containerID="23391f8e25aaf1a710d4f755b1baac872767584cb3c7e1f714ddcee098a950b6" Nov 21 20:29:56 crc kubenswrapper[4727]: I1121 20:29:56.687548 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"05dc461f-0e12-42c3-9cab-44bdb182c333","Type":"ContainerStarted","Data":"cce2d521c35dc4a431a83619f641b0ed90149e44f904a65b379c42b36dc76013"} Nov 21 20:29:56 crc kubenswrapper[4727]: I1121 20:29:56.687815 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"05dc461f-0e12-42c3-9cab-44bdb182c333","Type":"ContainerStarted","Data":"2f8cfaa20f2ac6d125e1dd4f01574c6b85f680f7b3b1ab2a8670ebf61bfd3da0"} Nov 21 20:29:56 crc kubenswrapper[4727]: I1121 20:29:56.689255 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"501fa221-2089-47bd-999a-510bc80b25d6","Type":"ContainerStarted","Data":"0871e5e26f0d0cbbe6ab553b4b09b2111cf5786b46e48f9a2c4ef99b5df429ea"} Nov 21 20:29:56 crc kubenswrapper[4727]: I1121 20:29:56.714066 4727 scope.go:117] "RemoveContainer" containerID="8021bb932caaa09116bc21c9fa880824710dcebe7394a6b820a9b3692efbcad4" Nov 21 20:29:56 crc kubenswrapper[4727]: I1121 20:29:56.730928 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.730910342 podStartE2EDuration="2.730910342s" podCreationTimestamp="2025-11-21 20:29:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:29:56.719586578 +0000 UTC m=+1401.905771632" watchObservedRunningTime="2025-11-21 20:29:56.730910342 +0000 UTC m=+1401.917095386" Nov 21 20:29:56 crc kubenswrapper[4727]: I1121 20:29:56.751657 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8bx8g"] Nov 21 20:29:56 crc kubenswrapper[4727]: I1121 20:29:56.758080 4727 scope.go:117] "RemoveContainer" containerID="b8469e747c9780df3a0862cb41bdc8809f9e137f47f6ba91e771a718329b3bb9" Nov 21 20:29:56 crc kubenswrapper[4727]: I1121 20:29:56.762028 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8bx8g"] Nov 21 20:29:56 crc kubenswrapper[4727]: I1121 20:29:56.783986 4727 scope.go:117] "RemoveContainer" containerID="23391f8e25aaf1a710d4f755b1baac872767584cb3c7e1f714ddcee098a950b6" Nov 21 20:29:56 crc kubenswrapper[4727]: E1121 20:29:56.784559 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23391f8e25aaf1a710d4f755b1baac872767584cb3c7e1f714ddcee098a950b6\": container with ID starting with 23391f8e25aaf1a710d4f755b1baac872767584cb3c7e1f714ddcee098a950b6 not found: ID does not exist" containerID="23391f8e25aaf1a710d4f755b1baac872767584cb3c7e1f714ddcee098a950b6" Nov 21 20:29:56 crc kubenswrapper[4727]: I1121 20:29:56.784646 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23391f8e25aaf1a710d4f755b1baac872767584cb3c7e1f714ddcee098a950b6"} err="failed to get container status \"23391f8e25aaf1a710d4f755b1baac872767584cb3c7e1f714ddcee098a950b6\": rpc error: code = NotFound desc = could not find container \"23391f8e25aaf1a710d4f755b1baac872767584cb3c7e1f714ddcee098a950b6\": container with ID starting with 23391f8e25aaf1a710d4f755b1baac872767584cb3c7e1f714ddcee098a950b6 not found: ID does not exist" Nov 21 20:29:56 crc 
kubenswrapper[4727]: I1121 20:29:56.784681 4727 scope.go:117] "RemoveContainer" containerID="8021bb932caaa09116bc21c9fa880824710dcebe7394a6b820a9b3692efbcad4" Nov 21 20:29:56 crc kubenswrapper[4727]: E1121 20:29:56.785241 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8021bb932caaa09116bc21c9fa880824710dcebe7394a6b820a9b3692efbcad4\": container with ID starting with 8021bb932caaa09116bc21c9fa880824710dcebe7394a6b820a9b3692efbcad4 not found: ID does not exist" containerID="8021bb932caaa09116bc21c9fa880824710dcebe7394a6b820a9b3692efbcad4" Nov 21 20:29:56 crc kubenswrapper[4727]: I1121 20:29:56.785288 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8021bb932caaa09116bc21c9fa880824710dcebe7394a6b820a9b3692efbcad4"} err="failed to get container status \"8021bb932caaa09116bc21c9fa880824710dcebe7394a6b820a9b3692efbcad4\": rpc error: code = NotFound desc = could not find container \"8021bb932caaa09116bc21c9fa880824710dcebe7394a6b820a9b3692efbcad4\": container with ID starting with 8021bb932caaa09116bc21c9fa880824710dcebe7394a6b820a9b3692efbcad4 not found: ID does not exist" Nov 21 20:29:56 crc kubenswrapper[4727]: I1121 20:29:56.785325 4727 scope.go:117] "RemoveContainer" containerID="b8469e747c9780df3a0862cb41bdc8809f9e137f47f6ba91e771a718329b3bb9" Nov 21 20:29:56 crc kubenswrapper[4727]: E1121 20:29:56.785697 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8469e747c9780df3a0862cb41bdc8809f9e137f47f6ba91e771a718329b3bb9\": container with ID starting with b8469e747c9780df3a0862cb41bdc8809f9e137f47f6ba91e771a718329b3bb9 not found: ID does not exist" containerID="b8469e747c9780df3a0862cb41bdc8809f9e137f47f6ba91e771a718329b3bb9" Nov 21 20:29:56 crc kubenswrapper[4727]: I1121 20:29:56.785724 4727 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b8469e747c9780df3a0862cb41bdc8809f9e137f47f6ba91e771a718329b3bb9"} err="failed to get container status \"b8469e747c9780df3a0862cb41bdc8809f9e137f47f6ba91e771a718329b3bb9\": rpc error: code = NotFound desc = could not find container \"b8469e747c9780df3a0862cb41bdc8809f9e137f47f6ba91e771a718329b3bb9\": container with ID starting with b8469e747c9780df3a0862cb41bdc8809f9e137f47f6ba91e771a718329b3bb9 not found: ID does not exist" Nov 21 20:29:57 crc kubenswrapper[4727]: I1121 20:29:57.514662 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e29c839-bc8b-40cf-94be-7e9053e9ede8" path="/var/lib/kubelet/pods/0e29c839-bc8b-40cf-94be-7e9053e9ede8/volumes" Nov 21 20:29:57 crc kubenswrapper[4727]: I1121 20:29:57.515841 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="531d4889-3af4-4aca-b3ec-8bfa16484899" path="/var/lib/kubelet/pods/531d4889-3af4-4aca-b3ec-8bfa16484899/volumes" Nov 21 20:29:57 crc kubenswrapper[4727]: I1121 20:29:57.704216 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"501fa221-2089-47bd-999a-510bc80b25d6","Type":"ContainerStarted","Data":"563b946924c2672897e92250b5727fb306c89d6001f8724a59fccebed58c77d7"} Nov 21 20:29:58 crc kubenswrapper[4727]: I1121 20:29:58.286209 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Nov 21 20:29:58 crc kubenswrapper[4727]: I1121 20:29:58.303489 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-f84f9ccf-dmspm" Nov 21 20:29:58 crc kubenswrapper[4727]: I1121 20:29:58.304657 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Nov 21 20:29:58 crc kubenswrapper[4727]: I1121 20:29:58.379469 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-6gwj6"] Nov 21 20:29:58 crc 
kubenswrapper[4727]: I1121 20:29:58.379750 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-568d7fd7cf-6gwj6" podUID="4ca9df53-c075-43ff-9f60-c3533e94e265" containerName="dnsmasq-dns" containerID="cri-o://74afa0eb8d12a86508fdca6fb3e03955b33e1b580505b20dc062a2bc24c4af84" gracePeriod=10 Nov 21 20:29:58 crc kubenswrapper[4727]: I1121 20:29:58.718311 4727 generic.go:334] "Generic (PLEG): container finished" podID="4ca9df53-c075-43ff-9f60-c3533e94e265" containerID="74afa0eb8d12a86508fdca6fb3e03955b33e1b580505b20dc062a2bc24c4af84" exitCode=0 Nov 21 20:29:58 crc kubenswrapper[4727]: I1121 20:29:58.718559 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-6gwj6" event={"ID":"4ca9df53-c075-43ff-9f60-c3533e94e265","Type":"ContainerDied","Data":"74afa0eb8d12a86508fdca6fb3e03955b33e1b580505b20dc062a2bc24c4af84"} Nov 21 20:29:58 crc kubenswrapper[4727]: I1121 20:29:58.724872 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"501fa221-2089-47bd-999a-510bc80b25d6","Type":"ContainerStarted","Data":"b628742dede7af45f2ef7d7c1c8c90a83a845fccc2f73d2381c1a18cb6bf089a"} Nov 21 20:29:58 crc kubenswrapper[4727]: I1121 20:29:58.746306 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Nov 21 20:29:58 crc kubenswrapper[4727]: I1121 20:29:58.935558 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-6gwj6" Nov 21 20:29:58 crc kubenswrapper[4727]: I1121 20:29:58.999765 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-wtv6f"] Nov 21 20:29:59 crc kubenswrapper[4727]: E1121 20:29:59.000305 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ca9df53-c075-43ff-9f60-c3533e94e265" containerName="dnsmasq-dns" Nov 21 20:29:59 crc kubenswrapper[4727]: I1121 20:29:59.000323 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ca9df53-c075-43ff-9f60-c3533e94e265" containerName="dnsmasq-dns" Nov 21 20:29:59 crc kubenswrapper[4727]: E1121 20:29:59.000362 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ca9df53-c075-43ff-9f60-c3533e94e265" containerName="init" Nov 21 20:29:59 crc kubenswrapper[4727]: I1121 20:29:59.000370 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ca9df53-c075-43ff-9f60-c3533e94e265" containerName="init" Nov 21 20:29:59 crc kubenswrapper[4727]: E1121 20:29:59.000396 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="531d4889-3af4-4aca-b3ec-8bfa16484899" containerName="extract-utilities" Nov 21 20:29:59 crc kubenswrapper[4727]: I1121 20:29:59.000403 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="531d4889-3af4-4aca-b3ec-8bfa16484899" containerName="extract-utilities" Nov 21 20:29:59 crc kubenswrapper[4727]: E1121 20:29:59.000417 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="531d4889-3af4-4aca-b3ec-8bfa16484899" containerName="registry-server" Nov 21 20:29:59 crc kubenswrapper[4727]: I1121 20:29:59.000424 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="531d4889-3af4-4aca-b3ec-8bfa16484899" containerName="registry-server" Nov 21 20:29:59 crc kubenswrapper[4727]: E1121 20:29:59.000437 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="531d4889-3af4-4aca-b3ec-8bfa16484899" containerName="extract-content" 
Nov 21 20:29:59 crc kubenswrapper[4727]: I1121 20:29:59.000443 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="531d4889-3af4-4aca-b3ec-8bfa16484899" containerName="extract-content" Nov 21 20:29:59 crc kubenswrapper[4727]: I1121 20:29:59.000649 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ca9df53-c075-43ff-9f60-c3533e94e265" containerName="dnsmasq-dns" Nov 21 20:29:59 crc kubenswrapper[4727]: I1121 20:29:59.000690 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="531d4889-3af4-4aca-b3ec-8bfa16484899" containerName="registry-server" Nov 21 20:29:59 crc kubenswrapper[4727]: I1121 20:29:59.001558 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-wtv6f" Nov 21 20:29:59 crc kubenswrapper[4727]: I1121 20:29:59.004940 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Nov 21 20:29:59 crc kubenswrapper[4727]: I1121 20:29:59.005197 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Nov 21 20:29:59 crc kubenswrapper[4727]: I1121 20:29:59.010989 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-wtv6f"] Nov 21 20:29:59 crc kubenswrapper[4727]: I1121 20:29:59.102510 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ca9df53-c075-43ff-9f60-c3533e94e265-config\") pod \"4ca9df53-c075-43ff-9f60-c3533e94e265\" (UID: \"4ca9df53-c075-43ff-9f60-c3533e94e265\") " Nov 21 20:29:59 crc kubenswrapper[4727]: I1121 20:29:59.102616 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4ca9df53-c075-43ff-9f60-c3533e94e265-ovsdbserver-nb\") pod \"4ca9df53-c075-43ff-9f60-c3533e94e265\" (UID: \"4ca9df53-c075-43ff-9f60-c3533e94e265\") " Nov 21 20:29:59 crc 
kubenswrapper[4727]: I1121 20:29:59.102703 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pz26m\" (UniqueName: \"kubernetes.io/projected/4ca9df53-c075-43ff-9f60-c3533e94e265-kube-api-access-pz26m\") pod \"4ca9df53-c075-43ff-9f60-c3533e94e265\" (UID: \"4ca9df53-c075-43ff-9f60-c3533e94e265\") " Nov 21 20:29:59 crc kubenswrapper[4727]: I1121 20:29:59.102816 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4ca9df53-c075-43ff-9f60-c3533e94e265-dns-swift-storage-0\") pod \"4ca9df53-c075-43ff-9f60-c3533e94e265\" (UID: \"4ca9df53-c075-43ff-9f60-c3533e94e265\") " Nov 21 20:29:59 crc kubenswrapper[4727]: I1121 20:29:59.102873 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4ca9df53-c075-43ff-9f60-c3533e94e265-ovsdbserver-sb\") pod \"4ca9df53-c075-43ff-9f60-c3533e94e265\" (UID: \"4ca9df53-c075-43ff-9f60-c3533e94e265\") " Nov 21 20:29:59 crc kubenswrapper[4727]: I1121 20:29:59.102902 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4ca9df53-c075-43ff-9f60-c3533e94e265-dns-svc\") pod \"4ca9df53-c075-43ff-9f60-c3533e94e265\" (UID: \"4ca9df53-c075-43ff-9f60-c3533e94e265\") " Nov 21 20:29:59 crc kubenswrapper[4727]: I1121 20:29:59.103225 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84267ae4-f0e0-409a-b261-cc9344c4c47b-config-data\") pod \"nova-cell1-cell-mapping-wtv6f\" (UID: \"84267ae4-f0e0-409a-b261-cc9344c4c47b\") " pod="openstack/nova-cell1-cell-mapping-wtv6f" Nov 21 20:29:59 crc kubenswrapper[4727]: I1121 20:29:59.103390 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/84267ae4-f0e0-409a-b261-cc9344c4c47b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-wtv6f\" (UID: \"84267ae4-f0e0-409a-b261-cc9344c4c47b\") " pod="openstack/nova-cell1-cell-mapping-wtv6f" Nov 21 20:29:59 crc kubenswrapper[4727]: I1121 20:29:59.103425 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84267ae4-f0e0-409a-b261-cc9344c4c47b-scripts\") pod \"nova-cell1-cell-mapping-wtv6f\" (UID: \"84267ae4-f0e0-409a-b261-cc9344c4c47b\") " pod="openstack/nova-cell1-cell-mapping-wtv6f" Nov 21 20:29:59 crc kubenswrapper[4727]: I1121 20:29:59.103556 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f858c\" (UniqueName: \"kubernetes.io/projected/84267ae4-f0e0-409a-b261-cc9344c4c47b-kube-api-access-f858c\") pod \"nova-cell1-cell-mapping-wtv6f\" (UID: \"84267ae4-f0e0-409a-b261-cc9344c4c47b\") " pod="openstack/nova-cell1-cell-mapping-wtv6f" Nov 21 20:29:59 crc kubenswrapper[4727]: I1121 20:29:59.111630 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ca9df53-c075-43ff-9f60-c3533e94e265-kube-api-access-pz26m" (OuterVolumeSpecName: "kube-api-access-pz26m") pod "4ca9df53-c075-43ff-9f60-c3533e94e265" (UID: "4ca9df53-c075-43ff-9f60-c3533e94e265"). InnerVolumeSpecName "kube-api-access-pz26m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:29:59 crc kubenswrapper[4727]: I1121 20:29:59.170947 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ca9df53-c075-43ff-9f60-c3533e94e265-config" (OuterVolumeSpecName: "config") pod "4ca9df53-c075-43ff-9f60-c3533e94e265" (UID: "4ca9df53-c075-43ff-9f60-c3533e94e265"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:29:59 crc kubenswrapper[4727]: I1121 20:29:59.174197 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ca9df53-c075-43ff-9f60-c3533e94e265-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4ca9df53-c075-43ff-9f60-c3533e94e265" (UID: "4ca9df53-c075-43ff-9f60-c3533e94e265"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:29:59 crc kubenswrapper[4727]: I1121 20:29:59.176695 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ca9df53-c075-43ff-9f60-c3533e94e265-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4ca9df53-c075-43ff-9f60-c3533e94e265" (UID: "4ca9df53-c075-43ff-9f60-c3533e94e265"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:29:59 crc kubenswrapper[4727]: I1121 20:29:59.202040 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ca9df53-c075-43ff-9f60-c3533e94e265-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4ca9df53-c075-43ff-9f60-c3533e94e265" (UID: "4ca9df53-c075-43ff-9f60-c3533e94e265"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:29:59 crc kubenswrapper[4727]: I1121 20:29:59.206984 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ca9df53-c075-43ff-9f60-c3533e94e265-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4ca9df53-c075-43ff-9f60-c3533e94e265" (UID: "4ca9df53-c075-43ff-9f60-c3533e94e265"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:29:59 crc kubenswrapper[4727]: I1121 20:29:59.207900 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f858c\" (UniqueName: \"kubernetes.io/projected/84267ae4-f0e0-409a-b261-cc9344c4c47b-kube-api-access-f858c\") pod \"nova-cell1-cell-mapping-wtv6f\" (UID: \"84267ae4-f0e0-409a-b261-cc9344c4c47b\") " pod="openstack/nova-cell1-cell-mapping-wtv6f" Nov 21 20:29:59 crc kubenswrapper[4727]: I1121 20:29:59.208125 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84267ae4-f0e0-409a-b261-cc9344c4c47b-config-data\") pod \"nova-cell1-cell-mapping-wtv6f\" (UID: \"84267ae4-f0e0-409a-b261-cc9344c4c47b\") " pod="openstack/nova-cell1-cell-mapping-wtv6f" Nov 21 20:29:59 crc kubenswrapper[4727]: I1121 20:29:59.208203 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84267ae4-f0e0-409a-b261-cc9344c4c47b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-wtv6f\" (UID: \"84267ae4-f0e0-409a-b261-cc9344c4c47b\") " pod="openstack/nova-cell1-cell-mapping-wtv6f" Nov 21 20:29:59 crc kubenswrapper[4727]: I1121 20:29:59.208241 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84267ae4-f0e0-409a-b261-cc9344c4c47b-scripts\") pod \"nova-cell1-cell-mapping-wtv6f\" (UID: \"84267ae4-f0e0-409a-b261-cc9344c4c47b\") " pod="openstack/nova-cell1-cell-mapping-wtv6f" Nov 21 20:29:59 crc kubenswrapper[4727]: I1121 20:29:59.208322 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ca9df53-c075-43ff-9f60-c3533e94e265-config\") on node \"crc\" DevicePath \"\"" Nov 21 20:29:59 crc kubenswrapper[4727]: I1121 20:29:59.208332 4727 reconciler_common.go:293] "Volume detached for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4ca9df53-c075-43ff-9f60-c3533e94e265-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 21 20:29:59 crc kubenswrapper[4727]: I1121 20:29:59.208345 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pz26m\" (UniqueName: \"kubernetes.io/projected/4ca9df53-c075-43ff-9f60-c3533e94e265-kube-api-access-pz26m\") on node \"crc\" DevicePath \"\"" Nov 21 20:29:59 crc kubenswrapper[4727]: I1121 20:29:59.208355 4727 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4ca9df53-c075-43ff-9f60-c3533e94e265-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 21 20:29:59 crc kubenswrapper[4727]: I1121 20:29:59.208363 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4ca9df53-c075-43ff-9f60-c3533e94e265-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 21 20:29:59 crc kubenswrapper[4727]: I1121 20:29:59.208371 4727 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4ca9df53-c075-43ff-9f60-c3533e94e265-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 20:29:59 crc kubenswrapper[4727]: I1121 20:29:59.212746 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84267ae4-f0e0-409a-b261-cc9344c4c47b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-wtv6f\" (UID: \"84267ae4-f0e0-409a-b261-cc9344c4c47b\") " pod="openstack/nova-cell1-cell-mapping-wtv6f" Nov 21 20:29:59 crc kubenswrapper[4727]: I1121 20:29:59.213374 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84267ae4-f0e0-409a-b261-cc9344c4c47b-scripts\") pod \"nova-cell1-cell-mapping-wtv6f\" (UID: \"84267ae4-f0e0-409a-b261-cc9344c4c47b\") " pod="openstack/nova-cell1-cell-mapping-wtv6f" Nov 21 20:29:59 
crc kubenswrapper[4727]: I1121 20:29:59.216643 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84267ae4-f0e0-409a-b261-cc9344c4c47b-config-data\") pod \"nova-cell1-cell-mapping-wtv6f\" (UID: \"84267ae4-f0e0-409a-b261-cc9344c4c47b\") " pod="openstack/nova-cell1-cell-mapping-wtv6f" Nov 21 20:29:59 crc kubenswrapper[4727]: I1121 20:29:59.226486 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f858c\" (UniqueName: \"kubernetes.io/projected/84267ae4-f0e0-409a-b261-cc9344c4c47b-kube-api-access-f858c\") pod \"nova-cell1-cell-mapping-wtv6f\" (UID: \"84267ae4-f0e0-409a-b261-cc9344c4c47b\") " pod="openstack/nova-cell1-cell-mapping-wtv6f" Nov 21 20:29:59 crc kubenswrapper[4727]: I1121 20:29:59.339680 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-wtv6f" Nov 21 20:29:59 crc kubenswrapper[4727]: I1121 20:29:59.739264 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-6gwj6" Nov 21 20:29:59 crc kubenswrapper[4727]: I1121 20:29:59.740592 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-6gwj6" event={"ID":"4ca9df53-c075-43ff-9f60-c3533e94e265","Type":"ContainerDied","Data":"96c8ab71d2f7b3b35350d0288a683eca80516f26d335381652428b627fdbe9ea"} Nov 21 20:29:59 crc kubenswrapper[4727]: I1121 20:29:59.740668 4727 scope.go:117] "RemoveContainer" containerID="74afa0eb8d12a86508fdca6fb3e03955b33e1b580505b20dc062a2bc24c4af84" Nov 21 20:29:59 crc kubenswrapper[4727]: I1121 20:29:59.787325 4727 scope.go:117] "RemoveContainer" containerID="001061385a6b4431ad3e3a3a1145b7d7205746a7c1998328be0c09f0f442e3af" Nov 21 20:29:59 crc kubenswrapper[4727]: I1121 20:29:59.788774 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-6gwj6"] Nov 21 20:29:59 crc kubenswrapper[4727]: I1121 20:29:59.814283 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-6gwj6"] Nov 21 20:29:59 crc kubenswrapper[4727]: I1121 20:29:59.836091 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-wtv6f"] Nov 21 20:30:00 crc kubenswrapper[4727]: I1121 20:30:00.142817 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395950-ng655"] Nov 21 20:30:00 crc kubenswrapper[4727]: I1121 20:30:00.144975 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395950-ng655" Nov 21 20:30:00 crc kubenswrapper[4727]: I1121 20:30:00.147675 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 21 20:30:00 crc kubenswrapper[4727]: I1121 20:30:00.147849 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 21 20:30:00 crc kubenswrapper[4727]: I1121 20:30:00.166870 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395950-ng655"] Nov 21 20:30:00 crc kubenswrapper[4727]: I1121 20:30:00.233084 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cc194da3-c45c-4bc5-a77f-3517cd806a6a-secret-volume\") pod \"collect-profiles-29395950-ng655\" (UID: \"cc194da3-c45c-4bc5-a77f-3517cd806a6a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395950-ng655" Nov 21 20:30:00 crc kubenswrapper[4727]: I1121 20:30:00.233146 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwgsh\" (UniqueName: \"kubernetes.io/projected/cc194da3-c45c-4bc5-a77f-3517cd806a6a-kube-api-access-lwgsh\") pod \"collect-profiles-29395950-ng655\" (UID: \"cc194da3-c45c-4bc5-a77f-3517cd806a6a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395950-ng655" Nov 21 20:30:00 crc kubenswrapper[4727]: I1121 20:30:00.233179 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cc194da3-c45c-4bc5-a77f-3517cd806a6a-config-volume\") pod \"collect-profiles-29395950-ng655\" (UID: \"cc194da3-c45c-4bc5-a77f-3517cd806a6a\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29395950-ng655" Nov 21 20:30:00 crc kubenswrapper[4727]: I1121 20:30:00.336148 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cc194da3-c45c-4bc5-a77f-3517cd806a6a-secret-volume\") pod \"collect-profiles-29395950-ng655\" (UID: \"cc194da3-c45c-4bc5-a77f-3517cd806a6a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395950-ng655" Nov 21 20:30:00 crc kubenswrapper[4727]: I1121 20:30:00.336401 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwgsh\" (UniqueName: \"kubernetes.io/projected/cc194da3-c45c-4bc5-a77f-3517cd806a6a-kube-api-access-lwgsh\") pod \"collect-profiles-29395950-ng655\" (UID: \"cc194da3-c45c-4bc5-a77f-3517cd806a6a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395950-ng655" Nov 21 20:30:00 crc kubenswrapper[4727]: I1121 20:30:00.338168 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cc194da3-c45c-4bc5-a77f-3517cd806a6a-config-volume\") pod \"collect-profiles-29395950-ng655\" (UID: \"cc194da3-c45c-4bc5-a77f-3517cd806a6a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395950-ng655" Nov 21 20:30:00 crc kubenswrapper[4727]: I1121 20:30:00.339118 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cc194da3-c45c-4bc5-a77f-3517cd806a6a-config-volume\") pod \"collect-profiles-29395950-ng655\" (UID: \"cc194da3-c45c-4bc5-a77f-3517cd806a6a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395950-ng655" Nov 21 20:30:00 crc kubenswrapper[4727]: I1121 20:30:00.342880 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/cc194da3-c45c-4bc5-a77f-3517cd806a6a-secret-volume\") pod \"collect-profiles-29395950-ng655\" (UID: \"cc194da3-c45c-4bc5-a77f-3517cd806a6a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395950-ng655" Nov 21 20:30:00 crc kubenswrapper[4727]: I1121 20:30:00.361905 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwgsh\" (UniqueName: \"kubernetes.io/projected/cc194da3-c45c-4bc5-a77f-3517cd806a6a-kube-api-access-lwgsh\") pod \"collect-profiles-29395950-ng655\" (UID: \"cc194da3-c45c-4bc5-a77f-3517cd806a6a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395950-ng655" Nov 21 20:30:00 crc kubenswrapper[4727]: I1121 20:30:00.504927 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395950-ng655" Nov 21 20:30:00 crc kubenswrapper[4727]: I1121 20:30:00.774689 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-wtv6f" event={"ID":"84267ae4-f0e0-409a-b261-cc9344c4c47b","Type":"ContainerStarted","Data":"832c9b9c75b623f0eff4096044448f6cd1876795d47a0563d1d16af0e6361337"} Nov 21 20:30:00 crc kubenswrapper[4727]: I1121 20:30:00.775063 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-wtv6f" event={"ID":"84267ae4-f0e0-409a-b261-cc9344c4c47b","Type":"ContainerStarted","Data":"850259f34d0478c24e877609ab39d040c457ede2f98189e39029b12394ee5a88"} Nov 21 20:30:00 crc kubenswrapper[4727]: I1121 20:30:00.801243 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-wtv6f" podStartSLOduration=2.8012218190000002 podStartE2EDuration="2.801221819s" podCreationTimestamp="2025-11-21 20:29:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:30:00.789600648 +0000 UTC 
m=+1405.975785702" watchObservedRunningTime="2025-11-21 20:30:00.801221819 +0000 UTC m=+1405.987406883" Nov 21 20:30:00 crc kubenswrapper[4727]: W1121 20:30:00.949005 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc194da3_c45c_4bc5_a77f_3517cd806a6a.slice/crio-e0935e4af5b9715b6d669e01c0160e52395f4532b54df40de8db1897f973da41 WatchSource:0}: Error finding container e0935e4af5b9715b6d669e01c0160e52395f4532b54df40de8db1897f973da41: Status 404 returned error can't find the container with id e0935e4af5b9715b6d669e01c0160e52395f4532b54df40de8db1897f973da41 Nov 21 20:30:00 crc kubenswrapper[4727]: I1121 20:30:00.957188 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395950-ng655"] Nov 21 20:30:01 crc kubenswrapper[4727]: I1121 20:30:01.520575 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ca9df53-c075-43ff-9f60-c3533e94e265" path="/var/lib/kubelet/pods/4ca9df53-c075-43ff-9f60-c3533e94e265/volumes" Nov 21 20:30:01 crc kubenswrapper[4727]: I1121 20:30:01.811358 4727 generic.go:334] "Generic (PLEG): container finished" podID="cc194da3-c45c-4bc5-a77f-3517cd806a6a" containerID="59679e728480fb7d7c08c5500500bb4a47e780e3a222b856cd1e2b615361ef92" exitCode=0 Nov 21 20:30:01 crc kubenswrapper[4727]: I1121 20:30:01.811866 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395950-ng655" event={"ID":"cc194da3-c45c-4bc5-a77f-3517cd806a6a","Type":"ContainerDied","Data":"59679e728480fb7d7c08c5500500bb4a47e780e3a222b856cd1e2b615361ef92"} Nov 21 20:30:01 crc kubenswrapper[4727]: I1121 20:30:01.811907 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395950-ng655" 
event={"ID":"cc194da3-c45c-4bc5-a77f-3517cd806a6a","Type":"ContainerStarted","Data":"e0935e4af5b9715b6d669e01c0160e52395f4532b54df40de8db1897f973da41"} Nov 21 20:30:02 crc kubenswrapper[4727]: I1121 20:30:02.758442 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 21 20:30:02 crc kubenswrapper[4727]: I1121 20:30:02.832663 4727 generic.go:334] "Generic (PLEG): container finished" podID="2e586ddd-f8e8-4589-8b0f-bea576194a57" containerID="513dc3359b81e6d0a6b81cf2364f1d77ff09ab082a1bc558d745dbf4886bc4d8" exitCode=137 Nov 21 20:30:02 crc kubenswrapper[4727]: I1121 20:30:02.832892 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 21 20:30:02 crc kubenswrapper[4727]: I1121 20:30:02.834024 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2e586ddd-f8e8-4589-8b0f-bea576194a57","Type":"ContainerDied","Data":"513dc3359b81e6d0a6b81cf2364f1d77ff09ab082a1bc558d745dbf4886bc4d8"} Nov 21 20:30:02 crc kubenswrapper[4727]: I1121 20:30:02.834058 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2e586ddd-f8e8-4589-8b0f-bea576194a57","Type":"ContainerDied","Data":"80c935f797385e3fc9c22f46f202a846b5b8063c0d35d0c99dc0b62c6ef91188"} Nov 21 20:30:02 crc kubenswrapper[4727]: I1121 20:30:02.834079 4727 scope.go:117] "RemoveContainer" containerID="513dc3359b81e6d0a6b81cf2364f1d77ff09ab082a1bc558d745dbf4886bc4d8" Nov 21 20:30:02 crc kubenswrapper[4727]: I1121 20:30:02.893849 4727 scope.go:117] "RemoveContainer" containerID="6d0b879e49b72410a5eb0004bf3536e83a9a664df74861c622fdc096b2978515" Nov 21 20:30:02 crc kubenswrapper[4727]: I1121 20:30:02.902743 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7f48b\" (UniqueName: \"kubernetes.io/projected/2e586ddd-f8e8-4589-8b0f-bea576194a57-kube-api-access-7f48b\") pod 
\"2e586ddd-f8e8-4589-8b0f-bea576194a57\" (UID: \"2e586ddd-f8e8-4589-8b0f-bea576194a57\") " Nov 21 20:30:02 crc kubenswrapper[4727]: I1121 20:30:02.902823 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e586ddd-f8e8-4589-8b0f-bea576194a57-combined-ca-bundle\") pod \"2e586ddd-f8e8-4589-8b0f-bea576194a57\" (UID: \"2e586ddd-f8e8-4589-8b0f-bea576194a57\") " Nov 21 20:30:02 crc kubenswrapper[4727]: I1121 20:30:02.902895 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e586ddd-f8e8-4589-8b0f-bea576194a57-config-data\") pod \"2e586ddd-f8e8-4589-8b0f-bea576194a57\" (UID: \"2e586ddd-f8e8-4589-8b0f-bea576194a57\") " Nov 21 20:30:02 crc kubenswrapper[4727]: I1121 20:30:02.903184 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e586ddd-f8e8-4589-8b0f-bea576194a57-scripts\") pod \"2e586ddd-f8e8-4589-8b0f-bea576194a57\" (UID: \"2e586ddd-f8e8-4589-8b0f-bea576194a57\") " Nov 21 20:30:02 crc kubenswrapper[4727]: I1121 20:30:02.909374 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e586ddd-f8e8-4589-8b0f-bea576194a57-scripts" (OuterVolumeSpecName: "scripts") pod "2e586ddd-f8e8-4589-8b0f-bea576194a57" (UID: "2e586ddd-f8e8-4589-8b0f-bea576194a57"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:30:02 crc kubenswrapper[4727]: I1121 20:30:02.912567 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e586ddd-f8e8-4589-8b0f-bea576194a57-kube-api-access-7f48b" (OuterVolumeSpecName: "kube-api-access-7f48b") pod "2e586ddd-f8e8-4589-8b0f-bea576194a57" (UID: "2e586ddd-f8e8-4589-8b0f-bea576194a57"). InnerVolumeSpecName "kube-api-access-7f48b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:30:03 crc kubenswrapper[4727]: I1121 20:30:03.014784 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e586ddd-f8e8-4589-8b0f-bea576194a57-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 20:30:03 crc kubenswrapper[4727]: I1121 20:30:03.015097 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7f48b\" (UniqueName: \"kubernetes.io/projected/2e586ddd-f8e8-4589-8b0f-bea576194a57-kube-api-access-7f48b\") on node \"crc\" DevicePath \"\"" Nov 21 20:30:03 crc kubenswrapper[4727]: I1121 20:30:03.083027 4727 scope.go:117] "RemoveContainer" containerID="ab579a0f97dbc547529d223501fab1c20835d6ca06683d5c23a8a278aa15d242" Nov 21 20:30:03 crc kubenswrapper[4727]: I1121 20:30:03.109701 4727 scope.go:117] "RemoveContainer" containerID="9e28fce8206f14b48d75e951137186c76848b79f17b989ec2244f9965a404276" Nov 21 20:30:03 crc kubenswrapper[4727]: I1121 20:30:03.120394 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e586ddd-f8e8-4589-8b0f-bea576194a57-config-data" (OuterVolumeSpecName: "config-data") pod "2e586ddd-f8e8-4589-8b0f-bea576194a57" (UID: "2e586ddd-f8e8-4589-8b0f-bea576194a57"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:30:03 crc kubenswrapper[4727]: I1121 20:30:03.146842 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e586ddd-f8e8-4589-8b0f-bea576194a57-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2e586ddd-f8e8-4589-8b0f-bea576194a57" (UID: "2e586ddd-f8e8-4589-8b0f-bea576194a57"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:30:03 crc kubenswrapper[4727]: I1121 20:30:03.147143 4727 scope.go:117] "RemoveContainer" containerID="513dc3359b81e6d0a6b81cf2364f1d77ff09ab082a1bc558d745dbf4886bc4d8" Nov 21 20:30:03 crc kubenswrapper[4727]: E1121 20:30:03.147541 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"513dc3359b81e6d0a6b81cf2364f1d77ff09ab082a1bc558d745dbf4886bc4d8\": container with ID starting with 513dc3359b81e6d0a6b81cf2364f1d77ff09ab082a1bc558d745dbf4886bc4d8 not found: ID does not exist" containerID="513dc3359b81e6d0a6b81cf2364f1d77ff09ab082a1bc558d745dbf4886bc4d8" Nov 21 20:30:03 crc kubenswrapper[4727]: I1121 20:30:03.147565 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"513dc3359b81e6d0a6b81cf2364f1d77ff09ab082a1bc558d745dbf4886bc4d8"} err="failed to get container status \"513dc3359b81e6d0a6b81cf2364f1d77ff09ab082a1bc558d745dbf4886bc4d8\": rpc error: code = NotFound desc = could not find container \"513dc3359b81e6d0a6b81cf2364f1d77ff09ab082a1bc558d745dbf4886bc4d8\": container with ID starting with 513dc3359b81e6d0a6b81cf2364f1d77ff09ab082a1bc558d745dbf4886bc4d8 not found: ID does not exist" Nov 21 20:30:03 crc kubenswrapper[4727]: I1121 20:30:03.147588 4727 scope.go:117] "RemoveContainer" containerID="6d0b879e49b72410a5eb0004bf3536e83a9a664df74861c622fdc096b2978515" Nov 21 20:30:03 crc kubenswrapper[4727]: E1121 20:30:03.148131 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d0b879e49b72410a5eb0004bf3536e83a9a664df74861c622fdc096b2978515\": container with ID starting with 6d0b879e49b72410a5eb0004bf3536e83a9a664df74861c622fdc096b2978515 not found: ID does not exist" containerID="6d0b879e49b72410a5eb0004bf3536e83a9a664df74861c622fdc096b2978515" Nov 21 20:30:03 crc kubenswrapper[4727]: I1121 20:30:03.148170 
4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d0b879e49b72410a5eb0004bf3536e83a9a664df74861c622fdc096b2978515"} err="failed to get container status \"6d0b879e49b72410a5eb0004bf3536e83a9a664df74861c622fdc096b2978515\": rpc error: code = NotFound desc = could not find container \"6d0b879e49b72410a5eb0004bf3536e83a9a664df74861c622fdc096b2978515\": container with ID starting with 6d0b879e49b72410a5eb0004bf3536e83a9a664df74861c622fdc096b2978515 not found: ID does not exist" Nov 21 20:30:03 crc kubenswrapper[4727]: I1121 20:30:03.148197 4727 scope.go:117] "RemoveContainer" containerID="ab579a0f97dbc547529d223501fab1c20835d6ca06683d5c23a8a278aa15d242" Nov 21 20:30:03 crc kubenswrapper[4727]: E1121 20:30:03.148481 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab579a0f97dbc547529d223501fab1c20835d6ca06683d5c23a8a278aa15d242\": container with ID starting with ab579a0f97dbc547529d223501fab1c20835d6ca06683d5c23a8a278aa15d242 not found: ID does not exist" containerID="ab579a0f97dbc547529d223501fab1c20835d6ca06683d5c23a8a278aa15d242" Nov 21 20:30:03 crc kubenswrapper[4727]: I1121 20:30:03.148506 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab579a0f97dbc547529d223501fab1c20835d6ca06683d5c23a8a278aa15d242"} err="failed to get container status \"ab579a0f97dbc547529d223501fab1c20835d6ca06683d5c23a8a278aa15d242\": rpc error: code = NotFound desc = could not find container \"ab579a0f97dbc547529d223501fab1c20835d6ca06683d5c23a8a278aa15d242\": container with ID starting with ab579a0f97dbc547529d223501fab1c20835d6ca06683d5c23a8a278aa15d242 not found: ID does not exist" Nov 21 20:30:03 crc kubenswrapper[4727]: I1121 20:30:03.148520 4727 scope.go:117] "RemoveContainer" containerID="9e28fce8206f14b48d75e951137186c76848b79f17b989ec2244f9965a404276" Nov 21 20:30:03 crc kubenswrapper[4727]: E1121 
20:30:03.148712 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e28fce8206f14b48d75e951137186c76848b79f17b989ec2244f9965a404276\": container with ID starting with 9e28fce8206f14b48d75e951137186c76848b79f17b989ec2244f9965a404276 not found: ID does not exist" containerID="9e28fce8206f14b48d75e951137186c76848b79f17b989ec2244f9965a404276" Nov 21 20:30:03 crc kubenswrapper[4727]: I1121 20:30:03.148727 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e28fce8206f14b48d75e951137186c76848b79f17b989ec2244f9965a404276"} err="failed to get container status \"9e28fce8206f14b48d75e951137186c76848b79f17b989ec2244f9965a404276\": rpc error: code = NotFound desc = could not find container \"9e28fce8206f14b48d75e951137186c76848b79f17b989ec2244f9965a404276\": container with ID starting with 9e28fce8206f14b48d75e951137186c76848b79f17b989ec2244f9965a404276 not found: ID does not exist" Nov 21 20:30:03 crc kubenswrapper[4727]: I1121 20:30:03.220059 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e586ddd-f8e8-4589-8b0f-bea576194a57-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:30:03 crc kubenswrapper[4727]: I1121 20:30:03.220116 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e586ddd-f8e8-4589-8b0f-bea576194a57-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 20:30:03 crc kubenswrapper[4727]: I1121 20:30:03.237459 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395950-ng655" Nov 21 20:30:03 crc kubenswrapper[4727]: I1121 20:30:03.321128 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cc194da3-c45c-4bc5-a77f-3517cd806a6a-config-volume\") pod \"cc194da3-c45c-4bc5-a77f-3517cd806a6a\" (UID: \"cc194da3-c45c-4bc5-a77f-3517cd806a6a\") " Nov 21 20:30:03 crc kubenswrapper[4727]: I1121 20:30:03.321250 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cc194da3-c45c-4bc5-a77f-3517cd806a6a-secret-volume\") pod \"cc194da3-c45c-4bc5-a77f-3517cd806a6a\" (UID: \"cc194da3-c45c-4bc5-a77f-3517cd806a6a\") " Nov 21 20:30:03 crc kubenswrapper[4727]: I1121 20:30:03.321304 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwgsh\" (UniqueName: \"kubernetes.io/projected/cc194da3-c45c-4bc5-a77f-3517cd806a6a-kube-api-access-lwgsh\") pod \"cc194da3-c45c-4bc5-a77f-3517cd806a6a\" (UID: \"cc194da3-c45c-4bc5-a77f-3517cd806a6a\") " Nov 21 20:30:03 crc kubenswrapper[4727]: I1121 20:30:03.323416 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc194da3-c45c-4bc5-a77f-3517cd806a6a-config-volume" (OuterVolumeSpecName: "config-volume") pod "cc194da3-c45c-4bc5-a77f-3517cd806a6a" (UID: "cc194da3-c45c-4bc5-a77f-3517cd806a6a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:30:03 crc kubenswrapper[4727]: I1121 20:30:03.326841 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc194da3-c45c-4bc5-a77f-3517cd806a6a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "cc194da3-c45c-4bc5-a77f-3517cd806a6a" (UID: "cc194da3-c45c-4bc5-a77f-3517cd806a6a"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:30:03 crc kubenswrapper[4727]: I1121 20:30:03.327005 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc194da3-c45c-4bc5-a77f-3517cd806a6a-kube-api-access-lwgsh" (OuterVolumeSpecName: "kube-api-access-lwgsh") pod "cc194da3-c45c-4bc5-a77f-3517cd806a6a" (UID: "cc194da3-c45c-4bc5-a77f-3517cd806a6a"). InnerVolumeSpecName "kube-api-access-lwgsh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:30:03 crc kubenswrapper[4727]: I1121 20:30:03.424165 4727 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cc194da3-c45c-4bc5-a77f-3517cd806a6a-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 21 20:30:03 crc kubenswrapper[4727]: I1121 20:30:03.424218 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwgsh\" (UniqueName: \"kubernetes.io/projected/cc194da3-c45c-4bc5-a77f-3517cd806a6a-kube-api-access-lwgsh\") on node \"crc\" DevicePath \"\"" Nov 21 20:30:03 crc kubenswrapper[4727]: I1121 20:30:03.424233 4727 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cc194da3-c45c-4bc5-a77f-3517cd806a6a-config-volume\") on node \"crc\" DevicePath \"\"" Nov 21 20:30:03 crc kubenswrapper[4727]: I1121 20:30:03.492569 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Nov 21 20:30:03 crc kubenswrapper[4727]: I1121 20:30:03.545684 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Nov 21 20:30:03 crc kubenswrapper[4727]: I1121 20:30:03.545731 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Nov 21 20:30:03 crc kubenswrapper[4727]: E1121 20:30:03.546247 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e586ddd-f8e8-4589-8b0f-bea576194a57" containerName="aodh-listener" Nov 21 20:30:03 crc kubenswrapper[4727]: I1121 
20:30:03.546264 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e586ddd-f8e8-4589-8b0f-bea576194a57" containerName="aodh-listener" Nov 21 20:30:03 crc kubenswrapper[4727]: E1121 20:30:03.546284 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e586ddd-f8e8-4589-8b0f-bea576194a57" containerName="aodh-notifier" Nov 21 20:30:03 crc kubenswrapper[4727]: I1121 20:30:03.546292 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e586ddd-f8e8-4589-8b0f-bea576194a57" containerName="aodh-notifier" Nov 21 20:30:03 crc kubenswrapper[4727]: E1121 20:30:03.546335 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc194da3-c45c-4bc5-a77f-3517cd806a6a" containerName="collect-profiles" Nov 21 20:30:03 crc kubenswrapper[4727]: I1121 20:30:03.546342 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc194da3-c45c-4bc5-a77f-3517cd806a6a" containerName="collect-profiles" Nov 21 20:30:03 crc kubenswrapper[4727]: E1121 20:30:03.546352 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e586ddd-f8e8-4589-8b0f-bea576194a57" containerName="aodh-api" Nov 21 20:30:03 crc kubenswrapper[4727]: I1121 20:30:03.546360 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e586ddd-f8e8-4589-8b0f-bea576194a57" containerName="aodh-api" Nov 21 20:30:03 crc kubenswrapper[4727]: E1121 20:30:03.546378 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e586ddd-f8e8-4589-8b0f-bea576194a57" containerName="aodh-evaluator" Nov 21 20:30:03 crc kubenswrapper[4727]: I1121 20:30:03.546387 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e586ddd-f8e8-4589-8b0f-bea576194a57" containerName="aodh-evaluator" Nov 21 20:30:03 crc kubenswrapper[4727]: I1121 20:30:03.546661 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e586ddd-f8e8-4589-8b0f-bea576194a57" containerName="aodh-notifier" Nov 21 20:30:03 crc kubenswrapper[4727]: I1121 20:30:03.546683 4727 
memory_manager.go:354] "RemoveStaleState removing state" podUID="2e586ddd-f8e8-4589-8b0f-bea576194a57" containerName="aodh-evaluator" Nov 21 20:30:03 crc kubenswrapper[4727]: I1121 20:30:03.546698 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e586ddd-f8e8-4589-8b0f-bea576194a57" containerName="aodh-listener" Nov 21 20:30:03 crc kubenswrapper[4727]: I1121 20:30:03.546717 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc194da3-c45c-4bc5-a77f-3517cd806a6a" containerName="collect-profiles" Nov 21 20:30:03 crc kubenswrapper[4727]: I1121 20:30:03.546741 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e586ddd-f8e8-4589-8b0f-bea576194a57" containerName="aodh-api" Nov 21 20:30:03 crc kubenswrapper[4727]: I1121 20:30:03.549933 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 21 20:30:03 crc kubenswrapper[4727]: I1121 20:30:03.550071 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 21 20:30:03 crc kubenswrapper[4727]: I1121 20:30:03.554297 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Nov 21 20:30:03 crc kubenswrapper[4727]: I1121 20:30:03.554552 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-5w2vq" Nov 21 20:30:03 crc kubenswrapper[4727]: I1121 20:30:03.554404 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Nov 21 20:30:03 crc kubenswrapper[4727]: I1121 20:30:03.554699 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Nov 21 20:30:03 crc kubenswrapper[4727]: I1121 20:30:03.554929 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Nov 21 20:30:03 crc kubenswrapper[4727]: I1121 20:30:03.630262 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e00f1d6-6b39-4c43-9459-1cc521827332-combined-ca-bundle\") pod \"aodh-0\" (UID: \"9e00f1d6-6b39-4c43-9459-1cc521827332\") " pod="openstack/aodh-0" Nov 21 20:30:03 crc kubenswrapper[4727]: I1121 20:30:03.630376 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e00f1d6-6b39-4c43-9459-1cc521827332-scripts\") pod \"aodh-0\" (UID: \"9e00f1d6-6b39-4c43-9459-1cc521827332\") " pod="openstack/aodh-0" Nov 21 20:30:03 crc kubenswrapper[4727]: I1121 20:30:03.630512 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e00f1d6-6b39-4c43-9459-1cc521827332-public-tls-certs\") pod \"aodh-0\" (UID: \"9e00f1d6-6b39-4c43-9459-1cc521827332\") " pod="openstack/aodh-0" Nov 21 20:30:03 crc kubenswrapper[4727]: I1121 20:30:03.630547 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t94q\" (UniqueName: \"kubernetes.io/projected/9e00f1d6-6b39-4c43-9459-1cc521827332-kube-api-access-9t94q\") pod \"aodh-0\" (UID: \"9e00f1d6-6b39-4c43-9459-1cc521827332\") " pod="openstack/aodh-0" Nov 21 20:30:03 crc kubenswrapper[4727]: I1121 20:30:03.630708 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e00f1d6-6b39-4c43-9459-1cc521827332-config-data\") pod \"aodh-0\" (UID: \"9e00f1d6-6b39-4c43-9459-1cc521827332\") " pod="openstack/aodh-0" Nov 21 20:30:03 crc kubenswrapper[4727]: I1121 20:30:03.630823 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/9e00f1d6-6b39-4c43-9459-1cc521827332-internal-tls-certs\") pod \"aodh-0\" (UID: \"9e00f1d6-6b39-4c43-9459-1cc521827332\") " pod="openstack/aodh-0" Nov 21 20:30:03 crc kubenswrapper[4727]: I1121 20:30:03.733257 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e00f1d6-6b39-4c43-9459-1cc521827332-config-data\") pod \"aodh-0\" (UID: \"9e00f1d6-6b39-4c43-9459-1cc521827332\") " pod="openstack/aodh-0" Nov 21 20:30:03 crc kubenswrapper[4727]: I1121 20:30:03.733349 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e00f1d6-6b39-4c43-9459-1cc521827332-internal-tls-certs\") pod \"aodh-0\" (UID: \"9e00f1d6-6b39-4c43-9459-1cc521827332\") " pod="openstack/aodh-0" Nov 21 20:30:03 crc kubenswrapper[4727]: I1121 20:30:03.733390 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e00f1d6-6b39-4c43-9459-1cc521827332-combined-ca-bundle\") pod \"aodh-0\" (UID: \"9e00f1d6-6b39-4c43-9459-1cc521827332\") " pod="openstack/aodh-0" Nov 21 20:30:03 crc kubenswrapper[4727]: I1121 20:30:03.733449 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e00f1d6-6b39-4c43-9459-1cc521827332-scripts\") pod \"aodh-0\" (UID: \"9e00f1d6-6b39-4c43-9459-1cc521827332\") " pod="openstack/aodh-0" Nov 21 20:30:03 crc kubenswrapper[4727]: I1121 20:30:03.733513 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e00f1d6-6b39-4c43-9459-1cc521827332-public-tls-certs\") pod \"aodh-0\" (UID: \"9e00f1d6-6b39-4c43-9459-1cc521827332\") " pod="openstack/aodh-0" Nov 21 20:30:03 crc kubenswrapper[4727]: I1121 20:30:03.733535 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-9t94q\" (UniqueName: \"kubernetes.io/projected/9e00f1d6-6b39-4c43-9459-1cc521827332-kube-api-access-9t94q\") pod \"aodh-0\" (UID: \"9e00f1d6-6b39-4c43-9459-1cc521827332\") " pod="openstack/aodh-0" Nov 21 20:30:03 crc kubenswrapper[4727]: I1121 20:30:03.738801 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e00f1d6-6b39-4c43-9459-1cc521827332-scripts\") pod \"aodh-0\" (UID: \"9e00f1d6-6b39-4c43-9459-1cc521827332\") " pod="openstack/aodh-0" Nov 21 20:30:03 crc kubenswrapper[4727]: I1121 20:30:03.738837 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e00f1d6-6b39-4c43-9459-1cc521827332-internal-tls-certs\") pod \"aodh-0\" (UID: \"9e00f1d6-6b39-4c43-9459-1cc521827332\") " pod="openstack/aodh-0" Nov 21 20:30:03 crc kubenswrapper[4727]: I1121 20:30:03.739862 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e00f1d6-6b39-4c43-9459-1cc521827332-combined-ca-bundle\") pod \"aodh-0\" (UID: \"9e00f1d6-6b39-4c43-9459-1cc521827332\") " pod="openstack/aodh-0" Nov 21 20:30:03 crc kubenswrapper[4727]: I1121 20:30:03.741048 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e00f1d6-6b39-4c43-9459-1cc521827332-config-data\") pod \"aodh-0\" (UID: \"9e00f1d6-6b39-4c43-9459-1cc521827332\") " pod="openstack/aodh-0" Nov 21 20:30:03 crc kubenswrapper[4727]: I1121 20:30:03.748459 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e00f1d6-6b39-4c43-9459-1cc521827332-public-tls-certs\") pod \"aodh-0\" (UID: \"9e00f1d6-6b39-4c43-9459-1cc521827332\") " pod="openstack/aodh-0" Nov 21 20:30:03 crc kubenswrapper[4727]: I1121 20:30:03.749800 
4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9t94q\" (UniqueName: \"kubernetes.io/projected/9e00f1d6-6b39-4c43-9459-1cc521827332-kube-api-access-9t94q\") pod \"aodh-0\" (UID: \"9e00f1d6-6b39-4c43-9459-1cc521827332\") " pod="openstack/aodh-0" Nov 21 20:30:03 crc kubenswrapper[4727]: I1121 20:30:03.846285 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395950-ng655" Nov 21 20:30:03 crc kubenswrapper[4727]: I1121 20:30:03.847871 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395950-ng655" event={"ID":"cc194da3-c45c-4bc5-a77f-3517cd806a6a","Type":"ContainerDied","Data":"e0935e4af5b9715b6d669e01c0160e52395f4532b54df40de8db1897f973da41"} Nov 21 20:30:03 crc kubenswrapper[4727]: I1121 20:30:03.847910 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0935e4af5b9715b6d669e01c0160e52395f4532b54df40de8db1897f973da41" Nov 21 20:30:03 crc kubenswrapper[4727]: I1121 20:30:03.890671 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 21 20:30:04 crc kubenswrapper[4727]: I1121 20:30:04.378068 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 21 20:30:04 crc kubenswrapper[4727]: I1121 20:30:04.902255 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"9e00f1d6-6b39-4c43-9459-1cc521827332","Type":"ContainerStarted","Data":"6eef736a3f6d4b34ee84fe85a2969576998735d4fea691cc1d3a6d6e432ab80e"} Nov 21 20:30:05 crc kubenswrapper[4727]: I1121 20:30:05.068819 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 21 20:30:05 crc kubenswrapper[4727]: I1121 20:30:05.068916 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 21 20:30:05 crc kubenswrapper[4727]: I1121 20:30:05.519897 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e586ddd-f8e8-4589-8b0f-bea576194a57" path="/var/lib/kubelet/pods/2e586ddd-f8e8-4589-8b0f-bea576194a57/volumes" Nov 21 20:30:05 crc kubenswrapper[4727]: I1121 20:30:05.914001 4727 generic.go:334] "Generic (PLEG): container finished" podID="84267ae4-f0e0-409a-b261-cc9344c4c47b" containerID="832c9b9c75b623f0eff4096044448f6cd1876795d47a0563d1d16af0e6361337" exitCode=0 Nov 21 20:30:05 crc kubenswrapper[4727]: I1121 20:30:05.914078 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-wtv6f" event={"ID":"84267ae4-f0e0-409a-b261-cc9344c4c47b","Type":"ContainerDied","Data":"832c9b9c75b623f0eff4096044448f6cd1876795d47a0563d1d16af0e6361337"} Nov 21 20:30:05 crc kubenswrapper[4727]: I1121 20:30:05.921864 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"9e00f1d6-6b39-4c43-9459-1cc521827332","Type":"ContainerStarted","Data":"b321ba7151fcf2ebdc29a561f372eaed7e210db0194e95bf47c23d911f4af23c"} Nov 21 20:30:06 crc kubenswrapper[4727]: I1121 20:30:06.083193 4727 
prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="05dc461f-0e12-42c3-9cab-44bdb182c333" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.252:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 21 20:30:06 crc kubenswrapper[4727]: I1121 20:30:06.083200 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="05dc461f-0e12-42c3-9cab-44bdb182c333" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.252:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 21 20:30:07 crc kubenswrapper[4727]: I1121 20:30:07.362807 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-wtv6f" Nov 21 20:30:07 crc kubenswrapper[4727]: I1121 20:30:07.443982 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84267ae4-f0e0-409a-b261-cc9344c4c47b-combined-ca-bundle\") pod \"84267ae4-f0e0-409a-b261-cc9344c4c47b\" (UID: \"84267ae4-f0e0-409a-b261-cc9344c4c47b\") " Nov 21 20:30:07 crc kubenswrapper[4727]: I1121 20:30:07.444042 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f858c\" (UniqueName: \"kubernetes.io/projected/84267ae4-f0e0-409a-b261-cc9344c4c47b-kube-api-access-f858c\") pod \"84267ae4-f0e0-409a-b261-cc9344c4c47b\" (UID: \"84267ae4-f0e0-409a-b261-cc9344c4c47b\") " Nov 21 20:30:07 crc kubenswrapper[4727]: I1121 20:30:07.444162 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84267ae4-f0e0-409a-b261-cc9344c4c47b-scripts\") pod \"84267ae4-f0e0-409a-b261-cc9344c4c47b\" (UID: \"84267ae4-f0e0-409a-b261-cc9344c4c47b\") " Nov 21 20:30:07 crc kubenswrapper[4727]: I1121 20:30:07.444185 4727 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84267ae4-f0e0-409a-b261-cc9344c4c47b-config-data\") pod \"84267ae4-f0e0-409a-b261-cc9344c4c47b\" (UID: \"84267ae4-f0e0-409a-b261-cc9344c4c47b\") " Nov 21 20:30:07 crc kubenswrapper[4727]: I1121 20:30:07.455299 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84267ae4-f0e0-409a-b261-cc9344c4c47b-kube-api-access-f858c" (OuterVolumeSpecName: "kube-api-access-f858c") pod "84267ae4-f0e0-409a-b261-cc9344c4c47b" (UID: "84267ae4-f0e0-409a-b261-cc9344c4c47b"). InnerVolumeSpecName "kube-api-access-f858c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:30:07 crc kubenswrapper[4727]: I1121 20:30:07.455642 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84267ae4-f0e0-409a-b261-cc9344c4c47b-scripts" (OuterVolumeSpecName: "scripts") pod "84267ae4-f0e0-409a-b261-cc9344c4c47b" (UID: "84267ae4-f0e0-409a-b261-cc9344c4c47b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:30:07 crc kubenswrapper[4727]: I1121 20:30:07.474930 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84267ae4-f0e0-409a-b261-cc9344c4c47b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "84267ae4-f0e0-409a-b261-cc9344c4c47b" (UID: "84267ae4-f0e0-409a-b261-cc9344c4c47b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:30:07 crc kubenswrapper[4727]: I1121 20:30:07.488101 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84267ae4-f0e0-409a-b261-cc9344c4c47b-config-data" (OuterVolumeSpecName: "config-data") pod "84267ae4-f0e0-409a-b261-cc9344c4c47b" (UID: "84267ae4-f0e0-409a-b261-cc9344c4c47b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:30:07 crc kubenswrapper[4727]: I1121 20:30:07.546869 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84267ae4-f0e0-409a-b261-cc9344c4c47b-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 20:30:07 crc kubenswrapper[4727]: I1121 20:30:07.546905 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84267ae4-f0e0-409a-b261-cc9344c4c47b-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 20:30:07 crc kubenswrapper[4727]: I1121 20:30:07.546918 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84267ae4-f0e0-409a-b261-cc9344c4c47b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:30:07 crc kubenswrapper[4727]: I1121 20:30:07.546931 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f858c\" (UniqueName: \"kubernetes.io/projected/84267ae4-f0e0-409a-b261-cc9344c4c47b-kube-api-access-f858c\") on node \"crc\" DevicePath \"\"" Nov 21 20:30:07 crc kubenswrapper[4727]: I1121 20:30:07.951321 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-wtv6f" event={"ID":"84267ae4-f0e0-409a-b261-cc9344c4c47b","Type":"ContainerDied","Data":"850259f34d0478c24e877609ab39d040c457ede2f98189e39029b12394ee5a88"} Nov 21 20:30:07 crc kubenswrapper[4727]: I1121 20:30:07.951359 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="850259f34d0478c24e877609ab39d040c457ede2f98189e39029b12394ee5a88" Nov 21 20:30:07 crc kubenswrapper[4727]: I1121 20:30:07.951413 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-wtv6f" Nov 21 20:30:07 crc kubenswrapper[4727]: E1121 20:30:07.977269 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.io/openstack-k8s-operators/sg-core:latest: Requesting bearer token: invalid status code from registry 504 (Gateway Timeout)" image="quay.io/openstack-k8s-operators/sg-core:latest" Nov 21 20:30:07 crc kubenswrapper[4727]: E1121 20:30:07.977451 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:sg-core,Image:quay.io/openstack-k8s-operators/sg-core:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:sg-core-conf-yaml,ReadOnly:false,MountPath:/etc/sg-core.conf.yaml,SubPath:sg-core.conf.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mbmhw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(501fa221-2089-47bd-999a-510bc80b25d6): ErrImagePull: initializing source docker://quay.io/openstack-k8s-operators/sg-core:latest: Requesting 
bearer token: invalid status code from registry 504 (Gateway Timeout)" logger="UnhandledError" Nov 21 20:30:08 crc kubenswrapper[4727]: I1121 20:30:08.111546 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 21 20:30:08 crc kubenswrapper[4727]: I1121 20:30:08.112395 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="05dc461f-0e12-42c3-9cab-44bdb182c333" containerName="nova-api-api" containerID="cri-o://cce2d521c35dc4a431a83619f641b0ed90149e44f904a65b379c42b36dc76013" gracePeriod=30 Nov 21 20:30:08 crc kubenswrapper[4727]: I1121 20:30:08.112607 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="05dc461f-0e12-42c3-9cab-44bdb182c333" containerName="nova-api-log" containerID="cri-o://2f8cfaa20f2ac6d125e1dd4f01574c6b85f680f7b3b1ab2a8670ebf61bfd3da0" gracePeriod=30 Nov 21 20:30:08 crc kubenswrapper[4727]: I1121 20:30:08.125947 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 20:30:08 crc kubenswrapper[4727]: I1121 20:30:08.126203 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="4932d865-5b5e-481d-bf79-68f449c97745" containerName="nova-scheduler-scheduler" containerID="cri-o://d91f76eca9edd9bdc7eab2fafc1206947068587eb3e15c3d03c924fe4c43da4a" gracePeriod=30 Nov 21 20:30:08 crc kubenswrapper[4727]: I1121 20:30:08.147543 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 20:30:08 crc kubenswrapper[4727]: I1121 20:30:08.147795 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="26527e17-f1bb-4a28-a8ef-4b7fc0654e91" containerName="nova-metadata-log" containerID="cri-o://6760f104b530e3472e88fb4ec50c73674e06d206f00b6bbe34bb8fe44b69b114" gracePeriod=30 Nov 21 20:30:08 crc kubenswrapper[4727]: I1121 
20:30:08.147890 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="26527e17-f1bb-4a28-a8ef-4b7fc0654e91" containerName="nova-metadata-metadata" containerID="cri-o://6d0f722eb658f10f93a039737243d0e003b8fd7d6be9b14b562443d186f552c2" gracePeriod=30 Nov 21 20:30:08 crc kubenswrapper[4727]: I1121 20:30:08.966505 4727 generic.go:334] "Generic (PLEG): container finished" podID="26527e17-f1bb-4a28-a8ef-4b7fc0654e91" containerID="6760f104b530e3472e88fb4ec50c73674e06d206f00b6bbe34bb8fe44b69b114" exitCode=143 Nov 21 20:30:08 crc kubenswrapper[4727]: I1121 20:30:08.966819 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"26527e17-f1bb-4a28-a8ef-4b7fc0654e91","Type":"ContainerDied","Data":"6760f104b530e3472e88fb4ec50c73674e06d206f00b6bbe34bb8fe44b69b114"} Nov 21 20:30:08 crc kubenswrapper[4727]: I1121 20:30:08.968853 4727 generic.go:334] "Generic (PLEG): container finished" podID="05dc461f-0e12-42c3-9cab-44bdb182c333" containerID="2f8cfaa20f2ac6d125e1dd4f01574c6b85f680f7b3b1ab2a8670ebf61bfd3da0" exitCode=143 Nov 21 20:30:08 crc kubenswrapper[4727]: I1121 20:30:08.968886 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"05dc461f-0e12-42c3-9cab-44bdb182c333","Type":"ContainerDied","Data":"2f8cfaa20f2ac6d125e1dd4f01574c6b85f680f7b3b1ab2a8670ebf61bfd3da0"} Nov 21 20:30:09 crc kubenswrapper[4727]: E1121 20:30:09.273950 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"sg-core\" with ErrImagePull: \"initializing source docker://quay.io/openstack-k8s-operators/sg-core:latest: Requesting bearer token: invalid status code from registry 504 (Gateway Timeout)\"" pod="openstack/ceilometer-0" podUID="501fa221-2089-47bd-999a-510bc80b25d6" Nov 21 20:30:09 crc kubenswrapper[4727]: I1121 20:30:09.982833 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"501fa221-2089-47bd-999a-510bc80b25d6","Type":"ContainerStarted","Data":"ee630afd9cd1492631b5367f56b7ec69885aecbd27160685948f970f632ff98e"} Nov 21 20:30:09 crc kubenswrapper[4727]: I1121 20:30:09.984277 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 21 20:30:09 crc kubenswrapper[4727]: E1121 20:30:09.986326 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"sg-core\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/sg-core:latest\\\"\"" pod="openstack/ceilometer-0" podUID="501fa221-2089-47bd-999a-510bc80b25d6" Nov 21 20:30:10 crc kubenswrapper[4727]: E1121 20:30:10.993473 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"sg-core\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/sg-core:latest\\\"\"" pod="openstack/ceilometer-0" podUID="501fa221-2089-47bd-999a-510bc80b25d6" Nov 21 20:30:11 crc kubenswrapper[4727]: I1121 20:30:11.272468 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="26527e17-f1bb-4a28-a8ef-4b7fc0654e91" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.245:8775/\": read tcp 10.217.0.2:48544->10.217.0.245:8775: read: connection reset by peer" Nov 21 20:30:11 crc kubenswrapper[4727]: I1121 20:30:11.272497 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="26527e17-f1bb-4a28-a8ef-4b7fc0654e91" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.245:8775/\": read tcp 10.217.0.2:48542->10.217.0.245:8775: read: connection reset by peer" Nov 21 20:30:11 crc kubenswrapper[4727]: I1121 20:30:11.840498 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 21 20:30:11 crc kubenswrapper[4727]: I1121 20:30:11.856202 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 21 20:30:11 crc kubenswrapper[4727]: I1121 20:30:11.959894 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/05dc461f-0e12-42c3-9cab-44bdb182c333-public-tls-certs\") pod \"05dc461f-0e12-42c3-9cab-44bdb182c333\" (UID: \"05dc461f-0e12-42c3-9cab-44bdb182c333\") " Nov 21 20:30:11 crc kubenswrapper[4727]: I1121 20:30:11.960045 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05dc461f-0e12-42c3-9cab-44bdb182c333-logs\") pod \"05dc461f-0e12-42c3-9cab-44bdb182c333\" (UID: \"05dc461f-0e12-42c3-9cab-44bdb182c333\") " Nov 21 20:30:11 crc kubenswrapper[4727]: I1121 20:30:11.960113 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qcf7k\" (UniqueName: \"kubernetes.io/projected/05dc461f-0e12-42c3-9cab-44bdb182c333-kube-api-access-qcf7k\") pod \"05dc461f-0e12-42c3-9cab-44bdb182c333\" (UID: \"05dc461f-0e12-42c3-9cab-44bdb182c333\") " Nov 21 20:30:11 crc kubenswrapper[4727]: I1121 20:30:11.960151 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26527e17-f1bb-4a28-a8ef-4b7fc0654e91-logs\") pod \"26527e17-f1bb-4a28-a8ef-4b7fc0654e91\" (UID: \"26527e17-f1bb-4a28-a8ef-4b7fc0654e91\") " Nov 21 20:30:11 crc kubenswrapper[4727]: I1121 20:30:11.960227 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05dc461f-0e12-42c3-9cab-44bdb182c333-config-data\") pod \"05dc461f-0e12-42c3-9cab-44bdb182c333\" (UID: \"05dc461f-0e12-42c3-9cab-44bdb182c333\") " Nov 21 20:30:11 crc 
kubenswrapper[4727]: I1121 20:30:11.960295 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/05dc461f-0e12-42c3-9cab-44bdb182c333-internal-tls-certs\") pod \"05dc461f-0e12-42c3-9cab-44bdb182c333\" (UID: \"05dc461f-0e12-42c3-9cab-44bdb182c333\") " Nov 21 20:30:11 crc kubenswrapper[4727]: I1121 20:30:11.960402 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bcbhb\" (UniqueName: \"kubernetes.io/projected/26527e17-f1bb-4a28-a8ef-4b7fc0654e91-kube-api-access-bcbhb\") pod \"26527e17-f1bb-4a28-a8ef-4b7fc0654e91\" (UID: \"26527e17-f1bb-4a28-a8ef-4b7fc0654e91\") " Nov 21 20:30:11 crc kubenswrapper[4727]: I1121 20:30:11.960480 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26527e17-f1bb-4a28-a8ef-4b7fc0654e91-config-data\") pod \"26527e17-f1bb-4a28-a8ef-4b7fc0654e91\" (UID: \"26527e17-f1bb-4a28-a8ef-4b7fc0654e91\") " Nov 21 20:30:11 crc kubenswrapper[4727]: I1121 20:30:11.960537 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/26527e17-f1bb-4a28-a8ef-4b7fc0654e91-nova-metadata-tls-certs\") pod \"26527e17-f1bb-4a28-a8ef-4b7fc0654e91\" (UID: \"26527e17-f1bb-4a28-a8ef-4b7fc0654e91\") " Nov 21 20:30:11 crc kubenswrapper[4727]: I1121 20:30:11.960589 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26527e17-f1bb-4a28-a8ef-4b7fc0654e91-combined-ca-bundle\") pod \"26527e17-f1bb-4a28-a8ef-4b7fc0654e91\" (UID: \"26527e17-f1bb-4a28-a8ef-4b7fc0654e91\") " Nov 21 20:30:11 crc kubenswrapper[4727]: I1121 20:30:11.960618 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/05dc461f-0e12-42c3-9cab-44bdb182c333-combined-ca-bundle\") pod \"05dc461f-0e12-42c3-9cab-44bdb182c333\" (UID: \"05dc461f-0e12-42c3-9cab-44bdb182c333\") " Nov 21 20:30:11 crc kubenswrapper[4727]: I1121 20:30:11.961868 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26527e17-f1bb-4a28-a8ef-4b7fc0654e91-logs" (OuterVolumeSpecName: "logs") pod "26527e17-f1bb-4a28-a8ef-4b7fc0654e91" (UID: "26527e17-f1bb-4a28-a8ef-4b7fc0654e91"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:30:11 crc kubenswrapper[4727]: I1121 20:30:11.961907 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05dc461f-0e12-42c3-9cab-44bdb182c333-logs" (OuterVolumeSpecName: "logs") pod "05dc461f-0e12-42c3-9cab-44bdb182c333" (UID: "05dc461f-0e12-42c3-9cab-44bdb182c333"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:30:11 crc kubenswrapper[4727]: I1121 20:30:11.963240 4727 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05dc461f-0e12-42c3-9cab-44bdb182c333-logs\") on node \"crc\" DevicePath \"\"" Nov 21 20:30:11 crc kubenswrapper[4727]: I1121 20:30:11.963266 4727 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26527e17-f1bb-4a28-a8ef-4b7fc0654e91-logs\") on node \"crc\" DevicePath \"\"" Nov 21 20:30:11 crc kubenswrapper[4727]: I1121 20:30:11.967985 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05dc461f-0e12-42c3-9cab-44bdb182c333-kube-api-access-qcf7k" (OuterVolumeSpecName: "kube-api-access-qcf7k") pod "05dc461f-0e12-42c3-9cab-44bdb182c333" (UID: "05dc461f-0e12-42c3-9cab-44bdb182c333"). InnerVolumeSpecName "kube-api-access-qcf7k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:30:11 crc kubenswrapper[4727]: I1121 20:30:11.979751 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26527e17-f1bb-4a28-a8ef-4b7fc0654e91-kube-api-access-bcbhb" (OuterVolumeSpecName: "kube-api-access-bcbhb") pod "26527e17-f1bb-4a28-a8ef-4b7fc0654e91" (UID: "26527e17-f1bb-4a28-a8ef-4b7fc0654e91"). InnerVolumeSpecName "kube-api-access-bcbhb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.012664 4727 generic.go:334] "Generic (PLEG): container finished" podID="05dc461f-0e12-42c3-9cab-44bdb182c333" containerID="cce2d521c35dc4a431a83619f641b0ed90149e44f904a65b379c42b36dc76013" exitCode=0 Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.012824 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.013143 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26527e17-f1bb-4a28-a8ef-4b7fc0654e91-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "26527e17-f1bb-4a28-a8ef-4b7fc0654e91" (UID: "26527e17-f1bb-4a28-a8ef-4b7fc0654e91"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.013186 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"05dc461f-0e12-42c3-9cab-44bdb182c333","Type":"ContainerDied","Data":"cce2d521c35dc4a431a83619f641b0ed90149e44f904a65b379c42b36dc76013"} Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.013220 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"05dc461f-0e12-42c3-9cab-44bdb182c333","Type":"ContainerDied","Data":"85056f25e1739bef3006e5a189b0b10e7917b1760c31547b73903aec0a7e7551"} Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.013239 4727 scope.go:117] "RemoveContainer" containerID="cce2d521c35dc4a431a83619f641b0ed90149e44f904a65b379c42b36dc76013" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.028620 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05dc461f-0e12-42c3-9cab-44bdb182c333-config-data" (OuterVolumeSpecName: "config-data") pod "05dc461f-0e12-42c3-9cab-44bdb182c333" (UID: "05dc461f-0e12-42c3-9cab-44bdb182c333"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.030109 4727 generic.go:334] "Generic (PLEG): container finished" podID="26527e17-f1bb-4a28-a8ef-4b7fc0654e91" containerID="6d0f722eb658f10f93a039737243d0e003b8fd7d6be9b14b562443d186f552c2" exitCode=0 Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.030204 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.030646 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05dc461f-0e12-42c3-9cab-44bdb182c333-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "05dc461f-0e12-42c3-9cab-44bdb182c333" (UID: "05dc461f-0e12-42c3-9cab-44bdb182c333"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.030232 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"26527e17-f1bb-4a28-a8ef-4b7fc0654e91","Type":"ContainerDied","Data":"6d0f722eb658f10f93a039737243d0e003b8fd7d6be9b14b562443d186f552c2"} Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.031069 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"26527e17-f1bb-4a28-a8ef-4b7fc0654e91","Type":"ContainerDied","Data":"f5094b787f50cc67ca20b983c2235374a1d918fae5e39e649fe8869f9c820bf1"} Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.043208 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26527e17-f1bb-4a28-a8ef-4b7fc0654e91-config-data" (OuterVolumeSpecName: "config-data") pod "26527e17-f1bb-4a28-a8ef-4b7fc0654e91" (UID: "26527e17-f1bb-4a28-a8ef-4b7fc0654e91"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.061510 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05dc461f-0e12-42c3-9cab-44bdb182c333-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "05dc461f-0e12-42c3-9cab-44bdb182c333" (UID: "05dc461f-0e12-42c3-9cab-44bdb182c333"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.064246 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05dc461f-0e12-42c3-9cab-44bdb182c333-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "05dc461f-0e12-42c3-9cab-44bdb182c333" (UID: "05dc461f-0e12-42c3-9cab-44bdb182c333"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.064615 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/05dc461f-0e12-42c3-9cab-44bdb182c333-internal-tls-certs\") pod \"05dc461f-0e12-42c3-9cab-44bdb182c333\" (UID: \"05dc461f-0e12-42c3-9cab-44bdb182c333\") " Nov 21 20:30:12 crc kubenswrapper[4727]: W1121 20:30:12.064731 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/05dc461f-0e12-42c3-9cab-44bdb182c333/volumes/kubernetes.io~secret/internal-tls-certs Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.064747 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05dc461f-0e12-42c3-9cab-44bdb182c333-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "05dc461f-0e12-42c3-9cab-44bdb182c333" (UID: "05dc461f-0e12-42c3-9cab-44bdb182c333"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.065217 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bcbhb\" (UniqueName: \"kubernetes.io/projected/26527e17-f1bb-4a28-a8ef-4b7fc0654e91-kube-api-access-bcbhb\") on node \"crc\" DevicePath \"\"" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.065237 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26527e17-f1bb-4a28-a8ef-4b7fc0654e91-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.065249 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26527e17-f1bb-4a28-a8ef-4b7fc0654e91-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.065258 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05dc461f-0e12-42c3-9cab-44bdb182c333-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.065271 4727 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/05dc461f-0e12-42c3-9cab-44bdb182c333-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.065279 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qcf7k\" (UniqueName: \"kubernetes.io/projected/05dc461f-0e12-42c3-9cab-44bdb182c333-kube-api-access-qcf7k\") on node \"crc\" DevicePath \"\"" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.065287 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05dc461f-0e12-42c3-9cab-44bdb182c333-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 
20:30:12.065295 4727 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/05dc461f-0e12-42c3-9cab-44bdb182c333-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.070568 4727 scope.go:117] "RemoveContainer" containerID="2f8cfaa20f2ac6d125e1dd4f01574c6b85f680f7b3b1ab2a8670ebf61bfd3da0" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.074216 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26527e17-f1bb-4a28-a8ef-4b7fc0654e91-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "26527e17-f1bb-4a28-a8ef-4b7fc0654e91" (UID: "26527e17-f1bb-4a28-a8ef-4b7fc0654e91"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.095243 4727 scope.go:117] "RemoveContainer" containerID="cce2d521c35dc4a431a83619f641b0ed90149e44f904a65b379c42b36dc76013" Nov 21 20:30:12 crc kubenswrapper[4727]: E1121 20:30:12.095717 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cce2d521c35dc4a431a83619f641b0ed90149e44f904a65b379c42b36dc76013\": container with ID starting with cce2d521c35dc4a431a83619f641b0ed90149e44f904a65b379c42b36dc76013 not found: ID does not exist" containerID="cce2d521c35dc4a431a83619f641b0ed90149e44f904a65b379c42b36dc76013" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.095749 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cce2d521c35dc4a431a83619f641b0ed90149e44f904a65b379c42b36dc76013"} err="failed to get container status \"cce2d521c35dc4a431a83619f641b0ed90149e44f904a65b379c42b36dc76013\": rpc error: code = NotFound desc = could not find container \"cce2d521c35dc4a431a83619f641b0ed90149e44f904a65b379c42b36dc76013\": container with ID 
starting with cce2d521c35dc4a431a83619f641b0ed90149e44f904a65b379c42b36dc76013 not found: ID does not exist" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.095773 4727 scope.go:117] "RemoveContainer" containerID="2f8cfaa20f2ac6d125e1dd4f01574c6b85f680f7b3b1ab2a8670ebf61bfd3da0" Nov 21 20:30:12 crc kubenswrapper[4727]: E1121 20:30:12.096099 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f8cfaa20f2ac6d125e1dd4f01574c6b85f680f7b3b1ab2a8670ebf61bfd3da0\": container with ID starting with 2f8cfaa20f2ac6d125e1dd4f01574c6b85f680f7b3b1ab2a8670ebf61bfd3da0 not found: ID does not exist" containerID="2f8cfaa20f2ac6d125e1dd4f01574c6b85f680f7b3b1ab2a8670ebf61bfd3da0" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.096122 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f8cfaa20f2ac6d125e1dd4f01574c6b85f680f7b3b1ab2a8670ebf61bfd3da0"} err="failed to get container status \"2f8cfaa20f2ac6d125e1dd4f01574c6b85f680f7b3b1ab2a8670ebf61bfd3da0\": rpc error: code = NotFound desc = could not find container \"2f8cfaa20f2ac6d125e1dd4f01574c6b85f680f7b3b1ab2a8670ebf61bfd3da0\": container with ID starting with 2f8cfaa20f2ac6d125e1dd4f01574c6b85f680f7b3b1ab2a8670ebf61bfd3da0 not found: ID does not exist" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.096135 4727 scope.go:117] "RemoveContainer" containerID="6d0f722eb658f10f93a039737243d0e003b8fd7d6be9b14b562443d186f552c2" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.127810 4727 scope.go:117] "RemoveContainer" containerID="6760f104b530e3472e88fb4ec50c73674e06d206f00b6bbe34bb8fe44b69b114" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.168655 4727 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/26527e17-f1bb-4a28-a8ef-4b7fc0654e91-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 
20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.169513 4727 scope.go:117] "RemoveContainer" containerID="6d0f722eb658f10f93a039737243d0e003b8fd7d6be9b14b562443d186f552c2" Nov 21 20:30:12 crc kubenswrapper[4727]: E1121 20:30:12.170248 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d0f722eb658f10f93a039737243d0e003b8fd7d6be9b14b562443d186f552c2\": container with ID starting with 6d0f722eb658f10f93a039737243d0e003b8fd7d6be9b14b562443d186f552c2 not found: ID does not exist" containerID="6d0f722eb658f10f93a039737243d0e003b8fd7d6be9b14b562443d186f552c2" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.171659 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d0f722eb658f10f93a039737243d0e003b8fd7d6be9b14b562443d186f552c2"} err="failed to get container status \"6d0f722eb658f10f93a039737243d0e003b8fd7d6be9b14b562443d186f552c2\": rpc error: code = NotFound desc = could not find container \"6d0f722eb658f10f93a039737243d0e003b8fd7d6be9b14b562443d186f552c2\": container with ID starting with 6d0f722eb658f10f93a039737243d0e003b8fd7d6be9b14b562443d186f552c2 not found: ID does not exist" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.171781 4727 scope.go:117] "RemoveContainer" containerID="6760f104b530e3472e88fb4ec50c73674e06d206f00b6bbe34bb8fe44b69b114" Nov 21 20:30:12 crc kubenswrapper[4727]: E1121 20:30:12.172491 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6760f104b530e3472e88fb4ec50c73674e06d206f00b6bbe34bb8fe44b69b114\": container with ID starting with 6760f104b530e3472e88fb4ec50c73674e06d206f00b6bbe34bb8fe44b69b114 not found: ID does not exist" containerID="6760f104b530e3472e88fb4ec50c73674e06d206f00b6bbe34bb8fe44b69b114" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.172565 4727 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"6760f104b530e3472e88fb4ec50c73674e06d206f00b6bbe34bb8fe44b69b114"} err="failed to get container status \"6760f104b530e3472e88fb4ec50c73674e06d206f00b6bbe34bb8fe44b69b114\": rpc error: code = NotFound desc = could not find container \"6760f104b530e3472e88fb4ec50c73674e06d206f00b6bbe34bb8fe44b69b114\": container with ID starting with 6760f104b530e3472e88fb4ec50c73674e06d206f00b6bbe34bb8fe44b69b114 not found: ID does not exist" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.421364 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.446729 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.459209 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 21 20:30:12 crc kubenswrapper[4727]: E1121 20:30:12.459918 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26527e17-f1bb-4a28-a8ef-4b7fc0654e91" containerName="nova-metadata-metadata" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.459938 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="26527e17-f1bb-4a28-a8ef-4b7fc0654e91" containerName="nova-metadata-metadata" Nov 21 20:30:12 crc kubenswrapper[4727]: E1121 20:30:12.459947 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84267ae4-f0e0-409a-b261-cc9344c4c47b" containerName="nova-manage" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.459968 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="84267ae4-f0e0-409a-b261-cc9344c4c47b" containerName="nova-manage" Nov 21 20:30:12 crc kubenswrapper[4727]: E1121 20:30:12.460000 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05dc461f-0e12-42c3-9cab-44bdb182c333" containerName="nova-api-log" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.460008 4727 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="05dc461f-0e12-42c3-9cab-44bdb182c333" containerName="nova-api-log" Nov 21 20:30:12 crc kubenswrapper[4727]: E1121 20:30:12.460032 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26527e17-f1bb-4a28-a8ef-4b7fc0654e91" containerName="nova-metadata-log" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.460039 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="26527e17-f1bb-4a28-a8ef-4b7fc0654e91" containerName="nova-metadata-log" Nov 21 20:30:12 crc kubenswrapper[4727]: E1121 20:30:12.460059 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05dc461f-0e12-42c3-9cab-44bdb182c333" containerName="nova-api-api" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.460068 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="05dc461f-0e12-42c3-9cab-44bdb182c333" containerName="nova-api-api" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.460390 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="84267ae4-f0e0-409a-b261-cc9344c4c47b" containerName="nova-manage" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.460411 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="26527e17-f1bb-4a28-a8ef-4b7fc0654e91" containerName="nova-metadata-log" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.460426 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="05dc461f-0e12-42c3-9cab-44bdb182c333" containerName="nova-api-log" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.460448 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="05dc461f-0e12-42c3-9cab-44bdb182c333" containerName="nova-api-api" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.460459 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="26527e17-f1bb-4a28-a8ef-4b7fc0654e91" containerName="nova-metadata-metadata" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.461908 4727 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.464217 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.464235 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.464735 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.483011 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 21 20:30:12 crc kubenswrapper[4727]: E1121 20:30:12.587238 4727 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d91f76eca9edd9bdc7eab2fafc1206947068587eb3e15c3d03c924fe4c43da4a is running failed: container process not found" containerID="d91f76eca9edd9bdc7eab2fafc1206947068587eb3e15c3d03c924fe4c43da4a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 21 20:30:12 crc kubenswrapper[4727]: E1121 20:30:12.588145 4727 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d91f76eca9edd9bdc7eab2fafc1206947068587eb3e15c3d03c924fe4c43da4a is running failed: container process not found" containerID="d91f76eca9edd9bdc7eab2fafc1206947068587eb3e15c3d03c924fe4c43da4a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.588324 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7918d86d-0dcf-4ec6-a875-6e195db4e361-public-tls-certs\") pod \"nova-api-0\" (UID: \"7918d86d-0dcf-4ec6-a875-6e195db4e361\") " 
pod="openstack/nova-api-0" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.588422 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7918d86d-0dcf-4ec6-a875-6e195db4e361-config-data\") pod \"nova-api-0\" (UID: \"7918d86d-0dcf-4ec6-a875-6e195db4e361\") " pod="openstack/nova-api-0" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.588470 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7918d86d-0dcf-4ec6-a875-6e195db4e361-logs\") pod \"nova-api-0\" (UID: \"7918d86d-0dcf-4ec6-a875-6e195db4e361\") " pod="openstack/nova-api-0" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.588532 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7918d86d-0dcf-4ec6-a875-6e195db4e361-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7918d86d-0dcf-4ec6-a875-6e195db4e361\") " pod="openstack/nova-api-0" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.588781 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7918d86d-0dcf-4ec6-a875-6e195db4e361-internal-tls-certs\") pod \"nova-api-0\" (UID: \"7918d86d-0dcf-4ec6-a875-6e195db4e361\") " pod="openstack/nova-api-0" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.588992 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k56b\" (UniqueName: \"kubernetes.io/projected/7918d86d-0dcf-4ec6-a875-6e195db4e361-kube-api-access-4k56b\") pod \"nova-api-0\" (UID: \"7918d86d-0dcf-4ec6-a875-6e195db4e361\") " pod="openstack/nova-api-0" Nov 21 20:30:12 crc kubenswrapper[4727]: E1121 20:30:12.590356 4727 log.go:32] "ExecSync cmd from runtime service 
failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d91f76eca9edd9bdc7eab2fafc1206947068587eb3e15c3d03c924fe4c43da4a is running failed: container process not found" containerID="d91f76eca9edd9bdc7eab2fafc1206947068587eb3e15c3d03c924fe4c43da4a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 21 20:30:12 crc kubenswrapper[4727]: E1121 20:30:12.590398 4727 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d91f76eca9edd9bdc7eab2fafc1206947068587eb3e15c3d03c924fe4c43da4a is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="4932d865-5b5e-481d-bf79-68f449c97745" containerName="nova-scheduler-scheduler" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.598629 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.621457 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.636384 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.641271 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.644069 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.644414 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.648413 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.691006 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b40e144-3a25-45f2-9ea2-f81e211510e2-config-data\") pod \"nova-metadata-0\" (UID: \"6b40e144-3a25-45f2-9ea2-f81e211510e2\") " pod="openstack/nova-metadata-0" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.691091 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7918d86d-0dcf-4ec6-a875-6e195db4e361-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7918d86d-0dcf-4ec6-a875-6e195db4e361\") " pod="openstack/nova-api-0" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.691279 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b40e144-3a25-45f2-9ea2-f81e211510e2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6b40e144-3a25-45f2-9ea2-f81e211510e2\") " pod="openstack/nova-metadata-0" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.691467 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b40e144-3a25-45f2-9ea2-f81e211510e2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: 
\"6b40e144-3a25-45f2-9ea2-f81e211510e2\") " pod="openstack/nova-metadata-0" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.691638 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ndvm\" (UniqueName: \"kubernetes.io/projected/6b40e144-3a25-45f2-9ea2-f81e211510e2-kube-api-access-9ndvm\") pod \"nova-metadata-0\" (UID: \"6b40e144-3a25-45f2-9ea2-f81e211510e2\") " pod="openstack/nova-metadata-0" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.691674 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b40e144-3a25-45f2-9ea2-f81e211510e2-logs\") pod \"nova-metadata-0\" (UID: \"6b40e144-3a25-45f2-9ea2-f81e211510e2\") " pod="openstack/nova-metadata-0" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.691804 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7918d86d-0dcf-4ec6-a875-6e195db4e361-internal-tls-certs\") pod \"nova-api-0\" (UID: \"7918d86d-0dcf-4ec6-a875-6e195db4e361\") " pod="openstack/nova-api-0" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.691990 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4k56b\" (UniqueName: \"kubernetes.io/projected/7918d86d-0dcf-4ec6-a875-6e195db4e361-kube-api-access-4k56b\") pod \"nova-api-0\" (UID: \"7918d86d-0dcf-4ec6-a875-6e195db4e361\") " pod="openstack/nova-api-0" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.692242 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7918d86d-0dcf-4ec6-a875-6e195db4e361-public-tls-certs\") pod \"nova-api-0\" (UID: \"7918d86d-0dcf-4ec6-a875-6e195db4e361\") " pod="openstack/nova-api-0" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.692401 4727 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7918d86d-0dcf-4ec6-a875-6e195db4e361-config-data\") pod \"nova-api-0\" (UID: \"7918d86d-0dcf-4ec6-a875-6e195db4e361\") " pod="openstack/nova-api-0" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.692848 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7918d86d-0dcf-4ec6-a875-6e195db4e361-logs\") pod \"nova-api-0\" (UID: \"7918d86d-0dcf-4ec6-a875-6e195db4e361\") " pod="openstack/nova-api-0" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.693413 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7918d86d-0dcf-4ec6-a875-6e195db4e361-logs\") pod \"nova-api-0\" (UID: \"7918d86d-0dcf-4ec6-a875-6e195db4e361\") " pod="openstack/nova-api-0" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.697494 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7918d86d-0dcf-4ec6-a875-6e195db4e361-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7918d86d-0dcf-4ec6-a875-6e195db4e361\") " pod="openstack/nova-api-0" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.698552 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7918d86d-0dcf-4ec6-a875-6e195db4e361-internal-tls-certs\") pod \"nova-api-0\" (UID: \"7918d86d-0dcf-4ec6-a875-6e195db4e361\") " pod="openstack/nova-api-0" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.700309 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7918d86d-0dcf-4ec6-a875-6e195db4e361-public-tls-certs\") pod \"nova-api-0\" (UID: \"7918d86d-0dcf-4ec6-a875-6e195db4e361\") " pod="openstack/nova-api-0" Nov 21 20:30:12 crc 
kubenswrapper[4727]: I1121 20:30:12.701637 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7918d86d-0dcf-4ec6-a875-6e195db4e361-config-data\") pod \"nova-api-0\" (UID: \"7918d86d-0dcf-4ec6-a875-6e195db4e361\") " pod="openstack/nova-api-0" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.713694 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4k56b\" (UniqueName: \"kubernetes.io/projected/7918d86d-0dcf-4ec6-a875-6e195db4e361-kube-api-access-4k56b\") pod \"nova-api-0\" (UID: \"7918d86d-0dcf-4ec6-a875-6e195db4e361\") " pod="openstack/nova-api-0" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.795206 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b40e144-3a25-45f2-9ea2-f81e211510e2-config-data\") pod \"nova-metadata-0\" (UID: \"6b40e144-3a25-45f2-9ea2-f81e211510e2\") " pod="openstack/nova-metadata-0" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.795276 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b40e144-3a25-45f2-9ea2-f81e211510e2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6b40e144-3a25-45f2-9ea2-f81e211510e2\") " pod="openstack/nova-metadata-0" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.795316 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b40e144-3a25-45f2-9ea2-f81e211510e2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6b40e144-3a25-45f2-9ea2-f81e211510e2\") " pod="openstack/nova-metadata-0" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.795349 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ndvm\" (UniqueName: 
\"kubernetes.io/projected/6b40e144-3a25-45f2-9ea2-f81e211510e2-kube-api-access-9ndvm\") pod \"nova-metadata-0\" (UID: \"6b40e144-3a25-45f2-9ea2-f81e211510e2\") " pod="openstack/nova-metadata-0" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.795365 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b40e144-3a25-45f2-9ea2-f81e211510e2-logs\") pod \"nova-metadata-0\" (UID: \"6b40e144-3a25-45f2-9ea2-f81e211510e2\") " pod="openstack/nova-metadata-0" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.795862 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b40e144-3a25-45f2-9ea2-f81e211510e2-logs\") pod \"nova-metadata-0\" (UID: \"6b40e144-3a25-45f2-9ea2-f81e211510e2\") " pod="openstack/nova-metadata-0" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.799526 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b40e144-3a25-45f2-9ea2-f81e211510e2-config-data\") pod \"nova-metadata-0\" (UID: \"6b40e144-3a25-45f2-9ea2-f81e211510e2\") " pod="openstack/nova-metadata-0" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.799617 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b40e144-3a25-45f2-9ea2-f81e211510e2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6b40e144-3a25-45f2-9ea2-f81e211510e2\") " pod="openstack/nova-metadata-0" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.804476 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b40e144-3a25-45f2-9ea2-f81e211510e2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6b40e144-3a25-45f2-9ea2-f81e211510e2\") " pod="openstack/nova-metadata-0" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 
20:30:12.816021 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ndvm\" (UniqueName: \"kubernetes.io/projected/6b40e144-3a25-45f2-9ea2-f81e211510e2-kube-api-access-9ndvm\") pod \"nova-metadata-0\" (UID: \"6b40e144-3a25-45f2-9ea2-f81e211510e2\") " pod="openstack/nova-metadata-0" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.872486 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.886797 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 21 20:30:12 crc kubenswrapper[4727]: I1121 20:30:12.985814 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 21 20:30:13 crc kubenswrapper[4727]: I1121 20:30:13.005835 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4932d865-5b5e-481d-bf79-68f449c97745-combined-ca-bundle\") pod \"4932d865-5b5e-481d-bf79-68f449c97745\" (UID: \"4932d865-5b5e-481d-bf79-68f449c97745\") " Nov 21 20:30:13 crc kubenswrapper[4727]: I1121 20:30:13.005934 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xsrbt\" (UniqueName: \"kubernetes.io/projected/4932d865-5b5e-481d-bf79-68f449c97745-kube-api-access-xsrbt\") pod \"4932d865-5b5e-481d-bf79-68f449c97745\" (UID: \"4932d865-5b5e-481d-bf79-68f449c97745\") " Nov 21 20:30:13 crc kubenswrapper[4727]: I1121 20:30:13.006136 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4932d865-5b5e-481d-bf79-68f449c97745-config-data\") pod \"4932d865-5b5e-481d-bf79-68f449c97745\" (UID: \"4932d865-5b5e-481d-bf79-68f449c97745\") " Nov 21 20:30:13 crc kubenswrapper[4727]: I1121 20:30:13.012572 4727 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4932d865-5b5e-481d-bf79-68f449c97745-kube-api-access-xsrbt" (OuterVolumeSpecName: "kube-api-access-xsrbt") pod "4932d865-5b5e-481d-bf79-68f449c97745" (UID: "4932d865-5b5e-481d-bf79-68f449c97745"). InnerVolumeSpecName "kube-api-access-xsrbt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:30:13 crc kubenswrapper[4727]: I1121 20:30:13.046381 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4932d865-5b5e-481d-bf79-68f449c97745-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4932d865-5b5e-481d-bf79-68f449c97745" (UID: "4932d865-5b5e-481d-bf79-68f449c97745"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:30:13 crc kubenswrapper[4727]: I1121 20:30:13.055178 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4932d865-5b5e-481d-bf79-68f449c97745-config-data" (OuterVolumeSpecName: "config-data") pod "4932d865-5b5e-481d-bf79-68f449c97745" (UID: "4932d865-5b5e-481d-bf79-68f449c97745"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:30:13 crc kubenswrapper[4727]: I1121 20:30:13.061290 4727 generic.go:334] "Generic (PLEG): container finished" podID="4932d865-5b5e-481d-bf79-68f449c97745" containerID="d91f76eca9edd9bdc7eab2fafc1206947068587eb3e15c3d03c924fe4c43da4a" exitCode=0 Nov 21 20:30:13 crc kubenswrapper[4727]: I1121 20:30:13.061323 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4932d865-5b5e-481d-bf79-68f449c97745","Type":"ContainerDied","Data":"d91f76eca9edd9bdc7eab2fafc1206947068587eb3e15c3d03c924fe4c43da4a"} Nov 21 20:30:13 crc kubenswrapper[4727]: I1121 20:30:13.061343 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4932d865-5b5e-481d-bf79-68f449c97745","Type":"ContainerDied","Data":"7057cb6ee20265ab8a0c871356984ed521c7450726cb8543a4b6c6a2907e86c4"} Nov 21 20:30:13 crc kubenswrapper[4727]: I1121 20:30:13.061359 4727 scope.go:117] "RemoveContainer" containerID="d91f76eca9edd9bdc7eab2fafc1206947068587eb3e15c3d03c924fe4c43da4a" Nov 21 20:30:13 crc kubenswrapper[4727]: I1121 20:30:13.061485 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 21 20:30:13 crc kubenswrapper[4727]: I1121 20:30:13.109006 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4932d865-5b5e-481d-bf79-68f449c97745-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:30:13 crc kubenswrapper[4727]: I1121 20:30:13.109055 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xsrbt\" (UniqueName: \"kubernetes.io/projected/4932d865-5b5e-481d-bf79-68f449c97745-kube-api-access-xsrbt\") on node \"crc\" DevicePath \"\"" Nov 21 20:30:13 crc kubenswrapper[4727]: I1121 20:30:13.109071 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4932d865-5b5e-481d-bf79-68f449c97745-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 20:30:13 crc kubenswrapper[4727]: I1121 20:30:13.112154 4727 scope.go:117] "RemoveContainer" containerID="d91f76eca9edd9bdc7eab2fafc1206947068587eb3e15c3d03c924fe4c43da4a" Nov 21 20:30:13 crc kubenswrapper[4727]: I1121 20:30:13.113061 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 20:30:13 crc kubenswrapper[4727]: E1121 20:30:13.113432 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d91f76eca9edd9bdc7eab2fafc1206947068587eb3e15c3d03c924fe4c43da4a\": container with ID starting with d91f76eca9edd9bdc7eab2fafc1206947068587eb3e15c3d03c924fe4c43da4a not found: ID does not exist" containerID="d91f76eca9edd9bdc7eab2fafc1206947068587eb3e15c3d03c924fe4c43da4a" Nov 21 20:30:13 crc kubenswrapper[4727]: I1121 20:30:13.113476 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d91f76eca9edd9bdc7eab2fafc1206947068587eb3e15c3d03c924fe4c43da4a"} err="failed to get container status 
\"d91f76eca9edd9bdc7eab2fafc1206947068587eb3e15c3d03c924fe4c43da4a\": rpc error: code = NotFound desc = could not find container \"d91f76eca9edd9bdc7eab2fafc1206947068587eb3e15c3d03c924fe4c43da4a\": container with ID starting with d91f76eca9edd9bdc7eab2fafc1206947068587eb3e15c3d03c924fe4c43da4a not found: ID does not exist" Nov 21 20:30:13 crc kubenswrapper[4727]: I1121 20:30:13.139982 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 20:30:13 crc kubenswrapper[4727]: I1121 20:30:13.154114 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 20:30:13 crc kubenswrapper[4727]: E1121 20:30:13.154761 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4932d865-5b5e-481d-bf79-68f449c97745" containerName="nova-scheduler-scheduler" Nov 21 20:30:13 crc kubenswrapper[4727]: I1121 20:30:13.154787 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="4932d865-5b5e-481d-bf79-68f449c97745" containerName="nova-scheduler-scheduler" Nov 21 20:30:13 crc kubenswrapper[4727]: I1121 20:30:13.155023 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="4932d865-5b5e-481d-bf79-68f449c97745" containerName="nova-scheduler-scheduler" Nov 21 20:30:13 crc kubenswrapper[4727]: I1121 20:30:13.155926 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 21 20:30:13 crc kubenswrapper[4727]: I1121 20:30:13.158946 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 21 20:30:13 crc kubenswrapper[4727]: I1121 20:30:13.165240 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 20:30:13 crc kubenswrapper[4727]: I1121 20:30:13.211679 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b25ac5c-f457-49cb-9c54-15115c3a6108-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1b25ac5c-f457-49cb-9c54-15115c3a6108\") " pod="openstack/nova-scheduler-0" Nov 21 20:30:13 crc kubenswrapper[4727]: I1121 20:30:13.211916 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8swqd\" (UniqueName: \"kubernetes.io/projected/1b25ac5c-f457-49cb-9c54-15115c3a6108-kube-api-access-8swqd\") pod \"nova-scheduler-0\" (UID: \"1b25ac5c-f457-49cb-9c54-15115c3a6108\") " pod="openstack/nova-scheduler-0" Nov 21 20:30:13 crc kubenswrapper[4727]: I1121 20:30:13.211952 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b25ac5c-f457-49cb-9c54-15115c3a6108-config-data\") pod \"nova-scheduler-0\" (UID: \"1b25ac5c-f457-49cb-9c54-15115c3a6108\") " pod="openstack/nova-scheduler-0" Nov 21 20:30:13 crc kubenswrapper[4727]: I1121 20:30:13.313642 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8swqd\" (UniqueName: \"kubernetes.io/projected/1b25ac5c-f457-49cb-9c54-15115c3a6108-kube-api-access-8swqd\") pod \"nova-scheduler-0\" (UID: \"1b25ac5c-f457-49cb-9c54-15115c3a6108\") " pod="openstack/nova-scheduler-0" Nov 21 20:30:13 crc kubenswrapper[4727]: I1121 20:30:13.313706 4727 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b25ac5c-f457-49cb-9c54-15115c3a6108-config-data\") pod \"nova-scheduler-0\" (UID: \"1b25ac5c-f457-49cb-9c54-15115c3a6108\") " pod="openstack/nova-scheduler-0" Nov 21 20:30:13 crc kubenswrapper[4727]: I1121 20:30:13.313792 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b25ac5c-f457-49cb-9c54-15115c3a6108-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1b25ac5c-f457-49cb-9c54-15115c3a6108\") " pod="openstack/nova-scheduler-0" Nov 21 20:30:13 crc kubenswrapper[4727]: I1121 20:30:13.318719 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b25ac5c-f457-49cb-9c54-15115c3a6108-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1b25ac5c-f457-49cb-9c54-15115c3a6108\") " pod="openstack/nova-scheduler-0" Nov 21 20:30:13 crc kubenswrapper[4727]: I1121 20:30:13.319415 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b25ac5c-f457-49cb-9c54-15115c3a6108-config-data\") pod \"nova-scheduler-0\" (UID: \"1b25ac5c-f457-49cb-9c54-15115c3a6108\") " pod="openstack/nova-scheduler-0" Nov 21 20:30:13 crc kubenswrapper[4727]: I1121 20:30:13.329975 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8swqd\" (UniqueName: \"kubernetes.io/projected/1b25ac5c-f457-49cb-9c54-15115c3a6108-kube-api-access-8swqd\") pod \"nova-scheduler-0\" (UID: \"1b25ac5c-f457-49cb-9c54-15115c3a6108\") " pod="openstack/nova-scheduler-0" Nov 21 20:30:13 crc kubenswrapper[4727]: I1121 20:30:13.335557 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 20:30:13 crc kubenswrapper[4727]: I1121 20:30:13.335609 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 20:30:13 crc kubenswrapper[4727]: I1121 20:30:13.335650 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" Nov 21 20:30:13 crc kubenswrapper[4727]: I1121 20:30:13.336826 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5c81a93a448fa0edbcf26720ad6bed9f6ecbbf23562d69ff342656c8199e62de"} pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 20:30:13 crc kubenswrapper[4727]: I1121 20:30:13.336871 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" containerID="cri-o://5c81a93a448fa0edbcf26720ad6bed9f6ecbbf23562d69ff342656c8199e62de" gracePeriod=600 Nov 21 20:30:13 crc kubenswrapper[4727]: W1121 20:30:13.412796 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7918d86d_0dcf_4ec6_a875_6e195db4e361.slice/crio-5b78456be84695304d2653e031c3c0c42f8d143cfb9b39c2e9baafc2121536e6 WatchSource:0}: Error finding container 5b78456be84695304d2653e031c3c0c42f8d143cfb9b39c2e9baafc2121536e6: Status 404 returned error can't find the container 
with id 5b78456be84695304d2653e031c3c0c42f8d143cfb9b39c2e9baafc2121536e6 Nov 21 20:30:13 crc kubenswrapper[4727]: I1121 20:30:13.422189 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 21 20:30:13 crc kubenswrapper[4727]: I1121 20:30:13.477296 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 21 20:30:13 crc kubenswrapper[4727]: I1121 20:30:13.549768 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05dc461f-0e12-42c3-9cab-44bdb182c333" path="/var/lib/kubelet/pods/05dc461f-0e12-42c3-9cab-44bdb182c333/volumes" Nov 21 20:30:13 crc kubenswrapper[4727]: I1121 20:30:13.551096 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26527e17-f1bb-4a28-a8ef-4b7fc0654e91" path="/var/lib/kubelet/pods/26527e17-f1bb-4a28-a8ef-4b7fc0654e91/volumes" Nov 21 20:30:13 crc kubenswrapper[4727]: I1121 20:30:13.551774 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4932d865-5b5e-481d-bf79-68f449c97745" path="/var/lib/kubelet/pods/4932d865-5b5e-481d-bf79-68f449c97745/volumes" Nov 21 20:30:13 crc kubenswrapper[4727]: I1121 20:30:13.552750 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 20:30:14 crc kubenswrapper[4727]: I1121 20:30:14.016159 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 20:30:14 crc kubenswrapper[4727]: W1121 20:30:14.016203 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1b25ac5c_f457_49cb_9c54_15115c3a6108.slice/crio-b8dacc3a6c52f9736a3d9936607cb6291573604cbc8a52ab0962d39e329db8e3 WatchSource:0}: Error finding container b8dacc3a6c52f9736a3d9936607cb6291573604cbc8a52ab0962d39e329db8e3: Status 404 returned error can't find the container with id b8dacc3a6c52f9736a3d9936607cb6291573604cbc8a52ab0962d39e329db8e3 Nov 21 
20:30:14 crc kubenswrapper[4727]: I1121 20:30:14.078802 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1b25ac5c-f457-49cb-9c54-15115c3a6108","Type":"ContainerStarted","Data":"b8dacc3a6c52f9736a3d9936607cb6291573604cbc8a52ab0962d39e329db8e3"} Nov 21 20:30:14 crc kubenswrapper[4727]: I1121 20:30:14.082764 4727 generic.go:334] "Generic (PLEG): container finished" podID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerID="5c81a93a448fa0edbcf26720ad6bed9f6ecbbf23562d69ff342656c8199e62de" exitCode=0 Nov 21 20:30:14 crc kubenswrapper[4727]: I1121 20:30:14.082873 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" event={"ID":"b58aef8f-f223-47d8-a2e6-4a80aeeeec42","Type":"ContainerDied","Data":"5c81a93a448fa0edbcf26720ad6bed9f6ecbbf23562d69ff342656c8199e62de"} Nov 21 20:30:14 crc kubenswrapper[4727]: I1121 20:30:14.082932 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" event={"ID":"b58aef8f-f223-47d8-a2e6-4a80aeeeec42","Type":"ContainerStarted","Data":"586a1c0353caf0aeb822b92be481b941c0fdb34b453b8251021ce7365988e0c9"} Nov 21 20:30:14 crc kubenswrapper[4727]: I1121 20:30:14.082975 4727 scope.go:117] "RemoveContainer" containerID="12b90de2a7321048685d69f0637a8522048d88e44715706aab3817c22993e4d5" Nov 21 20:30:14 crc kubenswrapper[4727]: I1121 20:30:14.085830 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6b40e144-3a25-45f2-9ea2-f81e211510e2","Type":"ContainerStarted","Data":"2249339879a4ddd2be66f67793cde2df1d298e407e033f0256b3228358c4ae55"} Nov 21 20:30:14 crc kubenswrapper[4727]: I1121 20:30:14.085883 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"6b40e144-3a25-45f2-9ea2-f81e211510e2","Type":"ContainerStarted","Data":"09b6d25c63b5e6e2112bda33b27e2214130a5671f651089663366af0af7da2a2"} Nov 21 20:30:14 crc kubenswrapper[4727]: I1121 20:30:14.085895 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6b40e144-3a25-45f2-9ea2-f81e211510e2","Type":"ContainerStarted","Data":"7f2f3035dbb9e7302247eb42e18dc9b55b6f8898a137664e0a8cf93475819d96"} Nov 21 20:30:14 crc kubenswrapper[4727]: I1121 20:30:14.102574 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7918d86d-0dcf-4ec6-a875-6e195db4e361","Type":"ContainerStarted","Data":"1f5cf6ba3be401ccbbe366feefd232d3f51200ec6fba70a3e61d304936d6ee58"} Nov 21 20:30:14 crc kubenswrapper[4727]: I1121 20:30:14.102643 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7918d86d-0dcf-4ec6-a875-6e195db4e361","Type":"ContainerStarted","Data":"3ba3864ccab29a280dc53df3de5bf2800c6abdcbc7fbb03fe19bb310d7c8b1ff"} Nov 21 20:30:14 crc kubenswrapper[4727]: I1121 20:30:14.102664 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7918d86d-0dcf-4ec6-a875-6e195db4e361","Type":"ContainerStarted","Data":"5b78456be84695304d2653e031c3c0c42f8d143cfb9b39c2e9baafc2121536e6"} Nov 21 20:30:14 crc kubenswrapper[4727]: I1121 20:30:14.124077 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.124047675 podStartE2EDuration="2.124047675s" podCreationTimestamp="2025-11-21 20:30:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:30:14.119660318 +0000 UTC m=+1419.305845372" watchObservedRunningTime="2025-11-21 20:30:14.124047675 +0000 UTC m=+1419.310232709" Nov 21 20:30:14 crc kubenswrapper[4727]: I1121 20:30:14.162093 4727 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.162072315 podStartE2EDuration="2.162072315s" podCreationTimestamp="2025-11-21 20:30:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:30:14.144750285 +0000 UTC m=+1419.330935339" watchObservedRunningTime="2025-11-21 20:30:14.162072315 +0000 UTC m=+1419.348257359" Nov 21 20:30:15 crc kubenswrapper[4727]: I1121 20:30:15.127536 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1b25ac5c-f457-49cb-9c54-15115c3a6108","Type":"ContainerStarted","Data":"999e2b3694103065a3034a97d5743deb74de0e8ff3328acceff13255a7926ad2"} Nov 21 20:30:15 crc kubenswrapper[4727]: I1121 20:30:15.153812 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.153792123 podStartE2EDuration="2.153792123s" podCreationTimestamp="2025-11-21 20:30:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:30:15.143166846 +0000 UTC m=+1420.329351890" watchObservedRunningTime="2025-11-21 20:30:15.153792123 +0000 UTC m=+1420.339977187" Nov 21 20:30:15 crc kubenswrapper[4727]: E1121 20:30:15.267891 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.io/podified-antelope-centos9/openstack-aodh-evaluator:current-podified: can't talk to a V1 container registry" image="quay.io/podified-antelope-centos9/openstack-aodh-evaluator:current-podified" Nov 21 20:30:15 crc kubenswrapper[4727]: E1121 20:30:15.268149 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:aodh-evaluator,Image:quay.io/podified-antelope-centos9/openstack-aodh-evaluator:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n7h79h8dh699h54dhdch694hfdh56ch64h579hc7hbdh589h65dhd8h4h66bh578h5dbh659h8h546h57fhf7h597h5c7h5bch669h68fh7hd7q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:aodh-evaluator-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9t94q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerRes
izePolicy{},RestartPolicy:nil,} start failed in pod aodh-0_openstack(9e00f1d6-6b39-4c43-9459-1cc521827332): ErrImagePull: initializing source docker://quay.io/podified-antelope-centos9/openstack-aodh-evaluator:current-podified: can't talk to a V1 container registry" logger="UnhandledError" Nov 21 20:30:17 crc kubenswrapper[4727]: I1121 20:30:17.986590 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 21 20:30:17 crc kubenswrapper[4727]: I1121 20:30:17.986862 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 21 20:30:18 crc kubenswrapper[4727]: I1121 20:30:18.478233 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 21 20:30:22 crc kubenswrapper[4727]: I1121 20:30:22.887207 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 21 20:30:22 crc kubenswrapper[4727]: I1121 20:30:22.887681 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 21 20:30:22 crc kubenswrapper[4727]: I1121 20:30:22.986582 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 21 20:30:22 crc kubenswrapper[4727]: I1121 20:30:22.986786 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 21 20:30:23 crc kubenswrapper[4727]: I1121 20:30:23.478595 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 21 20:30:23 crc kubenswrapper[4727]: I1121 20:30:23.513113 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="501fa221-2089-47bd-999a-510bc80b25d6" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 21 20:30:23 crc kubenswrapper[4727]: I1121 20:30:23.516248 4727 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 21 20:30:23 crc kubenswrapper[4727]: I1121 20:30:23.905130 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7918d86d-0dcf-4ec6-a875-6e195db4e361" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.1:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 21 20:30:23 crc kubenswrapper[4727]: I1121 20:30:23.905141 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7918d86d-0dcf-4ec6-a875-6e195db4e361" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.1:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 21 20:30:24 crc kubenswrapper[4727]: I1121 20:30:24.008342 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="6b40e144-3a25-45f2-9ea2-f81e211510e2" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.2:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 21 20:30:24 crc kubenswrapper[4727]: I1121 20:30:24.008781 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="6b40e144-3a25-45f2-9ea2-f81e211510e2" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.2:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 21 20:30:24 crc kubenswrapper[4727]: I1121 20:30:24.230067 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"501fa221-2089-47bd-999a-510bc80b25d6","Type":"ContainerStarted","Data":"c2fc8f0085fa7ff687c281ef79ba4dedd7d888ecf209e0d77ec7e12d72ac9da0"} Nov 21 20:30:24 crc kubenswrapper[4727]: I1121 20:30:24.234431 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/ceilometer-0" Nov 21 20:30:24 crc kubenswrapper[4727]: I1121 20:30:24.271284 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.013236428 podStartE2EDuration="29.271261821s" podCreationTimestamp="2025-11-21 20:29:55 +0000 UTC" firstStartedPulling="2025-11-21 20:29:56.615950371 +0000 UTC m=+1401.802135415" lastFinishedPulling="2025-11-21 20:30:23.873975764 +0000 UTC m=+1429.060160808" observedRunningTime="2025-11-21 20:30:24.25216412 +0000 UTC m=+1429.438349184" watchObservedRunningTime="2025-11-21 20:30:24.271261821 +0000 UTC m=+1429.457446865" Nov 21 20:30:24 crc kubenswrapper[4727]: I1121 20:30:24.308469 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 21 20:30:25 crc kubenswrapper[4727]: E1121 20:30:25.387811 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.io/podified-antelope-centos9/openstack-aodh-notifier:current-podified: Requesting bearer token: invalid status code from registry 504 (Gateway Timeout)" image="quay.io/podified-antelope-centos9/openstack-aodh-notifier:current-podified" Nov 21 20:30:25 crc kubenswrapper[4727]: E1121 20:30:25.388396 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:aodh-notifier,Image:quay.io/podified-antelope-centos9/openstack-aodh-notifier:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n7h79h8dh699h54dhdch694hfdh56ch64h579hc7hbdh589h65dhd8h4h66bh578h5dbh659h8h546h57fhf7h597h5c7h5bch669h68fh7hd7q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:aodh-notifier-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9t94q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod aodh-0_openstack(9e00f1d6-6b39-4c43-9459-1cc521827332): ErrImagePull: initializing source 
docker://quay.io/podified-antelope-centos9/openstack-aodh-notifier:current-podified: Requesting bearer token: invalid status code from registry 504 (Gateway Timeout)" logger="UnhandledError" Nov 21 20:30:26 crc kubenswrapper[4727]: E1121 20:30:26.246637 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"aodh-evaluator\" with ErrImagePull: \"initializing source docker://quay.io/podified-antelope-centos9/openstack-aodh-evaluator:current-podified: can't talk to a V1 container registry\", failed to \"StartContainer\" for \"aodh-notifier\" with ErrImagePull: \"initializing source docker://quay.io/podified-antelope-centos9/openstack-aodh-notifier:current-podified: Requesting bearer token: invalid status code from registry 504 (Gateway Timeout)\"]" pod="openstack/aodh-0" podUID="9e00f1d6-6b39-4c43-9459-1cc521827332" Nov 21 20:30:26 crc kubenswrapper[4727]: I1121 20:30:26.279075 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"9e00f1d6-6b39-4c43-9459-1cc521827332","Type":"ContainerStarted","Data":"af2bbf0f5d4af6ed81b39a50c6c390a50909d0462289492ce92d0de400f921ae"} Nov 21 20:30:26 crc kubenswrapper[4727]: E1121 20:30:26.968602 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"aodh-notifier\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-aodh-notifier:current-podified\\\"\"" pod="openstack/aodh-0" podUID="9e00f1d6-6b39-4c43-9459-1cc521827332" Nov 21 20:30:27 crc kubenswrapper[4727]: I1121 20:30:27.292230 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"9e00f1d6-6b39-4c43-9459-1cc521827332","Type":"ContainerStarted","Data":"1c84862119ef78331528e13cdd5a0f4a615f84ed8851eb27b6bd7e92b8ab6c9b"} Nov 21 20:30:27 crc kubenswrapper[4727]: E1121 20:30:27.294910 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"aodh-notifier\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-aodh-notifier:current-podified\\\"\"" pod="openstack/aodh-0" podUID="9e00f1d6-6b39-4c43-9459-1cc521827332" Nov 21 20:30:28 crc kubenswrapper[4727]: E1121 20:30:28.306779 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"aodh-notifier\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-aodh-notifier:current-podified\\\"\"" pod="openstack/aodh-0" podUID="9e00f1d6-6b39-4c43-9459-1cc521827332" Nov 21 20:30:29 crc kubenswrapper[4727]: I1121 20:30:29.923792 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 21 20:30:29 crc kubenswrapper[4727]: I1121 20:30:29.925279 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="5a7c7dad-b024-4e09-b455-662514be19f2" containerName="kube-state-metrics" containerID="cri-o://282045bc3e701d3d33f8e3229e2c6828ddc9877abf750efa7398a0a1b99c73d3" gracePeriod=30 Nov 21 20:30:30 crc kubenswrapper[4727]: I1121 20:30:30.006345 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Nov 21 20:30:30 crc kubenswrapper[4727]: I1121 20:30:30.006584 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mysqld-exporter-0" podUID="b1eb35a6-ee1c-4bd1-bccb-a0fa44677308" containerName="mysqld-exporter" containerID="cri-o://031674f12ea3df1cf447a01260fd7d4f6d5168022609689228fc1042f1223603" gracePeriod=30 Nov 21 20:30:30 crc kubenswrapper[4727]: I1121 20:30:30.432893 4727 generic.go:334] "Generic (PLEG): container finished" podID="b1eb35a6-ee1c-4bd1-bccb-a0fa44677308" containerID="031674f12ea3df1cf447a01260fd7d4f6d5168022609689228fc1042f1223603" exitCode=2 Nov 21 20:30:30 crc kubenswrapper[4727]: I1121 20:30:30.433264 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/mysqld-exporter-0" event={"ID":"b1eb35a6-ee1c-4bd1-bccb-a0fa44677308","Type":"ContainerDied","Data":"031674f12ea3df1cf447a01260fd7d4f6d5168022609689228fc1042f1223603"} Nov 21 20:30:30 crc kubenswrapper[4727]: I1121 20:30:30.436822 4727 generic.go:334] "Generic (PLEG): container finished" podID="5a7c7dad-b024-4e09-b455-662514be19f2" containerID="282045bc3e701d3d33f8e3229e2c6828ddc9877abf750efa7398a0a1b99c73d3" exitCode=2 Nov 21 20:30:30 crc kubenswrapper[4727]: I1121 20:30:30.436852 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"5a7c7dad-b024-4e09-b455-662514be19f2","Type":"ContainerDied","Data":"282045bc3e701d3d33f8e3229e2c6828ddc9877abf750efa7398a0a1b99c73d3"} Nov 21 20:30:30 crc kubenswrapper[4727]: I1121 20:30:30.613512 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 21 20:30:30 crc kubenswrapper[4727]: I1121 20:30:30.692477 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhq9n\" (UniqueName: \"kubernetes.io/projected/5a7c7dad-b024-4e09-b455-662514be19f2-kube-api-access-xhq9n\") pod \"5a7c7dad-b024-4e09-b455-662514be19f2\" (UID: \"5a7c7dad-b024-4e09-b455-662514be19f2\") " Nov 21 20:30:30 crc kubenswrapper[4727]: I1121 20:30:30.692754 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Nov 21 20:30:30 crc kubenswrapper[4727]: I1121 20:30:30.704248 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a7c7dad-b024-4e09-b455-662514be19f2-kube-api-access-xhq9n" (OuterVolumeSpecName: "kube-api-access-xhq9n") pod "5a7c7dad-b024-4e09-b455-662514be19f2" (UID: "5a7c7dad-b024-4e09-b455-662514be19f2"). InnerVolumeSpecName "kube-api-access-xhq9n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:30:30 crc kubenswrapper[4727]: I1121 20:30:30.795690 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1eb35a6-ee1c-4bd1-bccb-a0fa44677308-combined-ca-bundle\") pod \"b1eb35a6-ee1c-4bd1-bccb-a0fa44677308\" (UID: \"b1eb35a6-ee1c-4bd1-bccb-a0fa44677308\") " Nov 21 20:30:30 crc kubenswrapper[4727]: I1121 20:30:30.796143 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1eb35a6-ee1c-4bd1-bccb-a0fa44677308-config-data\") pod \"b1eb35a6-ee1c-4bd1-bccb-a0fa44677308\" (UID: \"b1eb35a6-ee1c-4bd1-bccb-a0fa44677308\") " Nov 21 20:30:30 crc kubenswrapper[4727]: I1121 20:30:30.796332 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tp9s\" (UniqueName: \"kubernetes.io/projected/b1eb35a6-ee1c-4bd1-bccb-a0fa44677308-kube-api-access-7tp9s\") pod \"b1eb35a6-ee1c-4bd1-bccb-a0fa44677308\" (UID: \"b1eb35a6-ee1c-4bd1-bccb-a0fa44677308\") " Nov 21 20:30:30 crc kubenswrapper[4727]: I1121 20:30:30.797053 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhq9n\" (UniqueName: \"kubernetes.io/projected/5a7c7dad-b024-4e09-b455-662514be19f2-kube-api-access-xhq9n\") on node \"crc\" DevicePath \"\"" Nov 21 20:30:30 crc kubenswrapper[4727]: I1121 20:30:30.801033 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1eb35a6-ee1c-4bd1-bccb-a0fa44677308-kube-api-access-7tp9s" (OuterVolumeSpecName: "kube-api-access-7tp9s") pod "b1eb35a6-ee1c-4bd1-bccb-a0fa44677308" (UID: "b1eb35a6-ee1c-4bd1-bccb-a0fa44677308"). InnerVolumeSpecName "kube-api-access-7tp9s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:30:30 crc kubenswrapper[4727]: I1121 20:30:30.833702 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1eb35a6-ee1c-4bd1-bccb-a0fa44677308-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b1eb35a6-ee1c-4bd1-bccb-a0fa44677308" (UID: "b1eb35a6-ee1c-4bd1-bccb-a0fa44677308"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:30:30 crc kubenswrapper[4727]: I1121 20:30:30.862751 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1eb35a6-ee1c-4bd1-bccb-a0fa44677308-config-data" (OuterVolumeSpecName: "config-data") pod "b1eb35a6-ee1c-4bd1-bccb-a0fa44677308" (UID: "b1eb35a6-ee1c-4bd1-bccb-a0fa44677308"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:30:30 crc kubenswrapper[4727]: I1121 20:30:30.899498 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7tp9s\" (UniqueName: \"kubernetes.io/projected/b1eb35a6-ee1c-4bd1-bccb-a0fa44677308-kube-api-access-7tp9s\") on node \"crc\" DevicePath \"\"" Nov 21 20:30:30 crc kubenswrapper[4727]: I1121 20:30:30.899533 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1eb35a6-ee1c-4bd1-bccb-a0fa44677308-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:30:30 crc kubenswrapper[4727]: I1121 20:30:30.899545 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1eb35a6-ee1c-4bd1-bccb-a0fa44677308-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 20:30:31 crc kubenswrapper[4727]: I1121 20:30:31.448188 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"5a7c7dad-b024-4e09-b455-662514be19f2","Type":"ContainerDied","Data":"fd5ccff63ed5accc23567efcf005a49220024c2ac57f198c8ed10cf905c517f3"} Nov 21 20:30:31 crc kubenswrapper[4727]: I1121 20:30:31.448221 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 21 20:30:31 crc kubenswrapper[4727]: I1121 20:30:31.448484 4727 scope.go:117] "RemoveContainer" containerID="282045bc3e701d3d33f8e3229e2c6828ddc9877abf750efa7398a0a1b99c73d3" Nov 21 20:30:31 crc kubenswrapper[4727]: I1121 20:30:31.450605 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"b1eb35a6-ee1c-4bd1-bccb-a0fa44677308","Type":"ContainerDied","Data":"b114006573f1571332cadfd5c445188444200c10a050b3de8d1b626f8bc7e37b"} Nov 21 20:30:31 crc kubenswrapper[4727]: I1121 20:30:31.450662 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Nov 21 20:30:31 crc kubenswrapper[4727]: I1121 20:30:31.509863 4727 scope.go:117] "RemoveContainer" containerID="031674f12ea3df1cf447a01260fd7d4f6d5168022609689228fc1042f1223603" Nov 21 20:30:31 crc kubenswrapper[4727]: I1121 20:30:31.532304 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Nov 21 20:30:31 crc kubenswrapper[4727]: I1121 20:30:31.545658 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-0"] Nov 21 20:30:31 crc kubenswrapper[4727]: I1121 20:30:31.569769 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 21 20:30:31 crc kubenswrapper[4727]: I1121 20:30:31.569817 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 21 20:30:31 crc kubenswrapper[4727]: I1121 20:30:31.579773 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Nov 21 20:30:31 crc kubenswrapper[4727]: E1121 20:30:31.580400 
4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1eb35a6-ee1c-4bd1-bccb-a0fa44677308" containerName="mysqld-exporter" Nov 21 20:30:31 crc kubenswrapper[4727]: I1121 20:30:31.580425 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1eb35a6-ee1c-4bd1-bccb-a0fa44677308" containerName="mysqld-exporter" Nov 21 20:30:31 crc kubenswrapper[4727]: E1121 20:30:31.580457 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a7c7dad-b024-4e09-b455-662514be19f2" containerName="kube-state-metrics" Nov 21 20:30:31 crc kubenswrapper[4727]: I1121 20:30:31.580485 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a7c7dad-b024-4e09-b455-662514be19f2" containerName="kube-state-metrics" Nov 21 20:30:31 crc kubenswrapper[4727]: I1121 20:30:31.580766 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a7c7dad-b024-4e09-b455-662514be19f2" containerName="kube-state-metrics" Nov 21 20:30:31 crc kubenswrapper[4727]: I1121 20:30:31.580803 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1eb35a6-ee1c-4bd1-bccb-a0fa44677308" containerName="mysqld-exporter" Nov 21 20:30:31 crc kubenswrapper[4727]: I1121 20:30:31.581804 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Nov 21 20:30:31 crc kubenswrapper[4727]: I1121 20:30:31.584873 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-mysqld-exporter-svc" Nov 21 20:30:31 crc kubenswrapper[4727]: I1121 20:30:31.585141 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Nov 21 20:30:31 crc kubenswrapper[4727]: I1121 20:30:31.594435 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 21 20:30:31 crc kubenswrapper[4727]: I1121 20:30:31.598210 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 21 20:30:31 crc kubenswrapper[4727]: I1121 20:30:31.602218 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Nov 21 20:30:31 crc kubenswrapper[4727]: I1121 20:30:31.602304 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Nov 21 20:30:31 crc kubenswrapper[4727]: I1121 20:30:31.609391 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Nov 21 20:30:31 crc kubenswrapper[4727]: I1121 20:30:31.622299 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 21 20:30:31 crc kubenswrapper[4727]: I1121 20:30:31.716855 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f73e967f-ae43-4ef4-8648-768d1602b07b-config-data\") pod \"mysqld-exporter-0\" (UID: \"f73e967f-ae43-4ef4-8648-768d1602b07b\") " pod="openstack/mysqld-exporter-0" Nov 21 20:30:31 crc kubenswrapper[4727]: I1121 20:30:31.716918 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/e14d03ba-964e-4f12-9c54-0aff9e874f1d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"e14d03ba-964e-4f12-9c54-0aff9e874f1d\") " pod="openstack/kube-state-metrics-0" Nov 21 20:30:31 crc kubenswrapper[4727]: I1121 20:30:31.717011 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f73e967f-ae43-4ef4-8648-768d1602b07b-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"f73e967f-ae43-4ef4-8648-768d1602b07b\") " pod="openstack/mysqld-exporter-0" Nov 21 20:30:31 crc kubenswrapper[4727]: I1121 20:30:31.717045 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/f73e967f-ae43-4ef4-8648-768d1602b07b-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"f73e967f-ae43-4ef4-8648-768d1602b07b\") " pod="openstack/mysqld-exporter-0" Nov 21 20:30:31 crc kubenswrapper[4727]: I1121 20:30:31.717070 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/e14d03ba-964e-4f12-9c54-0aff9e874f1d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"e14d03ba-964e-4f12-9c54-0aff9e874f1d\") " pod="openstack/kube-state-metrics-0" Nov 21 20:30:31 crc kubenswrapper[4727]: I1121 20:30:31.717206 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46wcg\" (UniqueName: \"kubernetes.io/projected/f73e967f-ae43-4ef4-8648-768d1602b07b-kube-api-access-46wcg\") pod \"mysqld-exporter-0\" (UID: \"f73e967f-ae43-4ef4-8648-768d1602b07b\") " pod="openstack/mysqld-exporter-0" Nov 21 20:30:31 crc kubenswrapper[4727]: I1121 20:30:31.717234 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e14d03ba-964e-4f12-9c54-0aff9e874f1d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"e14d03ba-964e-4f12-9c54-0aff9e874f1d\") " pod="openstack/kube-state-metrics-0" Nov 21 20:30:31 crc kubenswrapper[4727]: I1121 20:30:31.717254 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pdgf\" (UniqueName: \"kubernetes.io/projected/e14d03ba-964e-4f12-9c54-0aff9e874f1d-kube-api-access-2pdgf\") pod \"kube-state-metrics-0\" (UID: \"e14d03ba-964e-4f12-9c54-0aff9e874f1d\") " pod="openstack/kube-state-metrics-0" Nov 21 20:30:31 crc kubenswrapper[4727]: I1121 20:30:31.820193 
4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/f73e967f-ae43-4ef4-8648-768d1602b07b-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"f73e967f-ae43-4ef4-8648-768d1602b07b\") " pod="openstack/mysqld-exporter-0" Nov 21 20:30:31 crc kubenswrapper[4727]: I1121 20:30:31.820299 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/e14d03ba-964e-4f12-9c54-0aff9e874f1d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"e14d03ba-964e-4f12-9c54-0aff9e874f1d\") " pod="openstack/kube-state-metrics-0" Nov 21 20:30:31 crc kubenswrapper[4727]: I1121 20:30:31.820505 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46wcg\" (UniqueName: \"kubernetes.io/projected/f73e967f-ae43-4ef4-8648-768d1602b07b-kube-api-access-46wcg\") pod \"mysqld-exporter-0\" (UID: \"f73e967f-ae43-4ef4-8648-768d1602b07b\") " pod="openstack/mysqld-exporter-0" Nov 21 20:30:31 crc kubenswrapper[4727]: I1121 20:30:31.820563 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e14d03ba-964e-4f12-9c54-0aff9e874f1d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"e14d03ba-964e-4f12-9c54-0aff9e874f1d\") " pod="openstack/kube-state-metrics-0" Nov 21 20:30:31 crc kubenswrapper[4727]: I1121 20:30:31.821272 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pdgf\" (UniqueName: \"kubernetes.io/projected/e14d03ba-964e-4f12-9c54-0aff9e874f1d-kube-api-access-2pdgf\") pod \"kube-state-metrics-0\" (UID: \"e14d03ba-964e-4f12-9c54-0aff9e874f1d\") " pod="openstack/kube-state-metrics-0" Nov 21 20:30:31 crc kubenswrapper[4727]: I1121 20:30:31.821538 4727 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f73e967f-ae43-4ef4-8648-768d1602b07b-config-data\") pod \"mysqld-exporter-0\" (UID: \"f73e967f-ae43-4ef4-8648-768d1602b07b\") " pod="openstack/mysqld-exporter-0" Nov 21 20:30:31 crc kubenswrapper[4727]: I1121 20:30:31.821682 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/e14d03ba-964e-4f12-9c54-0aff9e874f1d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"e14d03ba-964e-4f12-9c54-0aff9e874f1d\") " pod="openstack/kube-state-metrics-0" Nov 21 20:30:31 crc kubenswrapper[4727]: I1121 20:30:31.822072 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f73e967f-ae43-4ef4-8648-768d1602b07b-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"f73e967f-ae43-4ef4-8648-768d1602b07b\") " pod="openstack/mysqld-exporter-0" Nov 21 20:30:31 crc kubenswrapper[4727]: I1121 20:30:31.826724 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/f73e967f-ae43-4ef4-8648-768d1602b07b-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"f73e967f-ae43-4ef4-8648-768d1602b07b\") " pod="openstack/mysqld-exporter-0" Nov 21 20:30:31 crc kubenswrapper[4727]: I1121 20:30:31.828040 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/e14d03ba-964e-4f12-9c54-0aff9e874f1d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"e14d03ba-964e-4f12-9c54-0aff9e874f1d\") " pod="openstack/kube-state-metrics-0" Nov 21 20:30:31 crc kubenswrapper[4727]: I1121 20:30:31.828395 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e14d03ba-964e-4f12-9c54-0aff9e874f1d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"e14d03ba-964e-4f12-9c54-0aff9e874f1d\") " pod="openstack/kube-state-metrics-0" Nov 21 20:30:31 crc kubenswrapper[4727]: I1121 20:30:31.829069 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f73e967f-ae43-4ef4-8648-768d1602b07b-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"f73e967f-ae43-4ef4-8648-768d1602b07b\") " pod="openstack/mysqld-exporter-0" Nov 21 20:30:31 crc kubenswrapper[4727]: I1121 20:30:31.831492 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/e14d03ba-964e-4f12-9c54-0aff9e874f1d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"e14d03ba-964e-4f12-9c54-0aff9e874f1d\") " pod="openstack/kube-state-metrics-0" Nov 21 20:30:31 crc kubenswrapper[4727]: I1121 20:30:31.833409 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f73e967f-ae43-4ef4-8648-768d1602b07b-config-data\") pod \"mysqld-exporter-0\" (UID: \"f73e967f-ae43-4ef4-8648-768d1602b07b\") " pod="openstack/mysqld-exporter-0" Nov 21 20:30:31 crc kubenswrapper[4727]: I1121 20:30:31.841762 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pdgf\" (UniqueName: \"kubernetes.io/projected/e14d03ba-964e-4f12-9c54-0aff9e874f1d-kube-api-access-2pdgf\") pod \"kube-state-metrics-0\" (UID: \"e14d03ba-964e-4f12-9c54-0aff9e874f1d\") " pod="openstack/kube-state-metrics-0" Nov 21 20:30:31 crc kubenswrapper[4727]: I1121 20:30:31.848724 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46wcg\" (UniqueName: \"kubernetes.io/projected/f73e967f-ae43-4ef4-8648-768d1602b07b-kube-api-access-46wcg\") pod \"mysqld-exporter-0\" (UID: 
\"f73e967f-ae43-4ef4-8648-768d1602b07b\") " pod="openstack/mysqld-exporter-0" Nov 21 20:30:31 crc kubenswrapper[4727]: I1121 20:30:31.917845 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Nov 21 20:30:31 crc kubenswrapper[4727]: I1121 20:30:31.933394 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 21 20:30:32 crc kubenswrapper[4727]: I1121 20:30:32.343146 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 21 20:30:32 crc kubenswrapper[4727]: I1121 20:30:32.343680 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="501fa221-2089-47bd-999a-510bc80b25d6" containerName="ceilometer-central-agent" containerID="cri-o://563b946924c2672897e92250b5727fb306c89d6001f8724a59fccebed58c77d7" gracePeriod=30 Nov 21 20:30:32 crc kubenswrapper[4727]: I1121 20:30:32.343781 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="501fa221-2089-47bd-999a-510bc80b25d6" containerName="proxy-httpd" containerID="cri-o://ee630afd9cd1492631b5367f56b7ec69885aecbd27160685948f970f632ff98e" gracePeriod=30 Nov 21 20:30:32 crc kubenswrapper[4727]: I1121 20:30:32.343845 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="501fa221-2089-47bd-999a-510bc80b25d6" containerName="ceilometer-notification-agent" containerID="cri-o://b628742dede7af45f2ef7d7c1c8c90a83a845fccc2f73d2381c1a18cb6bf089a" gracePeriod=30 Nov 21 20:30:32 crc kubenswrapper[4727]: I1121 20:30:32.344030 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="501fa221-2089-47bd-999a-510bc80b25d6" containerName="sg-core" containerID="cri-o://c2fc8f0085fa7ff687c281ef79ba4dedd7d888ecf209e0d77ec7e12d72ac9da0" gracePeriod=30 Nov 21 20:30:32 crc 
kubenswrapper[4727]: I1121 20:30:32.431012 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 21 20:30:32 crc kubenswrapper[4727]: I1121 20:30:32.447184 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Nov 21 20:30:32 crc kubenswrapper[4727]: W1121 20:30:32.465700 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf73e967f_ae43_4ef4_8648_768d1602b07b.slice/crio-481cac5bd3d1d75329b5f9ddceb1b9625487a8b62bea35f363bf5d097927a869 WatchSource:0}: Error finding container 481cac5bd3d1d75329b5f9ddceb1b9625487a8b62bea35f363bf5d097927a869: Status 404 returned error can't find the container with id 481cac5bd3d1d75329b5f9ddceb1b9625487a8b62bea35f363bf5d097927a869 Nov 21 20:30:32 crc kubenswrapper[4727]: I1121 20:30:32.489245 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e14d03ba-964e-4f12-9c54-0aff9e874f1d","Type":"ContainerStarted","Data":"74d50c3684bbc56bd74245f2bc7a447a1597164491f3c452290c16415b008437"} Nov 21 20:30:32 crc kubenswrapper[4727]: I1121 20:30:32.899480 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 21 20:30:32 crc kubenswrapper[4727]: I1121 20:30:32.900295 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 21 20:30:32 crc kubenswrapper[4727]: I1121 20:30:32.900844 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 21 20:30:32 crc kubenswrapper[4727]: I1121 20:30:32.901394 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 21 20:30:32 crc kubenswrapper[4727]: I1121 20:30:32.908474 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 21 20:30:32 crc kubenswrapper[4727]: I1121 
20:30:32.917558 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 21 20:30:33 crc kubenswrapper[4727]: I1121 20:30:33.013678 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 21 20:30:33 crc kubenswrapper[4727]: I1121 20:30:33.027882 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 21 20:30:33 crc kubenswrapper[4727]: I1121 20:30:33.029129 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 21 20:30:33 crc kubenswrapper[4727]: I1121 20:30:33.505646 4727 generic.go:334] "Generic (PLEG): container finished" podID="501fa221-2089-47bd-999a-510bc80b25d6" containerID="c2fc8f0085fa7ff687c281ef79ba4dedd7d888ecf209e0d77ec7e12d72ac9da0" exitCode=2 Nov 21 20:30:33 crc kubenswrapper[4727]: I1121 20:30:33.506092 4727 generic.go:334] "Generic (PLEG): container finished" podID="501fa221-2089-47bd-999a-510bc80b25d6" containerID="ee630afd9cd1492631b5367f56b7ec69885aecbd27160685948f970f632ff98e" exitCode=0 Nov 21 20:30:33 crc kubenswrapper[4727]: I1121 20:30:33.506103 4727 generic.go:334] "Generic (PLEG): container finished" podID="501fa221-2089-47bd-999a-510bc80b25d6" containerID="563b946924c2672897e92250b5727fb306c89d6001f8724a59fccebed58c77d7" exitCode=0 Nov 21 20:30:33 crc kubenswrapper[4727]: I1121 20:30:33.518044 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a7c7dad-b024-4e09-b455-662514be19f2" path="/var/lib/kubelet/pods/5a7c7dad-b024-4e09-b455-662514be19f2/volumes" Nov 21 20:30:33 crc kubenswrapper[4727]: I1121 20:30:33.518613 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1eb35a6-ee1c-4bd1-bccb-a0fa44677308" path="/var/lib/kubelet/pods/b1eb35a6-ee1c-4bd1-bccb-a0fa44677308/volumes" Nov 21 20:30:33 crc kubenswrapper[4727]: I1121 20:30:33.519266 4727 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 21 20:30:33 crc kubenswrapper[4727]: I1121 20:30:33.519293 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"501fa221-2089-47bd-999a-510bc80b25d6","Type":"ContainerDied","Data":"c2fc8f0085fa7ff687c281ef79ba4dedd7d888ecf209e0d77ec7e12d72ac9da0"} Nov 21 20:30:33 crc kubenswrapper[4727]: I1121 20:30:33.519312 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"501fa221-2089-47bd-999a-510bc80b25d6","Type":"ContainerDied","Data":"ee630afd9cd1492631b5367f56b7ec69885aecbd27160685948f970f632ff98e"} Nov 21 20:30:33 crc kubenswrapper[4727]: I1121 20:30:33.519324 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"501fa221-2089-47bd-999a-510bc80b25d6","Type":"ContainerDied","Data":"563b946924c2672897e92250b5727fb306c89d6001f8724a59fccebed58c77d7"} Nov 21 20:30:33 crc kubenswrapper[4727]: I1121 20:30:33.519335 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"f73e967f-ae43-4ef4-8648-768d1602b07b","Type":"ContainerStarted","Data":"effc3339c1d03cd02b1c54e75cec5f01c2fe681d3bcd524e07cfd52da2d2e2bc"} Nov 21 20:30:33 crc kubenswrapper[4727]: I1121 20:30:33.519346 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"f73e967f-ae43-4ef4-8648-768d1602b07b","Type":"ContainerStarted","Data":"481cac5bd3d1d75329b5f9ddceb1b9625487a8b62bea35f363bf5d097927a869"} Nov 21 20:30:33 crc kubenswrapper[4727]: I1121 20:30:33.519356 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e14d03ba-964e-4f12-9c54-0aff9e874f1d","Type":"ContainerStarted","Data":"1b7769f11a8fa8f5194895fc9be7c40457ecca51a30a904e16cd19e0611ff384"} Nov 21 20:30:33 crc kubenswrapper[4727]: I1121 20:30:33.529637 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/nova-metadata-0" Nov 21 20:30:33 crc kubenswrapper[4727]: I1121 20:30:33.533264 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=2.069070053 podStartE2EDuration="2.533241376s" podCreationTimestamp="2025-11-21 20:30:31 +0000 UTC" firstStartedPulling="2025-11-21 20:30:32.469606931 +0000 UTC m=+1437.655791995" lastFinishedPulling="2025-11-21 20:30:32.933778274 +0000 UTC m=+1438.119963318" observedRunningTime="2025-11-21 20:30:33.524191287 +0000 UTC m=+1438.710376351" watchObservedRunningTime="2025-11-21 20:30:33.533241376 +0000 UTC m=+1438.719426430" Nov 21 20:30:33 crc kubenswrapper[4727]: I1121 20:30:33.565088 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.199893223 podStartE2EDuration="2.565069535s" podCreationTimestamp="2025-11-21 20:30:31 +0000 UTC" firstStartedPulling="2025-11-21 20:30:32.440952269 +0000 UTC m=+1437.627137313" lastFinishedPulling="2025-11-21 20:30:32.806128541 +0000 UTC m=+1437.992313625" observedRunningTime="2025-11-21 20:30:33.548467674 +0000 UTC m=+1438.734652718" watchObservedRunningTime="2025-11-21 20:30:33.565069535 +0000 UTC m=+1438.751254579" Nov 21 20:30:34 crc kubenswrapper[4727]: I1121 20:30:34.538610 4727 generic.go:334] "Generic (PLEG): container finished" podID="501fa221-2089-47bd-999a-510bc80b25d6" containerID="b628742dede7af45f2ef7d7c1c8c90a83a845fccc2f73d2381c1a18cb6bf089a" exitCode=0 Nov 21 20:30:34 crc kubenswrapper[4727]: I1121 20:30:34.538754 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"501fa221-2089-47bd-999a-510bc80b25d6","Type":"ContainerDied","Data":"b628742dede7af45f2ef7d7c1c8c90a83a845fccc2f73d2381c1a18cb6bf089a"} Nov 21 20:30:34 crc kubenswrapper[4727]: I1121 20:30:34.751653 4727 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/community-operators-6tjrt"] Nov 21 20:30:34 crc kubenswrapper[4727]: I1121 20:30:34.755688 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6tjrt" Nov 21 20:30:34 crc kubenswrapper[4727]: I1121 20:30:34.774208 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6tjrt"] Nov 21 20:30:34 crc kubenswrapper[4727]: I1121 20:30:34.829915 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2362ffc5-5fd6-467a-a7f9-e3a25581f176-utilities\") pod \"community-operators-6tjrt\" (UID: \"2362ffc5-5fd6-467a-a7f9-e3a25581f176\") " pod="openshift-marketplace/community-operators-6tjrt" Nov 21 20:30:34 crc kubenswrapper[4727]: I1121 20:30:34.830093 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2362ffc5-5fd6-467a-a7f9-e3a25581f176-catalog-content\") pod \"community-operators-6tjrt\" (UID: \"2362ffc5-5fd6-467a-a7f9-e3a25581f176\") " pod="openshift-marketplace/community-operators-6tjrt" Nov 21 20:30:34 crc kubenswrapper[4727]: I1121 20:30:34.830127 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jsvr\" (UniqueName: \"kubernetes.io/projected/2362ffc5-5fd6-467a-a7f9-e3a25581f176-kube-api-access-4jsvr\") pod \"community-operators-6tjrt\" (UID: \"2362ffc5-5fd6-467a-a7f9-e3a25581f176\") " pod="openshift-marketplace/community-operators-6tjrt" Nov 21 20:30:34 crc kubenswrapper[4727]: I1121 20:30:34.932114 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2362ffc5-5fd6-467a-a7f9-e3a25581f176-catalog-content\") pod \"community-operators-6tjrt\" (UID: 
\"2362ffc5-5fd6-467a-a7f9-e3a25581f176\") " pod="openshift-marketplace/community-operators-6tjrt" Nov 21 20:30:34 crc kubenswrapper[4727]: I1121 20:30:34.932618 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2362ffc5-5fd6-467a-a7f9-e3a25581f176-catalog-content\") pod \"community-operators-6tjrt\" (UID: \"2362ffc5-5fd6-467a-a7f9-e3a25581f176\") " pod="openshift-marketplace/community-operators-6tjrt" Nov 21 20:30:34 crc kubenswrapper[4727]: I1121 20:30:34.932757 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jsvr\" (UniqueName: \"kubernetes.io/projected/2362ffc5-5fd6-467a-a7f9-e3a25581f176-kube-api-access-4jsvr\") pod \"community-operators-6tjrt\" (UID: \"2362ffc5-5fd6-467a-a7f9-e3a25581f176\") " pod="openshift-marketplace/community-operators-6tjrt" Nov 21 20:30:34 crc kubenswrapper[4727]: I1121 20:30:34.933031 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2362ffc5-5fd6-467a-a7f9-e3a25581f176-utilities\") pod \"community-operators-6tjrt\" (UID: \"2362ffc5-5fd6-467a-a7f9-e3a25581f176\") " pod="openshift-marketplace/community-operators-6tjrt" Nov 21 20:30:34 crc kubenswrapper[4727]: I1121 20:30:34.933319 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2362ffc5-5fd6-467a-a7f9-e3a25581f176-utilities\") pod \"community-operators-6tjrt\" (UID: \"2362ffc5-5fd6-467a-a7f9-e3a25581f176\") " pod="openshift-marketplace/community-operators-6tjrt" Nov 21 20:30:34 crc kubenswrapper[4727]: I1121 20:30:34.952036 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jsvr\" (UniqueName: \"kubernetes.io/projected/2362ffc5-5fd6-467a-a7f9-e3a25581f176-kube-api-access-4jsvr\") pod \"community-operators-6tjrt\" (UID: 
\"2362ffc5-5fd6-467a-a7f9-e3a25581f176\") " pod="openshift-marketplace/community-operators-6tjrt" Nov 21 20:30:35 crc kubenswrapper[4727]: I1121 20:30:35.056921 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 20:30:35 crc kubenswrapper[4727]: I1121 20:30:35.080063 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6tjrt" Nov 21 20:30:35 crc kubenswrapper[4727]: I1121 20:30:35.137947 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/501fa221-2089-47bd-999a-510bc80b25d6-run-httpd\") pod \"501fa221-2089-47bd-999a-510bc80b25d6\" (UID: \"501fa221-2089-47bd-999a-510bc80b25d6\") " Nov 21 20:30:35 crc kubenswrapper[4727]: I1121 20:30:35.138369 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/501fa221-2089-47bd-999a-510bc80b25d6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "501fa221-2089-47bd-999a-510bc80b25d6" (UID: "501fa221-2089-47bd-999a-510bc80b25d6"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:30:35 crc kubenswrapper[4727]: I1121 20:30:35.138609 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/501fa221-2089-47bd-999a-510bc80b25d6-log-httpd\") pod \"501fa221-2089-47bd-999a-510bc80b25d6\" (UID: \"501fa221-2089-47bd-999a-510bc80b25d6\") " Nov 21 20:30:35 crc kubenswrapper[4727]: I1121 20:30:35.138980 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/501fa221-2089-47bd-999a-510bc80b25d6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "501fa221-2089-47bd-999a-510bc80b25d6" (UID: "501fa221-2089-47bd-999a-510bc80b25d6"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:30:35 crc kubenswrapper[4727]: I1121 20:30:35.139095 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/501fa221-2089-47bd-999a-510bc80b25d6-scripts\") pod \"501fa221-2089-47bd-999a-510bc80b25d6\" (UID: \"501fa221-2089-47bd-999a-510bc80b25d6\") " Nov 21 20:30:35 crc kubenswrapper[4727]: I1121 20:30:35.139128 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/501fa221-2089-47bd-999a-510bc80b25d6-sg-core-conf-yaml\") pod \"501fa221-2089-47bd-999a-510bc80b25d6\" (UID: \"501fa221-2089-47bd-999a-510bc80b25d6\") " Nov 21 20:30:35 crc kubenswrapper[4727]: I1121 20:30:35.140248 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbmhw\" (UniqueName: \"kubernetes.io/projected/501fa221-2089-47bd-999a-510bc80b25d6-kube-api-access-mbmhw\") pod \"501fa221-2089-47bd-999a-510bc80b25d6\" (UID: \"501fa221-2089-47bd-999a-510bc80b25d6\") " Nov 21 20:30:35 crc kubenswrapper[4727]: I1121 20:30:35.140326 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/501fa221-2089-47bd-999a-510bc80b25d6-config-data\") pod \"501fa221-2089-47bd-999a-510bc80b25d6\" (UID: \"501fa221-2089-47bd-999a-510bc80b25d6\") " Nov 21 20:30:35 crc kubenswrapper[4727]: I1121 20:30:35.140361 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/501fa221-2089-47bd-999a-510bc80b25d6-combined-ca-bundle\") pod \"501fa221-2089-47bd-999a-510bc80b25d6\" (UID: \"501fa221-2089-47bd-999a-510bc80b25d6\") " Nov 21 20:30:35 crc kubenswrapper[4727]: I1121 20:30:35.141631 4727 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/501fa221-2089-47bd-999a-510bc80b25d6-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 21 20:30:35 crc kubenswrapper[4727]: I1121 20:30:35.141660 4727 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/501fa221-2089-47bd-999a-510bc80b25d6-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 21 20:30:35 crc kubenswrapper[4727]: I1121 20:30:35.143436 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/501fa221-2089-47bd-999a-510bc80b25d6-scripts" (OuterVolumeSpecName: "scripts") pod "501fa221-2089-47bd-999a-510bc80b25d6" (UID: "501fa221-2089-47bd-999a-510bc80b25d6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:30:35 crc kubenswrapper[4727]: I1121 20:30:35.158132 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/501fa221-2089-47bd-999a-510bc80b25d6-kube-api-access-mbmhw" (OuterVolumeSpecName: "kube-api-access-mbmhw") pod "501fa221-2089-47bd-999a-510bc80b25d6" (UID: "501fa221-2089-47bd-999a-510bc80b25d6"). InnerVolumeSpecName "kube-api-access-mbmhw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:30:35 crc kubenswrapper[4727]: I1121 20:30:35.179144 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/501fa221-2089-47bd-999a-510bc80b25d6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "501fa221-2089-47bd-999a-510bc80b25d6" (UID: "501fa221-2089-47bd-999a-510bc80b25d6"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:30:35 crc kubenswrapper[4727]: I1121 20:30:35.243916 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/501fa221-2089-47bd-999a-510bc80b25d6-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 20:30:35 crc kubenswrapper[4727]: I1121 20:30:35.243958 4727 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/501fa221-2089-47bd-999a-510bc80b25d6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 21 20:30:35 crc kubenswrapper[4727]: I1121 20:30:35.243986 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mbmhw\" (UniqueName: \"kubernetes.io/projected/501fa221-2089-47bd-999a-510bc80b25d6-kube-api-access-mbmhw\") on node \"crc\" DevicePath \"\"" Nov 21 20:30:35 crc kubenswrapper[4727]: I1121 20:30:35.267060 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/501fa221-2089-47bd-999a-510bc80b25d6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "501fa221-2089-47bd-999a-510bc80b25d6" (UID: "501fa221-2089-47bd-999a-510bc80b25d6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:30:35 crc kubenswrapper[4727]: I1121 20:30:35.301067 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/501fa221-2089-47bd-999a-510bc80b25d6-config-data" (OuterVolumeSpecName: "config-data") pod "501fa221-2089-47bd-999a-510bc80b25d6" (UID: "501fa221-2089-47bd-999a-510bc80b25d6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:30:35 crc kubenswrapper[4727]: I1121 20:30:35.345662 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/501fa221-2089-47bd-999a-510bc80b25d6-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 20:30:35 crc kubenswrapper[4727]: I1121 20:30:35.345704 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/501fa221-2089-47bd-999a-510bc80b25d6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:30:35 crc kubenswrapper[4727]: I1121 20:30:35.553041 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 20:30:35 crc kubenswrapper[4727]: I1121 20:30:35.553025 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"501fa221-2089-47bd-999a-510bc80b25d6","Type":"ContainerDied","Data":"0871e5e26f0d0cbbe6ab553b4b09b2111cf5786b46e48f9a2c4ef99b5df429ea"} Nov 21 20:30:35 crc kubenswrapper[4727]: I1121 20:30:35.554385 4727 scope.go:117] "RemoveContainer" containerID="c2fc8f0085fa7ff687c281ef79ba4dedd7d888ecf209e0d77ec7e12d72ac9da0" Nov 21 20:30:35 crc kubenswrapper[4727]: I1121 20:30:35.608219 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6tjrt"] Nov 21 20:30:35 crc kubenswrapper[4727]: I1121 20:30:35.613435 4727 scope.go:117] "RemoveContainer" containerID="ee630afd9cd1492631b5367f56b7ec69885aecbd27160685948f970f632ff98e" Nov 21 20:30:35 crc kubenswrapper[4727]: I1121 20:30:35.636353 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 21 20:30:35 crc kubenswrapper[4727]: I1121 20:30:35.659687 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 21 20:30:35 crc kubenswrapper[4727]: I1121 20:30:35.671960 4727 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/ceilometer-0"] Nov 21 20:30:35 crc kubenswrapper[4727]: E1121 20:30:35.672588 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="501fa221-2089-47bd-999a-510bc80b25d6" containerName="ceilometer-notification-agent" Nov 21 20:30:35 crc kubenswrapper[4727]: I1121 20:30:35.672609 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="501fa221-2089-47bd-999a-510bc80b25d6" containerName="ceilometer-notification-agent" Nov 21 20:30:35 crc kubenswrapper[4727]: E1121 20:30:35.672625 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="501fa221-2089-47bd-999a-510bc80b25d6" containerName="sg-core" Nov 21 20:30:35 crc kubenswrapper[4727]: I1121 20:30:35.672633 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="501fa221-2089-47bd-999a-510bc80b25d6" containerName="sg-core" Nov 21 20:30:35 crc kubenswrapper[4727]: E1121 20:30:35.672654 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="501fa221-2089-47bd-999a-510bc80b25d6" containerName="proxy-httpd" Nov 21 20:30:35 crc kubenswrapper[4727]: I1121 20:30:35.672661 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="501fa221-2089-47bd-999a-510bc80b25d6" containerName="proxy-httpd" Nov 21 20:30:35 crc kubenswrapper[4727]: E1121 20:30:35.672695 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="501fa221-2089-47bd-999a-510bc80b25d6" containerName="ceilometer-central-agent" Nov 21 20:30:35 crc kubenswrapper[4727]: I1121 20:30:35.672701 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="501fa221-2089-47bd-999a-510bc80b25d6" containerName="ceilometer-central-agent" Nov 21 20:30:35 crc kubenswrapper[4727]: I1121 20:30:35.672937 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="501fa221-2089-47bd-999a-510bc80b25d6" containerName="proxy-httpd" Nov 21 20:30:35 crc kubenswrapper[4727]: I1121 20:30:35.672960 4727 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="501fa221-2089-47bd-999a-510bc80b25d6" containerName="ceilometer-central-agent" Nov 21 20:30:35 crc kubenswrapper[4727]: I1121 20:30:35.672993 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="501fa221-2089-47bd-999a-510bc80b25d6" containerName="ceilometer-notification-agent" Nov 21 20:30:35 crc kubenswrapper[4727]: I1121 20:30:35.673014 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="501fa221-2089-47bd-999a-510bc80b25d6" containerName="sg-core" Nov 21 20:30:35 crc kubenswrapper[4727]: I1121 20:30:35.676062 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 20:30:35 crc kubenswrapper[4727]: I1121 20:30:35.678606 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 21 20:30:35 crc kubenswrapper[4727]: I1121 20:30:35.679027 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 21 20:30:35 crc kubenswrapper[4727]: I1121 20:30:35.679120 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 21 20:30:35 crc kubenswrapper[4727]: I1121 20:30:35.686337 4727 scope.go:117] "RemoveContainer" containerID="b628742dede7af45f2ef7d7c1c8c90a83a845fccc2f73d2381c1a18cb6bf089a" Nov 21 20:30:35 crc kubenswrapper[4727]: I1121 20:30:35.694797 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 20:30:35 crc kubenswrapper[4727]: I1121 20:30:35.756280 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a92f8b6a-9ebe-4c1e-b63e-ad94ef056066-scripts\") pod \"ceilometer-0\" (UID: \"a92f8b6a-9ebe-4c1e-b63e-ad94ef056066\") " pod="openstack/ceilometer-0" Nov 21 20:30:35 crc kubenswrapper[4727]: I1121 20:30:35.756555 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a92f8b6a-9ebe-4c1e-b63e-ad94ef056066-run-httpd\") pod \"ceilometer-0\" (UID: \"a92f8b6a-9ebe-4c1e-b63e-ad94ef056066\") " pod="openstack/ceilometer-0" Nov 21 20:30:35 crc kubenswrapper[4727]: I1121 20:30:35.756671 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a92f8b6a-9ebe-4c1e-b63e-ad94ef056066-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a92f8b6a-9ebe-4c1e-b63e-ad94ef056066\") " pod="openstack/ceilometer-0" Nov 21 20:30:35 crc kubenswrapper[4727]: I1121 20:30:35.756783 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjljj\" (UniqueName: \"kubernetes.io/projected/a92f8b6a-9ebe-4c1e-b63e-ad94ef056066-kube-api-access-vjljj\") pod \"ceilometer-0\" (UID: \"a92f8b6a-9ebe-4c1e-b63e-ad94ef056066\") " pod="openstack/ceilometer-0" Nov 21 20:30:35 crc kubenswrapper[4727]: I1121 20:30:35.756866 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a92f8b6a-9ebe-4c1e-b63e-ad94ef056066-log-httpd\") pod \"ceilometer-0\" (UID: \"a92f8b6a-9ebe-4c1e-b63e-ad94ef056066\") " pod="openstack/ceilometer-0" Nov 21 20:30:35 crc kubenswrapper[4727]: I1121 20:30:35.756970 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a92f8b6a-9ebe-4c1e-b63e-ad94ef056066-config-data\") pod \"ceilometer-0\" (UID: \"a92f8b6a-9ebe-4c1e-b63e-ad94ef056066\") " pod="openstack/ceilometer-0" Nov 21 20:30:35 crc kubenswrapper[4727]: I1121 20:30:35.757280 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a92f8b6a-9ebe-4c1e-b63e-ad94ef056066-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a92f8b6a-9ebe-4c1e-b63e-ad94ef056066\") " pod="openstack/ceilometer-0" Nov 21 20:30:35 crc kubenswrapper[4727]: I1121 20:30:35.757459 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a92f8b6a-9ebe-4c1e-b63e-ad94ef056066-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a92f8b6a-9ebe-4c1e-b63e-ad94ef056066\") " pod="openstack/ceilometer-0" Nov 21 20:30:35 crc kubenswrapper[4727]: I1121 20:30:35.773542 4727 scope.go:117] "RemoveContainer" containerID="563b946924c2672897e92250b5727fb306c89d6001f8724a59fccebed58c77d7" Nov 21 20:30:35 crc kubenswrapper[4727]: I1121 20:30:35.860364 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a92f8b6a-9ebe-4c1e-b63e-ad94ef056066-scripts\") pod \"ceilometer-0\" (UID: \"a92f8b6a-9ebe-4c1e-b63e-ad94ef056066\") " pod="openstack/ceilometer-0" Nov 21 20:30:35 crc kubenswrapper[4727]: I1121 20:30:35.860433 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a92f8b6a-9ebe-4c1e-b63e-ad94ef056066-run-httpd\") pod \"ceilometer-0\" (UID: \"a92f8b6a-9ebe-4c1e-b63e-ad94ef056066\") " pod="openstack/ceilometer-0" Nov 21 20:30:35 crc kubenswrapper[4727]: I1121 20:30:35.860474 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a92f8b6a-9ebe-4c1e-b63e-ad94ef056066-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a92f8b6a-9ebe-4c1e-b63e-ad94ef056066\") " pod="openstack/ceilometer-0" Nov 21 20:30:35 crc kubenswrapper[4727]: I1121 20:30:35.860510 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjljj\" (UniqueName: 
\"kubernetes.io/projected/a92f8b6a-9ebe-4c1e-b63e-ad94ef056066-kube-api-access-vjljj\") pod \"ceilometer-0\" (UID: \"a92f8b6a-9ebe-4c1e-b63e-ad94ef056066\") " pod="openstack/ceilometer-0" Nov 21 20:30:35 crc kubenswrapper[4727]: I1121 20:30:35.860536 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a92f8b6a-9ebe-4c1e-b63e-ad94ef056066-log-httpd\") pod \"ceilometer-0\" (UID: \"a92f8b6a-9ebe-4c1e-b63e-ad94ef056066\") " pod="openstack/ceilometer-0" Nov 21 20:30:35 crc kubenswrapper[4727]: I1121 20:30:35.860580 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a92f8b6a-9ebe-4c1e-b63e-ad94ef056066-config-data\") pod \"ceilometer-0\" (UID: \"a92f8b6a-9ebe-4c1e-b63e-ad94ef056066\") " pod="openstack/ceilometer-0" Nov 21 20:30:35 crc kubenswrapper[4727]: I1121 20:30:35.860624 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a92f8b6a-9ebe-4c1e-b63e-ad94ef056066-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a92f8b6a-9ebe-4c1e-b63e-ad94ef056066\") " pod="openstack/ceilometer-0" Nov 21 20:30:35 crc kubenswrapper[4727]: I1121 20:30:35.860668 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a92f8b6a-9ebe-4c1e-b63e-ad94ef056066-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a92f8b6a-9ebe-4c1e-b63e-ad94ef056066\") " pod="openstack/ceilometer-0" Nov 21 20:30:35 crc kubenswrapper[4727]: I1121 20:30:35.861125 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a92f8b6a-9ebe-4c1e-b63e-ad94ef056066-run-httpd\") pod \"ceilometer-0\" (UID: \"a92f8b6a-9ebe-4c1e-b63e-ad94ef056066\") " pod="openstack/ceilometer-0" Nov 21 20:30:35 crc kubenswrapper[4727]: I1121 
20:30:35.861413 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a92f8b6a-9ebe-4c1e-b63e-ad94ef056066-log-httpd\") pod \"ceilometer-0\" (UID: \"a92f8b6a-9ebe-4c1e-b63e-ad94ef056066\") " pod="openstack/ceilometer-0" Nov 21 20:30:35 crc kubenswrapper[4727]: I1121 20:30:35.868465 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a92f8b6a-9ebe-4c1e-b63e-ad94ef056066-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a92f8b6a-9ebe-4c1e-b63e-ad94ef056066\") " pod="openstack/ceilometer-0" Nov 21 20:30:35 crc kubenswrapper[4727]: I1121 20:30:35.869898 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a92f8b6a-9ebe-4c1e-b63e-ad94ef056066-scripts\") pod \"ceilometer-0\" (UID: \"a92f8b6a-9ebe-4c1e-b63e-ad94ef056066\") " pod="openstack/ceilometer-0" Nov 21 20:30:35 crc kubenswrapper[4727]: I1121 20:30:35.874156 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a92f8b6a-9ebe-4c1e-b63e-ad94ef056066-config-data\") pod \"ceilometer-0\" (UID: \"a92f8b6a-9ebe-4c1e-b63e-ad94ef056066\") " pod="openstack/ceilometer-0" Nov 21 20:30:35 crc kubenswrapper[4727]: I1121 20:30:35.877156 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a92f8b6a-9ebe-4c1e-b63e-ad94ef056066-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a92f8b6a-9ebe-4c1e-b63e-ad94ef056066\") " pod="openstack/ceilometer-0" Nov 21 20:30:35 crc kubenswrapper[4727]: I1121 20:30:35.881200 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a92f8b6a-9ebe-4c1e-b63e-ad94ef056066-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a92f8b6a-9ebe-4c1e-b63e-ad94ef056066\") " 
pod="openstack/ceilometer-0" Nov 21 20:30:35 crc kubenswrapper[4727]: I1121 20:30:35.884323 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjljj\" (UniqueName: \"kubernetes.io/projected/a92f8b6a-9ebe-4c1e-b63e-ad94ef056066-kube-api-access-vjljj\") pod \"ceilometer-0\" (UID: \"a92f8b6a-9ebe-4c1e-b63e-ad94ef056066\") " pod="openstack/ceilometer-0" Nov 21 20:30:36 crc kubenswrapper[4727]: I1121 20:30:36.087870 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 20:30:36 crc kubenswrapper[4727]: I1121 20:30:36.570618 4727 generic.go:334] "Generic (PLEG): container finished" podID="2362ffc5-5fd6-467a-a7f9-e3a25581f176" containerID="c90f9617e825d3f049a6fcd2a968fad0c9d7348398f045ba2ae64943714e53bf" exitCode=0 Nov 21 20:30:36 crc kubenswrapper[4727]: I1121 20:30:36.570712 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6tjrt" event={"ID":"2362ffc5-5fd6-467a-a7f9-e3a25581f176","Type":"ContainerDied","Data":"c90f9617e825d3f049a6fcd2a968fad0c9d7348398f045ba2ae64943714e53bf"} Nov 21 20:30:36 crc kubenswrapper[4727]: I1121 20:30:36.570872 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6tjrt" event={"ID":"2362ffc5-5fd6-467a-a7f9-e3a25581f176","Type":"ContainerStarted","Data":"7df71519a159861e66ca5912a56e264fa09acd61cf88535dbb8af7b338fd8bc8"} Nov 21 20:30:36 crc kubenswrapper[4727]: W1121 20:30:36.611029 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda92f8b6a_9ebe_4c1e_b63e_ad94ef056066.slice/crio-170f3afcf1490ec08e69672d63ec227623a487b28a0a9ba60e27aa20a063b577 WatchSource:0}: Error finding container 170f3afcf1490ec08e69672d63ec227623a487b28a0a9ba60e27aa20a063b577: Status 404 returned error can't find the container with id 
170f3afcf1490ec08e69672d63ec227623a487b28a0a9ba60e27aa20a063b577 Nov 21 20:30:36 crc kubenswrapper[4727]: I1121 20:30:36.617332 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 20:30:37 crc kubenswrapper[4727]: I1121 20:30:37.513385 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="501fa221-2089-47bd-999a-510bc80b25d6" path="/var/lib/kubelet/pods/501fa221-2089-47bd-999a-510bc80b25d6/volumes" Nov 21 20:30:37 crc kubenswrapper[4727]: I1121 20:30:37.585870 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6tjrt" event={"ID":"2362ffc5-5fd6-467a-a7f9-e3a25581f176","Type":"ContainerStarted","Data":"f39eef34280e73f19788e469803b79cdd8bc62c3f4f7a382aad1e83a30c6fcab"} Nov 21 20:30:37 crc kubenswrapper[4727]: I1121 20:30:37.590465 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a92f8b6a-9ebe-4c1e-b63e-ad94ef056066","Type":"ContainerStarted","Data":"34e183ac2231b3f9af169cb9a5783cd233e17fff3ff60aed1b6ad88ca8e30f86"} Nov 21 20:30:37 crc kubenswrapper[4727]: I1121 20:30:37.590768 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a92f8b6a-9ebe-4c1e-b63e-ad94ef056066","Type":"ContainerStarted","Data":"170f3afcf1490ec08e69672d63ec227623a487b28a0a9ba60e27aa20a063b577"} Nov 21 20:30:38 crc kubenswrapper[4727]: I1121 20:30:38.606598 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a92f8b6a-9ebe-4c1e-b63e-ad94ef056066","Type":"ContainerStarted","Data":"ad6e2845894e023261d606765ae67f8256a1293ae3572c818c9f7e65fa4b918c"} Nov 21 20:30:39 crc kubenswrapper[4727]: I1121 20:30:39.618034 4727 generic.go:334] "Generic (PLEG): container finished" podID="2362ffc5-5fd6-467a-a7f9-e3a25581f176" containerID="f39eef34280e73f19788e469803b79cdd8bc62c3f4f7a382aad1e83a30c6fcab" exitCode=0 Nov 21 20:30:39 crc kubenswrapper[4727]: I1121 
20:30:39.618091 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6tjrt" event={"ID":"2362ffc5-5fd6-467a-a7f9-e3a25581f176","Type":"ContainerDied","Data":"f39eef34280e73f19788e469803b79cdd8bc62c3f4f7a382aad1e83a30c6fcab"} Nov 21 20:30:39 crc kubenswrapper[4727]: I1121 20:30:39.623725 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"9e00f1d6-6b39-4c43-9459-1cc521827332","Type":"ContainerStarted","Data":"c1e6f55e6a61eb65b1905a21ead5000b9e5b47b2fb286c73233b066058f29a73"} Nov 21 20:30:39 crc kubenswrapper[4727]: I1121 20:30:39.670839 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.053550998 podStartE2EDuration="36.670818753s" podCreationTimestamp="2025-11-21 20:30:03 +0000 UTC" firstStartedPulling="2025-11-21 20:30:04.374201436 +0000 UTC m=+1409.560386480" lastFinishedPulling="2025-11-21 20:30:38.991469191 +0000 UTC m=+1444.177654235" observedRunningTime="2025-11-21 20:30:39.654479969 +0000 UTC m=+1444.840665033" watchObservedRunningTime="2025-11-21 20:30:39.670818753 +0000 UTC m=+1444.857003797" Nov 21 20:30:41 crc kubenswrapper[4727]: I1121 20:30:41.954369 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 21 20:30:48 crc kubenswrapper[4727]: E1121 20:30:48.275852 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.io/openstack-k8s-operators/sg-core:latest: Requesting bearer token: invalid status code from registry 504 (Gateway Timeout)" image="quay.io/openstack-k8s-operators/sg-core:latest" Nov 21 20:30:48 crc kubenswrapper[4727]: E1121 20:30:48.276522 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:sg-core,Image:quay.io/openstack-k8s-operators/sg-core:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:sg-core-conf-yaml,ReadOnly:false,MountPath:/etc/sg-core.conf.yaml,SubPath:sg-core.conf.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vjljj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(a92f8b6a-9ebe-4c1e-b63e-ad94ef056066): ErrImagePull: initializing source docker://quay.io/openstack-k8s-operators/sg-core:latest: Requesting bearer token: invalid status code from registry 504 (Gateway Timeout)" logger="UnhandledError" Nov 21 20:30:49 crc kubenswrapper[4727]: E1121 20:30:49.623707 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"sg-core\" with ErrImagePull: \"initializing source docker://quay.io/openstack-k8s-operators/sg-core:latest: Requesting bearer token: invalid status code from registry 504 (Gateway Timeout)\"" pod="openstack/ceilometer-0" podUID="a92f8b6a-9ebe-4c1e-b63e-ad94ef056066" Nov 21 20:30:49 crc kubenswrapper[4727]: E1121 
20:30:49.729595 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad: can't talk to a V1 container registry" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Nov 21 20:30:49 crc kubenswrapper[4727]: E1121 20:30:49.730037 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:120MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{125829120 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4jsvr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-6tjrt_openshift-marketplace(2362ffc5-5fd6-467a-a7f9-e3a25581f176): ErrImagePull: initializing source docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad: can't talk to a V1 container registry" logger="UnhandledError" Nov 21 20:30:49 crc kubenswrapper[4727]: E1121 20:30:49.732096 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"initializing source docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad: can't talk to a V1 container registry\"" pod="openshift-marketplace/community-operators-6tjrt" podUID="2362ffc5-5fd6-467a-a7f9-e3a25581f176" Nov 21 20:30:49 crc kubenswrapper[4727]: I1121 20:30:49.742579 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"a92f8b6a-9ebe-4c1e-b63e-ad94ef056066","Type":"ContainerStarted","Data":"251e76d1bc2a9664c28f2b51d051181e2a801f49a2295d681b1f165f489691bf"} Nov 21 20:30:49 crc kubenswrapper[4727]: I1121 20:30:49.742851 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 21 20:30:49 crc kubenswrapper[4727]: E1121 20:30:49.746308 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"sg-core\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/sg-core:latest\\\"\"" pod="openstack/ceilometer-0" podUID="a92f8b6a-9ebe-4c1e-b63e-ad94ef056066" Nov 21 20:30:50 crc kubenswrapper[4727]: E1121 20:30:50.759266 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"sg-core\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/sg-core:latest\\\"\"" pod="openstack/ceilometer-0" podUID="a92f8b6a-9ebe-4c1e-b63e-ad94ef056066" Nov 21 20:31:02 crc kubenswrapper[4727]: I1121 20:31:02.515360 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="a92f8b6a-9ebe-4c1e-b63e-ad94ef056066" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 21 20:31:03 crc kubenswrapper[4727]: I1121 20:31:03.892982 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a92f8b6a-9ebe-4c1e-b63e-ad94ef056066","Type":"ContainerStarted","Data":"623b6c1c3db29076c2af847c60d3f31159b97646027ca53a85aa214fdb93e9bd"} Nov 21 20:31:03 crc kubenswrapper[4727]: I1121 20:31:03.905547 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 21 20:31:03 crc kubenswrapper[4727]: I1121 20:31:03.914938 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.61044444 
podStartE2EDuration="28.914915602s" podCreationTimestamp="2025-11-21 20:30:35 +0000 UTC" firstStartedPulling="2025-11-21 20:30:36.614108761 +0000 UTC m=+1441.800293805" lastFinishedPulling="2025-11-21 20:31:02.918579923 +0000 UTC m=+1468.104764967" observedRunningTime="2025-11-21 20:31:03.914098792 +0000 UTC m=+1469.100283846" watchObservedRunningTime="2025-11-21 20:31:03.914915602 +0000 UTC m=+1469.101100646" Nov 21 20:31:06 crc kubenswrapper[4727]: I1121 20:31:06.940591 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6tjrt" event={"ID":"2362ffc5-5fd6-467a-a7f9-e3a25581f176","Type":"ContainerStarted","Data":"92be39b8d5829039087790210f7ffdd4830c9ae3c64f3150cdd5726e09302e09"} Nov 21 20:31:06 crc kubenswrapper[4727]: I1121 20:31:06.965112 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6tjrt" podStartSLOduration=3.575057915 podStartE2EDuration="32.965091976s" podCreationTimestamp="2025-11-21 20:30:34 +0000 UTC" firstStartedPulling="2025-11-21 20:30:36.572716631 +0000 UTC m=+1441.758901675" lastFinishedPulling="2025-11-21 20:31:05.962750692 +0000 UTC m=+1471.148935736" observedRunningTime="2025-11-21 20:31:06.958274831 +0000 UTC m=+1472.144459905" watchObservedRunningTime="2025-11-21 20:31:06.965091976 +0000 UTC m=+1472.151277020" Nov 21 20:31:14 crc kubenswrapper[4727]: I1121 20:31:14.113714 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 21 20:31:15 crc kubenswrapper[4727]: I1121 20:31:15.081085 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6tjrt" Nov 21 20:31:15 crc kubenswrapper[4727]: I1121 20:31:15.081138 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6tjrt" Nov 21 20:31:15 crc kubenswrapper[4727]: I1121 20:31:15.431328 4727 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 21 20:31:16 crc kubenswrapper[4727]: I1121 20:31:16.140337 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-6tjrt" podUID="2362ffc5-5fd6-467a-a7f9-e3a25581f176" containerName="registry-server" probeResult="failure" output=< Nov 21 20:31:16 crc kubenswrapper[4727]: timeout: failed to connect service ":50051" within 1s Nov 21 20:31:16 crc kubenswrapper[4727]: > Nov 21 20:31:18 crc kubenswrapper[4727]: I1121 20:31:18.638357 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="377d8548-a458-47c0-bd02-9904c8110d40" containerName="rabbitmq" containerID="cri-o://7ae51f97094b081bca7a8688e0e987d5c172f8e4db19ef08a00c7feb59d2fad1" gracePeriod=604796 Nov 21 20:31:19 crc kubenswrapper[4727]: I1121 20:31:19.755588 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="7fdf0962-de4d-4f58-87d3-a6458e4ff980" containerName="rabbitmq" containerID="cri-o://e32dc94e4f21e6b749466043c0d6ea4cf39dc9f8ed58122cfa0c27c84d7f3234" gracePeriod=604796 Nov 21 20:31:22 crc kubenswrapper[4727]: I1121 20:31:22.997843 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="377d8548-a458-47c0-bd02-9904c8110d40" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.131:5671: connect: connection refused" Nov 21 20:31:23 crc kubenswrapper[4727]: I1121 20:31:23.087579 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="7fdf0962-de4d-4f58-87d3-a6458e4ff980" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.132:5671: connect: connection refused" Nov 21 20:31:25 crc kubenswrapper[4727]: I1121 20:31:25.140324 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-6tjrt" Nov 21 20:31:25 crc kubenswrapper[4727]: I1121 20:31:25.214507 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6tjrt" Nov 21 20:31:25 crc kubenswrapper[4727]: I1121 20:31:25.400500 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6tjrt"] Nov 21 20:31:25 crc kubenswrapper[4727]: I1121 20:31:25.594112 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 21 20:31:25 crc kubenswrapper[4727]: I1121 20:31:25.765489 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2hgc\" (UniqueName: \"kubernetes.io/projected/377d8548-a458-47c0-bd02-9904c8110d40-kube-api-access-g2hgc\") pod \"377d8548-a458-47c0-bd02-9904c8110d40\" (UID: \"377d8548-a458-47c0-bd02-9904c8110d40\") " Nov 21 20:31:25 crc kubenswrapper[4727]: I1121 20:31:25.765559 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/377d8548-a458-47c0-bd02-9904c8110d40-config-data\") pod \"377d8548-a458-47c0-bd02-9904c8110d40\" (UID: \"377d8548-a458-47c0-bd02-9904c8110d40\") " Nov 21 20:31:25 crc kubenswrapper[4727]: I1121 20:31:25.765680 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/377d8548-a458-47c0-bd02-9904c8110d40-pod-info\") pod \"377d8548-a458-47c0-bd02-9904c8110d40\" (UID: \"377d8548-a458-47c0-bd02-9904c8110d40\") " Nov 21 20:31:25 crc kubenswrapper[4727]: I1121 20:31:25.765761 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/377d8548-a458-47c0-bd02-9904c8110d40-rabbitmq-tls\") pod \"377d8548-a458-47c0-bd02-9904c8110d40\" (UID: 
\"377d8548-a458-47c0-bd02-9904c8110d40\") " Nov 21 20:31:25 crc kubenswrapper[4727]: I1121 20:31:25.765810 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/377d8548-a458-47c0-bd02-9904c8110d40-erlang-cookie-secret\") pod \"377d8548-a458-47c0-bd02-9904c8110d40\" (UID: \"377d8548-a458-47c0-bd02-9904c8110d40\") " Nov 21 20:31:25 crc kubenswrapper[4727]: I1121 20:31:25.765848 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/377d8548-a458-47c0-bd02-9904c8110d40-rabbitmq-erlang-cookie\") pod \"377d8548-a458-47c0-bd02-9904c8110d40\" (UID: \"377d8548-a458-47c0-bd02-9904c8110d40\") " Nov 21 20:31:25 crc kubenswrapper[4727]: I1121 20:31:25.765865 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/377d8548-a458-47c0-bd02-9904c8110d40-plugins-conf\") pod \"377d8548-a458-47c0-bd02-9904c8110d40\" (UID: \"377d8548-a458-47c0-bd02-9904c8110d40\") " Nov 21 20:31:25 crc kubenswrapper[4727]: I1121 20:31:25.766006 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/377d8548-a458-47c0-bd02-9904c8110d40-server-conf\") pod \"377d8548-a458-47c0-bd02-9904c8110d40\" (UID: \"377d8548-a458-47c0-bd02-9904c8110d40\") " Nov 21 20:31:25 crc kubenswrapper[4727]: I1121 20:31:25.766035 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"377d8548-a458-47c0-bd02-9904c8110d40\" (UID: \"377d8548-a458-47c0-bd02-9904c8110d40\") " Nov 21 20:31:25 crc kubenswrapper[4727]: I1121 20:31:25.766081 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/377d8548-a458-47c0-bd02-9904c8110d40-rabbitmq-plugins\") pod \"377d8548-a458-47c0-bd02-9904c8110d40\" (UID: \"377d8548-a458-47c0-bd02-9904c8110d40\") " Nov 21 20:31:25 crc kubenswrapper[4727]: I1121 20:31:25.766106 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/377d8548-a458-47c0-bd02-9904c8110d40-rabbitmq-confd\") pod \"377d8548-a458-47c0-bd02-9904c8110d40\" (UID: \"377d8548-a458-47c0-bd02-9904c8110d40\") " Nov 21 20:31:25 crc kubenswrapper[4727]: I1121 20:31:25.767409 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/377d8548-a458-47c0-bd02-9904c8110d40-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "377d8548-a458-47c0-bd02-9904c8110d40" (UID: "377d8548-a458-47c0-bd02-9904c8110d40"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:31:25 crc kubenswrapper[4727]: I1121 20:31:25.767593 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/377d8548-a458-47c0-bd02-9904c8110d40-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "377d8548-a458-47c0-bd02-9904c8110d40" (UID: "377d8548-a458-47c0-bd02-9904c8110d40"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:31:25 crc kubenswrapper[4727]: I1121 20:31:25.768011 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/377d8548-a458-47c0-bd02-9904c8110d40-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "377d8548-a458-47c0-bd02-9904c8110d40" (UID: "377d8548-a458-47c0-bd02-9904c8110d40"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:31:25 crc kubenswrapper[4727]: I1121 20:31:25.777020 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/377d8548-a458-47c0-bd02-9904c8110d40-kube-api-access-g2hgc" (OuterVolumeSpecName: "kube-api-access-g2hgc") pod "377d8548-a458-47c0-bd02-9904c8110d40" (UID: "377d8548-a458-47c0-bd02-9904c8110d40"). InnerVolumeSpecName "kube-api-access-g2hgc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:31:25 crc kubenswrapper[4727]: I1121 20:31:25.777147 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/377d8548-a458-47c0-bd02-9904c8110d40-pod-info" (OuterVolumeSpecName: "pod-info") pod "377d8548-a458-47c0-bd02-9904c8110d40" (UID: "377d8548-a458-47c0-bd02-9904c8110d40"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 21 20:31:25 crc kubenswrapper[4727]: I1121 20:31:25.778131 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/377d8548-a458-47c0-bd02-9904c8110d40-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "377d8548-a458-47c0-bd02-9904c8110d40" (UID: "377d8548-a458-47c0-bd02-9904c8110d40"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:31:25 crc kubenswrapper[4727]: I1121 20:31:25.780906 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "persistence") pod "377d8548-a458-47c0-bd02-9904c8110d40" (UID: "377d8548-a458-47c0-bd02-9904c8110d40"). InnerVolumeSpecName "local-storage03-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 21 20:31:25 crc kubenswrapper[4727]: I1121 20:31:25.788279 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/377d8548-a458-47c0-bd02-9904c8110d40-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "377d8548-a458-47c0-bd02-9904c8110d40" (UID: "377d8548-a458-47c0-bd02-9904c8110d40"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:31:25 crc kubenswrapper[4727]: I1121 20:31:25.810558 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/377d8548-a458-47c0-bd02-9904c8110d40-config-data" (OuterVolumeSpecName: "config-data") pod "377d8548-a458-47c0-bd02-9904c8110d40" (UID: "377d8548-a458-47c0-bd02-9904c8110d40"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:31:25 crc kubenswrapper[4727]: I1121 20:31:25.869096 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/377d8548-a458-47c0-bd02-9904c8110d40-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 20:31:25 crc kubenswrapper[4727]: I1121 20:31:25.869130 4727 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/377d8548-a458-47c0-bd02-9904c8110d40-pod-info\") on node \"crc\" DevicePath \"\"" Nov 21 20:31:25 crc kubenswrapper[4727]: I1121 20:31:25.869139 4727 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/377d8548-a458-47c0-bd02-9904c8110d40-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 21 20:31:25 crc kubenswrapper[4727]: I1121 20:31:25.869149 4727 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/377d8548-a458-47c0-bd02-9904c8110d40-erlang-cookie-secret\") on node \"crc\" 
DevicePath \"\"" Nov 21 20:31:25 crc kubenswrapper[4727]: I1121 20:31:25.869160 4727 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/377d8548-a458-47c0-bd02-9904c8110d40-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 21 20:31:25 crc kubenswrapper[4727]: I1121 20:31:25.869167 4727 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/377d8548-a458-47c0-bd02-9904c8110d40-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 21 20:31:25 crc kubenswrapper[4727]: I1121 20:31:25.869190 4727 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Nov 21 20:31:25 crc kubenswrapper[4727]: I1121 20:31:25.869199 4727 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/377d8548-a458-47c0-bd02-9904c8110d40-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 21 20:31:25 crc kubenswrapper[4727]: I1121 20:31:25.869207 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2hgc\" (UniqueName: \"kubernetes.io/projected/377d8548-a458-47c0-bd02-9904c8110d40-kube-api-access-g2hgc\") on node \"crc\" DevicePath \"\"" Nov 21 20:31:25 crc kubenswrapper[4727]: I1121 20:31:25.873997 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/377d8548-a458-47c0-bd02-9904c8110d40-server-conf" (OuterVolumeSpecName: "server-conf") pod "377d8548-a458-47c0-bd02-9904c8110d40" (UID: "377d8548-a458-47c0-bd02-9904c8110d40"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:31:25 crc kubenswrapper[4727]: I1121 20:31:25.910619 4727 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Nov 21 20:31:25 crc kubenswrapper[4727]: I1121 20:31:25.971777 4727 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/377d8548-a458-47c0-bd02-9904c8110d40-server-conf\") on node \"crc\" DevicePath \"\"" Nov 21 20:31:25 crc kubenswrapper[4727]: I1121 20:31:25.971808 4727 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Nov 21 20:31:25 crc kubenswrapper[4727]: I1121 20:31:25.986278 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/377d8548-a458-47c0-bd02-9904c8110d40-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "377d8548-a458-47c0-bd02-9904c8110d40" (UID: "377d8548-a458-47c0-bd02-9904c8110d40"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.073906 4727 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/377d8548-a458-47c0-bd02-9904c8110d40-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.141432 4727 generic.go:334] "Generic (PLEG): container finished" podID="377d8548-a458-47c0-bd02-9904c8110d40" containerID="7ae51f97094b081bca7a8688e0e987d5c172f8e4db19ef08a00c7feb59d2fad1" exitCode=0 Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.141488 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"377d8548-a458-47c0-bd02-9904c8110d40","Type":"ContainerDied","Data":"7ae51f97094b081bca7a8688e0e987d5c172f8e4db19ef08a00c7feb59d2fad1"} Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.141519 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.141536 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"377d8548-a458-47c0-bd02-9904c8110d40","Type":"ContainerDied","Data":"acd2fc25c5a5ee64561dfad705e8ecb57cdfebd9dd90410833b2de95b4ca193a"} Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.141567 4727 scope.go:117] "RemoveContainer" containerID="7ae51f97094b081bca7a8688e0e987d5c172f8e4db19ef08a00c7feb59d2fad1" Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.272648 4727 scope.go:117] "RemoveContainer" containerID="510940682d0d6a54421c5fcafc52c7deab4e7c172ef1e5d0680ab402b9779eb2" Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.282527 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.301468 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/rabbitmq-server-0"] Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.348339 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 21 20:31:26 crc kubenswrapper[4727]: E1121 20:31:26.348985 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="377d8548-a458-47c0-bd02-9904c8110d40" containerName="rabbitmq" Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.349004 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="377d8548-a458-47c0-bd02-9904c8110d40" containerName="rabbitmq" Nov 21 20:31:26 crc kubenswrapper[4727]: E1121 20:31:26.349041 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="377d8548-a458-47c0-bd02-9904c8110d40" containerName="setup-container" Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.349050 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="377d8548-a458-47c0-bd02-9904c8110d40" containerName="setup-container" Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.349383 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="377d8548-a458-47c0-bd02-9904c8110d40" containerName="rabbitmq" Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.351024 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.353280 4727 scope.go:117] "RemoveContainer" containerID="7ae51f97094b081bca7a8688e0e987d5c172f8e4db19ef08a00c7feb59d2fad1" Nov 21 20:31:26 crc kubenswrapper[4727]: E1121 20:31:26.355947 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ae51f97094b081bca7a8688e0e987d5c172f8e4db19ef08a00c7feb59d2fad1\": container with ID starting with 7ae51f97094b081bca7a8688e0e987d5c172f8e4db19ef08a00c7feb59d2fad1 not found: ID does not exist" containerID="7ae51f97094b081bca7a8688e0e987d5c172f8e4db19ef08a00c7feb59d2fad1" Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.356005 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ae51f97094b081bca7a8688e0e987d5c172f8e4db19ef08a00c7feb59d2fad1"} err="failed to get container status \"7ae51f97094b081bca7a8688e0e987d5c172f8e4db19ef08a00c7feb59d2fad1\": rpc error: code = NotFound desc = could not find container \"7ae51f97094b081bca7a8688e0e987d5c172f8e4db19ef08a00c7feb59d2fad1\": container with ID starting with 7ae51f97094b081bca7a8688e0e987d5c172f8e4db19ef08a00c7feb59d2fad1 not found: ID does not exist" Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.356035 4727 scope.go:117] "RemoveContainer" containerID="510940682d0d6a54421c5fcafc52c7deab4e7c172ef1e5d0680ab402b9779eb2" Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.358385 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.358431 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.358384 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 21 20:31:26 crc 
kubenswrapper[4727]: I1121 20:31:26.358651 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.358760 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.358793 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.359222 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-c55kb" Nov 21 20:31:26 crc kubenswrapper[4727]: E1121 20:31:26.363021 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"510940682d0d6a54421c5fcafc52c7deab4e7c172ef1e5d0680ab402b9779eb2\": container with ID starting with 510940682d0d6a54421c5fcafc52c7deab4e7c172ef1e5d0680ab402b9779eb2 not found: ID does not exist" containerID="510940682d0d6a54421c5fcafc52c7deab4e7c172ef1e5d0680ab402b9779eb2" Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.363119 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"510940682d0d6a54421c5fcafc52c7deab4e7c172ef1e5d0680ab402b9779eb2"} err="failed to get container status \"510940682d0d6a54421c5fcafc52c7deab4e7c172ef1e5d0680ab402b9779eb2\": rpc error: code = NotFound desc = could not find container \"510940682d0d6a54421c5fcafc52c7deab4e7c172ef1e5d0680ab402b9779eb2\": container with ID starting with 510940682d0d6a54421c5fcafc52c7deab4e7c172ef1e5d0680ab402b9779eb2 not found: ID does not exist" Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.417803 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.485186 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"13e6ebe1-eaee-49f1-9b47-6ec82055a8b6\") " pod="openstack/rabbitmq-server-0" Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.485368 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/13e6ebe1-eaee-49f1-9b47-6ec82055a8b6-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"13e6ebe1-eaee-49f1-9b47-6ec82055a8b6\") " pod="openstack/rabbitmq-server-0" Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.485394 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/13e6ebe1-eaee-49f1-9b47-6ec82055a8b6-config-data\") pod \"rabbitmq-server-0\" (UID: \"13e6ebe1-eaee-49f1-9b47-6ec82055a8b6\") " pod="openstack/rabbitmq-server-0" Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.485497 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6thx\" (UniqueName: \"kubernetes.io/projected/13e6ebe1-eaee-49f1-9b47-6ec82055a8b6-kube-api-access-m6thx\") pod \"rabbitmq-server-0\" (UID: \"13e6ebe1-eaee-49f1-9b47-6ec82055a8b6\") " pod="openstack/rabbitmq-server-0" Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.485554 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/13e6ebe1-eaee-49f1-9b47-6ec82055a8b6-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"13e6ebe1-eaee-49f1-9b47-6ec82055a8b6\") " pod="openstack/rabbitmq-server-0" Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.485634 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/13e6ebe1-eaee-49f1-9b47-6ec82055a8b6-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"13e6ebe1-eaee-49f1-9b47-6ec82055a8b6\") " pod="openstack/rabbitmq-server-0" Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.485717 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/13e6ebe1-eaee-49f1-9b47-6ec82055a8b6-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"13e6ebe1-eaee-49f1-9b47-6ec82055a8b6\") " pod="openstack/rabbitmq-server-0" Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.485781 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/13e6ebe1-eaee-49f1-9b47-6ec82055a8b6-server-conf\") pod \"rabbitmq-server-0\" (UID: \"13e6ebe1-eaee-49f1-9b47-6ec82055a8b6\") " pod="openstack/rabbitmq-server-0" Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.485821 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/13e6ebe1-eaee-49f1-9b47-6ec82055a8b6-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"13e6ebe1-eaee-49f1-9b47-6ec82055a8b6\") " pod="openstack/rabbitmq-server-0" Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.485839 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/13e6ebe1-eaee-49f1-9b47-6ec82055a8b6-pod-info\") pod \"rabbitmq-server-0\" (UID: \"13e6ebe1-eaee-49f1-9b47-6ec82055a8b6\") " pod="openstack/rabbitmq-server-0" Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.485866 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/13e6ebe1-eaee-49f1-9b47-6ec82055a8b6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"13e6ebe1-eaee-49f1-9b47-6ec82055a8b6\") " pod="openstack/rabbitmq-server-0" Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.588372 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/13e6ebe1-eaee-49f1-9b47-6ec82055a8b6-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"13e6ebe1-eaee-49f1-9b47-6ec82055a8b6\") " pod="openstack/rabbitmq-server-0" Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.588445 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/13e6ebe1-eaee-49f1-9b47-6ec82055a8b6-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"13e6ebe1-eaee-49f1-9b47-6ec82055a8b6\") " pod="openstack/rabbitmq-server-0" Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.588482 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/13e6ebe1-eaee-49f1-9b47-6ec82055a8b6-server-conf\") pod \"rabbitmq-server-0\" (UID: \"13e6ebe1-eaee-49f1-9b47-6ec82055a8b6\") " pod="openstack/rabbitmq-server-0" Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.588506 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/13e6ebe1-eaee-49f1-9b47-6ec82055a8b6-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"13e6ebe1-eaee-49f1-9b47-6ec82055a8b6\") " pod="openstack/rabbitmq-server-0" Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.588522 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/13e6ebe1-eaee-49f1-9b47-6ec82055a8b6-pod-info\") pod \"rabbitmq-server-0\" (UID: \"13e6ebe1-eaee-49f1-9b47-6ec82055a8b6\") " 
pod="openstack/rabbitmq-server-0" Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.588544 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/13e6ebe1-eaee-49f1-9b47-6ec82055a8b6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"13e6ebe1-eaee-49f1-9b47-6ec82055a8b6\") " pod="openstack/rabbitmq-server-0" Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.588585 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"13e6ebe1-eaee-49f1-9b47-6ec82055a8b6\") " pod="openstack/rabbitmq-server-0" Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.588695 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/13e6ebe1-eaee-49f1-9b47-6ec82055a8b6-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"13e6ebe1-eaee-49f1-9b47-6ec82055a8b6\") " pod="openstack/rabbitmq-server-0" Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.588715 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/13e6ebe1-eaee-49f1-9b47-6ec82055a8b6-config-data\") pod \"rabbitmq-server-0\" (UID: \"13e6ebe1-eaee-49f1-9b47-6ec82055a8b6\") " pod="openstack/rabbitmq-server-0" Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.588764 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6thx\" (UniqueName: \"kubernetes.io/projected/13e6ebe1-eaee-49f1-9b47-6ec82055a8b6-kube-api-access-m6thx\") pod \"rabbitmq-server-0\" (UID: \"13e6ebe1-eaee-49f1-9b47-6ec82055a8b6\") " pod="openstack/rabbitmq-server-0" Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.588791 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/13e6ebe1-eaee-49f1-9b47-6ec82055a8b6-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"13e6ebe1-eaee-49f1-9b47-6ec82055a8b6\") " pod="openstack/rabbitmq-server-0" Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.589199 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"13e6ebe1-eaee-49f1-9b47-6ec82055a8b6\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-server-0" Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.589286 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/13e6ebe1-eaee-49f1-9b47-6ec82055a8b6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"13e6ebe1-eaee-49f1-9b47-6ec82055a8b6\") " pod="openstack/rabbitmq-server-0" Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.589900 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/13e6ebe1-eaee-49f1-9b47-6ec82055a8b6-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"13e6ebe1-eaee-49f1-9b47-6ec82055a8b6\") " pod="openstack/rabbitmq-server-0" Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.589942 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/13e6ebe1-eaee-49f1-9b47-6ec82055a8b6-config-data\") pod \"rabbitmq-server-0\" (UID: \"13e6ebe1-eaee-49f1-9b47-6ec82055a8b6\") " pod="openstack/rabbitmq-server-0" Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.591639 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/13e6ebe1-eaee-49f1-9b47-6ec82055a8b6-server-conf\") pod 
\"rabbitmq-server-0\" (UID: \"13e6ebe1-eaee-49f1-9b47-6ec82055a8b6\") " pod="openstack/rabbitmq-server-0" Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.594523 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/13e6ebe1-eaee-49f1-9b47-6ec82055a8b6-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"13e6ebe1-eaee-49f1-9b47-6ec82055a8b6\") " pod="openstack/rabbitmq-server-0" Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.595057 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/13e6ebe1-eaee-49f1-9b47-6ec82055a8b6-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"13e6ebe1-eaee-49f1-9b47-6ec82055a8b6\") " pod="openstack/rabbitmq-server-0" Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.596228 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/13e6ebe1-eaee-49f1-9b47-6ec82055a8b6-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"13e6ebe1-eaee-49f1-9b47-6ec82055a8b6\") " pod="openstack/rabbitmq-server-0" Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.596368 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/13e6ebe1-eaee-49f1-9b47-6ec82055a8b6-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"13e6ebe1-eaee-49f1-9b47-6ec82055a8b6\") " pod="openstack/rabbitmq-server-0" Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.596474 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/13e6ebe1-eaee-49f1-9b47-6ec82055a8b6-pod-info\") pod \"rabbitmq-server-0\" (UID: \"13e6ebe1-eaee-49f1-9b47-6ec82055a8b6\") " pod="openstack/rabbitmq-server-0" Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.610761 4727 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-m6thx\" (UniqueName: \"kubernetes.io/projected/13e6ebe1-eaee-49f1-9b47-6ec82055a8b6-kube-api-access-m6thx\") pod \"rabbitmq-server-0\" (UID: \"13e6ebe1-eaee-49f1-9b47-6ec82055a8b6\") " pod="openstack/rabbitmq-server-0" Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.635577 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"13e6ebe1-eaee-49f1-9b47-6ec82055a8b6\") " pod="openstack/rabbitmq-server-0" Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.667578 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.679220 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.792342 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7fdf0962-de4d-4f58-87d3-a6458e4ff980-server-conf\") pod \"7fdf0962-de4d-4f58-87d3-a6458e4ff980\" (UID: \"7fdf0962-de4d-4f58-87d3-a6458e4ff980\") " Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.792801 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7fdf0962-de4d-4f58-87d3-a6458e4ff980-rabbitmq-erlang-cookie\") pod \"7fdf0962-de4d-4f58-87d3-a6458e4ff980\" (UID: \"7fdf0962-de4d-4f58-87d3-a6458e4ff980\") " Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.792868 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7fdf0962-de4d-4f58-87d3-a6458e4ff980-rabbitmq-confd\") pod \"7fdf0962-de4d-4f58-87d3-a6458e4ff980\" (UID: 
\"7fdf0962-de4d-4f58-87d3-a6458e4ff980\") " Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.792949 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7fdf0962-de4d-4f58-87d3-a6458e4ff980-pod-info\") pod \"7fdf0962-de4d-4f58-87d3-a6458e4ff980\" (UID: \"7fdf0962-de4d-4f58-87d3-a6458e4ff980\") " Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.793044 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"7fdf0962-de4d-4f58-87d3-a6458e4ff980\" (UID: \"7fdf0962-de4d-4f58-87d3-a6458e4ff980\") " Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.793079 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7fdf0962-de4d-4f58-87d3-a6458e4ff980-plugins-conf\") pod \"7fdf0962-de4d-4f58-87d3-a6458e4ff980\" (UID: \"7fdf0962-de4d-4f58-87d3-a6458e4ff980\") " Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.793099 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7fdf0962-de4d-4f58-87d3-a6458e4ff980-config-data\") pod \"7fdf0962-de4d-4f58-87d3-a6458e4ff980\" (UID: \"7fdf0962-de4d-4f58-87d3-a6458e4ff980\") " Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.793172 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7fdf0962-de4d-4f58-87d3-a6458e4ff980-rabbitmq-plugins\") pod \"7fdf0962-de4d-4f58-87d3-a6458e4ff980\" (UID: \"7fdf0962-de4d-4f58-87d3-a6458e4ff980\") " Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.793217 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/7fdf0962-de4d-4f58-87d3-a6458e4ff980-rabbitmq-tls\") pod \"7fdf0962-de4d-4f58-87d3-a6458e4ff980\" (UID: \"7fdf0962-de4d-4f58-87d3-a6458e4ff980\") " Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.793245 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7fdf0962-de4d-4f58-87d3-a6458e4ff980-erlang-cookie-secret\") pod \"7fdf0962-de4d-4f58-87d3-a6458e4ff980\" (UID: \"7fdf0962-de4d-4f58-87d3-a6458e4ff980\") " Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.793269 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vn55s\" (UniqueName: \"kubernetes.io/projected/7fdf0962-de4d-4f58-87d3-a6458e4ff980-kube-api-access-vn55s\") pod \"7fdf0962-de4d-4f58-87d3-a6458e4ff980\" (UID: \"7fdf0962-de4d-4f58-87d3-a6458e4ff980\") " Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.804026 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fdf0962-de4d-4f58-87d3-a6458e4ff980-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "7fdf0962-de4d-4f58-87d3-a6458e4ff980" (UID: "7fdf0962-de4d-4f58-87d3-a6458e4ff980"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.805352 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fdf0962-de4d-4f58-87d3-a6458e4ff980-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "7fdf0962-de4d-4f58-87d3-a6458e4ff980" (UID: "7fdf0962-de4d-4f58-87d3-a6458e4ff980"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.807084 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "persistence") pod "7fdf0962-de4d-4f58-87d3-a6458e4ff980" (UID: "7fdf0962-de4d-4f58-87d3-a6458e4ff980"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.808415 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fdf0962-de4d-4f58-87d3-a6458e4ff980-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "7fdf0962-de4d-4f58-87d3-a6458e4ff980" (UID: "7fdf0962-de4d-4f58-87d3-a6458e4ff980"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.809861 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/7fdf0962-de4d-4f58-87d3-a6458e4ff980-pod-info" (OuterVolumeSpecName: "pod-info") pod "7fdf0962-de4d-4f58-87d3-a6458e4ff980" (UID: "7fdf0962-de4d-4f58-87d3-a6458e4ff980"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.823080 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fdf0962-de4d-4f58-87d3-a6458e4ff980-kube-api-access-vn55s" (OuterVolumeSpecName: "kube-api-access-vn55s") pod "7fdf0962-de4d-4f58-87d3-a6458e4ff980" (UID: "7fdf0962-de4d-4f58-87d3-a6458e4ff980"). InnerVolumeSpecName "kube-api-access-vn55s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.824280 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fdf0962-de4d-4f58-87d3-a6458e4ff980-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "7fdf0962-de4d-4f58-87d3-a6458e4ff980" (UID: "7fdf0962-de4d-4f58-87d3-a6458e4ff980"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.827770 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fdf0962-de4d-4f58-87d3-a6458e4ff980-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "7fdf0962-de4d-4f58-87d3-a6458e4ff980" (UID: "7fdf0962-de4d-4f58-87d3-a6458e4ff980"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.908435 4727 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7fdf0962-de4d-4f58-87d3-a6458e4ff980-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.908465 4727 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7fdf0962-de4d-4f58-87d3-a6458e4ff980-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.908478 4727 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7fdf0962-de4d-4f58-87d3-a6458e4ff980-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.908491 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vn55s\" (UniqueName: 
\"kubernetes.io/projected/7fdf0962-de4d-4f58-87d3-a6458e4ff980-kube-api-access-vn55s\") on node \"crc\" DevicePath \"\"" Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.908505 4727 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7fdf0962-de4d-4f58-87d3-a6458e4ff980-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.908516 4727 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7fdf0962-de4d-4f58-87d3-a6458e4ff980-pod-info\") on node \"crc\" DevicePath \"\"" Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.909244 4727 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.910574 4727 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7fdf0962-de4d-4f58-87d3-a6458e4ff980-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 21 20:31:26 crc kubenswrapper[4727]: I1121 20:31:26.947639 4727 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.014189 4727 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.031522 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fdf0962-de4d-4f58-87d3-a6458e4ff980-config-data" (OuterVolumeSpecName: "config-data") pod "7fdf0962-de4d-4f58-87d3-a6458e4ff980" (UID: "7fdf0962-de4d-4f58-87d3-a6458e4ff980"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.120148 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fdf0962-de4d-4f58-87d3-a6458e4ff980-server-conf" (OuterVolumeSpecName: "server-conf") pod "7fdf0962-de4d-4f58-87d3-a6458e4ff980" (UID: "7fdf0962-de4d-4f58-87d3-a6458e4ff980"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.120362 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7fdf0962-de4d-4f58-87d3-a6458e4ff980-server-conf\") pod \"7fdf0962-de4d-4f58-87d3-a6458e4ff980\" (UID: \"7fdf0962-de4d-4f58-87d3-a6458e4ff980\") " Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.120890 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7fdf0962-de4d-4f58-87d3-a6458e4ff980-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 20:31:27 crc kubenswrapper[4727]: W1121 20:31:27.121011 4727 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/7fdf0962-de4d-4f58-87d3-a6458e4ff980/volumes/kubernetes.io~configmap/server-conf Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.121031 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fdf0962-de4d-4f58-87d3-a6458e4ff980-server-conf" (OuterVolumeSpecName: "server-conf") pod "7fdf0962-de4d-4f58-87d3-a6458e4ff980" (UID: "7fdf0962-de4d-4f58-87d3-a6458e4ff980"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.130867 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fdf0962-de4d-4f58-87d3-a6458e4ff980-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "7fdf0962-de4d-4f58-87d3-a6458e4ff980" (UID: "7fdf0962-de4d-4f58-87d3-a6458e4ff980"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.158079 4727 generic.go:334] "Generic (PLEG): container finished" podID="7fdf0962-de4d-4f58-87d3-a6458e4ff980" containerID="e32dc94e4f21e6b749466043c0d6ea4cf39dc9f8ed58122cfa0c27c84d7f3234" exitCode=0 Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.158386 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.158265 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7fdf0962-de4d-4f58-87d3-a6458e4ff980","Type":"ContainerDied","Data":"e32dc94e4f21e6b749466043c0d6ea4cf39dc9f8ed58122cfa0c27c84d7f3234"} Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.158477 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7fdf0962-de4d-4f58-87d3-a6458e4ff980","Type":"ContainerDied","Data":"1bd8f5195b78bd84a76e189ce623ccc459f059599030b216ee6e3b625235219d"} Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.158498 4727 scope.go:117] "RemoveContainer" containerID="e32dc94e4f21e6b749466043c0d6ea4cf39dc9f8ed58122cfa0c27c84d7f3234" Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.159272 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6tjrt" podUID="2362ffc5-5fd6-467a-a7f9-e3a25581f176" 
containerName="registry-server" containerID="cri-o://92be39b8d5829039087790210f7ffdd4830c9ae3c64f3150cdd5726e09302e09" gracePeriod=2 Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.202247 4727 scope.go:117] "RemoveContainer" containerID="1dcba278f2a6bdc24a7a739c9adfdd59cbdf5803ac695e1fccac73fad26f31fa" Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.222312 4727 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7fdf0962-de4d-4f58-87d3-a6458e4ff980-server-conf\") on node \"crc\" DevicePath \"\"" Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.222340 4727 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7fdf0962-de4d-4f58-87d3-a6458e4ff980-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.412162 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.437019 4727 scope.go:117] "RemoveContainer" containerID="e32dc94e4f21e6b749466043c0d6ea4cf39dc9f8ed58122cfa0c27c84d7f3234" Nov 21 20:31:27 crc kubenswrapper[4727]: E1121 20:31:27.439857 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e32dc94e4f21e6b749466043c0d6ea4cf39dc9f8ed58122cfa0c27c84d7f3234\": container with ID starting with e32dc94e4f21e6b749466043c0d6ea4cf39dc9f8ed58122cfa0c27c84d7f3234 not found: ID does not exist" containerID="e32dc94e4f21e6b749466043c0d6ea4cf39dc9f8ed58122cfa0c27c84d7f3234" Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.439907 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e32dc94e4f21e6b749466043c0d6ea4cf39dc9f8ed58122cfa0c27c84d7f3234"} err="failed to get container status \"e32dc94e4f21e6b749466043c0d6ea4cf39dc9f8ed58122cfa0c27c84d7f3234\": rpc error: code = 
NotFound desc = could not find container \"e32dc94e4f21e6b749466043c0d6ea4cf39dc9f8ed58122cfa0c27c84d7f3234\": container with ID starting with e32dc94e4f21e6b749466043c0d6ea4cf39dc9f8ed58122cfa0c27c84d7f3234 not found: ID does not exist" Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.440717 4727 scope.go:117] "RemoveContainer" containerID="1dcba278f2a6bdc24a7a739c9adfdd59cbdf5803ac695e1fccac73fad26f31fa" Nov 21 20:31:27 crc kubenswrapper[4727]: E1121 20:31:27.449248 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1dcba278f2a6bdc24a7a739c9adfdd59cbdf5803ac695e1fccac73fad26f31fa\": container with ID starting with 1dcba278f2a6bdc24a7a739c9adfdd59cbdf5803ac695e1fccac73fad26f31fa not found: ID does not exist" containerID="1dcba278f2a6bdc24a7a739c9adfdd59cbdf5803ac695e1fccac73fad26f31fa" Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.449304 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1dcba278f2a6bdc24a7a739c9adfdd59cbdf5803ac695e1fccac73fad26f31fa"} err="failed to get container status \"1dcba278f2a6bdc24a7a739c9adfdd59cbdf5803ac695e1fccac73fad26f31fa\": rpc error: code = NotFound desc = could not find container \"1dcba278f2a6bdc24a7a739c9adfdd59cbdf5803ac695e1fccac73fad26f31fa\": container with ID starting with 1dcba278f2a6bdc24a7a739c9adfdd59cbdf5803ac695e1fccac73fad26f31fa not found: ID does not exist" Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.523607 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="377d8548-a458-47c0-bd02-9904c8110d40" path="/var/lib/kubelet/pods/377d8548-a458-47c0-bd02-9904c8110d40/volumes" Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.582252 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.618097 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/rabbitmq-cell1-server-0"] Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.626498 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 21 20:31:27 crc kubenswrapper[4727]: E1121 20:31:27.627081 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fdf0962-de4d-4f58-87d3-a6458e4ff980" containerName="setup-container" Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.627093 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fdf0962-de4d-4f58-87d3-a6458e4ff980" containerName="setup-container" Nov 21 20:31:27 crc kubenswrapper[4727]: E1121 20:31:27.627125 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fdf0962-de4d-4f58-87d3-a6458e4ff980" containerName="rabbitmq" Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.627131 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fdf0962-de4d-4f58-87d3-a6458e4ff980" containerName="rabbitmq" Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.627340 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fdf0962-de4d-4f58-87d3-a6458e4ff980" containerName="rabbitmq" Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.628668 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.631542 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.631796 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.631922 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.632129 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.632498 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-6ht8p" Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.636206 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.637003 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.649637 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.734318 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6tjrt" Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.738912 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cf901ecd-b37c-4f57-9c80-863e2d949f5f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf901ecd-b37c-4f57-9c80-863e2d949f5f\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.739165 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cf901ecd-b37c-4f57-9c80-863e2d949f5f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf901ecd-b37c-4f57-9c80-863e2d949f5f\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.739275 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cf901ecd-b37c-4f57-9c80-863e2d949f5f-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf901ecd-b37c-4f57-9c80-863e2d949f5f\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.739388 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n2b8\" (UniqueName: \"kubernetes.io/projected/cf901ecd-b37c-4f57-9c80-863e2d949f5f-kube-api-access-7n2b8\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf901ecd-b37c-4f57-9c80-863e2d949f5f\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.739487 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cf901ecd-b37c-4f57-9c80-863e2d949f5f-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"cf901ecd-b37c-4f57-9c80-863e2d949f5f\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.739712 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cf901ecd-b37c-4f57-9c80-863e2d949f5f-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf901ecd-b37c-4f57-9c80-863e2d949f5f\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.739836 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cf901ecd-b37c-4f57-9c80-863e2d949f5f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf901ecd-b37c-4f57-9c80-863e2d949f5f\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.740005 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cf901ecd-b37c-4f57-9c80-863e2d949f5f-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf901ecd-b37c-4f57-9c80-863e2d949f5f\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.740145 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cf901ecd-b37c-4f57-9c80-863e2d949f5f-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf901ecd-b37c-4f57-9c80-863e2d949f5f\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.740261 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cf901ecd-b37c-4f57-9c80-863e2d949f5f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"cf901ecd-b37c-4f57-9c80-863e2d949f5f\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.740402 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf901ecd-b37c-4f57-9c80-863e2d949f5f\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.842016 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2362ffc5-5fd6-467a-a7f9-e3a25581f176-utilities\") pod \"2362ffc5-5fd6-467a-a7f9-e3a25581f176\" (UID: \"2362ffc5-5fd6-467a-a7f9-e3a25581f176\") " Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.842415 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2362ffc5-5fd6-467a-a7f9-e3a25581f176-catalog-content\") pod \"2362ffc5-5fd6-467a-a7f9-e3a25581f176\" (UID: \"2362ffc5-5fd6-467a-a7f9-e3a25581f176\") " Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.842641 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jsvr\" (UniqueName: \"kubernetes.io/projected/2362ffc5-5fd6-467a-a7f9-e3a25581f176-kube-api-access-4jsvr\") pod \"2362ffc5-5fd6-467a-a7f9-e3a25581f176\" (UID: \"2362ffc5-5fd6-467a-a7f9-e3a25581f176\") " Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.842682 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2362ffc5-5fd6-467a-a7f9-e3a25581f176-utilities" (OuterVolumeSpecName: "utilities") pod "2362ffc5-5fd6-467a-a7f9-e3a25581f176" (UID: "2362ffc5-5fd6-467a-a7f9-e3a25581f176"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.842903 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cf901ecd-b37c-4f57-9c80-863e2d949f5f-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf901ecd-b37c-4f57-9c80-863e2d949f5f\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.842952 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cf901ecd-b37c-4f57-9c80-863e2d949f5f-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf901ecd-b37c-4f57-9c80-863e2d949f5f\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.842990 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cf901ecd-b37c-4f57-9c80-863e2d949f5f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf901ecd-b37c-4f57-9c80-863e2d949f5f\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.843030 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf901ecd-b37c-4f57-9c80-863e2d949f5f\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.843113 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cf901ecd-b37c-4f57-9c80-863e2d949f5f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf901ecd-b37c-4f57-9c80-863e2d949f5f\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.843157 4727 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cf901ecd-b37c-4f57-9c80-863e2d949f5f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf901ecd-b37c-4f57-9c80-863e2d949f5f\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.843175 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cf901ecd-b37c-4f57-9c80-863e2d949f5f-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf901ecd-b37c-4f57-9c80-863e2d949f5f\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.843196 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7n2b8\" (UniqueName: \"kubernetes.io/projected/cf901ecd-b37c-4f57-9c80-863e2d949f5f-kube-api-access-7n2b8\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf901ecd-b37c-4f57-9c80-863e2d949f5f\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.843214 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cf901ecd-b37c-4f57-9c80-863e2d949f5f-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf901ecd-b37c-4f57-9c80-863e2d949f5f\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.843241 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cf901ecd-b37c-4f57-9c80-863e2d949f5f-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf901ecd-b37c-4f57-9c80-863e2d949f5f\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.843265 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cf901ecd-b37c-4f57-9c80-863e2d949f5f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf901ecd-b37c-4f57-9c80-863e2d949f5f\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.843351 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2362ffc5-5fd6-467a-a7f9-e3a25581f176-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.844293 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cf901ecd-b37c-4f57-9c80-863e2d949f5f-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf901ecd-b37c-4f57-9c80-863e2d949f5f\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.844788 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cf901ecd-b37c-4f57-9c80-863e2d949f5f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf901ecd-b37c-4f57-9c80-863e2d949f5f\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.845113 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cf901ecd-b37c-4f57-9c80-863e2d949f5f-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf901ecd-b37c-4f57-9c80-863e2d949f5f\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.845472 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf901ecd-b37c-4f57-9c80-863e2d949f5f\") device mount path \"/mnt/openstack/pv04\"" 
pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.845790 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cf901ecd-b37c-4f57-9c80-863e2d949f5f-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf901ecd-b37c-4f57-9c80-863e2d949f5f\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.846190 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cf901ecd-b37c-4f57-9c80-863e2d949f5f-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf901ecd-b37c-4f57-9c80-863e2d949f5f\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.849472 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cf901ecd-b37c-4f57-9c80-863e2d949f5f-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf901ecd-b37c-4f57-9c80-863e2d949f5f\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.849628 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cf901ecd-b37c-4f57-9c80-863e2d949f5f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf901ecd-b37c-4f57-9c80-863e2d949f5f\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.850179 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2362ffc5-5fd6-467a-a7f9-e3a25581f176-kube-api-access-4jsvr" (OuterVolumeSpecName: "kube-api-access-4jsvr") pod "2362ffc5-5fd6-467a-a7f9-e3a25581f176" (UID: "2362ffc5-5fd6-467a-a7f9-e3a25581f176"). InnerVolumeSpecName "kube-api-access-4jsvr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.853357 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cf901ecd-b37c-4f57-9c80-863e2d949f5f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf901ecd-b37c-4f57-9c80-863e2d949f5f\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.860514 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cf901ecd-b37c-4f57-9c80-863e2d949f5f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf901ecd-b37c-4f57-9c80-863e2d949f5f\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.868689 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7n2b8\" (UniqueName: \"kubernetes.io/projected/cf901ecd-b37c-4f57-9c80-863e2d949f5f-kube-api-access-7n2b8\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf901ecd-b37c-4f57-9c80-863e2d949f5f\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.882631 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf901ecd-b37c-4f57-9c80-863e2d949f5f\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.904022 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2362ffc5-5fd6-467a-a7f9-e3a25581f176-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2362ffc5-5fd6-467a-a7f9-e3a25581f176" (UID: "2362ffc5-5fd6-467a-a7f9-e3a25581f176"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.945771 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jsvr\" (UniqueName: \"kubernetes.io/projected/2362ffc5-5fd6-467a-a7f9-e3a25581f176-kube-api-access-4jsvr\") on node \"crc\" DevicePath \"\"" Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.945804 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2362ffc5-5fd6-467a-a7f9-e3a25581f176-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 20:31:27 crc kubenswrapper[4727]: I1121 20:31:27.962219 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:31:28 crc kubenswrapper[4727]: I1121 20:31:28.175908 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"13e6ebe1-eaee-49f1-9b47-6ec82055a8b6","Type":"ContainerStarted","Data":"f03c0998595ea4328170785f9c54916c390b5d847f31b2b3f0f9b02fec2a5956"} Nov 21 20:31:28 crc kubenswrapper[4727]: I1121 20:31:28.179906 4727 generic.go:334] "Generic (PLEG): container finished" podID="2362ffc5-5fd6-467a-a7f9-e3a25581f176" containerID="92be39b8d5829039087790210f7ffdd4830c9ae3c64f3150cdd5726e09302e09" exitCode=0 Nov 21 20:31:28 crc kubenswrapper[4727]: I1121 20:31:28.179974 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6tjrt" event={"ID":"2362ffc5-5fd6-467a-a7f9-e3a25581f176","Type":"ContainerDied","Data":"92be39b8d5829039087790210f7ffdd4830c9ae3c64f3150cdd5726e09302e09"} Nov 21 20:31:28 crc kubenswrapper[4727]: I1121 20:31:28.180008 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6tjrt" event={"ID":"2362ffc5-5fd6-467a-a7f9-e3a25581f176","Type":"ContainerDied","Data":"7df71519a159861e66ca5912a56e264fa09acd61cf88535dbb8af7b338fd8bc8"} Nov 21 
20:31:28 crc kubenswrapper[4727]: I1121 20:31:28.180027 4727 scope.go:117] "RemoveContainer" containerID="92be39b8d5829039087790210f7ffdd4830c9ae3c64f3150cdd5726e09302e09" Nov 21 20:31:28 crc kubenswrapper[4727]: I1121 20:31:28.180172 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6tjrt" Nov 21 20:31:28 crc kubenswrapper[4727]: I1121 20:31:28.290706 4727 scope.go:117] "RemoveContainer" containerID="f39eef34280e73f19788e469803b79cdd8bc62c3f4f7a382aad1e83a30c6fcab" Nov 21 20:31:28 crc kubenswrapper[4727]: I1121 20:31:28.306567 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6tjrt"] Nov 21 20:31:28 crc kubenswrapper[4727]: I1121 20:31:28.317031 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6tjrt"] Nov 21 20:31:28 crc kubenswrapper[4727]: I1121 20:31:28.332559 4727 scope.go:117] "RemoveContainer" containerID="c90f9617e825d3f049a6fcd2a968fad0c9d7348398f045ba2ae64943714e53bf" Nov 21 20:31:28 crc kubenswrapper[4727]: I1121 20:31:28.372132 4727 scope.go:117] "RemoveContainer" containerID="92be39b8d5829039087790210f7ffdd4830c9ae3c64f3150cdd5726e09302e09" Nov 21 20:31:28 crc kubenswrapper[4727]: E1121 20:31:28.372679 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92be39b8d5829039087790210f7ffdd4830c9ae3c64f3150cdd5726e09302e09\": container with ID starting with 92be39b8d5829039087790210f7ffdd4830c9ae3c64f3150cdd5726e09302e09 not found: ID does not exist" containerID="92be39b8d5829039087790210f7ffdd4830c9ae3c64f3150cdd5726e09302e09" Nov 21 20:31:28 crc kubenswrapper[4727]: I1121 20:31:28.372733 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92be39b8d5829039087790210f7ffdd4830c9ae3c64f3150cdd5726e09302e09"} err="failed to get container status 
\"92be39b8d5829039087790210f7ffdd4830c9ae3c64f3150cdd5726e09302e09\": rpc error: code = NotFound desc = could not find container \"92be39b8d5829039087790210f7ffdd4830c9ae3c64f3150cdd5726e09302e09\": container with ID starting with 92be39b8d5829039087790210f7ffdd4830c9ae3c64f3150cdd5726e09302e09 not found: ID does not exist" Nov 21 20:31:28 crc kubenswrapper[4727]: I1121 20:31:28.372798 4727 scope.go:117] "RemoveContainer" containerID="f39eef34280e73f19788e469803b79cdd8bc62c3f4f7a382aad1e83a30c6fcab" Nov 21 20:31:28 crc kubenswrapper[4727]: E1121 20:31:28.373884 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f39eef34280e73f19788e469803b79cdd8bc62c3f4f7a382aad1e83a30c6fcab\": container with ID starting with f39eef34280e73f19788e469803b79cdd8bc62c3f4f7a382aad1e83a30c6fcab not found: ID does not exist" containerID="f39eef34280e73f19788e469803b79cdd8bc62c3f4f7a382aad1e83a30c6fcab" Nov 21 20:31:28 crc kubenswrapper[4727]: I1121 20:31:28.373923 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f39eef34280e73f19788e469803b79cdd8bc62c3f4f7a382aad1e83a30c6fcab"} err="failed to get container status \"f39eef34280e73f19788e469803b79cdd8bc62c3f4f7a382aad1e83a30c6fcab\": rpc error: code = NotFound desc = could not find container \"f39eef34280e73f19788e469803b79cdd8bc62c3f4f7a382aad1e83a30c6fcab\": container with ID starting with f39eef34280e73f19788e469803b79cdd8bc62c3f4f7a382aad1e83a30c6fcab not found: ID does not exist" Nov 21 20:31:28 crc kubenswrapper[4727]: I1121 20:31:28.373948 4727 scope.go:117] "RemoveContainer" containerID="c90f9617e825d3f049a6fcd2a968fad0c9d7348398f045ba2ae64943714e53bf" Nov 21 20:31:28 crc kubenswrapper[4727]: E1121 20:31:28.374344 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"c90f9617e825d3f049a6fcd2a968fad0c9d7348398f045ba2ae64943714e53bf\": container with ID starting with c90f9617e825d3f049a6fcd2a968fad0c9d7348398f045ba2ae64943714e53bf not found: ID does not exist" containerID="c90f9617e825d3f049a6fcd2a968fad0c9d7348398f045ba2ae64943714e53bf" Nov 21 20:31:28 crc kubenswrapper[4727]: I1121 20:31:28.374400 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c90f9617e825d3f049a6fcd2a968fad0c9d7348398f045ba2ae64943714e53bf"} err="failed to get container status \"c90f9617e825d3f049a6fcd2a968fad0c9d7348398f045ba2ae64943714e53bf\": rpc error: code = NotFound desc = could not find container \"c90f9617e825d3f049a6fcd2a968fad0c9d7348398f045ba2ae64943714e53bf\": container with ID starting with c90f9617e825d3f049a6fcd2a968fad0c9d7348398f045ba2ae64943714e53bf not found: ID does not exist" Nov 21 20:31:28 crc kubenswrapper[4727]: W1121 20:31:28.567296 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf901ecd_b37c_4f57_9c80_863e2d949f5f.slice/crio-a1345facc401adfd34a71dad370f27ab0401222a53e655bf9d3eb2d7acd3208b WatchSource:0}: Error finding container a1345facc401adfd34a71dad370f27ab0401222a53e655bf9d3eb2d7acd3208b: Status 404 returned error can't find the container with id a1345facc401adfd34a71dad370f27ab0401222a53e655bf9d3eb2d7acd3208b Nov 21 20:31:28 crc kubenswrapper[4727]: I1121 20:31:28.568773 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 21 20:31:29 crc kubenswrapper[4727]: I1121 20:31:29.193817 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cf901ecd-b37c-4f57-9c80-863e2d949f5f","Type":"ContainerStarted","Data":"a1345facc401adfd34a71dad370f27ab0401222a53e655bf9d3eb2d7acd3208b"} Nov 21 20:31:29 crc kubenswrapper[4727]: I1121 20:31:29.512506 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="2362ffc5-5fd6-467a-a7f9-e3a25581f176" path="/var/lib/kubelet/pods/2362ffc5-5fd6-467a-a7f9-e3a25581f176/volumes" Nov 21 20:31:29 crc kubenswrapper[4727]: I1121 20:31:29.514715 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fdf0962-de4d-4f58-87d3-a6458e4ff980" path="/var/lib/kubelet/pods/7fdf0962-de4d-4f58-87d3-a6458e4ff980/volumes" Nov 21 20:31:29 crc kubenswrapper[4727]: I1121 20:31:29.720136 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-dmv56"] Nov 21 20:31:29 crc kubenswrapper[4727]: E1121 20:31:29.720750 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2362ffc5-5fd6-467a-a7f9-e3a25581f176" containerName="extract-content" Nov 21 20:31:29 crc kubenswrapper[4727]: I1121 20:31:29.720781 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="2362ffc5-5fd6-467a-a7f9-e3a25581f176" containerName="extract-content" Nov 21 20:31:29 crc kubenswrapper[4727]: E1121 20:31:29.720812 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2362ffc5-5fd6-467a-a7f9-e3a25581f176" containerName="registry-server" Nov 21 20:31:29 crc kubenswrapper[4727]: I1121 20:31:29.720821 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="2362ffc5-5fd6-467a-a7f9-e3a25581f176" containerName="registry-server" Nov 21 20:31:29 crc kubenswrapper[4727]: E1121 20:31:29.720854 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2362ffc5-5fd6-467a-a7f9-e3a25581f176" containerName="extract-utilities" Nov 21 20:31:29 crc kubenswrapper[4727]: I1121 20:31:29.720862 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="2362ffc5-5fd6-467a-a7f9-e3a25581f176" containerName="extract-utilities" Nov 21 20:31:29 crc kubenswrapper[4727]: I1121 20:31:29.721166 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="2362ffc5-5fd6-467a-a7f9-e3a25581f176" containerName="registry-server" Nov 21 20:31:29 crc kubenswrapper[4727]: I1121 
20:31:29.722766 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-dmv56" Nov 21 20:31:29 crc kubenswrapper[4727]: I1121 20:31:29.728340 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Nov 21 20:31:29 crc kubenswrapper[4727]: I1121 20:31:29.735998 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-dmv56"] Nov 21 20:31:29 crc kubenswrapper[4727]: I1121 20:31:29.901292 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eaef650c-6dd0-4987-8d25-9b8de774794c-ovsdbserver-sb\") pod \"dnsmasq-dns-5b75489c6f-dmv56\" (UID: \"eaef650c-6dd0-4987-8d25-9b8de774794c\") " pod="openstack/dnsmasq-dns-5b75489c6f-dmv56" Nov 21 20:31:29 crc kubenswrapper[4727]: I1121 20:31:29.901351 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eaef650c-6dd0-4987-8d25-9b8de774794c-ovsdbserver-nb\") pod \"dnsmasq-dns-5b75489c6f-dmv56\" (UID: \"eaef650c-6dd0-4987-8d25-9b8de774794c\") " pod="openstack/dnsmasq-dns-5b75489c6f-dmv56" Nov 21 20:31:29 crc kubenswrapper[4727]: I1121 20:31:29.901702 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pccmz\" (UniqueName: \"kubernetes.io/projected/eaef650c-6dd0-4987-8d25-9b8de774794c-kube-api-access-pccmz\") pod \"dnsmasq-dns-5b75489c6f-dmv56\" (UID: \"eaef650c-6dd0-4987-8d25-9b8de774794c\") " pod="openstack/dnsmasq-dns-5b75489c6f-dmv56" Nov 21 20:31:29 crc kubenswrapper[4727]: I1121 20:31:29.901833 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eaef650c-6dd0-4987-8d25-9b8de774794c-dns-swift-storage-0\") pod 
\"dnsmasq-dns-5b75489c6f-dmv56\" (UID: \"eaef650c-6dd0-4987-8d25-9b8de774794c\") " pod="openstack/dnsmasq-dns-5b75489c6f-dmv56" Nov 21 20:31:29 crc kubenswrapper[4727]: I1121 20:31:29.902046 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaef650c-6dd0-4987-8d25-9b8de774794c-config\") pod \"dnsmasq-dns-5b75489c6f-dmv56\" (UID: \"eaef650c-6dd0-4987-8d25-9b8de774794c\") " pod="openstack/dnsmasq-dns-5b75489c6f-dmv56" Nov 21 20:31:29 crc kubenswrapper[4727]: I1121 20:31:29.902119 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/eaef650c-6dd0-4987-8d25-9b8de774794c-openstack-edpm-ipam\") pod \"dnsmasq-dns-5b75489c6f-dmv56\" (UID: \"eaef650c-6dd0-4987-8d25-9b8de774794c\") " pod="openstack/dnsmasq-dns-5b75489c6f-dmv56" Nov 21 20:31:29 crc kubenswrapper[4727]: I1121 20:31:29.902197 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eaef650c-6dd0-4987-8d25-9b8de774794c-dns-svc\") pod \"dnsmasq-dns-5b75489c6f-dmv56\" (UID: \"eaef650c-6dd0-4987-8d25-9b8de774794c\") " pod="openstack/dnsmasq-dns-5b75489c6f-dmv56" Nov 21 20:31:30 crc kubenswrapper[4727]: I1121 20:31:30.004094 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pccmz\" (UniqueName: \"kubernetes.io/projected/eaef650c-6dd0-4987-8d25-9b8de774794c-kube-api-access-pccmz\") pod \"dnsmasq-dns-5b75489c6f-dmv56\" (UID: \"eaef650c-6dd0-4987-8d25-9b8de774794c\") " pod="openstack/dnsmasq-dns-5b75489c6f-dmv56" Nov 21 20:31:30 crc kubenswrapper[4727]: I1121 20:31:30.004173 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/eaef650c-6dd0-4987-8d25-9b8de774794c-dns-swift-storage-0\") pod \"dnsmasq-dns-5b75489c6f-dmv56\" (UID: \"eaef650c-6dd0-4987-8d25-9b8de774794c\") " pod="openstack/dnsmasq-dns-5b75489c6f-dmv56" Nov 21 20:31:30 crc kubenswrapper[4727]: I1121 20:31:30.004235 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaef650c-6dd0-4987-8d25-9b8de774794c-config\") pod \"dnsmasq-dns-5b75489c6f-dmv56\" (UID: \"eaef650c-6dd0-4987-8d25-9b8de774794c\") " pod="openstack/dnsmasq-dns-5b75489c6f-dmv56" Nov 21 20:31:30 crc kubenswrapper[4727]: I1121 20:31:30.004300 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/eaef650c-6dd0-4987-8d25-9b8de774794c-openstack-edpm-ipam\") pod \"dnsmasq-dns-5b75489c6f-dmv56\" (UID: \"eaef650c-6dd0-4987-8d25-9b8de774794c\") " pod="openstack/dnsmasq-dns-5b75489c6f-dmv56" Nov 21 20:31:30 crc kubenswrapper[4727]: I1121 20:31:30.004345 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eaef650c-6dd0-4987-8d25-9b8de774794c-dns-svc\") pod \"dnsmasq-dns-5b75489c6f-dmv56\" (UID: \"eaef650c-6dd0-4987-8d25-9b8de774794c\") " pod="openstack/dnsmasq-dns-5b75489c6f-dmv56" Nov 21 20:31:30 crc kubenswrapper[4727]: I1121 20:31:30.004447 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eaef650c-6dd0-4987-8d25-9b8de774794c-ovsdbserver-sb\") pod \"dnsmasq-dns-5b75489c6f-dmv56\" (UID: \"eaef650c-6dd0-4987-8d25-9b8de774794c\") " pod="openstack/dnsmasq-dns-5b75489c6f-dmv56" Nov 21 20:31:30 crc kubenswrapper[4727]: I1121 20:31:30.004486 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/eaef650c-6dd0-4987-8d25-9b8de774794c-ovsdbserver-nb\") pod \"dnsmasq-dns-5b75489c6f-dmv56\" (UID: \"eaef650c-6dd0-4987-8d25-9b8de774794c\") " pod="openstack/dnsmasq-dns-5b75489c6f-dmv56" Nov 21 20:31:30 crc kubenswrapper[4727]: I1121 20:31:30.005407 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eaef650c-6dd0-4987-8d25-9b8de774794c-dns-swift-storage-0\") pod \"dnsmasq-dns-5b75489c6f-dmv56\" (UID: \"eaef650c-6dd0-4987-8d25-9b8de774794c\") " pod="openstack/dnsmasq-dns-5b75489c6f-dmv56" Nov 21 20:31:30 crc kubenswrapper[4727]: I1121 20:31:30.005672 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/eaef650c-6dd0-4987-8d25-9b8de774794c-openstack-edpm-ipam\") pod \"dnsmasq-dns-5b75489c6f-dmv56\" (UID: \"eaef650c-6dd0-4987-8d25-9b8de774794c\") " pod="openstack/dnsmasq-dns-5b75489c6f-dmv56" Nov 21 20:31:30 crc kubenswrapper[4727]: I1121 20:31:30.005740 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eaef650c-6dd0-4987-8d25-9b8de774794c-ovsdbserver-sb\") pod \"dnsmasq-dns-5b75489c6f-dmv56\" (UID: \"eaef650c-6dd0-4987-8d25-9b8de774794c\") " pod="openstack/dnsmasq-dns-5b75489c6f-dmv56" Nov 21 20:31:30 crc kubenswrapper[4727]: I1121 20:31:30.005992 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaef650c-6dd0-4987-8d25-9b8de774794c-config\") pod \"dnsmasq-dns-5b75489c6f-dmv56\" (UID: \"eaef650c-6dd0-4987-8d25-9b8de774794c\") " pod="openstack/dnsmasq-dns-5b75489c6f-dmv56" Nov 21 20:31:30 crc kubenswrapper[4727]: I1121 20:31:30.006120 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eaef650c-6dd0-4987-8d25-9b8de774794c-dns-svc\") pod 
\"dnsmasq-dns-5b75489c6f-dmv56\" (UID: \"eaef650c-6dd0-4987-8d25-9b8de774794c\") " pod="openstack/dnsmasq-dns-5b75489c6f-dmv56" Nov 21 20:31:30 crc kubenswrapper[4727]: I1121 20:31:30.006128 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eaef650c-6dd0-4987-8d25-9b8de774794c-ovsdbserver-nb\") pod \"dnsmasq-dns-5b75489c6f-dmv56\" (UID: \"eaef650c-6dd0-4987-8d25-9b8de774794c\") " pod="openstack/dnsmasq-dns-5b75489c6f-dmv56" Nov 21 20:31:30 crc kubenswrapper[4727]: I1121 20:31:30.037775 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pccmz\" (UniqueName: \"kubernetes.io/projected/eaef650c-6dd0-4987-8d25-9b8de774794c-kube-api-access-pccmz\") pod \"dnsmasq-dns-5b75489c6f-dmv56\" (UID: \"eaef650c-6dd0-4987-8d25-9b8de774794c\") " pod="openstack/dnsmasq-dns-5b75489c6f-dmv56" Nov 21 20:31:30 crc kubenswrapper[4727]: I1121 20:31:30.044664 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-dmv56" Nov 21 20:31:30 crc kubenswrapper[4727]: I1121 20:31:30.243104 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"13e6ebe1-eaee-49f1-9b47-6ec82055a8b6","Type":"ContainerStarted","Data":"64b1cae076edbc51ac8fa9b6df46e26f17f64164f0f58d92c0f89a7fd141cb8c"} Nov 21 20:31:30 crc kubenswrapper[4727]: I1121 20:31:30.695951 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-dmv56"] Nov 21 20:31:31 crc kubenswrapper[4727]: I1121 20:31:31.263619 4727 generic.go:334] "Generic (PLEG): container finished" podID="eaef650c-6dd0-4987-8d25-9b8de774794c" containerID="a7a6b396c389dd68ca6f0fdb70008b2ab2ce2f40faa7b0151e6e20b34b0cf4b4" exitCode=0 Nov 21 20:31:31 crc kubenswrapper[4727]: I1121 20:31:31.263841 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-dmv56" event={"ID":"eaef650c-6dd0-4987-8d25-9b8de774794c","Type":"ContainerDied","Data":"a7a6b396c389dd68ca6f0fdb70008b2ab2ce2f40faa7b0151e6e20b34b0cf4b4"} Nov 21 20:31:31 crc kubenswrapper[4727]: I1121 20:31:31.264183 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-dmv56" event={"ID":"eaef650c-6dd0-4987-8d25-9b8de774794c","Type":"ContainerStarted","Data":"99da8298d436870de134301da81bdb1b20b43484dab1e982ba17b94ceb0ab152"} Nov 21 20:31:31 crc kubenswrapper[4727]: I1121 20:31:31.270083 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cf901ecd-b37c-4f57-9c80-863e2d949f5f","Type":"ContainerStarted","Data":"d7b9e9c0654355ed74af9071f7c6c57d7071e861ed709e41fadde7c39a17da98"} Nov 21 20:31:32 crc kubenswrapper[4727]: I1121 20:31:32.282459 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-dmv56" 
event={"ID":"eaef650c-6dd0-4987-8d25-9b8de774794c","Type":"ContainerStarted","Data":"3b19e061db3195f0124776f1c5579d9566bac1b0004cffffbc31125bf35d793f"} Nov 21 20:31:33 crc kubenswrapper[4727]: I1121 20:31:33.292378 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b75489c6f-dmv56" Nov 21 20:31:40 crc kubenswrapper[4727]: I1121 20:31:40.046044 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b75489c6f-dmv56" Nov 21 20:31:40 crc kubenswrapper[4727]: I1121 20:31:40.064520 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b75489c6f-dmv56" podStartSLOduration=11.064501545 podStartE2EDuration="11.064501545s" podCreationTimestamp="2025-11-21 20:31:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:31:32.300869337 +0000 UTC m=+1497.487054381" watchObservedRunningTime="2025-11-21 20:31:40.064501545 +0000 UTC m=+1505.250686589" Nov 21 20:31:40 crc kubenswrapper[4727]: I1121 20:31:40.113264 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-dmspm"] Nov 21 20:31:40 crc kubenswrapper[4727]: I1121 20:31:40.113542 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-f84f9ccf-dmspm" podUID="8ae1103e-a174-4ed8-9f4b-726eb2198bb7" containerName="dnsmasq-dns" containerID="cri-o://fe9a61ebed3ed86566df9afed4b9fc9b75c3c6e801f108da3b00dac2c1550a72" gracePeriod=10 Nov 21 20:31:40 crc kubenswrapper[4727]: I1121 20:31:40.306396 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d75f767dc-gz7cn"] Nov 21 20:31:40 crc kubenswrapper[4727]: I1121 20:31:40.308371 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d75f767dc-gz7cn" Nov 21 20:31:40 crc kubenswrapper[4727]: I1121 20:31:40.338651 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d75f767dc-gz7cn"] Nov 21 20:31:40 crc kubenswrapper[4727]: I1121 20:31:40.353741 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/e8ed0d14-d6da-4c59-8c10-98ce4a0fe54f-openstack-edpm-ipam\") pod \"dnsmasq-dns-5d75f767dc-gz7cn\" (UID: \"e8ed0d14-d6da-4c59-8c10-98ce4a0fe54f\") " pod="openstack/dnsmasq-dns-5d75f767dc-gz7cn" Nov 21 20:31:40 crc kubenswrapper[4727]: I1121 20:31:40.353827 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8ed0d14-d6da-4c59-8c10-98ce4a0fe54f-config\") pod \"dnsmasq-dns-5d75f767dc-gz7cn\" (UID: \"e8ed0d14-d6da-4c59-8c10-98ce4a0fe54f\") " pod="openstack/dnsmasq-dns-5d75f767dc-gz7cn" Nov 21 20:31:40 crc kubenswrapper[4727]: I1121 20:31:40.353916 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e8ed0d14-d6da-4c59-8c10-98ce4a0fe54f-dns-swift-storage-0\") pod \"dnsmasq-dns-5d75f767dc-gz7cn\" (UID: \"e8ed0d14-d6da-4c59-8c10-98ce4a0fe54f\") " pod="openstack/dnsmasq-dns-5d75f767dc-gz7cn" Nov 21 20:31:40 crc kubenswrapper[4727]: I1121 20:31:40.354009 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e8ed0d14-d6da-4c59-8c10-98ce4a0fe54f-ovsdbserver-nb\") pod \"dnsmasq-dns-5d75f767dc-gz7cn\" (UID: \"e8ed0d14-d6da-4c59-8c10-98ce4a0fe54f\") " pod="openstack/dnsmasq-dns-5d75f767dc-gz7cn" Nov 21 20:31:40 crc kubenswrapper[4727]: I1121 20:31:40.354028 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwhjx\" (UniqueName: \"kubernetes.io/projected/e8ed0d14-d6da-4c59-8c10-98ce4a0fe54f-kube-api-access-qwhjx\") pod \"dnsmasq-dns-5d75f767dc-gz7cn\" (UID: \"e8ed0d14-d6da-4c59-8c10-98ce4a0fe54f\") " pod="openstack/dnsmasq-dns-5d75f767dc-gz7cn" Nov 21 20:31:40 crc kubenswrapper[4727]: I1121 20:31:40.354052 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e8ed0d14-d6da-4c59-8c10-98ce4a0fe54f-dns-svc\") pod \"dnsmasq-dns-5d75f767dc-gz7cn\" (UID: \"e8ed0d14-d6da-4c59-8c10-98ce4a0fe54f\") " pod="openstack/dnsmasq-dns-5d75f767dc-gz7cn" Nov 21 20:31:40 crc kubenswrapper[4727]: I1121 20:31:40.354096 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e8ed0d14-d6da-4c59-8c10-98ce4a0fe54f-ovsdbserver-sb\") pod \"dnsmasq-dns-5d75f767dc-gz7cn\" (UID: \"e8ed0d14-d6da-4c59-8c10-98ce4a0fe54f\") " pod="openstack/dnsmasq-dns-5d75f767dc-gz7cn" Nov 21 20:31:40 crc kubenswrapper[4727]: I1121 20:31:40.402518 4727 generic.go:334] "Generic (PLEG): container finished" podID="8ae1103e-a174-4ed8-9f4b-726eb2198bb7" containerID="fe9a61ebed3ed86566df9afed4b9fc9b75c3c6e801f108da3b00dac2c1550a72" exitCode=0 Nov 21 20:31:40 crc kubenswrapper[4727]: I1121 20:31:40.402558 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-dmspm" event={"ID":"8ae1103e-a174-4ed8-9f4b-726eb2198bb7","Type":"ContainerDied","Data":"fe9a61ebed3ed86566df9afed4b9fc9b75c3c6e801f108da3b00dac2c1550a72"} Nov 21 20:31:40 crc kubenswrapper[4727]: I1121 20:31:40.459527 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e8ed0d14-d6da-4c59-8c10-98ce4a0fe54f-dns-svc\") pod \"dnsmasq-dns-5d75f767dc-gz7cn\" (UID: 
\"e8ed0d14-d6da-4c59-8c10-98ce4a0fe54f\") " pod="openstack/dnsmasq-dns-5d75f767dc-gz7cn" Nov 21 20:31:40 crc kubenswrapper[4727]: I1121 20:31:40.459600 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e8ed0d14-d6da-4c59-8c10-98ce4a0fe54f-ovsdbserver-sb\") pod \"dnsmasq-dns-5d75f767dc-gz7cn\" (UID: \"e8ed0d14-d6da-4c59-8c10-98ce4a0fe54f\") " pod="openstack/dnsmasq-dns-5d75f767dc-gz7cn" Nov 21 20:31:40 crc kubenswrapper[4727]: I1121 20:31:40.459651 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/e8ed0d14-d6da-4c59-8c10-98ce4a0fe54f-openstack-edpm-ipam\") pod \"dnsmasq-dns-5d75f767dc-gz7cn\" (UID: \"e8ed0d14-d6da-4c59-8c10-98ce4a0fe54f\") " pod="openstack/dnsmasq-dns-5d75f767dc-gz7cn" Nov 21 20:31:40 crc kubenswrapper[4727]: I1121 20:31:40.459708 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8ed0d14-d6da-4c59-8c10-98ce4a0fe54f-config\") pod \"dnsmasq-dns-5d75f767dc-gz7cn\" (UID: \"e8ed0d14-d6da-4c59-8c10-98ce4a0fe54f\") " pod="openstack/dnsmasq-dns-5d75f767dc-gz7cn" Nov 21 20:31:40 crc kubenswrapper[4727]: I1121 20:31:40.459827 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e8ed0d14-d6da-4c59-8c10-98ce4a0fe54f-dns-swift-storage-0\") pod \"dnsmasq-dns-5d75f767dc-gz7cn\" (UID: \"e8ed0d14-d6da-4c59-8c10-98ce4a0fe54f\") " pod="openstack/dnsmasq-dns-5d75f767dc-gz7cn" Nov 21 20:31:40 crc kubenswrapper[4727]: I1121 20:31:40.459898 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e8ed0d14-d6da-4c59-8c10-98ce4a0fe54f-ovsdbserver-nb\") pod \"dnsmasq-dns-5d75f767dc-gz7cn\" (UID: \"e8ed0d14-d6da-4c59-8c10-98ce4a0fe54f\") " 
pod="openstack/dnsmasq-dns-5d75f767dc-gz7cn" Nov 21 20:31:40 crc kubenswrapper[4727]: I1121 20:31:40.459916 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwhjx\" (UniqueName: \"kubernetes.io/projected/e8ed0d14-d6da-4c59-8c10-98ce4a0fe54f-kube-api-access-qwhjx\") pod \"dnsmasq-dns-5d75f767dc-gz7cn\" (UID: \"e8ed0d14-d6da-4c59-8c10-98ce4a0fe54f\") " pod="openstack/dnsmasq-dns-5d75f767dc-gz7cn" Nov 21 20:31:40 crc kubenswrapper[4727]: I1121 20:31:40.460866 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e8ed0d14-d6da-4c59-8c10-98ce4a0fe54f-dns-svc\") pod \"dnsmasq-dns-5d75f767dc-gz7cn\" (UID: \"e8ed0d14-d6da-4c59-8c10-98ce4a0fe54f\") " pod="openstack/dnsmasq-dns-5d75f767dc-gz7cn" Nov 21 20:31:40 crc kubenswrapper[4727]: I1121 20:31:40.460913 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/e8ed0d14-d6da-4c59-8c10-98ce4a0fe54f-openstack-edpm-ipam\") pod \"dnsmasq-dns-5d75f767dc-gz7cn\" (UID: \"e8ed0d14-d6da-4c59-8c10-98ce4a0fe54f\") " pod="openstack/dnsmasq-dns-5d75f767dc-gz7cn" Nov 21 20:31:40 crc kubenswrapper[4727]: I1121 20:31:40.460949 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8ed0d14-d6da-4c59-8c10-98ce4a0fe54f-config\") pod \"dnsmasq-dns-5d75f767dc-gz7cn\" (UID: \"e8ed0d14-d6da-4c59-8c10-98ce4a0fe54f\") " pod="openstack/dnsmasq-dns-5d75f767dc-gz7cn" Nov 21 20:31:40 crc kubenswrapper[4727]: I1121 20:31:40.461418 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e8ed0d14-d6da-4c59-8c10-98ce4a0fe54f-ovsdbserver-nb\") pod \"dnsmasq-dns-5d75f767dc-gz7cn\" (UID: \"e8ed0d14-d6da-4c59-8c10-98ce4a0fe54f\") " pod="openstack/dnsmasq-dns-5d75f767dc-gz7cn" Nov 21 20:31:40 crc kubenswrapper[4727]: I1121 
20:31:40.461438 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e8ed0d14-d6da-4c59-8c10-98ce4a0fe54f-dns-swift-storage-0\") pod \"dnsmasq-dns-5d75f767dc-gz7cn\" (UID: \"e8ed0d14-d6da-4c59-8c10-98ce4a0fe54f\") " pod="openstack/dnsmasq-dns-5d75f767dc-gz7cn" Nov 21 20:31:40 crc kubenswrapper[4727]: I1121 20:31:40.462558 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e8ed0d14-d6da-4c59-8c10-98ce4a0fe54f-ovsdbserver-sb\") pod \"dnsmasq-dns-5d75f767dc-gz7cn\" (UID: \"e8ed0d14-d6da-4c59-8c10-98ce4a0fe54f\") " pod="openstack/dnsmasq-dns-5d75f767dc-gz7cn" Nov 21 20:31:40 crc kubenswrapper[4727]: I1121 20:31:40.490014 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwhjx\" (UniqueName: \"kubernetes.io/projected/e8ed0d14-d6da-4c59-8c10-98ce4a0fe54f-kube-api-access-qwhjx\") pod \"dnsmasq-dns-5d75f767dc-gz7cn\" (UID: \"e8ed0d14-d6da-4c59-8c10-98ce4a0fe54f\") " pod="openstack/dnsmasq-dns-5d75f767dc-gz7cn" Nov 21 20:31:40 crc kubenswrapper[4727]: I1121 20:31:40.626784 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d75f767dc-gz7cn" Nov 21 20:31:40 crc kubenswrapper[4727]: I1121 20:31:40.844013 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-dmspm" Nov 21 20:31:40 crc kubenswrapper[4727]: I1121 20:31:40.876402 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ae1103e-a174-4ed8-9f4b-726eb2198bb7-config\") pod \"8ae1103e-a174-4ed8-9f4b-726eb2198bb7\" (UID: \"8ae1103e-a174-4ed8-9f4b-726eb2198bb7\") " Nov 21 20:31:40 crc kubenswrapper[4727]: I1121 20:31:40.876449 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8ae1103e-a174-4ed8-9f4b-726eb2198bb7-dns-swift-storage-0\") pod \"8ae1103e-a174-4ed8-9f4b-726eb2198bb7\" (UID: \"8ae1103e-a174-4ed8-9f4b-726eb2198bb7\") " Nov 21 20:31:40 crc kubenswrapper[4727]: I1121 20:31:40.876496 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8ae1103e-a174-4ed8-9f4b-726eb2198bb7-ovsdbserver-sb\") pod \"8ae1103e-a174-4ed8-9f4b-726eb2198bb7\" (UID: \"8ae1103e-a174-4ed8-9f4b-726eb2198bb7\") " Nov 21 20:31:40 crc kubenswrapper[4727]: I1121 20:31:40.876534 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8ae1103e-a174-4ed8-9f4b-726eb2198bb7-ovsdbserver-nb\") pod \"8ae1103e-a174-4ed8-9f4b-726eb2198bb7\" (UID: \"8ae1103e-a174-4ed8-9f4b-726eb2198bb7\") " Nov 21 20:31:40 crc kubenswrapper[4727]: I1121 20:31:40.876712 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ae1103e-a174-4ed8-9f4b-726eb2198bb7-dns-svc\") pod \"8ae1103e-a174-4ed8-9f4b-726eb2198bb7\" (UID: \"8ae1103e-a174-4ed8-9f4b-726eb2198bb7\") " Nov 21 20:31:40 crc kubenswrapper[4727]: I1121 20:31:40.876849 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rk9r\" 
(UniqueName: \"kubernetes.io/projected/8ae1103e-a174-4ed8-9f4b-726eb2198bb7-kube-api-access-7rk9r\") pod \"8ae1103e-a174-4ed8-9f4b-726eb2198bb7\" (UID: \"8ae1103e-a174-4ed8-9f4b-726eb2198bb7\") " Nov 21 20:31:40 crc kubenswrapper[4727]: I1121 20:31:40.905876 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ae1103e-a174-4ed8-9f4b-726eb2198bb7-kube-api-access-7rk9r" (OuterVolumeSpecName: "kube-api-access-7rk9r") pod "8ae1103e-a174-4ed8-9f4b-726eb2198bb7" (UID: "8ae1103e-a174-4ed8-9f4b-726eb2198bb7"). InnerVolumeSpecName "kube-api-access-7rk9r". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:31:40 crc kubenswrapper[4727]: I1121 20:31:40.976811 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ae1103e-a174-4ed8-9f4b-726eb2198bb7-config" (OuterVolumeSpecName: "config") pod "8ae1103e-a174-4ed8-9f4b-726eb2198bb7" (UID: "8ae1103e-a174-4ed8-9f4b-726eb2198bb7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:31:40 crc kubenswrapper[4727]: I1121 20:31:40.979877 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rk9r\" (UniqueName: \"kubernetes.io/projected/8ae1103e-a174-4ed8-9f4b-726eb2198bb7-kube-api-access-7rk9r\") on node \"crc\" DevicePath \"\"" Nov 21 20:31:40 crc kubenswrapper[4727]: I1121 20:31:40.979900 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ae1103e-a174-4ed8-9f4b-726eb2198bb7-config\") on node \"crc\" DevicePath \"\"" Nov 21 20:31:40 crc kubenswrapper[4727]: I1121 20:31:40.996646 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ae1103e-a174-4ed8-9f4b-726eb2198bb7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8ae1103e-a174-4ed8-9f4b-726eb2198bb7" (UID: "8ae1103e-a174-4ed8-9f4b-726eb2198bb7"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:31:41 crc kubenswrapper[4727]: I1121 20:31:41.032706 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ae1103e-a174-4ed8-9f4b-726eb2198bb7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8ae1103e-a174-4ed8-9f4b-726eb2198bb7" (UID: "8ae1103e-a174-4ed8-9f4b-726eb2198bb7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:31:41 crc kubenswrapper[4727]: I1121 20:31:41.043711 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ae1103e-a174-4ed8-9f4b-726eb2198bb7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8ae1103e-a174-4ed8-9f4b-726eb2198bb7" (UID: "8ae1103e-a174-4ed8-9f4b-726eb2198bb7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:31:41 crc kubenswrapper[4727]: I1121 20:31:41.072322 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ae1103e-a174-4ed8-9f4b-726eb2198bb7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8ae1103e-a174-4ed8-9f4b-726eb2198bb7" (UID: "8ae1103e-a174-4ed8-9f4b-726eb2198bb7"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:31:41 crc kubenswrapper[4727]: I1121 20:31:41.083821 4727 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ae1103e-a174-4ed8-9f4b-726eb2198bb7-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 20:31:41 crc kubenswrapper[4727]: I1121 20:31:41.083871 4727 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8ae1103e-a174-4ed8-9f4b-726eb2198bb7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 21 20:31:41 crc kubenswrapper[4727]: I1121 20:31:41.083888 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8ae1103e-a174-4ed8-9f4b-726eb2198bb7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 21 20:31:41 crc kubenswrapper[4727]: I1121 20:31:41.083898 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8ae1103e-a174-4ed8-9f4b-726eb2198bb7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 21 20:31:41 crc kubenswrapper[4727]: I1121 20:31:41.343529 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d75f767dc-gz7cn"] Nov 21 20:31:41 crc kubenswrapper[4727]: I1121 20:31:41.428050 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-dmspm" event={"ID":"8ae1103e-a174-4ed8-9f4b-726eb2198bb7","Type":"ContainerDied","Data":"ab1442bf8ee47f6bed6f9f9f613f3c8d83a7f25d74b43d81b2123047fe06dd3b"} Nov 21 20:31:41 crc kubenswrapper[4727]: I1121 20:31:41.428940 4727 scope.go:117] "RemoveContainer" containerID="fe9a61ebed3ed86566df9afed4b9fc9b75c3c6e801f108da3b00dac2c1550a72" Nov 21 20:31:41 crc kubenswrapper[4727]: I1121 20:31:41.428313 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-dmspm" Nov 21 20:31:41 crc kubenswrapper[4727]: I1121 20:31:41.430687 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d75f767dc-gz7cn" event={"ID":"e8ed0d14-d6da-4c59-8c10-98ce4a0fe54f","Type":"ContainerStarted","Data":"b633529ba32696185d20b1ccdd7fc9dd1246cd1c8bf1ee3c55af95a31de5469f"} Nov 21 20:31:41 crc kubenswrapper[4727]: I1121 20:31:41.477527 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-dmspm"] Nov 21 20:31:41 crc kubenswrapper[4727]: I1121 20:31:41.486991 4727 scope.go:117] "RemoveContainer" containerID="97ad06ae799108662d96447ce658da9250c0e6a78b7c4718b0110f9b5785ed53" Nov 21 20:31:41 crc kubenswrapper[4727]: I1121 20:31:41.493602 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-dmspm"] Nov 21 20:31:41 crc kubenswrapper[4727]: I1121 20:31:41.513129 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ae1103e-a174-4ed8-9f4b-726eb2198bb7" path="/var/lib/kubelet/pods/8ae1103e-a174-4ed8-9f4b-726eb2198bb7/volumes" Nov 21 20:31:42 crc kubenswrapper[4727]: I1121 20:31:42.443453 4727 generic.go:334] "Generic (PLEG): container finished" podID="e8ed0d14-d6da-4c59-8c10-98ce4a0fe54f" containerID="1d3b2a7da4243d866fb24d09b9467a7010ad4c70b0b3ea3edc571acb45aea570" exitCode=0 Nov 21 20:31:42 crc kubenswrapper[4727]: I1121 20:31:42.443492 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d75f767dc-gz7cn" event={"ID":"e8ed0d14-d6da-4c59-8c10-98ce4a0fe54f","Type":"ContainerDied","Data":"1d3b2a7da4243d866fb24d09b9467a7010ad4c70b0b3ea3edc571acb45aea570"} Nov 21 20:31:43 crc kubenswrapper[4727]: I1121 20:31:43.462733 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d75f767dc-gz7cn" 
event={"ID":"e8ed0d14-d6da-4c59-8c10-98ce4a0fe54f","Type":"ContainerStarted","Data":"c8e06866241affc70cbe46e9f487a57a12195ebc5ff6628d6c61604f109554b2"} Nov 21 20:31:43 crc kubenswrapper[4727]: I1121 20:31:43.462925 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d75f767dc-gz7cn" Nov 21 20:31:43 crc kubenswrapper[4727]: I1121 20:31:43.491358 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d75f767dc-gz7cn" podStartSLOduration=3.491341388 podStartE2EDuration="3.491341388s" podCreationTimestamp="2025-11-21 20:31:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:31:43.488462269 +0000 UTC m=+1508.674647313" watchObservedRunningTime="2025-11-21 20:31:43.491341388 +0000 UTC m=+1508.677526432" Nov 21 20:31:50 crc kubenswrapper[4727]: I1121 20:31:50.629151 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d75f767dc-gz7cn" Nov 21 20:31:50 crc kubenswrapper[4727]: I1121 20:31:50.702407 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-dmv56"] Nov 21 20:31:50 crc kubenswrapper[4727]: I1121 20:31:50.703013 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b75489c6f-dmv56" podUID="eaef650c-6dd0-4987-8d25-9b8de774794c" containerName="dnsmasq-dns" containerID="cri-o://3b19e061db3195f0124776f1c5579d9566bac1b0004cffffbc31125bf35d793f" gracePeriod=10 Nov 21 20:31:51 crc kubenswrapper[4727]: I1121 20:31:51.296720 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-dmv56" Nov 21 20:31:51 crc kubenswrapper[4727]: I1121 20:31:51.354059 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eaef650c-6dd0-4987-8d25-9b8de774794c-dns-swift-storage-0\") pod \"eaef650c-6dd0-4987-8d25-9b8de774794c\" (UID: \"eaef650c-6dd0-4987-8d25-9b8de774794c\") " Nov 21 20:31:51 crc kubenswrapper[4727]: I1121 20:31:51.354198 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eaef650c-6dd0-4987-8d25-9b8de774794c-dns-svc\") pod \"eaef650c-6dd0-4987-8d25-9b8de774794c\" (UID: \"eaef650c-6dd0-4987-8d25-9b8de774794c\") " Nov 21 20:31:51 crc kubenswrapper[4727]: I1121 20:31:51.354276 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/eaef650c-6dd0-4987-8d25-9b8de774794c-openstack-edpm-ipam\") pod \"eaef650c-6dd0-4987-8d25-9b8de774794c\" (UID: \"eaef650c-6dd0-4987-8d25-9b8de774794c\") " Nov 21 20:31:51 crc kubenswrapper[4727]: I1121 20:31:51.354302 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eaef650c-6dd0-4987-8d25-9b8de774794c-ovsdbserver-sb\") pod \"eaef650c-6dd0-4987-8d25-9b8de774794c\" (UID: \"eaef650c-6dd0-4987-8d25-9b8de774794c\") " Nov 21 20:31:51 crc kubenswrapper[4727]: I1121 20:31:51.354438 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pccmz\" (UniqueName: \"kubernetes.io/projected/eaef650c-6dd0-4987-8d25-9b8de774794c-kube-api-access-pccmz\") pod \"eaef650c-6dd0-4987-8d25-9b8de774794c\" (UID: \"eaef650c-6dd0-4987-8d25-9b8de774794c\") " Nov 21 20:31:51 crc kubenswrapper[4727]: I1121 20:31:51.354538 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eaef650c-6dd0-4987-8d25-9b8de774794c-ovsdbserver-nb\") pod \"eaef650c-6dd0-4987-8d25-9b8de774794c\" (UID: \"eaef650c-6dd0-4987-8d25-9b8de774794c\") " Nov 21 20:31:51 crc kubenswrapper[4727]: I1121 20:31:51.354576 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaef650c-6dd0-4987-8d25-9b8de774794c-config\") pod \"eaef650c-6dd0-4987-8d25-9b8de774794c\" (UID: \"eaef650c-6dd0-4987-8d25-9b8de774794c\") " Nov 21 20:31:51 crc kubenswrapper[4727]: I1121 20:31:51.369798 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eaef650c-6dd0-4987-8d25-9b8de774794c-kube-api-access-pccmz" (OuterVolumeSpecName: "kube-api-access-pccmz") pod "eaef650c-6dd0-4987-8d25-9b8de774794c" (UID: "eaef650c-6dd0-4987-8d25-9b8de774794c"). InnerVolumeSpecName "kube-api-access-pccmz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:31:51 crc kubenswrapper[4727]: I1121 20:31:51.447154 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eaef650c-6dd0-4987-8d25-9b8de774794c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "eaef650c-6dd0-4987-8d25-9b8de774794c" (UID: "eaef650c-6dd0-4987-8d25-9b8de774794c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:31:51 crc kubenswrapper[4727]: I1121 20:31:51.449277 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eaef650c-6dd0-4987-8d25-9b8de774794c-config" (OuterVolumeSpecName: "config") pod "eaef650c-6dd0-4987-8d25-9b8de774794c" (UID: "eaef650c-6dd0-4987-8d25-9b8de774794c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:31:51 crc kubenswrapper[4727]: I1121 20:31:51.452412 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eaef650c-6dd0-4987-8d25-9b8de774794c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "eaef650c-6dd0-4987-8d25-9b8de774794c" (UID: "eaef650c-6dd0-4987-8d25-9b8de774794c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:31:51 crc kubenswrapper[4727]: I1121 20:31:51.457172 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pccmz\" (UniqueName: \"kubernetes.io/projected/eaef650c-6dd0-4987-8d25-9b8de774794c-kube-api-access-pccmz\") on node \"crc\" DevicePath \"\"" Nov 21 20:31:51 crc kubenswrapper[4727]: I1121 20:31:51.457204 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eaef650c-6dd0-4987-8d25-9b8de774794c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 21 20:31:51 crc kubenswrapper[4727]: I1121 20:31:51.457218 4727 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaef650c-6dd0-4987-8d25-9b8de774794c-config\") on node \"crc\" DevicePath \"\"" Nov 21 20:31:51 crc kubenswrapper[4727]: I1121 20:31:51.457230 4727 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eaef650c-6dd0-4987-8d25-9b8de774794c-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 20:31:51 crc kubenswrapper[4727]: I1121 20:31:51.462173 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eaef650c-6dd0-4987-8d25-9b8de774794c-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "eaef650c-6dd0-4987-8d25-9b8de774794c" (UID: "eaef650c-6dd0-4987-8d25-9b8de774794c"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:31:51 crc kubenswrapper[4727]: I1121 20:31:51.484530 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eaef650c-6dd0-4987-8d25-9b8de774794c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "eaef650c-6dd0-4987-8d25-9b8de774794c" (UID: "eaef650c-6dd0-4987-8d25-9b8de774794c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:31:51 crc kubenswrapper[4727]: I1121 20:31:51.486320 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eaef650c-6dd0-4987-8d25-9b8de774794c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "eaef650c-6dd0-4987-8d25-9b8de774794c" (UID: "eaef650c-6dd0-4987-8d25-9b8de774794c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:31:51 crc kubenswrapper[4727]: I1121 20:31:51.552903 4727 generic.go:334] "Generic (PLEG): container finished" podID="eaef650c-6dd0-4987-8d25-9b8de774794c" containerID="3b19e061db3195f0124776f1c5579d9566bac1b0004cffffbc31125bf35d793f" exitCode=0 Nov 21 20:31:51 crc kubenswrapper[4727]: I1121 20:31:51.552946 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-dmv56" event={"ID":"eaef650c-6dd0-4987-8d25-9b8de774794c","Type":"ContainerDied","Data":"3b19e061db3195f0124776f1c5579d9566bac1b0004cffffbc31125bf35d793f"} Nov 21 20:31:51 crc kubenswrapper[4727]: I1121 20:31:51.552985 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-dmv56" event={"ID":"eaef650c-6dd0-4987-8d25-9b8de774794c","Type":"ContainerDied","Data":"99da8298d436870de134301da81bdb1b20b43484dab1e982ba17b94ceb0ab152"} Nov 21 20:31:51 crc kubenswrapper[4727]: I1121 20:31:51.553002 4727 scope.go:117] "RemoveContainer" 
containerID="3b19e061db3195f0124776f1c5579d9566bac1b0004cffffbc31125bf35d793f" Nov 21 20:31:51 crc kubenswrapper[4727]: I1121 20:31:51.553010 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-dmv56" Nov 21 20:31:51 crc kubenswrapper[4727]: I1121 20:31:51.558721 4727 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/eaef650c-6dd0-4987-8d25-9b8de774794c-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 21 20:31:51 crc kubenswrapper[4727]: I1121 20:31:51.558751 4727 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eaef650c-6dd0-4987-8d25-9b8de774794c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 21 20:31:51 crc kubenswrapper[4727]: I1121 20:31:51.558763 4727 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eaef650c-6dd0-4987-8d25-9b8de774794c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 21 20:31:51 crc kubenswrapper[4727]: I1121 20:31:51.593216 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-dmv56"] Nov 21 20:31:51 crc kubenswrapper[4727]: I1121 20:31:51.599283 4727 scope.go:117] "RemoveContainer" containerID="a7a6b396c389dd68ca6f0fdb70008b2ab2ce2f40faa7b0151e6e20b34b0cf4b4" Nov 21 20:31:51 crc kubenswrapper[4727]: I1121 20:31:51.611040 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-dmv56"] Nov 21 20:31:51 crc kubenswrapper[4727]: I1121 20:31:51.620778 4727 scope.go:117] "RemoveContainer" containerID="3b19e061db3195f0124776f1c5579d9566bac1b0004cffffbc31125bf35d793f" Nov 21 20:31:51 crc kubenswrapper[4727]: E1121 20:31:51.621284 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"3b19e061db3195f0124776f1c5579d9566bac1b0004cffffbc31125bf35d793f\": container with ID starting with 3b19e061db3195f0124776f1c5579d9566bac1b0004cffffbc31125bf35d793f not found: ID does not exist" containerID="3b19e061db3195f0124776f1c5579d9566bac1b0004cffffbc31125bf35d793f" Nov 21 20:31:51 crc kubenswrapper[4727]: I1121 20:31:51.621339 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b19e061db3195f0124776f1c5579d9566bac1b0004cffffbc31125bf35d793f"} err="failed to get container status \"3b19e061db3195f0124776f1c5579d9566bac1b0004cffffbc31125bf35d793f\": rpc error: code = NotFound desc = could not find container \"3b19e061db3195f0124776f1c5579d9566bac1b0004cffffbc31125bf35d793f\": container with ID starting with 3b19e061db3195f0124776f1c5579d9566bac1b0004cffffbc31125bf35d793f not found: ID does not exist" Nov 21 20:31:51 crc kubenswrapper[4727]: I1121 20:31:51.621370 4727 scope.go:117] "RemoveContainer" containerID="a7a6b396c389dd68ca6f0fdb70008b2ab2ce2f40faa7b0151e6e20b34b0cf4b4" Nov 21 20:31:51 crc kubenswrapper[4727]: E1121 20:31:51.621697 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7a6b396c389dd68ca6f0fdb70008b2ab2ce2f40faa7b0151e6e20b34b0cf4b4\": container with ID starting with a7a6b396c389dd68ca6f0fdb70008b2ab2ce2f40faa7b0151e6e20b34b0cf4b4 not found: ID does not exist" containerID="a7a6b396c389dd68ca6f0fdb70008b2ab2ce2f40faa7b0151e6e20b34b0cf4b4" Nov 21 20:31:51 crc kubenswrapper[4727]: I1121 20:31:51.621736 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7a6b396c389dd68ca6f0fdb70008b2ab2ce2f40faa7b0151e6e20b34b0cf4b4"} err="failed to get container status \"a7a6b396c389dd68ca6f0fdb70008b2ab2ce2f40faa7b0151e6e20b34b0cf4b4\": rpc error: code = NotFound desc = could not find container \"a7a6b396c389dd68ca6f0fdb70008b2ab2ce2f40faa7b0151e6e20b34b0cf4b4\": container with ID 
starting with a7a6b396c389dd68ca6f0fdb70008b2ab2ce2f40faa7b0151e6e20b34b0cf4b4 not found: ID does not exist" Nov 21 20:31:53 crc kubenswrapper[4727]: I1121 20:31:53.514273 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eaef650c-6dd0-4987-8d25-9b8de774794c" path="/var/lib/kubelet/pods/eaef650c-6dd0-4987-8d25-9b8de774794c/volumes" Nov 21 20:32:01 crc kubenswrapper[4727]: I1121 20:32:01.666744 4727 generic.go:334] "Generic (PLEG): container finished" podID="13e6ebe1-eaee-49f1-9b47-6ec82055a8b6" containerID="64b1cae076edbc51ac8fa9b6df46e26f17f64164f0f58d92c0f89a7fd141cb8c" exitCode=0 Nov 21 20:32:01 crc kubenswrapper[4727]: I1121 20:32:01.667311 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"13e6ebe1-eaee-49f1-9b47-6ec82055a8b6","Type":"ContainerDied","Data":"64b1cae076edbc51ac8fa9b6df46e26f17f64164f0f58d92c0f89a7fd141cb8c"} Nov 21 20:32:02 crc kubenswrapper[4727]: I1121 20:32:02.687282 4727 generic.go:334] "Generic (PLEG): container finished" podID="cf901ecd-b37c-4f57-9c80-863e2d949f5f" containerID="d7b9e9c0654355ed74af9071f7c6c57d7071e861ed709e41fadde7c39a17da98" exitCode=0 Nov 21 20:32:02 crc kubenswrapper[4727]: I1121 20:32:02.687334 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cf901ecd-b37c-4f57-9c80-863e2d949f5f","Type":"ContainerDied","Data":"d7b9e9c0654355ed74af9071f7c6c57d7071e861ed709e41fadde7c39a17da98"} Nov 21 20:32:02 crc kubenswrapper[4727]: I1121 20:32:02.695776 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"13e6ebe1-eaee-49f1-9b47-6ec82055a8b6","Type":"ContainerStarted","Data":"f076b2818b3795dd2f416e78894cb42dcaf3f9e40be3c6d9b7b5a841a44b353e"} Nov 21 20:32:02 crc kubenswrapper[4727]: I1121 20:32:02.696065 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 21 20:32:02 crc kubenswrapper[4727]: I1121 
20:32:02.768668 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.768645616 podStartE2EDuration="36.768645616s" podCreationTimestamp="2025-11-21 20:31:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:32:02.741442279 +0000 UTC m=+1527.927627323" watchObservedRunningTime="2025-11-21 20:32:02.768645616 +0000 UTC m=+1527.954830680" Nov 21 20:32:03 crc kubenswrapper[4727]: I1121 20:32:03.707744 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cf901ecd-b37c-4f57-9c80-863e2d949f5f","Type":"ContainerStarted","Data":"f4bcdcddcc8423b085d885bebf2a0f841c831de29d6aea124a8e3c7d6011230a"} Nov 21 20:32:03 crc kubenswrapper[4727]: I1121 20:32:03.708374 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:32:03 crc kubenswrapper[4727]: I1121 20:32:03.737678 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.737658704 podStartE2EDuration="36.737658704s" podCreationTimestamp="2025-11-21 20:31:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 20:32:03.731829854 +0000 UTC m=+1528.918014898" watchObservedRunningTime="2025-11-21 20:32:03.737658704 +0000 UTC m=+1528.923843758" Nov 21 20:32:05 crc kubenswrapper[4727]: I1121 20:32:05.439674 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9ll44"] Nov 21 20:32:05 crc kubenswrapper[4727]: E1121 20:32:05.440328 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ae1103e-a174-4ed8-9f4b-726eb2198bb7" containerName="init" Nov 21 20:32:05 crc kubenswrapper[4727]: I1121 
20:32:05.440342 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ae1103e-a174-4ed8-9f4b-726eb2198bb7" containerName="init" Nov 21 20:32:05 crc kubenswrapper[4727]: E1121 20:32:05.440395 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaef650c-6dd0-4987-8d25-9b8de774794c" containerName="dnsmasq-dns" Nov 21 20:32:05 crc kubenswrapper[4727]: I1121 20:32:05.440401 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaef650c-6dd0-4987-8d25-9b8de774794c" containerName="dnsmasq-dns" Nov 21 20:32:05 crc kubenswrapper[4727]: E1121 20:32:05.440416 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaef650c-6dd0-4987-8d25-9b8de774794c" containerName="init" Nov 21 20:32:05 crc kubenswrapper[4727]: I1121 20:32:05.440422 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaef650c-6dd0-4987-8d25-9b8de774794c" containerName="init" Nov 21 20:32:05 crc kubenswrapper[4727]: E1121 20:32:05.440435 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ae1103e-a174-4ed8-9f4b-726eb2198bb7" containerName="dnsmasq-dns" Nov 21 20:32:05 crc kubenswrapper[4727]: I1121 20:32:05.440440 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ae1103e-a174-4ed8-9f4b-726eb2198bb7" containerName="dnsmasq-dns" Nov 21 20:32:05 crc kubenswrapper[4727]: I1121 20:32:05.440677 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="eaef650c-6dd0-4987-8d25-9b8de774794c" containerName="dnsmasq-dns" Nov 21 20:32:05 crc kubenswrapper[4727]: I1121 20:32:05.440691 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ae1103e-a174-4ed8-9f4b-726eb2198bb7" containerName="dnsmasq-dns" Nov 21 20:32:05 crc kubenswrapper[4727]: I1121 20:32:05.441544 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9ll44" Nov 21 20:32:05 crc kubenswrapper[4727]: I1121 20:32:05.443457 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9h8bp" Nov 21 20:32:05 crc kubenswrapper[4727]: I1121 20:32:05.443692 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 21 20:32:05 crc kubenswrapper[4727]: I1121 20:32:05.443714 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 20:32:05 crc kubenswrapper[4727]: I1121 20:32:05.443825 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 21 20:32:05 crc kubenswrapper[4727]: I1121 20:32:05.451441 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9ll44"] Nov 21 20:32:05 crc kubenswrapper[4727]: I1121 20:32:05.633994 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88e243c4-68b1-4319-8736-a5c650da03a0-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-9ll44\" (UID: \"88e243c4-68b1-4319-8736-a5c650da03a0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9ll44" Nov 21 20:32:05 crc kubenswrapper[4727]: I1121 20:32:05.634051 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/88e243c4-68b1-4319-8736-a5c650da03a0-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-9ll44\" (UID: \"88e243c4-68b1-4319-8736-a5c650da03a0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9ll44" Nov 21 20:32:05 crc kubenswrapper[4727]: I1121 20:32:05.634084 4727 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlqfc\" (UniqueName: \"kubernetes.io/projected/88e243c4-68b1-4319-8736-a5c650da03a0-kube-api-access-rlqfc\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-9ll44\" (UID: \"88e243c4-68b1-4319-8736-a5c650da03a0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9ll44" Nov 21 20:32:05 crc kubenswrapper[4727]: I1121 20:32:05.634203 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/88e243c4-68b1-4319-8736-a5c650da03a0-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-9ll44\" (UID: \"88e243c4-68b1-4319-8736-a5c650da03a0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9ll44" Nov 21 20:32:05 crc kubenswrapper[4727]: I1121 20:32:05.736508 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/88e243c4-68b1-4319-8736-a5c650da03a0-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-9ll44\" (UID: \"88e243c4-68b1-4319-8736-a5c650da03a0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9ll44" Nov 21 20:32:05 crc kubenswrapper[4727]: I1121 20:32:05.736647 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88e243c4-68b1-4319-8736-a5c650da03a0-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-9ll44\" (UID: \"88e243c4-68b1-4319-8736-a5c650da03a0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9ll44" Nov 21 20:32:05 crc kubenswrapper[4727]: I1121 20:32:05.736673 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/88e243c4-68b1-4319-8736-a5c650da03a0-ssh-key\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-9ll44\" (UID: \"88e243c4-68b1-4319-8736-a5c650da03a0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9ll44" Nov 21 20:32:05 crc kubenswrapper[4727]: I1121 20:32:05.736702 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlqfc\" (UniqueName: \"kubernetes.io/projected/88e243c4-68b1-4319-8736-a5c650da03a0-kube-api-access-rlqfc\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-9ll44\" (UID: \"88e243c4-68b1-4319-8736-a5c650da03a0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9ll44" Nov 21 20:32:05 crc kubenswrapper[4727]: I1121 20:32:05.742643 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88e243c4-68b1-4319-8736-a5c650da03a0-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-9ll44\" (UID: \"88e243c4-68b1-4319-8736-a5c650da03a0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9ll44" Nov 21 20:32:05 crc kubenswrapper[4727]: I1121 20:32:05.758480 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/88e243c4-68b1-4319-8736-a5c650da03a0-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-9ll44\" (UID: \"88e243c4-68b1-4319-8736-a5c650da03a0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9ll44" Nov 21 20:32:05 crc kubenswrapper[4727]: I1121 20:32:05.759530 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/88e243c4-68b1-4319-8736-a5c650da03a0-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-9ll44\" (UID: \"88e243c4-68b1-4319-8736-a5c650da03a0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9ll44" Nov 21 20:32:05 crc kubenswrapper[4727]: I1121 20:32:05.774621 4727 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlqfc\" (UniqueName: \"kubernetes.io/projected/88e243c4-68b1-4319-8736-a5c650da03a0-kube-api-access-rlqfc\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-9ll44\" (UID: \"88e243c4-68b1-4319-8736-a5c650da03a0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9ll44" Nov 21 20:32:06 crc kubenswrapper[4727]: I1121 20:32:06.065107 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9ll44" Nov 21 20:32:06 crc kubenswrapper[4727]: I1121 20:32:06.953170 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9ll44"] Nov 21 20:32:06 crc kubenswrapper[4727]: I1121 20:32:06.958261 4727 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 21 20:32:07 crc kubenswrapper[4727]: I1121 20:32:07.756941 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9ll44" event={"ID":"88e243c4-68b1-4319-8736-a5c650da03a0","Type":"ContainerStarted","Data":"daf3602943fd821fc93ea33c1aa0a7b5366c9a96d6c05f958b1a083c02999c77"} Nov 21 20:32:13 crc kubenswrapper[4727]: I1121 20:32:13.335453 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 20:32:13 crc kubenswrapper[4727]: I1121 20:32:13.336059 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" Nov 21 20:32:16 crc kubenswrapper[4727]: I1121 20:32:16.683368 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 21 20:32:17 crc kubenswrapper[4727]: I1121 20:32:17.883247 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9ll44" event={"ID":"88e243c4-68b1-4319-8736-a5c650da03a0","Type":"ContainerStarted","Data":"316f211905dfdc2491fe1a5a00937170f7112ddaf737c9d1bccccff1ba43807e"} Nov 21 20:32:17 crc kubenswrapper[4727]: I1121 20:32:17.909196 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9ll44" podStartSLOduration=2.40884303 podStartE2EDuration="12.909178949s" podCreationTimestamp="2025-11-21 20:32:05 +0000 UTC" firstStartedPulling="2025-11-21 20:32:06.958013589 +0000 UTC m=+1532.144198633" lastFinishedPulling="2025-11-21 20:32:17.458349508 +0000 UTC m=+1542.644534552" observedRunningTime="2025-11-21 20:32:17.896824841 +0000 UTC m=+1543.083009885" watchObservedRunningTime="2025-11-21 20:32:17.909178949 +0000 UTC m=+1543.095363993" Nov 21 20:32:17 crc kubenswrapper[4727]: I1121 20:32:17.965132 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 21 20:32:30 crc kubenswrapper[4727]: I1121 20:32:30.023771 4727 generic.go:334] "Generic (PLEG): container finished" podID="88e243c4-68b1-4319-8736-a5c650da03a0" containerID="316f211905dfdc2491fe1a5a00937170f7112ddaf737c9d1bccccff1ba43807e" exitCode=0 Nov 21 20:32:30 crc kubenswrapper[4727]: I1121 20:32:30.023857 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9ll44" event={"ID":"88e243c4-68b1-4319-8736-a5c650da03a0","Type":"ContainerDied","Data":"316f211905dfdc2491fe1a5a00937170f7112ddaf737c9d1bccccff1ba43807e"} Nov 21 20:32:31 crc kubenswrapper[4727]: 
I1121 20:32:31.603687 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9ll44" Nov 21 20:32:31 crc kubenswrapper[4727]: I1121 20:32:31.637674 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/88e243c4-68b1-4319-8736-a5c650da03a0-inventory\") pod \"88e243c4-68b1-4319-8736-a5c650da03a0\" (UID: \"88e243c4-68b1-4319-8736-a5c650da03a0\") " Nov 21 20:32:31 crc kubenswrapper[4727]: I1121 20:32:31.637818 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88e243c4-68b1-4319-8736-a5c650da03a0-repo-setup-combined-ca-bundle\") pod \"88e243c4-68b1-4319-8736-a5c650da03a0\" (UID: \"88e243c4-68b1-4319-8736-a5c650da03a0\") " Nov 21 20:32:31 crc kubenswrapper[4727]: I1121 20:32:31.637845 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/88e243c4-68b1-4319-8736-a5c650da03a0-ssh-key\") pod \"88e243c4-68b1-4319-8736-a5c650da03a0\" (UID: \"88e243c4-68b1-4319-8736-a5c650da03a0\") " Nov 21 20:32:31 crc kubenswrapper[4727]: I1121 20:32:31.637909 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rlqfc\" (UniqueName: \"kubernetes.io/projected/88e243c4-68b1-4319-8736-a5c650da03a0-kube-api-access-rlqfc\") pod \"88e243c4-68b1-4319-8736-a5c650da03a0\" (UID: \"88e243c4-68b1-4319-8736-a5c650da03a0\") " Nov 21 20:32:31 crc kubenswrapper[4727]: I1121 20:32:31.649623 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88e243c4-68b1-4319-8736-a5c650da03a0-kube-api-access-rlqfc" (OuterVolumeSpecName: "kube-api-access-rlqfc") pod "88e243c4-68b1-4319-8736-a5c650da03a0" (UID: "88e243c4-68b1-4319-8736-a5c650da03a0"). 
InnerVolumeSpecName "kube-api-access-rlqfc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:32:31 crc kubenswrapper[4727]: I1121 20:32:31.653517 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88e243c4-68b1-4319-8736-a5c650da03a0-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "88e243c4-68b1-4319-8736-a5c650da03a0" (UID: "88e243c4-68b1-4319-8736-a5c650da03a0"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:32:31 crc kubenswrapper[4727]: I1121 20:32:31.684159 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88e243c4-68b1-4319-8736-a5c650da03a0-inventory" (OuterVolumeSpecName: "inventory") pod "88e243c4-68b1-4319-8736-a5c650da03a0" (UID: "88e243c4-68b1-4319-8736-a5c650da03a0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:32:31 crc kubenswrapper[4727]: I1121 20:32:31.692266 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88e243c4-68b1-4319-8736-a5c650da03a0-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "88e243c4-68b1-4319-8736-a5c650da03a0" (UID: "88e243c4-68b1-4319-8736-a5c650da03a0"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:32:31 crc kubenswrapper[4727]: I1121 20:32:31.741926 4727 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/88e243c4-68b1-4319-8736-a5c650da03a0-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 20:32:31 crc kubenswrapper[4727]: I1121 20:32:31.742022 4727 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88e243c4-68b1-4319-8736-a5c650da03a0-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:32:31 crc kubenswrapper[4727]: I1121 20:32:31.742037 4727 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/88e243c4-68b1-4319-8736-a5c650da03a0-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 20:32:31 crc kubenswrapper[4727]: I1121 20:32:31.742052 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rlqfc\" (UniqueName: \"kubernetes.io/projected/88e243c4-68b1-4319-8736-a5c650da03a0-kube-api-access-rlqfc\") on node \"crc\" DevicePath \"\"" Nov 21 20:32:32 crc kubenswrapper[4727]: I1121 20:32:32.052113 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9ll44" event={"ID":"88e243c4-68b1-4319-8736-a5c650da03a0","Type":"ContainerDied","Data":"daf3602943fd821fc93ea33c1aa0a7b5366c9a96d6c05f958b1a083c02999c77"} Nov 21 20:32:32 crc kubenswrapper[4727]: I1121 20:32:32.052156 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="daf3602943fd821fc93ea33c1aa0a7b5366c9a96d6c05f958b1a083c02999c77" Nov 21 20:32:32 crc kubenswrapper[4727]: I1121 20:32:32.052228 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9ll44" Nov 21 20:32:32 crc kubenswrapper[4727]: I1121 20:32:32.237916 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-85dv7"] Nov 21 20:32:32 crc kubenswrapper[4727]: E1121 20:32:32.239004 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88e243c4-68b1-4319-8736-a5c650da03a0" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 21 20:32:32 crc kubenswrapper[4727]: I1121 20:32:32.239031 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="88e243c4-68b1-4319-8736-a5c650da03a0" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 21 20:32:32 crc kubenswrapper[4727]: I1121 20:32:32.239370 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="88e243c4-68b1-4319-8736-a5c650da03a0" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 21 20:32:32 crc kubenswrapper[4727]: I1121 20:32:32.240466 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-85dv7" Nov 21 20:32:32 crc kubenswrapper[4727]: I1121 20:32:32.244622 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 21 20:32:32 crc kubenswrapper[4727]: I1121 20:32:32.244865 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 20:32:32 crc kubenswrapper[4727]: I1121 20:32:32.245039 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 21 20:32:32 crc kubenswrapper[4727]: I1121 20:32:32.245179 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9h8bp" Nov 21 20:32:32 crc kubenswrapper[4727]: I1121 20:32:32.249209 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-85dv7"] Nov 21 20:32:32 crc kubenswrapper[4727]: I1121 20:32:32.257367 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f6befa2-b5d0-4390-885e-d332d9e51444-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-85dv7\" (UID: \"2f6befa2-b5d0-4390-885e-d332d9e51444\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-85dv7" Nov 21 20:32:32 crc kubenswrapper[4727]: I1121 20:32:32.257494 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2f6befa2-b5d0-4390-885e-d332d9e51444-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-85dv7\" (UID: \"2f6befa2-b5d0-4390-885e-d332d9e51444\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-85dv7" Nov 21 20:32:32 crc kubenswrapper[4727]: I1121 20:32:32.257599 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bcmq\" (UniqueName: \"kubernetes.io/projected/2f6befa2-b5d0-4390-885e-d332d9e51444-kube-api-access-6bcmq\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-85dv7\" (UID: \"2f6befa2-b5d0-4390-885e-d332d9e51444\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-85dv7" Nov 21 20:32:32 crc kubenswrapper[4727]: I1121 20:32:32.358804 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2f6befa2-b5d0-4390-885e-d332d9e51444-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-85dv7\" (UID: \"2f6befa2-b5d0-4390-885e-d332d9e51444\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-85dv7" Nov 21 20:32:32 crc kubenswrapper[4727]: I1121 20:32:32.358897 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bcmq\" (UniqueName: \"kubernetes.io/projected/2f6befa2-b5d0-4390-885e-d332d9e51444-kube-api-access-6bcmq\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-85dv7\" (UID: \"2f6befa2-b5d0-4390-885e-d332d9e51444\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-85dv7" Nov 21 20:32:32 crc kubenswrapper[4727]: I1121 20:32:32.359023 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f6befa2-b5d0-4390-885e-d332d9e51444-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-85dv7\" (UID: \"2f6befa2-b5d0-4390-885e-d332d9e51444\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-85dv7" Nov 21 20:32:32 crc kubenswrapper[4727]: I1121 20:32:32.366647 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f6befa2-b5d0-4390-885e-d332d9e51444-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-85dv7\" (UID: \"2f6befa2-b5d0-4390-885e-d332d9e51444\") " 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-85dv7" Nov 21 20:32:32 crc kubenswrapper[4727]: I1121 20:32:32.374939 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2f6befa2-b5d0-4390-885e-d332d9e51444-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-85dv7\" (UID: \"2f6befa2-b5d0-4390-885e-d332d9e51444\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-85dv7" Nov 21 20:32:32 crc kubenswrapper[4727]: I1121 20:32:32.378514 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bcmq\" (UniqueName: \"kubernetes.io/projected/2f6befa2-b5d0-4390-885e-d332d9e51444-kube-api-access-6bcmq\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-85dv7\" (UID: \"2f6befa2-b5d0-4390-885e-d332d9e51444\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-85dv7" Nov 21 20:32:32 crc kubenswrapper[4727]: I1121 20:32:32.567492 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-85dv7" Nov 21 20:32:33 crc kubenswrapper[4727]: I1121 20:32:33.110893 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-85dv7"] Nov 21 20:32:34 crc kubenswrapper[4727]: I1121 20:32:34.073855 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-85dv7" event={"ID":"2f6befa2-b5d0-4390-885e-d332d9e51444","Type":"ContainerStarted","Data":"3efac2f45533f94be3397c9e41934ddf7d9d4d22693b9892e483a0577fc32129"} Nov 21 20:32:34 crc kubenswrapper[4727]: I1121 20:32:34.074312 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-85dv7" event={"ID":"2f6befa2-b5d0-4390-885e-d332d9e51444","Type":"ContainerStarted","Data":"b1290cca8f41e48049e6921d8ab1f9342e87d25fcbfec4c3c5a709c6180e5a73"} Nov 21 20:32:34 crc kubenswrapper[4727]: I1121 20:32:34.101907 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-85dv7" podStartSLOduration=1.7047840079999999 podStartE2EDuration="2.101884265s" podCreationTimestamp="2025-11-21 20:32:32 +0000 UTC" firstStartedPulling="2025-11-21 20:32:33.127129889 +0000 UTC m=+1558.313314943" lastFinishedPulling="2025-11-21 20:32:33.524230156 +0000 UTC m=+1558.710415200" observedRunningTime="2025-11-21 20:32:34.090435129 +0000 UTC m=+1559.276620173" watchObservedRunningTime="2025-11-21 20:32:34.101884265 +0000 UTC m=+1559.288069309" Nov 21 20:32:37 crc kubenswrapper[4727]: I1121 20:32:37.105052 4727 generic.go:334] "Generic (PLEG): container finished" podID="2f6befa2-b5d0-4390-885e-d332d9e51444" containerID="3efac2f45533f94be3397c9e41934ddf7d9d4d22693b9892e483a0577fc32129" exitCode=0 Nov 21 20:32:37 crc kubenswrapper[4727]: I1121 20:32:37.105147 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-85dv7" event={"ID":"2f6befa2-b5d0-4390-885e-d332d9e51444","Type":"ContainerDied","Data":"3efac2f45533f94be3397c9e41934ddf7d9d4d22693b9892e483a0577fc32129"} Nov 21 20:32:38 crc kubenswrapper[4727]: I1121 20:32:38.593571 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-85dv7" Nov 21 20:32:38 crc kubenswrapper[4727]: I1121 20:32:38.695841 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f6befa2-b5d0-4390-885e-d332d9e51444-inventory\") pod \"2f6befa2-b5d0-4390-885e-d332d9e51444\" (UID: \"2f6befa2-b5d0-4390-885e-d332d9e51444\") " Nov 21 20:32:38 crc kubenswrapper[4727]: I1121 20:32:38.696028 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bcmq\" (UniqueName: \"kubernetes.io/projected/2f6befa2-b5d0-4390-885e-d332d9e51444-kube-api-access-6bcmq\") pod \"2f6befa2-b5d0-4390-885e-d332d9e51444\" (UID: \"2f6befa2-b5d0-4390-885e-d332d9e51444\") " Nov 21 20:32:38 crc kubenswrapper[4727]: I1121 20:32:38.696222 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2f6befa2-b5d0-4390-885e-d332d9e51444-ssh-key\") pod \"2f6befa2-b5d0-4390-885e-d332d9e51444\" (UID: \"2f6befa2-b5d0-4390-885e-d332d9e51444\") " Nov 21 20:32:38 crc kubenswrapper[4727]: I1121 20:32:38.710359 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f6befa2-b5d0-4390-885e-d332d9e51444-kube-api-access-6bcmq" (OuterVolumeSpecName: "kube-api-access-6bcmq") pod "2f6befa2-b5d0-4390-885e-d332d9e51444" (UID: "2f6befa2-b5d0-4390-885e-d332d9e51444"). InnerVolumeSpecName "kube-api-access-6bcmq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:32:38 crc kubenswrapper[4727]: I1121 20:32:38.730348 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f6befa2-b5d0-4390-885e-d332d9e51444-inventory" (OuterVolumeSpecName: "inventory") pod "2f6befa2-b5d0-4390-885e-d332d9e51444" (UID: "2f6befa2-b5d0-4390-885e-d332d9e51444"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:32:38 crc kubenswrapper[4727]: I1121 20:32:38.738518 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f6befa2-b5d0-4390-885e-d332d9e51444-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2f6befa2-b5d0-4390-885e-d332d9e51444" (UID: "2f6befa2-b5d0-4390-885e-d332d9e51444"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:32:38 crc kubenswrapper[4727]: I1121 20:32:38.799995 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bcmq\" (UniqueName: \"kubernetes.io/projected/2f6befa2-b5d0-4390-885e-d332d9e51444-kube-api-access-6bcmq\") on node \"crc\" DevicePath \"\"" Nov 21 20:32:38 crc kubenswrapper[4727]: I1121 20:32:38.800028 4727 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2f6befa2-b5d0-4390-885e-d332d9e51444-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 20:32:38 crc kubenswrapper[4727]: I1121 20:32:38.800040 4727 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f6befa2-b5d0-4390-885e-d332d9e51444-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 20:32:39 crc kubenswrapper[4727]: I1121 20:32:39.125852 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-85dv7" 
event={"ID":"2f6befa2-b5d0-4390-885e-d332d9e51444","Type":"ContainerDied","Data":"b1290cca8f41e48049e6921d8ab1f9342e87d25fcbfec4c3c5a709c6180e5a73"} Nov 21 20:32:39 crc kubenswrapper[4727]: I1121 20:32:39.125894 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1290cca8f41e48049e6921d8ab1f9342e87d25fcbfec4c3c5a709c6180e5a73" Nov 21 20:32:39 crc kubenswrapper[4727]: I1121 20:32:39.125948 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-85dv7" Nov 21 20:32:39 crc kubenswrapper[4727]: I1121 20:32:39.200566 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xqh5w"] Nov 21 20:32:39 crc kubenswrapper[4727]: E1121 20:32:39.201296 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f6befa2-b5d0-4390-885e-d332d9e51444" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 21 20:32:39 crc kubenswrapper[4727]: I1121 20:32:39.201322 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f6befa2-b5d0-4390-885e-d332d9e51444" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 21 20:32:39 crc kubenswrapper[4727]: I1121 20:32:39.201645 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f6befa2-b5d0-4390-885e-d332d9e51444" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 21 20:32:39 crc kubenswrapper[4727]: I1121 20:32:39.202859 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xqh5w" Nov 21 20:32:39 crc kubenswrapper[4727]: I1121 20:32:39.206602 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 21 20:32:39 crc kubenswrapper[4727]: I1121 20:32:39.207100 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 21 20:32:39 crc kubenswrapper[4727]: I1121 20:32:39.207266 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 20:32:39 crc kubenswrapper[4727]: I1121 20:32:39.207292 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9h8bp" Nov 21 20:32:39 crc kubenswrapper[4727]: I1121 20:32:39.215073 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xqh5w"] Nov 21 20:32:39 crc kubenswrapper[4727]: I1121 20:32:39.312696 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/766225b4-86ef-4a28-9321-7efd20c20c8b-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xqh5w\" (UID: \"766225b4-86ef-4a28-9321-7efd20c20c8b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xqh5w" Nov 21 20:32:39 crc kubenswrapper[4727]: I1121 20:32:39.312784 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/766225b4-86ef-4a28-9321-7efd20c20c8b-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xqh5w\" (UID: \"766225b4-86ef-4a28-9321-7efd20c20c8b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xqh5w" Nov 21 20:32:39 crc kubenswrapper[4727]: I1121 20:32:39.312905 4727 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6k4b\" (UniqueName: \"kubernetes.io/projected/766225b4-86ef-4a28-9321-7efd20c20c8b-kube-api-access-z6k4b\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xqh5w\" (UID: \"766225b4-86ef-4a28-9321-7efd20c20c8b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xqh5w" Nov 21 20:32:39 crc kubenswrapper[4727]: I1121 20:32:39.313114 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/766225b4-86ef-4a28-9321-7efd20c20c8b-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xqh5w\" (UID: \"766225b4-86ef-4a28-9321-7efd20c20c8b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xqh5w" Nov 21 20:32:39 crc kubenswrapper[4727]: I1121 20:32:39.416416 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/766225b4-86ef-4a28-9321-7efd20c20c8b-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xqh5w\" (UID: \"766225b4-86ef-4a28-9321-7efd20c20c8b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xqh5w" Nov 21 20:32:39 crc kubenswrapper[4727]: I1121 20:32:39.416488 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/766225b4-86ef-4a28-9321-7efd20c20c8b-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xqh5w\" (UID: \"766225b4-86ef-4a28-9321-7efd20c20c8b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xqh5w" Nov 21 20:32:39 crc kubenswrapper[4727]: I1121 20:32:39.416541 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6k4b\" (UniqueName: \"kubernetes.io/projected/766225b4-86ef-4a28-9321-7efd20c20c8b-kube-api-access-z6k4b\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-xqh5w\" (UID: \"766225b4-86ef-4a28-9321-7efd20c20c8b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xqh5w" Nov 21 20:32:39 crc kubenswrapper[4727]: I1121 20:32:39.416587 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/766225b4-86ef-4a28-9321-7efd20c20c8b-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xqh5w\" (UID: \"766225b4-86ef-4a28-9321-7efd20c20c8b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xqh5w" Nov 21 20:32:39 crc kubenswrapper[4727]: I1121 20:32:39.421999 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/766225b4-86ef-4a28-9321-7efd20c20c8b-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xqh5w\" (UID: \"766225b4-86ef-4a28-9321-7efd20c20c8b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xqh5w" Nov 21 20:32:39 crc kubenswrapper[4727]: I1121 20:32:39.422003 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/766225b4-86ef-4a28-9321-7efd20c20c8b-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xqh5w\" (UID: \"766225b4-86ef-4a28-9321-7efd20c20c8b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xqh5w" Nov 21 20:32:39 crc kubenswrapper[4727]: I1121 20:32:39.426017 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/766225b4-86ef-4a28-9321-7efd20c20c8b-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xqh5w\" (UID: \"766225b4-86ef-4a28-9321-7efd20c20c8b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xqh5w" Nov 21 20:32:39 crc kubenswrapper[4727]: I1121 20:32:39.436664 4727 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-z6k4b\" (UniqueName: \"kubernetes.io/projected/766225b4-86ef-4a28-9321-7efd20c20c8b-kube-api-access-z6k4b\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xqh5w\" (UID: \"766225b4-86ef-4a28-9321-7efd20c20c8b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xqh5w" Nov 21 20:32:39 crc kubenswrapper[4727]: I1121 20:32:39.553904 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xqh5w" Nov 21 20:32:40 crc kubenswrapper[4727]: I1121 20:32:40.113869 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xqh5w"] Nov 21 20:32:40 crc kubenswrapper[4727]: W1121 20:32:40.118230 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod766225b4_86ef_4a28_9321_7efd20c20c8b.slice/crio-74f9a08848b89108a20f9b65925e2ab45202c4b74f11508bc24aa9c29e7cf416 WatchSource:0}: Error finding container 74f9a08848b89108a20f9b65925e2ab45202c4b74f11508bc24aa9c29e7cf416: Status 404 returned error can't find the container with id 74f9a08848b89108a20f9b65925e2ab45202c4b74f11508bc24aa9c29e7cf416 Nov 21 20:32:40 crc kubenswrapper[4727]: I1121 20:32:40.140585 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xqh5w" event={"ID":"766225b4-86ef-4a28-9321-7efd20c20c8b","Type":"ContainerStarted","Data":"74f9a08848b89108a20f9b65925e2ab45202c4b74f11508bc24aa9c29e7cf416"} Nov 21 20:32:41 crc kubenswrapper[4727]: I1121 20:32:41.167864 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xqh5w" event={"ID":"766225b4-86ef-4a28-9321-7efd20c20c8b","Type":"ContainerStarted","Data":"2aea4edd740ce58bbbfaf19a76ec20824c7f6c7cb930a093715039fd6dc5e390"} Nov 21 20:32:41 crc kubenswrapper[4727]: I1121 20:32:41.185794 4727 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xqh5w" podStartSLOduration=1.746256291 podStartE2EDuration="2.185773856s" podCreationTimestamp="2025-11-21 20:32:39 +0000 UTC" firstStartedPulling="2025-11-21 20:32:40.124714181 +0000 UTC m=+1565.310899225" lastFinishedPulling="2025-11-21 20:32:40.564231746 +0000 UTC m=+1565.750416790" observedRunningTime="2025-11-21 20:32:41.184203349 +0000 UTC m=+1566.370388403" watchObservedRunningTime="2025-11-21 20:32:41.185773856 +0000 UTC m=+1566.371958910" Nov 21 20:32:43 crc kubenswrapper[4727]: I1121 20:32:43.334996 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 20:32:43 crc kubenswrapper[4727]: I1121 20:32:43.335305 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 20:32:43 crc kubenswrapper[4727]: I1121 20:32:43.755148 4727 scope.go:117] "RemoveContainer" containerID="ced7a2e5adf35a2a529855a6c1e52d4689bf1f1ac766d21277c6c904b49cba63" Nov 21 20:32:43 crc kubenswrapper[4727]: I1121 20:32:43.811498 4727 scope.go:117] "RemoveContainer" containerID="a0e891faefa4052093484717919ae2477863ffef4dadc9a5f5528f1b176d7336" Nov 21 20:32:43 crc kubenswrapper[4727]: I1121 20:32:43.843180 4727 scope.go:117] "RemoveContainer" containerID="fcd4b4ef52041fa2b8beb81fe4ae14fd4298a2d480a2569b33222fa2be49d19f" Nov 21 20:32:43 crc kubenswrapper[4727]: I1121 20:32:43.918759 4727 scope.go:117] "RemoveContainer" 
containerID="3dd102538622b59fccc3c3341d9b72fe2b4b98cdcccd4115279fc2475cefd719" Nov 21 20:33:13 crc kubenswrapper[4727]: I1121 20:33:13.335279 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 20:33:13 crc kubenswrapper[4727]: I1121 20:33:13.335761 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 20:33:13 crc kubenswrapper[4727]: I1121 20:33:13.335807 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" Nov 21 20:33:13 crc kubenswrapper[4727]: I1121 20:33:13.336708 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"586a1c0353caf0aeb822b92be481b941c0fdb34b453b8251021ce7365988e0c9"} pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 20:33:13 crc kubenswrapper[4727]: I1121 20:33:13.336762 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" containerID="cri-o://586a1c0353caf0aeb822b92be481b941c0fdb34b453b8251021ce7365988e0c9" gracePeriod=600 Nov 21 20:33:13 crc kubenswrapper[4727]: E1121 20:33:13.461731 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 20:33:13 crc kubenswrapper[4727]: I1121 20:33:13.556938 4727 generic.go:334] "Generic (PLEG): container finished" podID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerID="586a1c0353caf0aeb822b92be481b941c0fdb34b453b8251021ce7365988e0c9" exitCode=0 Nov 21 20:33:13 crc kubenswrapper[4727]: I1121 20:33:13.556986 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" event={"ID":"b58aef8f-f223-47d8-a2e6-4a80aeeeec42","Type":"ContainerDied","Data":"586a1c0353caf0aeb822b92be481b941c0fdb34b453b8251021ce7365988e0c9"} Nov 21 20:33:13 crc kubenswrapper[4727]: I1121 20:33:13.557064 4727 scope.go:117] "RemoveContainer" containerID="5c81a93a448fa0edbcf26720ad6bed9f6ecbbf23562d69ff342656c8199e62de" Nov 21 20:33:13 crc kubenswrapper[4727]: I1121 20:33:13.558191 4727 scope.go:117] "RemoveContainer" containerID="586a1c0353caf0aeb822b92be481b941c0fdb34b453b8251021ce7365988e0c9" Nov 21 20:33:13 crc kubenswrapper[4727]: E1121 20:33:13.558632 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 20:33:25 crc kubenswrapper[4727]: I1121 20:33:25.525017 4727 scope.go:117] "RemoveContainer" containerID="586a1c0353caf0aeb822b92be481b941c0fdb34b453b8251021ce7365988e0c9" Nov 21 20:33:25 crc 
kubenswrapper[4727]: E1121 20:33:25.529868 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 20:33:38 crc kubenswrapper[4727]: I1121 20:33:38.499248 4727 scope.go:117] "RemoveContainer" containerID="586a1c0353caf0aeb822b92be481b941c0fdb34b453b8251021ce7365988e0c9" Nov 21 20:33:38 crc kubenswrapper[4727]: E1121 20:33:38.500086 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 20:33:44 crc kubenswrapper[4727]: I1121 20:33:44.074482 4727 scope.go:117] "RemoveContainer" containerID="ff0ceab44c29e15592946609ed1f77dce143d2f10a94426cbc2705b7928467fd" Nov 21 20:33:44 crc kubenswrapper[4727]: I1121 20:33:44.098814 4727 scope.go:117] "RemoveContainer" containerID="d3d01d2ec5717e02e15f98823e2a16ac99c22596096b22e12ed9863161c31561" Nov 21 20:33:44 crc kubenswrapper[4727]: I1121 20:33:44.119301 4727 scope.go:117] "RemoveContainer" containerID="4dc5b1e072ab77ae18e04b1f3acd8a7ff182ad15cddb4443e452392a74de4263" Nov 21 20:33:44 crc kubenswrapper[4727]: I1121 20:33:44.142451 4727 scope.go:117] "RemoveContainer" containerID="761e55b2dce91b5cc3d521b332f0606c97997c104aee2a7c65b79a43635b9ba7" Nov 21 20:33:49 crc kubenswrapper[4727]: I1121 20:33:49.499857 4727 scope.go:117] "RemoveContainer" 
containerID="586a1c0353caf0aeb822b92be481b941c0fdb34b453b8251021ce7365988e0c9" Nov 21 20:33:49 crc kubenswrapper[4727]: E1121 20:33:49.500678 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 20:34:04 crc kubenswrapper[4727]: I1121 20:34:04.499883 4727 scope.go:117] "RemoveContainer" containerID="586a1c0353caf0aeb822b92be481b941c0fdb34b453b8251021ce7365988e0c9" Nov 21 20:34:04 crc kubenswrapper[4727]: E1121 20:34:04.500794 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 20:34:17 crc kubenswrapper[4727]: I1121 20:34:17.500129 4727 scope.go:117] "RemoveContainer" containerID="586a1c0353caf0aeb822b92be481b941c0fdb34b453b8251021ce7365988e0c9" Nov 21 20:34:17 crc kubenswrapper[4727]: E1121 20:34:17.501360 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 20:34:29 crc kubenswrapper[4727]: I1121 20:34:29.500343 4727 scope.go:117] 
"RemoveContainer" containerID="586a1c0353caf0aeb822b92be481b941c0fdb34b453b8251021ce7365988e0c9" Nov 21 20:34:29 crc kubenswrapper[4727]: E1121 20:34:29.501597 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 20:34:42 crc kubenswrapper[4727]: I1121 20:34:42.501216 4727 scope.go:117] "RemoveContainer" containerID="586a1c0353caf0aeb822b92be481b941c0fdb34b453b8251021ce7365988e0c9" Nov 21 20:34:42 crc kubenswrapper[4727]: E1121 20:34:42.502006 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 20:34:54 crc kubenswrapper[4727]: I1121 20:34:54.500156 4727 scope.go:117] "RemoveContainer" containerID="586a1c0353caf0aeb822b92be481b941c0fdb34b453b8251021ce7365988e0c9" Nov 21 20:34:54 crc kubenswrapper[4727]: E1121 20:34:54.501481 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 20:35:05 crc kubenswrapper[4727]: I1121 20:35:05.507705 
4727 scope.go:117] "RemoveContainer" containerID="586a1c0353caf0aeb822b92be481b941c0fdb34b453b8251021ce7365988e0c9" Nov 21 20:35:05 crc kubenswrapper[4727]: E1121 20:35:05.509416 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 20:35:19 crc kubenswrapper[4727]: I1121 20:35:19.045936 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nq5mm"] Nov 21 20:35:19 crc kubenswrapper[4727]: I1121 20:35:19.049010 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nq5mm" Nov 21 20:35:19 crc kubenswrapper[4727]: I1121 20:35:19.107943 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nq5mm"] Nov 21 20:35:19 crc kubenswrapper[4727]: I1121 20:35:19.159754 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5687cd5c-fc6e-459e-9e7f-87167d4fb4ea-catalog-content\") pod \"redhat-operators-nq5mm\" (UID: \"5687cd5c-fc6e-459e-9e7f-87167d4fb4ea\") " pod="openshift-marketplace/redhat-operators-nq5mm" Nov 21 20:35:19 crc kubenswrapper[4727]: I1121 20:35:19.160266 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5687cd5c-fc6e-459e-9e7f-87167d4fb4ea-utilities\") pod \"redhat-operators-nq5mm\" (UID: \"5687cd5c-fc6e-459e-9e7f-87167d4fb4ea\") " pod="openshift-marketplace/redhat-operators-nq5mm" Nov 21 20:35:19 crc kubenswrapper[4727]: I1121 
20:35:19.160522 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qznhb\" (UniqueName: \"kubernetes.io/projected/5687cd5c-fc6e-459e-9e7f-87167d4fb4ea-kube-api-access-qznhb\") pod \"redhat-operators-nq5mm\" (UID: \"5687cd5c-fc6e-459e-9e7f-87167d4fb4ea\") " pod="openshift-marketplace/redhat-operators-nq5mm" Nov 21 20:35:19 crc kubenswrapper[4727]: I1121 20:35:19.262600 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5687cd5c-fc6e-459e-9e7f-87167d4fb4ea-utilities\") pod \"redhat-operators-nq5mm\" (UID: \"5687cd5c-fc6e-459e-9e7f-87167d4fb4ea\") " pod="openshift-marketplace/redhat-operators-nq5mm" Nov 21 20:35:19 crc kubenswrapper[4727]: I1121 20:35:19.262736 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qznhb\" (UniqueName: \"kubernetes.io/projected/5687cd5c-fc6e-459e-9e7f-87167d4fb4ea-kube-api-access-qznhb\") pod \"redhat-operators-nq5mm\" (UID: \"5687cd5c-fc6e-459e-9e7f-87167d4fb4ea\") " pod="openshift-marketplace/redhat-operators-nq5mm" Nov 21 20:35:19 crc kubenswrapper[4727]: I1121 20:35:19.262877 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5687cd5c-fc6e-459e-9e7f-87167d4fb4ea-catalog-content\") pod \"redhat-operators-nq5mm\" (UID: \"5687cd5c-fc6e-459e-9e7f-87167d4fb4ea\") " pod="openshift-marketplace/redhat-operators-nq5mm" Nov 21 20:35:19 crc kubenswrapper[4727]: I1121 20:35:19.263195 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5687cd5c-fc6e-459e-9e7f-87167d4fb4ea-utilities\") pod \"redhat-operators-nq5mm\" (UID: \"5687cd5c-fc6e-459e-9e7f-87167d4fb4ea\") " pod="openshift-marketplace/redhat-operators-nq5mm" Nov 21 20:35:19 crc kubenswrapper[4727]: I1121 20:35:19.263399 4727 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5687cd5c-fc6e-459e-9e7f-87167d4fb4ea-catalog-content\") pod \"redhat-operators-nq5mm\" (UID: \"5687cd5c-fc6e-459e-9e7f-87167d4fb4ea\") " pod="openshift-marketplace/redhat-operators-nq5mm" Nov 21 20:35:19 crc kubenswrapper[4727]: I1121 20:35:19.294057 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qznhb\" (UniqueName: \"kubernetes.io/projected/5687cd5c-fc6e-459e-9e7f-87167d4fb4ea-kube-api-access-qznhb\") pod \"redhat-operators-nq5mm\" (UID: \"5687cd5c-fc6e-459e-9e7f-87167d4fb4ea\") " pod="openshift-marketplace/redhat-operators-nq5mm" Nov 21 20:35:19 crc kubenswrapper[4727]: I1121 20:35:19.369365 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nq5mm" Nov 21 20:35:19 crc kubenswrapper[4727]: I1121 20:35:19.839108 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nq5mm"] Nov 21 20:35:19 crc kubenswrapper[4727]: W1121 20:35:19.851177 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5687cd5c_fc6e_459e_9e7f_87167d4fb4ea.slice/crio-ddf8b1c39b7456bdeb8d01897468f57fd88fd3aaac7117d84ec9e2f653ffd966 WatchSource:0}: Error finding container ddf8b1c39b7456bdeb8d01897468f57fd88fd3aaac7117d84ec9e2f653ffd966: Status 404 returned error can't find the container with id ddf8b1c39b7456bdeb8d01897468f57fd88fd3aaac7117d84ec9e2f653ffd966 Nov 21 20:35:20 crc kubenswrapper[4727]: I1121 20:35:20.095705 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nq5mm" event={"ID":"5687cd5c-fc6e-459e-9e7f-87167d4fb4ea","Type":"ContainerStarted","Data":"4984641d8329e7c5007309a1b834931b2229da61d232bf8ad055657d04673b0d"} Nov 21 20:35:20 crc kubenswrapper[4727]: I1121 20:35:20.096078 4727 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nq5mm" event={"ID":"5687cd5c-fc6e-459e-9e7f-87167d4fb4ea","Type":"ContainerStarted","Data":"ddf8b1c39b7456bdeb8d01897468f57fd88fd3aaac7117d84ec9e2f653ffd966"} Nov 21 20:35:20 crc kubenswrapper[4727]: I1121 20:35:20.501054 4727 scope.go:117] "RemoveContainer" containerID="586a1c0353caf0aeb822b92be481b941c0fdb34b453b8251021ce7365988e0c9" Nov 21 20:35:20 crc kubenswrapper[4727]: E1121 20:35:20.501503 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 20:35:21 crc kubenswrapper[4727]: I1121 20:35:21.107078 4727 generic.go:334] "Generic (PLEG): container finished" podID="5687cd5c-fc6e-459e-9e7f-87167d4fb4ea" containerID="4984641d8329e7c5007309a1b834931b2229da61d232bf8ad055657d04673b0d" exitCode=0 Nov 21 20:35:21 crc kubenswrapper[4727]: I1121 20:35:21.107131 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nq5mm" event={"ID":"5687cd5c-fc6e-459e-9e7f-87167d4fb4ea","Type":"ContainerDied","Data":"4984641d8329e7c5007309a1b834931b2229da61d232bf8ad055657d04673b0d"} Nov 21 20:35:21 crc kubenswrapper[4727]: I1121 20:35:21.453579 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zc2g7"] Nov 21 20:35:21 crc kubenswrapper[4727]: I1121 20:35:21.456485 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zc2g7" Nov 21 20:35:21 crc kubenswrapper[4727]: I1121 20:35:21.471644 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zc2g7"] Nov 21 20:35:21 crc kubenswrapper[4727]: I1121 20:35:21.514587 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9e77093-b845-4dbc-928a-5b3de1e55b44-utilities\") pod \"certified-operators-zc2g7\" (UID: \"f9e77093-b845-4dbc-928a-5b3de1e55b44\") " pod="openshift-marketplace/certified-operators-zc2g7" Nov 21 20:35:21 crc kubenswrapper[4727]: I1121 20:35:21.514749 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9e77093-b845-4dbc-928a-5b3de1e55b44-catalog-content\") pod \"certified-operators-zc2g7\" (UID: \"f9e77093-b845-4dbc-928a-5b3de1e55b44\") " pod="openshift-marketplace/certified-operators-zc2g7" Nov 21 20:35:21 crc kubenswrapper[4727]: I1121 20:35:21.514807 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt4nn\" (UniqueName: \"kubernetes.io/projected/f9e77093-b845-4dbc-928a-5b3de1e55b44-kube-api-access-zt4nn\") pod \"certified-operators-zc2g7\" (UID: \"f9e77093-b845-4dbc-928a-5b3de1e55b44\") " pod="openshift-marketplace/certified-operators-zc2g7" Nov 21 20:35:21 crc kubenswrapper[4727]: I1121 20:35:21.617489 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9e77093-b845-4dbc-928a-5b3de1e55b44-catalog-content\") pod \"certified-operators-zc2g7\" (UID: \"f9e77093-b845-4dbc-928a-5b3de1e55b44\") " pod="openshift-marketplace/certified-operators-zc2g7" Nov 21 20:35:21 crc kubenswrapper[4727]: I1121 20:35:21.617587 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-zt4nn\" (UniqueName: \"kubernetes.io/projected/f9e77093-b845-4dbc-928a-5b3de1e55b44-kube-api-access-zt4nn\") pod \"certified-operators-zc2g7\" (UID: \"f9e77093-b845-4dbc-928a-5b3de1e55b44\") " pod="openshift-marketplace/certified-operators-zc2g7" Nov 21 20:35:21 crc kubenswrapper[4727]: I1121 20:35:21.617763 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9e77093-b845-4dbc-928a-5b3de1e55b44-utilities\") pod \"certified-operators-zc2g7\" (UID: \"f9e77093-b845-4dbc-928a-5b3de1e55b44\") " pod="openshift-marketplace/certified-operators-zc2g7" Nov 21 20:35:21 crc kubenswrapper[4727]: I1121 20:35:21.618258 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9e77093-b845-4dbc-928a-5b3de1e55b44-utilities\") pod \"certified-operators-zc2g7\" (UID: \"f9e77093-b845-4dbc-928a-5b3de1e55b44\") " pod="openshift-marketplace/certified-operators-zc2g7" Nov 21 20:35:21 crc kubenswrapper[4727]: I1121 20:35:21.618367 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9e77093-b845-4dbc-928a-5b3de1e55b44-catalog-content\") pod \"certified-operators-zc2g7\" (UID: \"f9e77093-b845-4dbc-928a-5b3de1e55b44\") " pod="openshift-marketplace/certified-operators-zc2g7" Nov 21 20:35:21 crc kubenswrapper[4727]: I1121 20:35:21.640237 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zt4nn\" (UniqueName: \"kubernetes.io/projected/f9e77093-b845-4dbc-928a-5b3de1e55b44-kube-api-access-zt4nn\") pod \"certified-operators-zc2g7\" (UID: \"f9e77093-b845-4dbc-928a-5b3de1e55b44\") " pod="openshift-marketplace/certified-operators-zc2g7" Nov 21 20:35:21 crc kubenswrapper[4727]: I1121 20:35:21.785055 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zc2g7" Nov 21 20:35:22 crc kubenswrapper[4727]: W1121 20:35:22.445727 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9e77093_b845_4dbc_928a_5b3de1e55b44.slice/crio-f9740f67eac87163402f9dbdeb97200546d1f5d6e2c1b89f0a80d5aff3985a3b WatchSource:0}: Error finding container f9740f67eac87163402f9dbdeb97200546d1f5d6e2c1b89f0a80d5aff3985a3b: Status 404 returned error can't find the container with id f9740f67eac87163402f9dbdeb97200546d1f5d6e2c1b89f0a80d5aff3985a3b Nov 21 20:35:22 crc kubenswrapper[4727]: I1121 20:35:22.456436 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zc2g7"] Nov 21 20:35:23 crc kubenswrapper[4727]: I1121 20:35:23.135737 4727 generic.go:334] "Generic (PLEG): container finished" podID="f9e77093-b845-4dbc-928a-5b3de1e55b44" containerID="476d5d3b888368a004c096b3e19901e810c4381fe057378bbb90b2f3b57ec160" exitCode=0 Nov 21 20:35:23 crc kubenswrapper[4727]: I1121 20:35:23.135846 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zc2g7" event={"ID":"f9e77093-b845-4dbc-928a-5b3de1e55b44","Type":"ContainerDied","Data":"476d5d3b888368a004c096b3e19901e810c4381fe057378bbb90b2f3b57ec160"} Nov 21 20:35:23 crc kubenswrapper[4727]: I1121 20:35:23.136079 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zc2g7" event={"ID":"f9e77093-b845-4dbc-928a-5b3de1e55b44","Type":"ContainerStarted","Data":"f9740f67eac87163402f9dbdeb97200546d1f5d6e2c1b89f0a80d5aff3985a3b"} Nov 21 20:35:23 crc kubenswrapper[4727]: I1121 20:35:23.146274 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nq5mm" 
event={"ID":"5687cd5c-fc6e-459e-9e7f-87167d4fb4ea","Type":"ContainerStarted","Data":"9621328dba3d440405a92244d785154847a1127cee186f8f79f760028c1bcbba"} Nov 21 20:35:25 crc kubenswrapper[4727]: I1121 20:35:25.170754 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zc2g7" event={"ID":"f9e77093-b845-4dbc-928a-5b3de1e55b44","Type":"ContainerStarted","Data":"88c128aff16c22fa52678a60938756b429a799fb44058d685fd794f35cad4bf9"} Nov 21 20:35:28 crc kubenswrapper[4727]: I1121 20:35:28.209989 4727 generic.go:334] "Generic (PLEG): container finished" podID="f9e77093-b845-4dbc-928a-5b3de1e55b44" containerID="88c128aff16c22fa52678a60938756b429a799fb44058d685fd794f35cad4bf9" exitCode=0 Nov 21 20:35:28 crc kubenswrapper[4727]: I1121 20:35:28.210125 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zc2g7" event={"ID":"f9e77093-b845-4dbc-928a-5b3de1e55b44","Type":"ContainerDied","Data":"88c128aff16c22fa52678a60938756b429a799fb44058d685fd794f35cad4bf9"} Nov 21 20:35:29 crc kubenswrapper[4727]: I1121 20:35:29.224370 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zc2g7" event={"ID":"f9e77093-b845-4dbc-928a-5b3de1e55b44","Type":"ContainerStarted","Data":"839385c220624a29ae0a659e5dda25cc01c41b4b31e4dab21aa053e24bf00c06"} Nov 21 20:35:29 crc kubenswrapper[4727]: I1121 20:35:29.228267 4727 generic.go:334] "Generic (PLEG): container finished" podID="5687cd5c-fc6e-459e-9e7f-87167d4fb4ea" containerID="9621328dba3d440405a92244d785154847a1127cee186f8f79f760028c1bcbba" exitCode=0 Nov 21 20:35:29 crc kubenswrapper[4727]: I1121 20:35:29.228325 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nq5mm" event={"ID":"5687cd5c-fc6e-459e-9e7f-87167d4fb4ea","Type":"ContainerDied","Data":"9621328dba3d440405a92244d785154847a1127cee186f8f79f760028c1bcbba"} Nov 21 20:35:29 crc kubenswrapper[4727]: I1121 
20:35:29.278733 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zc2g7" podStartSLOduration=2.815162774 podStartE2EDuration="8.27870496s" podCreationTimestamp="2025-11-21 20:35:21 +0000 UTC" firstStartedPulling="2025-11-21 20:35:23.164578564 +0000 UTC m=+1728.350763608" lastFinishedPulling="2025-11-21 20:35:28.62812075 +0000 UTC m=+1733.814305794" observedRunningTime="2025-11-21 20:35:29.251259551 +0000 UTC m=+1734.437444595" watchObservedRunningTime="2025-11-21 20:35:29.27870496 +0000 UTC m=+1734.464890014" Nov 21 20:35:30 crc kubenswrapper[4727]: I1121 20:35:30.240646 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nq5mm" event={"ID":"5687cd5c-fc6e-459e-9e7f-87167d4fb4ea","Type":"ContainerStarted","Data":"1e66ca744487ca2a66a6d1d381db489375c0e9d34a91952d94cbcb9b080aad63"} Nov 21 20:35:30 crc kubenswrapper[4727]: I1121 20:35:30.258211 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nq5mm" podStartSLOduration=2.765733951 podStartE2EDuration="11.258188394s" podCreationTimestamp="2025-11-21 20:35:19 +0000 UTC" firstStartedPulling="2025-11-21 20:35:21.109055054 +0000 UTC m=+1726.295240098" lastFinishedPulling="2025-11-21 20:35:29.601509497 +0000 UTC m=+1734.787694541" observedRunningTime="2025-11-21 20:35:30.254502505 +0000 UTC m=+1735.440687549" watchObservedRunningTime="2025-11-21 20:35:30.258188394 +0000 UTC m=+1735.444373438" Nov 21 20:35:31 crc kubenswrapper[4727]: I1121 20:35:31.786785 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zc2g7" Nov 21 20:35:31 crc kubenswrapper[4727]: I1121 20:35:31.787143 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zc2g7" Nov 21 20:35:32 crc kubenswrapper[4727]: I1121 20:35:32.840106 4727 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-zc2g7" podUID="f9e77093-b845-4dbc-928a-5b3de1e55b44" containerName="registry-server" probeResult="failure" output=< Nov 21 20:35:32 crc kubenswrapper[4727]: timeout: failed to connect service ":50051" within 1s Nov 21 20:35:32 crc kubenswrapper[4727]: > Nov 21 20:35:34 crc kubenswrapper[4727]: I1121 20:35:34.284315 4727 generic.go:334] "Generic (PLEG): container finished" podID="766225b4-86ef-4a28-9321-7efd20c20c8b" containerID="2aea4edd740ce58bbbfaf19a76ec20824c7f6c7cb930a093715039fd6dc5e390" exitCode=0 Nov 21 20:35:34 crc kubenswrapper[4727]: I1121 20:35:34.284387 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xqh5w" event={"ID":"766225b4-86ef-4a28-9321-7efd20c20c8b","Type":"ContainerDied","Data":"2aea4edd740ce58bbbfaf19a76ec20824c7f6c7cb930a093715039fd6dc5e390"} Nov 21 20:35:34 crc kubenswrapper[4727]: I1121 20:35:34.499743 4727 scope.go:117] "RemoveContainer" containerID="586a1c0353caf0aeb822b92be481b941c0fdb34b453b8251021ce7365988e0c9" Nov 21 20:35:34 crc kubenswrapper[4727]: E1121 20:35:34.500149 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 20:35:35 crc kubenswrapper[4727]: I1121 20:35:35.838979 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xqh5w" Nov 21 20:35:35 crc kubenswrapper[4727]: I1121 20:35:35.874779 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/766225b4-86ef-4a28-9321-7efd20c20c8b-bootstrap-combined-ca-bundle\") pod \"766225b4-86ef-4a28-9321-7efd20c20c8b\" (UID: \"766225b4-86ef-4a28-9321-7efd20c20c8b\") " Nov 21 20:35:35 crc kubenswrapper[4727]: I1121 20:35:35.875268 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/766225b4-86ef-4a28-9321-7efd20c20c8b-inventory\") pod \"766225b4-86ef-4a28-9321-7efd20c20c8b\" (UID: \"766225b4-86ef-4a28-9321-7efd20c20c8b\") " Nov 21 20:35:35 crc kubenswrapper[4727]: I1121 20:35:35.875294 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/766225b4-86ef-4a28-9321-7efd20c20c8b-ssh-key\") pod \"766225b4-86ef-4a28-9321-7efd20c20c8b\" (UID: \"766225b4-86ef-4a28-9321-7efd20c20c8b\") " Nov 21 20:35:35 crc kubenswrapper[4727]: I1121 20:35:35.875342 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6k4b\" (UniqueName: \"kubernetes.io/projected/766225b4-86ef-4a28-9321-7efd20c20c8b-kube-api-access-z6k4b\") pod \"766225b4-86ef-4a28-9321-7efd20c20c8b\" (UID: \"766225b4-86ef-4a28-9321-7efd20c20c8b\") " Nov 21 20:35:35 crc kubenswrapper[4727]: I1121 20:35:35.881357 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/766225b4-86ef-4a28-9321-7efd20c20c8b-kube-api-access-z6k4b" (OuterVolumeSpecName: "kube-api-access-z6k4b") pod "766225b4-86ef-4a28-9321-7efd20c20c8b" (UID: "766225b4-86ef-4a28-9321-7efd20c20c8b"). InnerVolumeSpecName "kube-api-access-z6k4b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:35:35 crc kubenswrapper[4727]: I1121 20:35:35.905851 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/766225b4-86ef-4a28-9321-7efd20c20c8b-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "766225b4-86ef-4a28-9321-7efd20c20c8b" (UID: "766225b4-86ef-4a28-9321-7efd20c20c8b"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:35:35 crc kubenswrapper[4727]: I1121 20:35:35.917997 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/766225b4-86ef-4a28-9321-7efd20c20c8b-inventory" (OuterVolumeSpecName: "inventory") pod "766225b4-86ef-4a28-9321-7efd20c20c8b" (UID: "766225b4-86ef-4a28-9321-7efd20c20c8b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:35:35 crc kubenswrapper[4727]: I1121 20:35:35.926659 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/766225b4-86ef-4a28-9321-7efd20c20c8b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "766225b4-86ef-4a28-9321-7efd20c20c8b" (UID: "766225b4-86ef-4a28-9321-7efd20c20c8b"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:35:35 crc kubenswrapper[4727]: I1121 20:35:35.977900 4727 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/766225b4-86ef-4a28-9321-7efd20c20c8b-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 20:35:35 crc kubenswrapper[4727]: I1121 20:35:35.977936 4727 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/766225b4-86ef-4a28-9321-7efd20c20c8b-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 20:35:35 crc kubenswrapper[4727]: I1121 20:35:35.977948 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6k4b\" (UniqueName: \"kubernetes.io/projected/766225b4-86ef-4a28-9321-7efd20c20c8b-kube-api-access-z6k4b\") on node \"crc\" DevicePath \"\"" Nov 21 20:35:35 crc kubenswrapper[4727]: I1121 20:35:35.977983 4727 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/766225b4-86ef-4a28-9321-7efd20c20c8b-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:35:36 crc kubenswrapper[4727]: I1121 20:35:36.305436 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xqh5w" event={"ID":"766225b4-86ef-4a28-9321-7efd20c20c8b","Type":"ContainerDied","Data":"74f9a08848b89108a20f9b65925e2ab45202c4b74f11508bc24aa9c29e7cf416"} Nov 21 20:35:36 crc kubenswrapper[4727]: I1121 20:35:36.305484 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xqh5w" Nov 21 20:35:36 crc kubenswrapper[4727]: I1121 20:35:36.305489 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74f9a08848b89108a20f9b65925e2ab45202c4b74f11508bc24aa9c29e7cf416" Nov 21 20:35:36 crc kubenswrapper[4727]: I1121 20:35:36.409231 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hkzn2"] Nov 21 20:35:36 crc kubenswrapper[4727]: E1121 20:35:36.409864 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="766225b4-86ef-4a28-9321-7efd20c20c8b" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 21 20:35:36 crc kubenswrapper[4727]: I1121 20:35:36.409888 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="766225b4-86ef-4a28-9321-7efd20c20c8b" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 21 20:35:36 crc kubenswrapper[4727]: I1121 20:35:36.410181 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="766225b4-86ef-4a28-9321-7efd20c20c8b" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 21 20:35:36 crc kubenswrapper[4727]: I1121 20:35:36.411084 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hkzn2" Nov 21 20:35:36 crc kubenswrapper[4727]: I1121 20:35:36.413359 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 21 20:35:36 crc kubenswrapper[4727]: I1121 20:35:36.415007 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 21 20:35:36 crc kubenswrapper[4727]: I1121 20:35:36.415026 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9h8bp" Nov 21 20:35:36 crc kubenswrapper[4727]: I1121 20:35:36.415028 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 20:35:36 crc kubenswrapper[4727]: I1121 20:35:36.432674 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hkzn2"] Nov 21 20:35:36 crc kubenswrapper[4727]: I1121 20:35:36.487809 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fe06ea2f-15b5-409a-93b8-6c40a629c029-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hkzn2\" (UID: \"fe06ea2f-15b5-409a-93b8-6c40a629c029\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hkzn2" Nov 21 20:35:36 crc kubenswrapper[4727]: I1121 20:35:36.487935 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t25m\" (UniqueName: \"kubernetes.io/projected/fe06ea2f-15b5-409a-93b8-6c40a629c029-kube-api-access-9t25m\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hkzn2\" (UID: \"fe06ea2f-15b5-409a-93b8-6c40a629c029\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hkzn2" Nov 21 20:35:36 crc kubenswrapper[4727]: I1121 
20:35:36.488054 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fe06ea2f-15b5-409a-93b8-6c40a629c029-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hkzn2\" (UID: \"fe06ea2f-15b5-409a-93b8-6c40a629c029\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hkzn2" Nov 21 20:35:36 crc kubenswrapper[4727]: I1121 20:35:36.590523 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fe06ea2f-15b5-409a-93b8-6c40a629c029-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hkzn2\" (UID: \"fe06ea2f-15b5-409a-93b8-6c40a629c029\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hkzn2" Nov 21 20:35:36 crc kubenswrapper[4727]: I1121 20:35:36.591452 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9t25m\" (UniqueName: \"kubernetes.io/projected/fe06ea2f-15b5-409a-93b8-6c40a629c029-kube-api-access-9t25m\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hkzn2\" (UID: \"fe06ea2f-15b5-409a-93b8-6c40a629c029\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hkzn2" Nov 21 20:35:36 crc kubenswrapper[4727]: I1121 20:35:36.591625 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fe06ea2f-15b5-409a-93b8-6c40a629c029-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hkzn2\" (UID: \"fe06ea2f-15b5-409a-93b8-6c40a629c029\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hkzn2" Nov 21 20:35:36 crc kubenswrapper[4727]: I1121 20:35:36.596547 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fe06ea2f-15b5-409a-93b8-6c40a629c029-inventory\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-hkzn2\" (UID: \"fe06ea2f-15b5-409a-93b8-6c40a629c029\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hkzn2" Nov 21 20:35:36 crc kubenswrapper[4727]: I1121 20:35:36.597386 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fe06ea2f-15b5-409a-93b8-6c40a629c029-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hkzn2\" (UID: \"fe06ea2f-15b5-409a-93b8-6c40a629c029\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hkzn2" Nov 21 20:35:36 crc kubenswrapper[4727]: I1121 20:35:36.613182 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9t25m\" (UniqueName: \"kubernetes.io/projected/fe06ea2f-15b5-409a-93b8-6c40a629c029-kube-api-access-9t25m\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hkzn2\" (UID: \"fe06ea2f-15b5-409a-93b8-6c40a629c029\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hkzn2" Nov 21 20:35:36 crc kubenswrapper[4727]: I1121 20:35:36.726980 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hkzn2" Nov 21 20:35:37 crc kubenswrapper[4727]: W1121 20:35:37.310035 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe06ea2f_15b5_409a_93b8_6c40a629c029.slice/crio-f9857d7d8df959fb461d5f0abfea5bede5770a1fdc8aaad7250c268e35b214fd WatchSource:0}: Error finding container f9857d7d8df959fb461d5f0abfea5bede5770a1fdc8aaad7250c268e35b214fd: Status 404 returned error can't find the container with id f9857d7d8df959fb461d5f0abfea5bede5770a1fdc8aaad7250c268e35b214fd Nov 21 20:35:37 crc kubenswrapper[4727]: I1121 20:35:37.316738 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hkzn2"] Nov 21 20:35:38 crc kubenswrapper[4727]: I1121 20:35:38.329335 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hkzn2" event={"ID":"fe06ea2f-15b5-409a-93b8-6c40a629c029","Type":"ContainerStarted","Data":"81c5f636d58047a24be89a6d23d787b4c59b0ccbbdf7913fc2b8f3a6e6f609e8"} Nov 21 20:35:38 crc kubenswrapper[4727]: I1121 20:35:38.329915 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hkzn2" event={"ID":"fe06ea2f-15b5-409a-93b8-6c40a629c029","Type":"ContainerStarted","Data":"f9857d7d8df959fb461d5f0abfea5bede5770a1fdc8aaad7250c268e35b214fd"} Nov 21 20:35:38 crc kubenswrapper[4727]: I1121 20:35:38.352209 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hkzn2" podStartSLOduration=1.9215741130000001 podStartE2EDuration="2.352192062s" podCreationTimestamp="2025-11-21 20:35:36 +0000 UTC" firstStartedPulling="2025-11-21 20:35:37.313251978 +0000 UTC m=+1742.499437022" lastFinishedPulling="2025-11-21 20:35:37.743869927 +0000 UTC 
m=+1742.930054971" observedRunningTime="2025-11-21 20:35:38.34338667 +0000 UTC m=+1743.529571714" watchObservedRunningTime="2025-11-21 20:35:38.352192062 +0000 UTC m=+1743.538377106" Nov 21 20:35:39 crc kubenswrapper[4727]: I1121 20:35:39.369515 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nq5mm" Nov 21 20:35:39 crc kubenswrapper[4727]: I1121 20:35:39.369813 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nq5mm" Nov 21 20:35:40 crc kubenswrapper[4727]: I1121 20:35:40.424928 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nq5mm" podUID="5687cd5c-fc6e-459e-9e7f-87167d4fb4ea" containerName="registry-server" probeResult="failure" output=< Nov 21 20:35:40 crc kubenswrapper[4727]: timeout: failed to connect service ":50051" within 1s Nov 21 20:35:40 crc kubenswrapper[4727]: > Nov 21 20:35:41 crc kubenswrapper[4727]: I1121 20:35:41.838970 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zc2g7" Nov 21 20:35:41 crc kubenswrapper[4727]: I1121 20:35:41.913603 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zc2g7" Nov 21 20:35:42 crc kubenswrapper[4727]: I1121 20:35:42.080856 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zc2g7"] Nov 21 20:35:43 crc kubenswrapper[4727]: I1121 20:35:43.378097 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zc2g7" podUID="f9e77093-b845-4dbc-928a-5b3de1e55b44" containerName="registry-server" containerID="cri-o://839385c220624a29ae0a659e5dda25cc01c41b4b31e4dab21aa053e24bf00c06" gracePeriod=2 Nov 21 20:35:43 crc kubenswrapper[4727]: I1121 20:35:43.891402 4727 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zc2g7" Nov 21 20:35:43 crc kubenswrapper[4727]: I1121 20:35:43.973199 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9e77093-b845-4dbc-928a-5b3de1e55b44-catalog-content\") pod \"f9e77093-b845-4dbc-928a-5b3de1e55b44\" (UID: \"f9e77093-b845-4dbc-928a-5b3de1e55b44\") " Nov 21 20:35:43 crc kubenswrapper[4727]: I1121 20:35:43.973308 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9e77093-b845-4dbc-928a-5b3de1e55b44-utilities\") pod \"f9e77093-b845-4dbc-928a-5b3de1e55b44\" (UID: \"f9e77093-b845-4dbc-928a-5b3de1e55b44\") " Nov 21 20:35:43 crc kubenswrapper[4727]: I1121 20:35:43.973491 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zt4nn\" (UniqueName: \"kubernetes.io/projected/f9e77093-b845-4dbc-928a-5b3de1e55b44-kube-api-access-zt4nn\") pod \"f9e77093-b845-4dbc-928a-5b3de1e55b44\" (UID: \"f9e77093-b845-4dbc-928a-5b3de1e55b44\") " Nov 21 20:35:43 crc kubenswrapper[4727]: I1121 20:35:43.978804 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9e77093-b845-4dbc-928a-5b3de1e55b44-utilities" (OuterVolumeSpecName: "utilities") pod "f9e77093-b845-4dbc-928a-5b3de1e55b44" (UID: "f9e77093-b845-4dbc-928a-5b3de1e55b44"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:35:43 crc kubenswrapper[4727]: I1121 20:35:43.981712 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9e77093-b845-4dbc-928a-5b3de1e55b44-kube-api-access-zt4nn" (OuterVolumeSpecName: "kube-api-access-zt4nn") pod "f9e77093-b845-4dbc-928a-5b3de1e55b44" (UID: "f9e77093-b845-4dbc-928a-5b3de1e55b44"). InnerVolumeSpecName "kube-api-access-zt4nn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:35:44 crc kubenswrapper[4727]: I1121 20:35:44.044138 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9e77093-b845-4dbc-928a-5b3de1e55b44-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f9e77093-b845-4dbc-928a-5b3de1e55b44" (UID: "f9e77093-b845-4dbc-928a-5b3de1e55b44"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:35:44 crc kubenswrapper[4727]: I1121 20:35:44.075231 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-c5eb-account-create-94gh4"] Nov 21 20:35:44 crc kubenswrapper[4727]: I1121 20:35:44.076743 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9e77093-b845-4dbc-928a-5b3de1e55b44-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 20:35:44 crc kubenswrapper[4727]: I1121 20:35:44.076777 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9e77093-b845-4dbc-928a-5b3de1e55b44-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 20:35:44 crc kubenswrapper[4727]: I1121 20:35:44.076786 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zt4nn\" (UniqueName: \"kubernetes.io/projected/f9e77093-b845-4dbc-928a-5b3de1e55b44-kube-api-access-zt4nn\") on node \"crc\" DevicePath \"\"" Nov 21 20:35:44 crc kubenswrapper[4727]: I1121 20:35:44.087411 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-drvbw"] Nov 21 20:35:44 crc kubenswrapper[4727]: I1121 20:35:44.097589 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-c5eb-account-create-94gh4"] Nov 21 20:35:44 crc kubenswrapper[4727]: I1121 20:35:44.109755 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/mysqld-exporter-openstack-db-create-drvbw"] Nov 21 20:35:44 crc kubenswrapper[4727]: I1121 20:35:44.234806 4727 scope.go:117] "RemoveContainer" containerID="2dad4f4cbd53eece37ff51f1ae81443e4b8ba61708868ffb93db64381101d367" Nov 21 20:35:44 crc kubenswrapper[4727]: I1121 20:35:44.262246 4727 scope.go:117] "RemoveContainer" containerID="2391e6863860ef5a057a1bd8c6c2c805fbe353b2961b826a8746777ba4cca569" Nov 21 20:35:44 crc kubenswrapper[4727]: I1121 20:35:44.390504 4727 generic.go:334] "Generic (PLEG): container finished" podID="f9e77093-b845-4dbc-928a-5b3de1e55b44" containerID="839385c220624a29ae0a659e5dda25cc01c41b4b31e4dab21aa053e24bf00c06" exitCode=0 Nov 21 20:35:44 crc kubenswrapper[4727]: I1121 20:35:44.390542 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zc2g7" event={"ID":"f9e77093-b845-4dbc-928a-5b3de1e55b44","Type":"ContainerDied","Data":"839385c220624a29ae0a659e5dda25cc01c41b4b31e4dab21aa053e24bf00c06"} Nov 21 20:35:44 crc kubenswrapper[4727]: I1121 20:35:44.390571 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zc2g7" event={"ID":"f9e77093-b845-4dbc-928a-5b3de1e55b44","Type":"ContainerDied","Data":"f9740f67eac87163402f9dbdeb97200546d1f5d6e2c1b89f0a80d5aff3985a3b"} Nov 21 20:35:44 crc kubenswrapper[4727]: I1121 20:35:44.390596 4727 scope.go:117] "RemoveContainer" containerID="839385c220624a29ae0a659e5dda25cc01c41b4b31e4dab21aa053e24bf00c06" Nov 21 20:35:44 crc kubenswrapper[4727]: I1121 20:35:44.390618 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zc2g7" Nov 21 20:35:44 crc kubenswrapper[4727]: I1121 20:35:44.427328 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zc2g7"] Nov 21 20:35:44 crc kubenswrapper[4727]: I1121 20:35:44.432863 4727 scope.go:117] "RemoveContainer" containerID="88c128aff16c22fa52678a60938756b429a799fb44058d685fd794f35cad4bf9" Nov 21 20:35:44 crc kubenswrapper[4727]: I1121 20:35:44.436887 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zc2g7"] Nov 21 20:35:44 crc kubenswrapper[4727]: I1121 20:35:44.466707 4727 scope.go:117] "RemoveContainer" containerID="476d5d3b888368a004c096b3e19901e810c4381fe057378bbb90b2f3b57ec160" Nov 21 20:35:44 crc kubenswrapper[4727]: I1121 20:35:44.519651 4727 scope.go:117] "RemoveContainer" containerID="839385c220624a29ae0a659e5dda25cc01c41b4b31e4dab21aa053e24bf00c06" Nov 21 20:35:44 crc kubenswrapper[4727]: E1121 20:35:44.520738 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"839385c220624a29ae0a659e5dda25cc01c41b4b31e4dab21aa053e24bf00c06\": container with ID starting with 839385c220624a29ae0a659e5dda25cc01c41b4b31e4dab21aa053e24bf00c06 not found: ID does not exist" containerID="839385c220624a29ae0a659e5dda25cc01c41b4b31e4dab21aa053e24bf00c06" Nov 21 20:35:44 crc kubenswrapper[4727]: I1121 20:35:44.520843 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"839385c220624a29ae0a659e5dda25cc01c41b4b31e4dab21aa053e24bf00c06"} err="failed to get container status \"839385c220624a29ae0a659e5dda25cc01c41b4b31e4dab21aa053e24bf00c06\": rpc error: code = NotFound desc = could not find container \"839385c220624a29ae0a659e5dda25cc01c41b4b31e4dab21aa053e24bf00c06\": container with ID starting with 839385c220624a29ae0a659e5dda25cc01c41b4b31e4dab21aa053e24bf00c06 not 
found: ID does not exist" Nov 21 20:35:44 crc kubenswrapper[4727]: I1121 20:35:44.520882 4727 scope.go:117] "RemoveContainer" containerID="88c128aff16c22fa52678a60938756b429a799fb44058d685fd794f35cad4bf9" Nov 21 20:35:44 crc kubenswrapper[4727]: E1121 20:35:44.522399 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88c128aff16c22fa52678a60938756b429a799fb44058d685fd794f35cad4bf9\": container with ID starting with 88c128aff16c22fa52678a60938756b429a799fb44058d685fd794f35cad4bf9 not found: ID does not exist" containerID="88c128aff16c22fa52678a60938756b429a799fb44058d685fd794f35cad4bf9" Nov 21 20:35:44 crc kubenswrapper[4727]: I1121 20:35:44.522432 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88c128aff16c22fa52678a60938756b429a799fb44058d685fd794f35cad4bf9"} err="failed to get container status \"88c128aff16c22fa52678a60938756b429a799fb44058d685fd794f35cad4bf9\": rpc error: code = NotFound desc = could not find container \"88c128aff16c22fa52678a60938756b429a799fb44058d685fd794f35cad4bf9\": container with ID starting with 88c128aff16c22fa52678a60938756b429a799fb44058d685fd794f35cad4bf9 not found: ID does not exist" Nov 21 20:35:44 crc kubenswrapper[4727]: I1121 20:35:44.522470 4727 scope.go:117] "RemoveContainer" containerID="476d5d3b888368a004c096b3e19901e810c4381fe057378bbb90b2f3b57ec160" Nov 21 20:35:44 crc kubenswrapper[4727]: E1121 20:35:44.522776 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"476d5d3b888368a004c096b3e19901e810c4381fe057378bbb90b2f3b57ec160\": container with ID starting with 476d5d3b888368a004c096b3e19901e810c4381fe057378bbb90b2f3b57ec160 not found: ID does not exist" containerID="476d5d3b888368a004c096b3e19901e810c4381fe057378bbb90b2f3b57ec160" Nov 21 20:35:44 crc kubenswrapper[4727]: I1121 20:35:44.522817 4727 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"476d5d3b888368a004c096b3e19901e810c4381fe057378bbb90b2f3b57ec160"} err="failed to get container status \"476d5d3b888368a004c096b3e19901e810c4381fe057378bbb90b2f3b57ec160\": rpc error: code = NotFound desc = could not find container \"476d5d3b888368a004c096b3e19901e810c4381fe057378bbb90b2f3b57ec160\": container with ID starting with 476d5d3b888368a004c096b3e19901e810c4381fe057378bbb90b2f3b57ec160 not found: ID does not exist" Nov 21 20:35:45 crc kubenswrapper[4727]: I1121 20:35:45.514603 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ce9ffea-4675-4ed2-9b15-8a584708e173" path="/var/lib/kubelet/pods/2ce9ffea-4675-4ed2-9b15-8a584708e173/volumes" Nov 21 20:35:45 crc kubenswrapper[4727]: I1121 20:35:45.517030 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c9240ef-7d44-4563-a271-6a540b902f9b" path="/var/lib/kubelet/pods/7c9240ef-7d44-4563-a271-6a540b902f9b/volumes" Nov 21 20:35:45 crc kubenswrapper[4727]: I1121 20:35:45.517773 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9e77093-b845-4dbc-928a-5b3de1e55b44" path="/var/lib/kubelet/pods/f9e77093-b845-4dbc-928a-5b3de1e55b44/volumes" Nov 21 20:35:47 crc kubenswrapper[4727]: I1121 20:35:47.500011 4727 scope.go:117] "RemoveContainer" containerID="586a1c0353caf0aeb822b92be481b941c0fdb34b453b8251021ce7365988e0c9" Nov 21 20:35:47 crc kubenswrapper[4727]: E1121 20:35:47.501003 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 20:35:50 crc kubenswrapper[4727]: I1121 
20:35:50.036403 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-spj24"] Nov 21 20:35:50 crc kubenswrapper[4727]: I1121 20:35:50.047395 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-spj24"] Nov 21 20:35:50 crc kubenswrapper[4727]: I1121 20:35:50.449587 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nq5mm" podUID="5687cd5c-fc6e-459e-9e7f-87167d4fb4ea" containerName="registry-server" probeResult="failure" output=< Nov 21 20:35:50 crc kubenswrapper[4727]: timeout: failed to connect service ":50051" within 1s Nov 21 20:35:50 crc kubenswrapper[4727]: > Nov 21 20:35:51 crc kubenswrapper[4727]: I1121 20:35:51.029246 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-8ec8-account-create-26qv7"] Nov 21 20:35:51 crc kubenswrapper[4727]: I1121 20:35:51.039745 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-8ec8-account-create-26qv7"] Nov 21 20:35:51 crc kubenswrapper[4727]: I1121 20:35:51.515494 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79b69708-cf69-43e5-9299-bbe3fb5b72f4" path="/var/lib/kubelet/pods/79b69708-cf69-43e5-9299-bbe3fb5b72f4/volumes" Nov 21 20:35:51 crc kubenswrapper[4727]: I1121 20:35:51.518524 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be3adfa2-622c-432c-b9e7-7530926b2ec4" path="/var/lib/kubelet/pods/be3adfa2-622c-432c-b9e7-7530926b2ec4/volumes" Nov 21 20:35:54 crc kubenswrapper[4727]: I1121 20:35:54.043054 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-8b1a-account-create-gf7rm"] Nov 21 20:35:54 crc kubenswrapper[4727]: I1121 20:35:54.053662 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-8738-account-create-8br5t"] Nov 21 20:35:54 crc kubenswrapper[4727]: I1121 20:35:54.064436 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/keystone-8b1a-account-create-gf7rm"] Nov 21 20:35:54 crc kubenswrapper[4727]: I1121 20:35:54.076794 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-cm9v5"] Nov 21 20:35:54 crc kubenswrapper[4727]: I1121 20:35:54.087364 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-8738-account-create-8br5t"] Nov 21 20:35:54 crc kubenswrapper[4727]: I1121 20:35:54.098040 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-cm9v5"] Nov 21 20:35:54 crc kubenswrapper[4727]: I1121 20:35:54.107755 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-fbkmd"] Nov 21 20:35:54 crc kubenswrapper[4727]: I1121 20:35:54.117917 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-ec7d-account-create-lzk75"] Nov 21 20:35:54 crc kubenswrapper[4727]: I1121 20:35:54.127239 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-fbkmd"] Nov 21 20:35:54 crc kubenswrapper[4727]: I1121 20:35:54.138776 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-ec7d-account-create-lzk75"] Nov 21 20:35:54 crc kubenswrapper[4727]: I1121 20:35:54.149039 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-q7bm7"] Nov 21 20:35:54 crc kubenswrapper[4727]: I1121 20:35:54.158187 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-q7bm7"] Nov 21 20:35:55 crc kubenswrapper[4727]: I1121 20:35:55.516516 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12ed8782-7884-43ef-98dc-7889dc0c4429" path="/var/lib/kubelet/pods/12ed8782-7884-43ef-98dc-7889dc0c4429/volumes" Nov 21 20:35:55 crc kubenswrapper[4727]: I1121 20:35:55.518066 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="44911d7b-3990-408a-9143-c4735c2e2b0b" path="/var/lib/kubelet/pods/44911d7b-3990-408a-9143-c4735c2e2b0b/volumes" Nov 21 20:35:55 crc kubenswrapper[4727]: I1121 20:35:55.519519 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74333548-19eb-4237-8b46-2fb41cc613ae" path="/var/lib/kubelet/pods/74333548-19eb-4237-8b46-2fb41cc613ae/volumes" Nov 21 20:35:55 crc kubenswrapper[4727]: I1121 20:35:55.520493 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="810aff49-a935-4a69-a3ad-2c26ab66ead0" path="/var/lib/kubelet/pods/810aff49-a935-4a69-a3ad-2c26ab66ead0/volumes" Nov 21 20:35:55 crc kubenswrapper[4727]: I1121 20:35:55.522051 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3aca9be-2c44-48aa-b561-ad796cee0014" path="/var/lib/kubelet/pods/a3aca9be-2c44-48aa-b561-ad796cee0014/volumes" Nov 21 20:35:55 crc kubenswrapper[4727]: I1121 20:35:55.523646 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0343619-7950-468b-8f82-146e68397de5" path="/var/lib/kubelet/pods/d0343619-7950-468b-8f82-146e68397de5/volumes" Nov 21 20:35:58 crc kubenswrapper[4727]: I1121 20:35:58.499144 4727 scope.go:117] "RemoveContainer" containerID="586a1c0353caf0aeb822b92be481b941c0fdb34b453b8251021ce7365988e0c9" Nov 21 20:35:58 crc kubenswrapper[4727]: E1121 20:35:58.499869 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 20:35:59 crc kubenswrapper[4727]: I1121 20:35:59.433084 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nq5mm" Nov 21 20:35:59 crc 
kubenswrapper[4727]: I1121 20:35:59.511920 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nq5mm" Nov 21 20:35:59 crc kubenswrapper[4727]: I1121 20:35:59.677613 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nq5mm"] Nov 21 20:36:00 crc kubenswrapper[4727]: I1121 20:36:00.553411 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nq5mm" podUID="5687cd5c-fc6e-459e-9e7f-87167d4fb4ea" containerName="registry-server" containerID="cri-o://1e66ca744487ca2a66a6d1d381db489375c0e9d34a91952d94cbcb9b080aad63" gracePeriod=2 Nov 21 20:36:01 crc kubenswrapper[4727]: I1121 20:36:01.096671 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nq5mm" Nov 21 20:36:01 crc kubenswrapper[4727]: I1121 20:36:01.179233 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qznhb\" (UniqueName: \"kubernetes.io/projected/5687cd5c-fc6e-459e-9e7f-87167d4fb4ea-kube-api-access-qznhb\") pod \"5687cd5c-fc6e-459e-9e7f-87167d4fb4ea\" (UID: \"5687cd5c-fc6e-459e-9e7f-87167d4fb4ea\") " Nov 21 20:36:01 crc kubenswrapper[4727]: I1121 20:36:01.179495 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5687cd5c-fc6e-459e-9e7f-87167d4fb4ea-utilities\") pod \"5687cd5c-fc6e-459e-9e7f-87167d4fb4ea\" (UID: \"5687cd5c-fc6e-459e-9e7f-87167d4fb4ea\") " Nov 21 20:36:01 crc kubenswrapper[4727]: I1121 20:36:01.179687 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5687cd5c-fc6e-459e-9e7f-87167d4fb4ea-catalog-content\") pod \"5687cd5c-fc6e-459e-9e7f-87167d4fb4ea\" (UID: \"5687cd5c-fc6e-459e-9e7f-87167d4fb4ea\") " Nov 21 20:36:01 crc 
kubenswrapper[4727]: I1121 20:36:01.180586 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5687cd5c-fc6e-459e-9e7f-87167d4fb4ea-utilities" (OuterVolumeSpecName: "utilities") pod "5687cd5c-fc6e-459e-9e7f-87167d4fb4ea" (UID: "5687cd5c-fc6e-459e-9e7f-87167d4fb4ea"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:36:01 crc kubenswrapper[4727]: I1121 20:36:01.187518 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5687cd5c-fc6e-459e-9e7f-87167d4fb4ea-kube-api-access-qznhb" (OuterVolumeSpecName: "kube-api-access-qznhb") pod "5687cd5c-fc6e-459e-9e7f-87167d4fb4ea" (UID: "5687cd5c-fc6e-459e-9e7f-87167d4fb4ea"). InnerVolumeSpecName "kube-api-access-qznhb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:36:01 crc kubenswrapper[4727]: I1121 20:36:01.258611 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5687cd5c-fc6e-459e-9e7f-87167d4fb4ea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5687cd5c-fc6e-459e-9e7f-87167d4fb4ea" (UID: "5687cd5c-fc6e-459e-9e7f-87167d4fb4ea"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:36:01 crc kubenswrapper[4727]: I1121 20:36:01.282753 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5687cd5c-fc6e-459e-9e7f-87167d4fb4ea-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 20:36:01 crc kubenswrapper[4727]: I1121 20:36:01.282784 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5687cd5c-fc6e-459e-9e7f-87167d4fb4ea-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 20:36:01 crc kubenswrapper[4727]: I1121 20:36:01.282800 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qznhb\" (UniqueName: \"kubernetes.io/projected/5687cd5c-fc6e-459e-9e7f-87167d4fb4ea-kube-api-access-qznhb\") on node \"crc\" DevicePath \"\"" Nov 21 20:36:01 crc kubenswrapper[4727]: I1121 20:36:01.565536 4727 generic.go:334] "Generic (PLEG): container finished" podID="5687cd5c-fc6e-459e-9e7f-87167d4fb4ea" containerID="1e66ca744487ca2a66a6d1d381db489375c0e9d34a91952d94cbcb9b080aad63" exitCode=0 Nov 21 20:36:01 crc kubenswrapper[4727]: I1121 20:36:01.565595 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nq5mm" event={"ID":"5687cd5c-fc6e-459e-9e7f-87167d4fb4ea","Type":"ContainerDied","Data":"1e66ca744487ca2a66a6d1d381db489375c0e9d34a91952d94cbcb9b080aad63"} Nov 21 20:36:01 crc kubenswrapper[4727]: I1121 20:36:01.565627 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nq5mm" event={"ID":"5687cd5c-fc6e-459e-9e7f-87167d4fb4ea","Type":"ContainerDied","Data":"ddf8b1c39b7456bdeb8d01897468f57fd88fd3aaac7117d84ec9e2f653ffd966"} Nov 21 20:36:01 crc kubenswrapper[4727]: I1121 20:36:01.565649 4727 scope.go:117] "RemoveContainer" containerID="1e66ca744487ca2a66a6d1d381db489375c0e9d34a91952d94cbcb9b080aad63" Nov 21 20:36:01 crc kubenswrapper[4727]: I1121 20:36:01.565663 
4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nq5mm" Nov 21 20:36:01 crc kubenswrapper[4727]: I1121 20:36:01.590171 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nq5mm"] Nov 21 20:36:01 crc kubenswrapper[4727]: I1121 20:36:01.599283 4727 scope.go:117] "RemoveContainer" containerID="9621328dba3d440405a92244d785154847a1127cee186f8f79f760028c1bcbba" Nov 21 20:36:01 crc kubenswrapper[4727]: I1121 20:36:01.600033 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nq5mm"] Nov 21 20:36:01 crc kubenswrapper[4727]: I1121 20:36:01.631926 4727 scope.go:117] "RemoveContainer" containerID="4984641d8329e7c5007309a1b834931b2229da61d232bf8ad055657d04673b0d" Nov 21 20:36:01 crc kubenswrapper[4727]: I1121 20:36:01.678899 4727 scope.go:117] "RemoveContainer" containerID="1e66ca744487ca2a66a6d1d381db489375c0e9d34a91952d94cbcb9b080aad63" Nov 21 20:36:01 crc kubenswrapper[4727]: E1121 20:36:01.679278 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e66ca744487ca2a66a6d1d381db489375c0e9d34a91952d94cbcb9b080aad63\": container with ID starting with 1e66ca744487ca2a66a6d1d381db489375c0e9d34a91952d94cbcb9b080aad63 not found: ID does not exist" containerID="1e66ca744487ca2a66a6d1d381db489375c0e9d34a91952d94cbcb9b080aad63" Nov 21 20:36:01 crc kubenswrapper[4727]: I1121 20:36:01.679340 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e66ca744487ca2a66a6d1d381db489375c0e9d34a91952d94cbcb9b080aad63"} err="failed to get container status \"1e66ca744487ca2a66a6d1d381db489375c0e9d34a91952d94cbcb9b080aad63\": rpc error: code = NotFound desc = could not find container \"1e66ca744487ca2a66a6d1d381db489375c0e9d34a91952d94cbcb9b080aad63\": container with ID starting with 
1e66ca744487ca2a66a6d1d381db489375c0e9d34a91952d94cbcb9b080aad63 not found: ID does not exist" Nov 21 20:36:01 crc kubenswrapper[4727]: I1121 20:36:01.679368 4727 scope.go:117] "RemoveContainer" containerID="9621328dba3d440405a92244d785154847a1127cee186f8f79f760028c1bcbba" Nov 21 20:36:01 crc kubenswrapper[4727]: E1121 20:36:01.679787 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9621328dba3d440405a92244d785154847a1127cee186f8f79f760028c1bcbba\": container with ID starting with 9621328dba3d440405a92244d785154847a1127cee186f8f79f760028c1bcbba not found: ID does not exist" containerID="9621328dba3d440405a92244d785154847a1127cee186f8f79f760028c1bcbba" Nov 21 20:36:01 crc kubenswrapper[4727]: I1121 20:36:01.679819 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9621328dba3d440405a92244d785154847a1127cee186f8f79f760028c1bcbba"} err="failed to get container status \"9621328dba3d440405a92244d785154847a1127cee186f8f79f760028c1bcbba\": rpc error: code = NotFound desc = could not find container \"9621328dba3d440405a92244d785154847a1127cee186f8f79f760028c1bcbba\": container with ID starting with 9621328dba3d440405a92244d785154847a1127cee186f8f79f760028c1bcbba not found: ID does not exist" Nov 21 20:36:01 crc kubenswrapper[4727]: I1121 20:36:01.679843 4727 scope.go:117] "RemoveContainer" containerID="4984641d8329e7c5007309a1b834931b2229da61d232bf8ad055657d04673b0d" Nov 21 20:36:01 crc kubenswrapper[4727]: E1121 20:36:01.680113 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4984641d8329e7c5007309a1b834931b2229da61d232bf8ad055657d04673b0d\": container with ID starting with 4984641d8329e7c5007309a1b834931b2229da61d232bf8ad055657d04673b0d not found: ID does not exist" containerID="4984641d8329e7c5007309a1b834931b2229da61d232bf8ad055657d04673b0d" Nov 21 20:36:01 crc 
kubenswrapper[4727]: I1121 20:36:01.680144 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4984641d8329e7c5007309a1b834931b2229da61d232bf8ad055657d04673b0d"} err="failed to get container status \"4984641d8329e7c5007309a1b834931b2229da61d232bf8ad055657d04673b0d\": rpc error: code = NotFound desc = could not find container \"4984641d8329e7c5007309a1b834931b2229da61d232bf8ad055657d04673b0d\": container with ID starting with 4984641d8329e7c5007309a1b834931b2229da61d232bf8ad055657d04673b0d not found: ID does not exist" Nov 21 20:36:03 crc kubenswrapper[4727]: I1121 20:36:03.514996 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5687cd5c-fc6e-459e-9e7f-87167d4fb4ea" path="/var/lib/kubelet/pods/5687cd5c-fc6e-459e-9e7f-87167d4fb4ea/volumes" Nov 21 20:36:11 crc kubenswrapper[4727]: I1121 20:36:11.499431 4727 scope.go:117] "RemoveContainer" containerID="586a1c0353caf0aeb822b92be481b941c0fdb34b453b8251021ce7365988e0c9" Nov 21 20:36:11 crc kubenswrapper[4727]: E1121 20:36:11.500351 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 20:36:20 crc kubenswrapper[4727]: I1121 20:36:20.046516 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-kwcbg"] Nov 21 20:36:20 crc kubenswrapper[4727]: I1121 20:36:20.059130 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-8d3c-account-create-zgpsn"] Nov 21 20:36:20 crc kubenswrapper[4727]: I1121 20:36:20.071674 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-kwcbg"] Nov 21 20:36:20 
crc kubenswrapper[4727]: I1121 20:36:20.082705 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-cwqhb"] Nov 21 20:36:20 crc kubenswrapper[4727]: I1121 20:36:20.092183 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-xsqp9"] Nov 21 20:36:20 crc kubenswrapper[4727]: I1121 20:36:20.101479 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-c2d3-account-create-dc5n4"] Nov 21 20:36:20 crc kubenswrapper[4727]: I1121 20:36:20.110438 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-8d3c-account-create-zgpsn"] Nov 21 20:36:20 crc kubenswrapper[4727]: I1121 20:36:20.119350 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-d6mq9"] Nov 21 20:36:20 crc kubenswrapper[4727]: I1121 20:36:20.131384 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-xsqp9"] Nov 21 20:36:20 crc kubenswrapper[4727]: I1121 20:36:20.140508 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-c2d3-account-create-dc5n4"] Nov 21 20:36:20 crc kubenswrapper[4727]: I1121 20:36:20.149331 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-d6mq9"] Nov 21 20:36:20 crc kubenswrapper[4727]: I1121 20:36:20.158407 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-cwqhb"] Nov 21 20:36:21 crc kubenswrapper[4727]: I1121 20:36:21.514467 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05ba3e0c-281c-4d28-87ca-076a68130e0d" path="/var/lib/kubelet/pods/05ba3e0c-281c-4d28-87ca-076a68130e0d/volumes" Nov 21 20:36:21 crc kubenswrapper[4727]: I1121 20:36:21.516423 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53ad39e2-f848-4b8c-8493-b9a268c6ee5e" path="/var/lib/kubelet/pods/53ad39e2-f848-4b8c-8493-b9a268c6ee5e/volumes" Nov 21 20:36:21 crc kubenswrapper[4727]: I1121 
20:36:21.518173 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7566d068-bef7-4b58-8460-1e259bd2dd94" path="/var/lib/kubelet/pods/7566d068-bef7-4b58-8460-1e259bd2dd94/volumes" Nov 21 20:36:21 crc kubenswrapper[4727]: I1121 20:36:21.520012 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f1bf400-e74d-4100-b45b-3586af918b21" path="/var/lib/kubelet/pods/9f1bf400-e74d-4100-b45b-3586af918b21/volumes" Nov 21 20:36:21 crc kubenswrapper[4727]: I1121 20:36:21.522224 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9e530a2-cd2a-484f-87b8-4e4ef966b4ef" path="/var/lib/kubelet/pods/a9e530a2-cd2a-484f-87b8-4e4ef966b4ef/volumes" Nov 21 20:36:21 crc kubenswrapper[4727]: I1121 20:36:21.522889 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d07fdc4d-df34-4241-8682-04203343eb9c" path="/var/lib/kubelet/pods/d07fdc4d-df34-4241-8682-04203343eb9c/volumes" Nov 21 20:36:24 crc kubenswrapper[4727]: I1121 20:36:24.042828 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-3bfd-account-create-zvmnh"] Nov 21 20:36:24 crc kubenswrapper[4727]: I1121 20:36:24.056411 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-rpv9s"] Nov 21 20:36:24 crc kubenswrapper[4727]: I1121 20:36:24.067173 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-0c83-account-create-6dpfx"] Nov 21 20:36:24 crc kubenswrapper[4727]: I1121 20:36:24.076005 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-0c83-account-create-6dpfx"] Nov 21 20:36:24 crc kubenswrapper[4727]: I1121 20:36:24.085223 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-3bfd-account-create-zvmnh"] Nov 21 20:36:24 crc kubenswrapper[4727]: I1121 20:36:24.096044 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-rpv9s"] Nov 21 20:36:25 crc kubenswrapper[4727]: I1121 20:36:25.677912 
4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69b163ff-e145-4bac-b538-2db537ee665c" path="/var/lib/kubelet/pods/69b163ff-e145-4bac-b538-2db537ee665c/volumes" Nov 21 20:36:25 crc kubenswrapper[4727]: I1121 20:36:25.679080 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd484d31-a5ee-44bc-8d30-fdc1819e0956" path="/var/lib/kubelet/pods/bd484d31-a5ee-44bc-8d30-fdc1819e0956/volumes" Nov 21 20:36:25 crc kubenswrapper[4727]: I1121 20:36:25.680193 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0672730-e181-488f-8472-a20c75dcb285" path="/var/lib/kubelet/pods/e0672730-e181-488f-8472-a20c75dcb285/volumes" Nov 21 20:36:26 crc kubenswrapper[4727]: I1121 20:36:26.499164 4727 scope.go:117] "RemoveContainer" containerID="586a1c0353caf0aeb822b92be481b941c0fdb34b453b8251021ce7365988e0c9" Nov 21 20:36:26 crc kubenswrapper[4727]: E1121 20:36:26.499829 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 20:36:31 crc kubenswrapper[4727]: I1121 20:36:31.029630 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-6fn75"] Nov 21 20:36:31 crc kubenswrapper[4727]: I1121 20:36:31.040624 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-6fn75"] Nov 21 20:36:31 crc kubenswrapper[4727]: I1121 20:36:31.511048 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1cdc530-12b5-4bc8-9ced-8aa3f2f6b5b7" path="/var/lib/kubelet/pods/f1cdc530-12b5-4bc8-9ced-8aa3f2f6b5b7/volumes" Nov 21 20:36:39 crc kubenswrapper[4727]: I1121 20:36:39.499625 4727 
scope.go:117] "RemoveContainer" containerID="586a1c0353caf0aeb822b92be481b941c0fdb34b453b8251021ce7365988e0c9" Nov 21 20:36:39 crc kubenswrapper[4727]: E1121 20:36:39.500476 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 20:36:44 crc kubenswrapper[4727]: I1121 20:36:44.320397 4727 scope.go:117] "RemoveContainer" containerID="6c090a3bf269c18126a4d238684515c66ea87e07d08f2df7d5485ee1bf6fc342" Nov 21 20:36:44 crc kubenswrapper[4727]: I1121 20:36:44.353436 4727 scope.go:117] "RemoveContainer" containerID="edf7f97ab1e8a2cddfd5ac7628a6e2e9efe4426905d83dc27a199b00faf65bbf" Nov 21 20:36:44 crc kubenswrapper[4727]: I1121 20:36:44.409058 4727 scope.go:117] "RemoveContainer" containerID="25deea902644530b5a73b1611f26e6eb80405b74f609796d0f90194c68ab45db" Nov 21 20:36:44 crc kubenswrapper[4727]: I1121 20:36:44.467399 4727 scope.go:117] "RemoveContainer" containerID="5c8c3dc4d59053173950e7b38bab1c8e5215124d824584e09975bf28437d8869" Nov 21 20:36:44 crc kubenswrapper[4727]: I1121 20:36:44.522033 4727 scope.go:117] "RemoveContainer" containerID="ae120e2c0262958c2447f538e8531453f8eed10f0b30166d9de72ca2c124f55c" Nov 21 20:36:44 crc kubenswrapper[4727]: I1121 20:36:44.569718 4727 scope.go:117] "RemoveContainer" containerID="9d7b2a8ca81fc996f4901b82d529ac48b5ee85a84aac5d2abdb470d24e62b061" Nov 21 20:36:44 crc kubenswrapper[4727]: I1121 20:36:44.624426 4727 scope.go:117] "RemoveContainer" containerID="047a3507d50b64b70668af59fd85f0862ca6994c638c99feb1f7428d2a7a65cf" Nov 21 20:36:44 crc kubenswrapper[4727]: I1121 20:36:44.646279 4727 scope.go:117] "RemoveContainer" 
containerID="32fede56cc063c09059cc4dc687d122db2760c62019b2b140477fb00117876a6" Nov 21 20:36:44 crc kubenswrapper[4727]: I1121 20:36:44.671164 4727 scope.go:117] "RemoveContainer" containerID="8bab53af9863844650046c425bae3f1c68b825ac097ea985ecb8cb85f507526b" Nov 21 20:36:44 crc kubenswrapper[4727]: I1121 20:36:44.695688 4727 scope.go:117] "RemoveContainer" containerID="346afd2a27e8cc2bf3b553ed228627ae9af773e2eb3fabdff002aa269cb95aff" Nov 21 20:36:44 crc kubenswrapper[4727]: I1121 20:36:44.716477 4727 scope.go:117] "RemoveContainer" containerID="775d52e0a511a5ef23fd1aa36b47eecd02b260c0cabcde4c0e6718efb156784d" Nov 21 20:36:44 crc kubenswrapper[4727]: I1121 20:36:44.740472 4727 scope.go:117] "RemoveContainer" containerID="b9851a880932ebbd7635cec268176d895b5a617d4a2a5fb811671b4c5a686eac" Nov 21 20:36:44 crc kubenswrapper[4727]: I1121 20:36:44.764937 4727 scope.go:117] "RemoveContainer" containerID="0d60ffd4baadea167764a24b3ff98ce7c1827fa5bd2ffcc0e9fa20fbffaa6a57" Nov 21 20:36:44 crc kubenswrapper[4727]: I1121 20:36:44.788663 4727 scope.go:117] "RemoveContainer" containerID="e3fcd2d247363cd1f71bbfbf409c9fe070c88cee775fb3b0592b4f97bbe0180f" Nov 21 20:36:44 crc kubenswrapper[4727]: I1121 20:36:44.816452 4727 scope.go:117] "RemoveContainer" containerID="6a2900f22baa23f3ffecb622d0c79aa7628f8931967192960f4bfbed77378dce" Nov 21 20:36:44 crc kubenswrapper[4727]: I1121 20:36:44.846214 4727 scope.go:117] "RemoveContainer" containerID="5b39c31d65eca4802f74d79f0758849ca806f2ae4dd64e00ff1bec14fdeedf2e" Nov 21 20:36:44 crc kubenswrapper[4727]: I1121 20:36:44.875304 4727 scope.go:117] "RemoveContainer" containerID="757dc795dea2449240dde0c8a170fce2c2b4c8abe40bde11b75791208f31fe02" Nov 21 20:36:44 crc kubenswrapper[4727]: I1121 20:36:44.899356 4727 scope.go:117] "RemoveContainer" containerID="79faaabeadf7ac09ea73c9e84dcaed2da66b92e28754d85d566fd0ae1d59ff90" Nov 21 20:36:44 crc kubenswrapper[4727]: I1121 20:36:44.926083 4727 scope.go:117] "RemoveContainer" 
containerID="2714fc1dd88715afa9668fd952f36430b4de018be177301aa6e994633755df1d" Nov 21 20:36:44 crc kubenswrapper[4727]: I1121 20:36:44.952724 4727 scope.go:117] "RemoveContainer" containerID="01e0394c13c7cebef5eb7403e7c0ebfae2acea8d43e1b9af2fb5d19f89a76801" Nov 21 20:36:51 crc kubenswrapper[4727]: I1121 20:36:51.500281 4727 scope.go:117] "RemoveContainer" containerID="586a1c0353caf0aeb822b92be481b941c0fdb34b453b8251021ce7365988e0c9" Nov 21 20:36:51 crc kubenswrapper[4727]: E1121 20:36:51.501349 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 20:37:04 crc kubenswrapper[4727]: I1121 20:37:04.499654 4727 scope.go:117] "RemoveContainer" containerID="586a1c0353caf0aeb822b92be481b941c0fdb34b453b8251021ce7365988e0c9" Nov 21 20:37:04 crc kubenswrapper[4727]: E1121 20:37:04.500354 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 20:37:06 crc kubenswrapper[4727]: I1121 20:37:06.051291 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-2t297"] Nov 21 20:37:06 crc kubenswrapper[4727]: I1121 20:37:06.064183 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-2t297"] Nov 21 20:37:07 crc kubenswrapper[4727]: I1121 20:37:07.513095 4727 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b0e3c92-f23c-4257-856c-3bb4496913e2" path="/var/lib/kubelet/pods/1b0e3c92-f23c-4257-856c-3bb4496913e2/volumes" Nov 21 20:37:17 crc kubenswrapper[4727]: I1121 20:37:17.531262 4727 generic.go:334] "Generic (PLEG): container finished" podID="fe06ea2f-15b5-409a-93b8-6c40a629c029" containerID="81c5f636d58047a24be89a6d23d787b4c59b0ccbbdf7913fc2b8f3a6e6f609e8" exitCode=0 Nov 21 20:37:17 crc kubenswrapper[4727]: I1121 20:37:17.531300 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hkzn2" event={"ID":"fe06ea2f-15b5-409a-93b8-6c40a629c029","Type":"ContainerDied","Data":"81c5f636d58047a24be89a6d23d787b4c59b0ccbbdf7913fc2b8f3a6e6f609e8"} Nov 21 20:37:18 crc kubenswrapper[4727]: I1121 20:37:18.499201 4727 scope.go:117] "RemoveContainer" containerID="586a1c0353caf0aeb822b92be481b941c0fdb34b453b8251021ce7365988e0c9" Nov 21 20:37:18 crc kubenswrapper[4727]: E1121 20:37:18.499797 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 20:37:19 crc kubenswrapper[4727]: I1121 20:37:19.031675 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hkzn2" Nov 21 20:37:19 crc kubenswrapper[4727]: I1121 20:37:19.179237 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fe06ea2f-15b5-409a-93b8-6c40a629c029-inventory\") pod \"fe06ea2f-15b5-409a-93b8-6c40a629c029\" (UID: \"fe06ea2f-15b5-409a-93b8-6c40a629c029\") " Nov 21 20:37:19 crc kubenswrapper[4727]: I1121 20:37:19.179822 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9t25m\" (UniqueName: \"kubernetes.io/projected/fe06ea2f-15b5-409a-93b8-6c40a629c029-kube-api-access-9t25m\") pod \"fe06ea2f-15b5-409a-93b8-6c40a629c029\" (UID: \"fe06ea2f-15b5-409a-93b8-6c40a629c029\") " Nov 21 20:37:19 crc kubenswrapper[4727]: I1121 20:37:19.179978 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fe06ea2f-15b5-409a-93b8-6c40a629c029-ssh-key\") pod \"fe06ea2f-15b5-409a-93b8-6c40a629c029\" (UID: \"fe06ea2f-15b5-409a-93b8-6c40a629c029\") " Nov 21 20:37:19 crc kubenswrapper[4727]: I1121 20:37:19.186292 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe06ea2f-15b5-409a-93b8-6c40a629c029-kube-api-access-9t25m" (OuterVolumeSpecName: "kube-api-access-9t25m") pod "fe06ea2f-15b5-409a-93b8-6c40a629c029" (UID: "fe06ea2f-15b5-409a-93b8-6c40a629c029"). InnerVolumeSpecName "kube-api-access-9t25m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:37:19 crc kubenswrapper[4727]: I1121 20:37:19.213176 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe06ea2f-15b5-409a-93b8-6c40a629c029-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "fe06ea2f-15b5-409a-93b8-6c40a629c029" (UID: "fe06ea2f-15b5-409a-93b8-6c40a629c029"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:37:19 crc kubenswrapper[4727]: I1121 20:37:19.242386 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe06ea2f-15b5-409a-93b8-6c40a629c029-inventory" (OuterVolumeSpecName: "inventory") pod "fe06ea2f-15b5-409a-93b8-6c40a629c029" (UID: "fe06ea2f-15b5-409a-93b8-6c40a629c029"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:37:19 crc kubenswrapper[4727]: I1121 20:37:19.282882 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9t25m\" (UniqueName: \"kubernetes.io/projected/fe06ea2f-15b5-409a-93b8-6c40a629c029-kube-api-access-9t25m\") on node \"crc\" DevicePath \"\"" Nov 21 20:37:19 crc kubenswrapper[4727]: I1121 20:37:19.282922 4727 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fe06ea2f-15b5-409a-93b8-6c40a629c029-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 20:37:19 crc kubenswrapper[4727]: I1121 20:37:19.282935 4727 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fe06ea2f-15b5-409a-93b8-6c40a629c029-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 20:37:19 crc kubenswrapper[4727]: I1121 20:37:19.556625 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hkzn2" event={"ID":"fe06ea2f-15b5-409a-93b8-6c40a629c029","Type":"ContainerDied","Data":"f9857d7d8df959fb461d5f0abfea5bede5770a1fdc8aaad7250c268e35b214fd"} Nov 21 20:37:19 crc kubenswrapper[4727]: I1121 20:37:19.556673 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9857d7d8df959fb461d5f0abfea5bede5770a1fdc8aaad7250c268e35b214fd" Nov 21 20:37:19 crc kubenswrapper[4727]: I1121 20:37:19.556707 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hkzn2" Nov 21 20:37:19 crc kubenswrapper[4727]: I1121 20:37:19.640440 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-l5vgp"] Nov 21 20:37:19 crc kubenswrapper[4727]: E1121 20:37:19.641187 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5687cd5c-fc6e-459e-9e7f-87167d4fb4ea" containerName="extract-content" Nov 21 20:37:19 crc kubenswrapper[4727]: I1121 20:37:19.641223 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="5687cd5c-fc6e-459e-9e7f-87167d4fb4ea" containerName="extract-content" Nov 21 20:37:19 crc kubenswrapper[4727]: E1121 20:37:19.641255 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9e77093-b845-4dbc-928a-5b3de1e55b44" containerName="extract-content" Nov 21 20:37:19 crc kubenswrapper[4727]: I1121 20:37:19.641265 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9e77093-b845-4dbc-928a-5b3de1e55b44" containerName="extract-content" Nov 21 20:37:19 crc kubenswrapper[4727]: E1121 20:37:19.641288 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9e77093-b845-4dbc-928a-5b3de1e55b44" containerName="registry-server" Nov 21 20:37:19 crc kubenswrapper[4727]: I1121 20:37:19.641298 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9e77093-b845-4dbc-928a-5b3de1e55b44" containerName="registry-server" Nov 21 20:37:19 crc kubenswrapper[4727]: E1121 20:37:19.641315 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5687cd5c-fc6e-459e-9e7f-87167d4fb4ea" containerName="extract-utilities" Nov 21 20:37:19 crc kubenswrapper[4727]: I1121 20:37:19.641325 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="5687cd5c-fc6e-459e-9e7f-87167d4fb4ea" containerName="extract-utilities" Nov 21 20:37:19 crc kubenswrapper[4727]: E1121 20:37:19.641341 4727 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="f9e77093-b845-4dbc-928a-5b3de1e55b44" containerName="extract-utilities" Nov 21 20:37:19 crc kubenswrapper[4727]: I1121 20:37:19.641349 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9e77093-b845-4dbc-928a-5b3de1e55b44" containerName="extract-utilities" Nov 21 20:37:19 crc kubenswrapper[4727]: E1121 20:37:19.641363 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5687cd5c-fc6e-459e-9e7f-87167d4fb4ea" containerName="registry-server" Nov 21 20:37:19 crc kubenswrapper[4727]: I1121 20:37:19.641371 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="5687cd5c-fc6e-459e-9e7f-87167d4fb4ea" containerName="registry-server" Nov 21 20:37:19 crc kubenswrapper[4727]: E1121 20:37:19.641387 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe06ea2f-15b5-409a-93b8-6c40a629c029" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 21 20:37:19 crc kubenswrapper[4727]: I1121 20:37:19.641396 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe06ea2f-15b5-409a-93b8-6c40a629c029" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 21 20:37:19 crc kubenswrapper[4727]: I1121 20:37:19.641736 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9e77093-b845-4dbc-928a-5b3de1e55b44" containerName="registry-server" Nov 21 20:37:19 crc kubenswrapper[4727]: I1121 20:37:19.641786 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="5687cd5c-fc6e-459e-9e7f-87167d4fb4ea" containerName="registry-server" Nov 21 20:37:19 crc kubenswrapper[4727]: I1121 20:37:19.641808 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe06ea2f-15b5-409a-93b8-6c40a629c029" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 21 20:37:19 crc kubenswrapper[4727]: I1121 20:37:19.643069 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-l5vgp" Nov 21 20:37:19 crc kubenswrapper[4727]: I1121 20:37:19.645586 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 21 20:37:19 crc kubenswrapper[4727]: I1121 20:37:19.647769 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 21 20:37:19 crc kubenswrapper[4727]: I1121 20:37:19.647995 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 20:37:19 crc kubenswrapper[4727]: I1121 20:37:19.648713 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9h8bp" Nov 21 20:37:19 crc kubenswrapper[4727]: I1121 20:37:19.651195 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-l5vgp"] Nov 21 20:37:19 crc kubenswrapper[4727]: I1121 20:37:19.796928 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cc54f580-9f3c-451d-a6c3-a9d5c12c7915-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-l5vgp\" (UID: \"cc54f580-9f3c-451d-a6c3-a9d5c12c7915\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-l5vgp" Nov 21 20:37:19 crc kubenswrapper[4727]: I1121 20:37:19.797101 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cc54f580-9f3c-451d-a6c3-a9d5c12c7915-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-l5vgp\" (UID: \"cc54f580-9f3c-451d-a6c3-a9d5c12c7915\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-l5vgp" Nov 21 20:37:19 crc kubenswrapper[4727]: I1121 20:37:19.797209 4727 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vs8z8\" (UniqueName: \"kubernetes.io/projected/cc54f580-9f3c-451d-a6c3-a9d5c12c7915-kube-api-access-vs8z8\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-l5vgp\" (UID: \"cc54f580-9f3c-451d-a6c3-a9d5c12c7915\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-l5vgp" Nov 21 20:37:19 crc kubenswrapper[4727]: I1121 20:37:19.899025 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cc54f580-9f3c-451d-a6c3-a9d5c12c7915-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-l5vgp\" (UID: \"cc54f580-9f3c-451d-a6c3-a9d5c12c7915\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-l5vgp" Nov 21 20:37:19 crc kubenswrapper[4727]: I1121 20:37:19.899158 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vs8z8\" (UniqueName: \"kubernetes.io/projected/cc54f580-9f3c-451d-a6c3-a9d5c12c7915-kube-api-access-vs8z8\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-l5vgp\" (UID: \"cc54f580-9f3c-451d-a6c3-a9d5c12c7915\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-l5vgp" Nov 21 20:37:19 crc kubenswrapper[4727]: I1121 20:37:19.899588 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cc54f580-9f3c-451d-a6c3-a9d5c12c7915-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-l5vgp\" (UID: \"cc54f580-9f3c-451d-a6c3-a9d5c12c7915\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-l5vgp" Nov 21 20:37:19 crc kubenswrapper[4727]: I1121 20:37:19.903291 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cc54f580-9f3c-451d-a6c3-a9d5c12c7915-inventory\") pod 
\"configure-network-edpm-deployment-openstack-edpm-ipam-l5vgp\" (UID: \"cc54f580-9f3c-451d-a6c3-a9d5c12c7915\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-l5vgp" Nov 21 20:37:19 crc kubenswrapper[4727]: I1121 20:37:19.905526 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cc54f580-9f3c-451d-a6c3-a9d5c12c7915-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-l5vgp\" (UID: \"cc54f580-9f3c-451d-a6c3-a9d5c12c7915\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-l5vgp" Nov 21 20:37:19 crc kubenswrapper[4727]: I1121 20:37:19.914216 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vs8z8\" (UniqueName: \"kubernetes.io/projected/cc54f580-9f3c-451d-a6c3-a9d5c12c7915-kube-api-access-vs8z8\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-l5vgp\" (UID: \"cc54f580-9f3c-451d-a6c3-a9d5c12c7915\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-l5vgp" Nov 21 20:37:19 crc kubenswrapper[4727]: I1121 20:37:19.965391 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-l5vgp" Nov 21 20:37:20 crc kubenswrapper[4727]: I1121 20:37:20.509462 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-l5vgp"] Nov 21 20:37:20 crc kubenswrapper[4727]: I1121 20:37:20.516062 4727 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 21 20:37:20 crc kubenswrapper[4727]: I1121 20:37:20.573843 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-l5vgp" event={"ID":"cc54f580-9f3c-451d-a6c3-a9d5c12c7915","Type":"ContainerStarted","Data":"5f4583ff3b2c18aa0f8814a4dbe4a99313bd3dd874eca9bcd0321911dabdf1a8"} Nov 21 20:37:21 crc kubenswrapper[4727]: I1121 20:37:21.584666 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-l5vgp" event={"ID":"cc54f580-9f3c-451d-a6c3-a9d5c12c7915","Type":"ContainerStarted","Data":"5a9a01d9b0ca4a412d31037e074cb1a21217e67252eccad4f483bd1bae71bc1e"} Nov 21 20:37:21 crc kubenswrapper[4727]: I1121 20:37:21.604225 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-l5vgp" podStartSLOduration=2.209685721 podStartE2EDuration="2.604202825s" podCreationTimestamp="2025-11-21 20:37:19 +0000 UTC" firstStartedPulling="2025-11-21 20:37:20.515729123 +0000 UTC m=+1845.701914167" lastFinishedPulling="2025-11-21 20:37:20.910246227 +0000 UTC m=+1846.096431271" observedRunningTime="2025-11-21 20:37:21.602536725 +0000 UTC m=+1846.788721769" watchObservedRunningTime="2025-11-21 20:37:21.604202825 +0000 UTC m=+1846.790387869" Nov 21 20:37:24 crc kubenswrapper[4727]: I1121 20:37:24.059226 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-5z7h7"] Nov 21 20:37:24 crc 
kubenswrapper[4727]: I1121 20:37:24.076949 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-5x7sl"] Nov 21 20:37:24 crc kubenswrapper[4727]: I1121 20:37:24.090862 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-5z7h7"] Nov 21 20:37:24 crc kubenswrapper[4727]: I1121 20:37:24.100882 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-5x7sl"] Nov 21 20:37:25 crc kubenswrapper[4727]: I1121 20:37:25.543700 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f8801f4-f168-4ae7-b364-cd95a72b3a66" path="/var/lib/kubelet/pods/6f8801f4-f168-4ae7-b364-cd95a72b3a66/volumes" Nov 21 20:37:25 crc kubenswrapper[4727]: I1121 20:37:25.545313 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d06d2665-106c-4478-8621-a196d0267ed5" path="/var/lib/kubelet/pods/d06d2665-106c-4478-8621-a196d0267ed5/volumes" Nov 21 20:37:30 crc kubenswrapper[4727]: I1121 20:37:30.499180 4727 scope.go:117] "RemoveContainer" containerID="586a1c0353caf0aeb822b92be481b941c0fdb34b453b8251021ce7365988e0c9" Nov 21 20:37:30 crc kubenswrapper[4727]: E1121 20:37:30.499978 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 20:37:34 crc kubenswrapper[4727]: I1121 20:37:34.032239 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-nvhzz"] Nov 21 20:37:34 crc kubenswrapper[4727]: I1121 20:37:34.043616 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-nvhzz"] Nov 21 20:37:35 crc kubenswrapper[4727]: I1121 
20:37:35.030106 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-kjphx"] Nov 21 20:37:35 crc kubenswrapper[4727]: I1121 20:37:35.041277 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-kjphx"] Nov 21 20:37:35 crc kubenswrapper[4727]: I1121 20:37:35.052511 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-kzpzx"] Nov 21 20:37:35 crc kubenswrapper[4727]: I1121 20:37:35.062447 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-kzpzx"] Nov 21 20:37:35 crc kubenswrapper[4727]: I1121 20:37:35.540974 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3219ae94-1940-49e8-851c-102a14d22e75" path="/var/lib/kubelet/pods/3219ae94-1940-49e8-851c-102a14d22e75/volumes" Nov 21 20:37:35 crc kubenswrapper[4727]: I1121 20:37:35.547642 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba5291ec-9ad1-4ce3-8794-6b6ca611b277" path="/var/lib/kubelet/pods/ba5291ec-9ad1-4ce3-8794-6b6ca611b277/volumes" Nov 21 20:37:35 crc kubenswrapper[4727]: I1121 20:37:35.548820 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f819a7d6-b6ab-409c-aaf2-e5044d9317d5" path="/var/lib/kubelet/pods/f819a7d6-b6ab-409c-aaf2-e5044d9317d5/volumes" Nov 21 20:37:45 crc kubenswrapper[4727]: I1121 20:37:45.459637 4727 scope.go:117] "RemoveContainer" containerID="40579ab4c3190291bb8115f01a56949aa96618d929c80bc83463f5d1fa4c4628" Nov 21 20:37:45 crc kubenswrapper[4727]: I1121 20:37:45.488290 4727 scope.go:117] "RemoveContainer" containerID="c5301f9b0f9fa13f65e0573428c3cbfa151b7954125dddc0cc5e98f4c7b39039" Nov 21 20:37:45 crc kubenswrapper[4727]: I1121 20:37:45.508095 4727 scope.go:117] "RemoveContainer" containerID="586a1c0353caf0aeb822b92be481b941c0fdb34b453b8251021ce7365988e0c9" Nov 21 20:37:45 crc kubenswrapper[4727]: E1121 20:37:45.508382 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 20:37:45 crc kubenswrapper[4727]: I1121 20:37:45.549804 4727 scope.go:117] "RemoveContainer" containerID="5cf3b56280d8a5d84f025edc25ea7b41c4359d8994cf362cf2a775c79c9a2942" Nov 21 20:37:45 crc kubenswrapper[4727]: I1121 20:37:45.616693 4727 scope.go:117] "RemoveContainer" containerID="54520a982d3c60ed9d3e47f28be39eea2ecd8ee935c5eea737bd465ccd0236b4" Nov 21 20:37:45 crc kubenswrapper[4727]: I1121 20:37:45.656121 4727 scope.go:117] "RemoveContainer" containerID="173c40fece6202378235c005769078f5a05c9488e0041f7d20fd1dd997109f34" Nov 21 20:37:45 crc kubenswrapper[4727]: I1121 20:37:45.718008 4727 scope.go:117] "RemoveContainer" containerID="f27e6c54022e519d0083b15f3850f40204973d1d4c1fb611daeda5ce2ba58dc6" Nov 21 20:37:58 crc kubenswrapper[4727]: I1121 20:37:58.499670 4727 scope.go:117] "RemoveContainer" containerID="586a1c0353caf0aeb822b92be481b941c0fdb34b453b8251021ce7365988e0c9" Nov 21 20:37:58 crc kubenswrapper[4727]: E1121 20:37:58.500524 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 20:38:09 crc kubenswrapper[4727]: I1121 20:38:09.499994 4727 scope.go:117] "RemoveContainer" containerID="586a1c0353caf0aeb822b92be481b941c0fdb34b453b8251021ce7365988e0c9" Nov 21 20:38:09 crc kubenswrapper[4727]: E1121 20:38:09.500836 4727 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 20:38:22 crc kubenswrapper[4727]: I1121 20:38:22.499670 4727 scope.go:117] "RemoveContainer" containerID="586a1c0353caf0aeb822b92be481b941c0fdb34b453b8251021ce7365988e0c9" Nov 21 20:38:23 crc kubenswrapper[4727]: I1121 20:38:23.321872 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" event={"ID":"b58aef8f-f223-47d8-a2e6-4a80aeeeec42","Type":"ContainerStarted","Data":"4261fd6f25fa69562f5c96f8f8cb06e32873a62c894efb833ccfecb95710b418"} Nov 21 20:38:26 crc kubenswrapper[4727]: I1121 20:38:26.041089 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-xhxc8"] Nov 21 20:38:26 crc kubenswrapper[4727]: I1121 20:38:26.053067 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-xhxc8"] Nov 21 20:38:27 crc kubenswrapper[4727]: I1121 20:38:27.527829 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8183461-41a5-4d09-aeef-f3d431e6d71b" path="/var/lib/kubelet/pods/f8183461-41a5-4d09-aeef-f3d431e6d71b/volumes" Nov 21 20:38:28 crc kubenswrapper[4727]: I1121 20:38:28.040592 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-xfpm2"] Nov 21 20:38:28 crc kubenswrapper[4727]: I1121 20:38:28.061004 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-ae35-account-create-tdvf7"] Nov 21 20:38:28 crc kubenswrapper[4727]: I1121 20:38:28.072198 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-cell1-db-create-26lpd"] Nov 21 20:38:28 crc kubenswrapper[4727]: I1121 20:38:28.081605 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-xfpm2"] Nov 21 20:38:28 crc kubenswrapper[4727]: I1121 20:38:28.090621 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-005d-account-create-g2f2m"] Nov 21 20:38:28 crc kubenswrapper[4727]: I1121 20:38:28.100809 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-ae35-account-create-tdvf7"] Nov 21 20:38:28 crc kubenswrapper[4727]: I1121 20:38:28.109728 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-26lpd"] Nov 21 20:38:28 crc kubenswrapper[4727]: I1121 20:38:28.118371 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-005d-account-create-g2f2m"] Nov 21 20:38:28 crc kubenswrapper[4727]: I1121 20:38:28.128882 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-2ff4-account-create-ds2tt"] Nov 21 20:38:28 crc kubenswrapper[4727]: I1121 20:38:28.137400 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-2ff4-account-create-ds2tt"] Nov 21 20:38:28 crc kubenswrapper[4727]: I1121 20:38:28.377475 4727 generic.go:334] "Generic (PLEG): container finished" podID="cc54f580-9f3c-451d-a6c3-a9d5c12c7915" containerID="5a9a01d9b0ca4a412d31037e074cb1a21217e67252eccad4f483bd1bae71bc1e" exitCode=0 Nov 21 20:38:28 crc kubenswrapper[4727]: I1121 20:38:28.378118 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-l5vgp" event={"ID":"cc54f580-9f3c-451d-a6c3-a9d5c12c7915","Type":"ContainerDied","Data":"5a9a01d9b0ca4a412d31037e074cb1a21217e67252eccad4f483bd1bae71bc1e"} Nov 21 20:38:29 crc kubenswrapper[4727]: I1121 20:38:29.515298 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="21368668-5984-4a3f-915a-06c8a959fefd" path="/var/lib/kubelet/pods/21368668-5984-4a3f-915a-06c8a959fefd/volumes" Nov 21 20:38:29 crc kubenswrapper[4727]: I1121 20:38:29.517008 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="436b53f0-6ba8-42e8-89eb-f853b1308cbf" path="/var/lib/kubelet/pods/436b53f0-6ba8-42e8-89eb-f853b1308cbf/volumes" Nov 21 20:38:29 crc kubenswrapper[4727]: I1121 20:38:29.518062 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57e425f9-b08d-412d-9491-f1ffe7e5d54f" path="/var/lib/kubelet/pods/57e425f9-b08d-412d-9491-f1ffe7e5d54f/volumes" Nov 21 20:38:29 crc kubenswrapper[4727]: I1121 20:38:29.518885 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93d541d6-cea3-49c4-9f5c-d0484de3fafb" path="/var/lib/kubelet/pods/93d541d6-cea3-49c4-9f5c-d0484de3fafb/volumes" Nov 21 20:38:29 crc kubenswrapper[4727]: I1121 20:38:29.520542 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0422bc7-ea04-4d95-b240-608a1c0d16ec" path="/var/lib/kubelet/pods/f0422bc7-ea04-4d95-b240-608a1c0d16ec/volumes" Nov 21 20:38:29 crc kubenswrapper[4727]: I1121 20:38:29.869729 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-l5vgp" Nov 21 20:38:29 crc kubenswrapper[4727]: I1121 20:38:29.988117 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vs8z8\" (UniqueName: \"kubernetes.io/projected/cc54f580-9f3c-451d-a6c3-a9d5c12c7915-kube-api-access-vs8z8\") pod \"cc54f580-9f3c-451d-a6c3-a9d5c12c7915\" (UID: \"cc54f580-9f3c-451d-a6c3-a9d5c12c7915\") " Nov 21 20:38:29 crc kubenswrapper[4727]: I1121 20:38:29.989042 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cc54f580-9f3c-451d-a6c3-a9d5c12c7915-inventory\") pod \"cc54f580-9f3c-451d-a6c3-a9d5c12c7915\" (UID: \"cc54f580-9f3c-451d-a6c3-a9d5c12c7915\") " Nov 21 20:38:29 crc kubenswrapper[4727]: I1121 20:38:29.989086 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cc54f580-9f3c-451d-a6c3-a9d5c12c7915-ssh-key\") pod \"cc54f580-9f3c-451d-a6c3-a9d5c12c7915\" (UID: \"cc54f580-9f3c-451d-a6c3-a9d5c12c7915\") " Nov 21 20:38:29 crc kubenswrapper[4727]: I1121 20:38:29.997291 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc54f580-9f3c-451d-a6c3-a9d5c12c7915-kube-api-access-vs8z8" (OuterVolumeSpecName: "kube-api-access-vs8z8") pod "cc54f580-9f3c-451d-a6c3-a9d5c12c7915" (UID: "cc54f580-9f3c-451d-a6c3-a9d5c12c7915"). InnerVolumeSpecName "kube-api-access-vs8z8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:38:30 crc kubenswrapper[4727]: I1121 20:38:30.026744 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc54f580-9f3c-451d-a6c3-a9d5c12c7915-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "cc54f580-9f3c-451d-a6c3-a9d5c12c7915" (UID: "cc54f580-9f3c-451d-a6c3-a9d5c12c7915"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:38:30 crc kubenswrapper[4727]: I1121 20:38:30.036219 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc54f580-9f3c-451d-a6c3-a9d5c12c7915-inventory" (OuterVolumeSpecName: "inventory") pod "cc54f580-9f3c-451d-a6c3-a9d5c12c7915" (UID: "cc54f580-9f3c-451d-a6c3-a9d5c12c7915"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:38:30 crc kubenswrapper[4727]: I1121 20:38:30.091380 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vs8z8\" (UniqueName: \"kubernetes.io/projected/cc54f580-9f3c-451d-a6c3-a9d5c12c7915-kube-api-access-vs8z8\") on node \"crc\" DevicePath \"\"" Nov 21 20:38:30 crc kubenswrapper[4727]: I1121 20:38:30.091417 4727 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cc54f580-9f3c-451d-a6c3-a9d5c12c7915-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 20:38:30 crc kubenswrapper[4727]: I1121 20:38:30.091429 4727 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cc54f580-9f3c-451d-a6c3-a9d5c12c7915-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 20:38:30 crc kubenswrapper[4727]: I1121 20:38:30.403919 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-l5vgp" event={"ID":"cc54f580-9f3c-451d-a6c3-a9d5c12c7915","Type":"ContainerDied","Data":"5f4583ff3b2c18aa0f8814a4dbe4a99313bd3dd874eca9bcd0321911dabdf1a8"} Nov 21 20:38:30 crc kubenswrapper[4727]: I1121 20:38:30.403976 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f4583ff3b2c18aa0f8814a4dbe4a99313bd3dd874eca9bcd0321911dabdf1a8" Nov 21 20:38:30 crc kubenswrapper[4727]: I1121 20:38:30.403983 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-l5vgp" Nov 21 20:38:30 crc kubenswrapper[4727]: I1121 20:38:30.509804 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f8g4k"] Nov 21 20:38:30 crc kubenswrapper[4727]: E1121 20:38:30.511200 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc54f580-9f3c-451d-a6c3-a9d5c12c7915" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 21 20:38:30 crc kubenswrapper[4727]: I1121 20:38:30.511241 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc54f580-9f3c-451d-a6c3-a9d5c12c7915" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 21 20:38:30 crc kubenswrapper[4727]: I1121 20:38:30.511710 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc54f580-9f3c-451d-a6c3-a9d5c12c7915" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 21 20:38:30 crc kubenswrapper[4727]: I1121 20:38:30.513205 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f8g4k" Nov 21 20:38:30 crc kubenswrapper[4727]: I1121 20:38:30.515675 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 21 20:38:30 crc kubenswrapper[4727]: I1121 20:38:30.515905 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 20:38:30 crc kubenswrapper[4727]: I1121 20:38:30.515931 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9h8bp" Nov 21 20:38:30 crc kubenswrapper[4727]: I1121 20:38:30.521716 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 21 20:38:30 crc kubenswrapper[4727]: I1121 20:38:30.523786 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f8g4k"] Nov 21 20:38:30 crc kubenswrapper[4727]: I1121 20:38:30.615194 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6316777-a5fe-46c7-9c41-5a0bb55f38d7-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-f8g4k\" (UID: \"e6316777-a5fe-46c7-9c41-5a0bb55f38d7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f8g4k" Nov 21 20:38:30 crc kubenswrapper[4727]: I1121 20:38:30.615594 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e6316777-a5fe-46c7-9c41-5a0bb55f38d7-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-f8g4k\" (UID: \"e6316777-a5fe-46c7-9c41-5a0bb55f38d7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f8g4k" Nov 21 20:38:30 crc kubenswrapper[4727]: I1121 20:38:30.615753 4727 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4z9s8\" (UniqueName: \"kubernetes.io/projected/e6316777-a5fe-46c7-9c41-5a0bb55f38d7-kube-api-access-4z9s8\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-f8g4k\" (UID: \"e6316777-a5fe-46c7-9c41-5a0bb55f38d7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f8g4k" Nov 21 20:38:30 crc kubenswrapper[4727]: I1121 20:38:30.717085 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e6316777-a5fe-46c7-9c41-5a0bb55f38d7-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-f8g4k\" (UID: \"e6316777-a5fe-46c7-9c41-5a0bb55f38d7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f8g4k" Nov 21 20:38:30 crc kubenswrapper[4727]: I1121 20:38:30.717284 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4z9s8\" (UniqueName: \"kubernetes.io/projected/e6316777-a5fe-46c7-9c41-5a0bb55f38d7-kube-api-access-4z9s8\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-f8g4k\" (UID: \"e6316777-a5fe-46c7-9c41-5a0bb55f38d7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f8g4k" Nov 21 20:38:30 crc kubenswrapper[4727]: I1121 20:38:30.717389 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6316777-a5fe-46c7-9c41-5a0bb55f38d7-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-f8g4k\" (UID: \"e6316777-a5fe-46c7-9c41-5a0bb55f38d7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f8g4k" Nov 21 20:38:30 crc kubenswrapper[4727]: I1121 20:38:30.721523 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e6316777-a5fe-46c7-9c41-5a0bb55f38d7-ssh-key\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-f8g4k\" (UID: \"e6316777-a5fe-46c7-9c41-5a0bb55f38d7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f8g4k" Nov 21 20:38:30 crc kubenswrapper[4727]: I1121 20:38:30.724822 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6316777-a5fe-46c7-9c41-5a0bb55f38d7-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-f8g4k\" (UID: \"e6316777-a5fe-46c7-9c41-5a0bb55f38d7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f8g4k" Nov 21 20:38:30 crc kubenswrapper[4727]: I1121 20:38:30.734409 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4z9s8\" (UniqueName: \"kubernetes.io/projected/e6316777-a5fe-46c7-9c41-5a0bb55f38d7-kube-api-access-4z9s8\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-f8g4k\" (UID: \"e6316777-a5fe-46c7-9c41-5a0bb55f38d7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f8g4k" Nov 21 20:38:30 crc kubenswrapper[4727]: I1121 20:38:30.847422 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f8g4k" Nov 21 20:38:31 crc kubenswrapper[4727]: I1121 20:38:31.350371 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f8g4k"] Nov 21 20:38:31 crc kubenswrapper[4727]: I1121 20:38:31.416204 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f8g4k" event={"ID":"e6316777-a5fe-46c7-9c41-5a0bb55f38d7","Type":"ContainerStarted","Data":"10ff57e98980251d8da3997f9ccc0ee629e44ff0a24c36c58364ef1c979a7bb1"} Nov 21 20:38:32 crc kubenswrapper[4727]: I1121 20:38:32.430589 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f8g4k" event={"ID":"e6316777-a5fe-46c7-9c41-5a0bb55f38d7","Type":"ContainerStarted","Data":"f0f683c2a7ddb36638bb6a8cfd285d39dcf31fbac6a2e43e7b0e7140fd523298"} Nov 21 20:38:32 crc kubenswrapper[4727]: I1121 20:38:32.462428 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f8g4k" podStartSLOduration=2.0579665560000002 podStartE2EDuration="2.462406879s" podCreationTimestamp="2025-11-21 20:38:30 +0000 UTC" firstStartedPulling="2025-11-21 20:38:31.354178821 +0000 UTC m=+1916.540363875" lastFinishedPulling="2025-11-21 20:38:31.758619154 +0000 UTC m=+1916.944804198" observedRunningTime="2025-11-21 20:38:32.446800713 +0000 UTC m=+1917.632985797" watchObservedRunningTime="2025-11-21 20:38:32.462406879 +0000 UTC m=+1917.648591933" Nov 21 20:38:36 crc kubenswrapper[4727]: I1121 20:38:36.482056 4727 generic.go:334] "Generic (PLEG): container finished" podID="e6316777-a5fe-46c7-9c41-5a0bb55f38d7" containerID="f0f683c2a7ddb36638bb6a8cfd285d39dcf31fbac6a2e43e7b0e7140fd523298" exitCode=0 Nov 21 20:38:36 crc kubenswrapper[4727]: I1121 20:38:36.482177 4727 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f8g4k" event={"ID":"e6316777-a5fe-46c7-9c41-5a0bb55f38d7","Type":"ContainerDied","Data":"f0f683c2a7ddb36638bb6a8cfd285d39dcf31fbac6a2e43e7b0e7140fd523298"} Nov 21 20:38:37 crc kubenswrapper[4727]: I1121 20:38:37.967409 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f8g4k" Nov 21 20:38:38 crc kubenswrapper[4727]: I1121 20:38:38.021757 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4z9s8\" (UniqueName: \"kubernetes.io/projected/e6316777-a5fe-46c7-9c41-5a0bb55f38d7-kube-api-access-4z9s8\") pod \"e6316777-a5fe-46c7-9c41-5a0bb55f38d7\" (UID: \"e6316777-a5fe-46c7-9c41-5a0bb55f38d7\") " Nov 21 20:38:38 crc kubenswrapper[4727]: I1121 20:38:38.022150 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6316777-a5fe-46c7-9c41-5a0bb55f38d7-inventory\") pod \"e6316777-a5fe-46c7-9c41-5a0bb55f38d7\" (UID: \"e6316777-a5fe-46c7-9c41-5a0bb55f38d7\") " Nov 21 20:38:38 crc kubenswrapper[4727]: I1121 20:38:38.022220 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e6316777-a5fe-46c7-9c41-5a0bb55f38d7-ssh-key\") pod \"e6316777-a5fe-46c7-9c41-5a0bb55f38d7\" (UID: \"e6316777-a5fe-46c7-9c41-5a0bb55f38d7\") " Nov 21 20:38:38 crc kubenswrapper[4727]: I1121 20:38:38.027451 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6316777-a5fe-46c7-9c41-5a0bb55f38d7-kube-api-access-4z9s8" (OuterVolumeSpecName: "kube-api-access-4z9s8") pod "e6316777-a5fe-46c7-9c41-5a0bb55f38d7" (UID: "e6316777-a5fe-46c7-9c41-5a0bb55f38d7"). InnerVolumeSpecName "kube-api-access-4z9s8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:38:38 crc kubenswrapper[4727]: I1121 20:38:38.054544 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6316777-a5fe-46c7-9c41-5a0bb55f38d7-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e6316777-a5fe-46c7-9c41-5a0bb55f38d7" (UID: "e6316777-a5fe-46c7-9c41-5a0bb55f38d7"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:38:38 crc kubenswrapper[4727]: I1121 20:38:38.054607 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6316777-a5fe-46c7-9c41-5a0bb55f38d7-inventory" (OuterVolumeSpecName: "inventory") pod "e6316777-a5fe-46c7-9c41-5a0bb55f38d7" (UID: "e6316777-a5fe-46c7-9c41-5a0bb55f38d7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:38:38 crc kubenswrapper[4727]: I1121 20:38:38.124343 4727 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6316777-a5fe-46c7-9c41-5a0bb55f38d7-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 20:38:38 crc kubenswrapper[4727]: I1121 20:38:38.124379 4727 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e6316777-a5fe-46c7-9c41-5a0bb55f38d7-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 20:38:38 crc kubenswrapper[4727]: I1121 20:38:38.124389 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4z9s8\" (UniqueName: \"kubernetes.io/projected/e6316777-a5fe-46c7-9c41-5a0bb55f38d7-kube-api-access-4z9s8\") on node \"crc\" DevicePath \"\"" Nov 21 20:38:38 crc kubenswrapper[4727]: I1121 20:38:38.506400 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f8g4k" 
event={"ID":"e6316777-a5fe-46c7-9c41-5a0bb55f38d7","Type":"ContainerDied","Data":"10ff57e98980251d8da3997f9ccc0ee629e44ff0a24c36c58364ef1c979a7bb1"} Nov 21 20:38:38 crc kubenswrapper[4727]: I1121 20:38:38.506791 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10ff57e98980251d8da3997f9ccc0ee629e44ff0a24c36c58364ef1c979a7bb1" Nov 21 20:38:38 crc kubenswrapper[4727]: I1121 20:38:38.506439 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f8g4k" Nov 21 20:38:38 crc kubenswrapper[4727]: I1121 20:38:38.575145 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-l9spl"] Nov 21 20:38:38 crc kubenswrapper[4727]: E1121 20:38:38.575706 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6316777-a5fe-46c7-9c41-5a0bb55f38d7" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 21 20:38:38 crc kubenswrapper[4727]: I1121 20:38:38.575730 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6316777-a5fe-46c7-9c41-5a0bb55f38d7" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 21 20:38:38 crc kubenswrapper[4727]: I1121 20:38:38.575943 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6316777-a5fe-46c7-9c41-5a0bb55f38d7" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 21 20:38:38 crc kubenswrapper[4727]: I1121 20:38:38.576761 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-l9spl" Nov 21 20:38:38 crc kubenswrapper[4727]: I1121 20:38:38.579374 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 21 20:38:38 crc kubenswrapper[4727]: I1121 20:38:38.581222 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9h8bp" Nov 21 20:38:38 crc kubenswrapper[4727]: I1121 20:38:38.581597 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 20:38:38 crc kubenswrapper[4727]: I1121 20:38:38.581779 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 21 20:38:38 crc kubenswrapper[4727]: I1121 20:38:38.589553 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-l9spl"] Nov 21 20:38:38 crc kubenswrapper[4727]: I1121 20:38:38.636753 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/27de261d-6864-4d22-8b4a-9523d74fb4fc-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-l9spl\" (UID: \"27de261d-6864-4d22-8b4a-9523d74fb4fc\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-l9spl" Nov 21 20:38:38 crc kubenswrapper[4727]: I1121 20:38:38.636807 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27de261d-6864-4d22-8b4a-9523d74fb4fc-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-l9spl\" (UID: \"27de261d-6864-4d22-8b4a-9523d74fb4fc\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-l9spl" Nov 21 20:38:38 crc kubenswrapper[4727]: I1121 20:38:38.637074 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvqnd\" (UniqueName: \"kubernetes.io/projected/27de261d-6864-4d22-8b4a-9523d74fb4fc-kube-api-access-tvqnd\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-l9spl\" (UID: \"27de261d-6864-4d22-8b4a-9523d74fb4fc\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-l9spl" Nov 21 20:38:38 crc kubenswrapper[4727]: I1121 20:38:38.740082 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvqnd\" (UniqueName: \"kubernetes.io/projected/27de261d-6864-4d22-8b4a-9523d74fb4fc-kube-api-access-tvqnd\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-l9spl\" (UID: \"27de261d-6864-4d22-8b4a-9523d74fb4fc\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-l9spl" Nov 21 20:38:38 crc kubenswrapper[4727]: I1121 20:38:38.740538 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/27de261d-6864-4d22-8b4a-9523d74fb4fc-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-l9spl\" (UID: \"27de261d-6864-4d22-8b4a-9523d74fb4fc\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-l9spl" Nov 21 20:38:38 crc kubenswrapper[4727]: I1121 20:38:38.740582 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27de261d-6864-4d22-8b4a-9523d74fb4fc-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-l9spl\" (UID: \"27de261d-6864-4d22-8b4a-9523d74fb4fc\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-l9spl" Nov 21 20:38:38 crc kubenswrapper[4727]: I1121 20:38:38.745600 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/27de261d-6864-4d22-8b4a-9523d74fb4fc-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-l9spl\" (UID: 
\"27de261d-6864-4d22-8b4a-9523d74fb4fc\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-l9spl" Nov 21 20:38:38 crc kubenswrapper[4727]: I1121 20:38:38.745883 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27de261d-6864-4d22-8b4a-9523d74fb4fc-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-l9spl\" (UID: \"27de261d-6864-4d22-8b4a-9523d74fb4fc\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-l9spl" Nov 21 20:38:38 crc kubenswrapper[4727]: I1121 20:38:38.757072 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvqnd\" (UniqueName: \"kubernetes.io/projected/27de261d-6864-4d22-8b4a-9523d74fb4fc-kube-api-access-tvqnd\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-l9spl\" (UID: \"27de261d-6864-4d22-8b4a-9523d74fb4fc\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-l9spl" Nov 21 20:38:38 crc kubenswrapper[4727]: I1121 20:38:38.899895 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-l9spl" Nov 21 20:38:39 crc kubenswrapper[4727]: I1121 20:38:39.454277 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-l9spl"] Nov 21 20:38:39 crc kubenswrapper[4727]: I1121 20:38:39.519718 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-l9spl" event={"ID":"27de261d-6864-4d22-8b4a-9523d74fb4fc","Type":"ContainerStarted","Data":"23c8065589a181c697ed283200086580f059ce7896498139882bcff6c5302e50"} Nov 21 20:38:40 crc kubenswrapper[4727]: I1121 20:38:40.536266 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-l9spl" event={"ID":"27de261d-6864-4d22-8b4a-9523d74fb4fc","Type":"ContainerStarted","Data":"5b1daa8c3252046a451ed641096083d79873ed58ff50b68f7fa61a80c170a7b0"} Nov 21 20:38:40 crc kubenswrapper[4727]: I1121 20:38:40.561672 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-l9spl" podStartSLOduration=2.178096722 podStartE2EDuration="2.561648871s" podCreationTimestamp="2025-11-21 20:38:38 +0000 UTC" firstStartedPulling="2025-11-21 20:38:39.445935964 +0000 UTC m=+1924.632121008" lastFinishedPulling="2025-11-21 20:38:39.829488103 +0000 UTC m=+1925.015673157" observedRunningTime="2025-11-21 20:38:40.553781542 +0000 UTC m=+1925.739966586" watchObservedRunningTime="2025-11-21 20:38:40.561648871 +0000 UTC m=+1925.747833925" Nov 21 20:38:45 crc kubenswrapper[4727]: I1121 20:38:45.877223 4727 scope.go:117] "RemoveContainer" containerID="ccd9b00a1f4907b41fc0a43b80ee1742986ef1ce6c62b1ab0a9c6abf5507bcd3" Nov 21 20:38:45 crc kubenswrapper[4727]: I1121 20:38:45.910666 4727 scope.go:117] "RemoveContainer" containerID="04da160c7d528faa655831430f71cf90cad4ddac09dcba2d1bbb71dee3c8c54d" Nov 21 20:38:45 crc 
kubenswrapper[4727]: I1121 20:38:45.970873 4727 scope.go:117] "RemoveContainer" containerID="e063c9647e9d3f1e3de8290d9a1bd8b337fb2fb30d664373c5fe6c83b14de23f" Nov 21 20:38:46 crc kubenswrapper[4727]: I1121 20:38:46.018772 4727 scope.go:117] "RemoveContainer" containerID="e71d403f99a706c7f7f28224c7c1725779fc0616cfc86396f02a2e4254c11c2e" Nov 21 20:38:46 crc kubenswrapper[4727]: I1121 20:38:46.080413 4727 scope.go:117] "RemoveContainer" containerID="e49e95a516dccf0f9c8f9b3e98beb68ef4576e3f760b3d2d7822fb9a6aef4062" Nov 21 20:38:46 crc kubenswrapper[4727]: I1121 20:38:46.143752 4727 scope.go:117] "RemoveContainer" containerID="365e5324291bac51a1b24ca68c394eb35d309620a5f5bba3c4b5121f53c50e89" Nov 21 20:38:57 crc kubenswrapper[4727]: I1121 20:38:57.042255 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qh8s2"] Nov 21 20:38:57 crc kubenswrapper[4727]: I1121 20:38:57.052091 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qh8s2"] Nov 21 20:38:57 crc kubenswrapper[4727]: I1121 20:38:57.512260 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ff94a3a-a5ae-42b5-a316-e536cf0d3eda" path="/var/lib/kubelet/pods/7ff94a3a-a5ae-42b5-a316-e536cf0d3eda/volumes" Nov 21 20:39:02 crc kubenswrapper[4727]: I1121 20:39:02.032899 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-93ab-account-create-j9jlk"] Nov 21 20:39:02 crc kubenswrapper[4727]: I1121 20:39:02.045741 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-4chdx"] Nov 21 20:39:02 crc kubenswrapper[4727]: I1121 20:39:02.055178 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-4chdx"] Nov 21 20:39:02 crc kubenswrapper[4727]: I1121 20:39:02.065706 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-93ab-account-create-j9jlk"] Nov 21 20:39:03 crc kubenswrapper[4727]: I1121 20:39:03.513861 4727 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0721f70-0de7-476d-8c01-6add6d0767e6" path="/var/lib/kubelet/pods/b0721f70-0de7-476d-8c01-6add6d0767e6/volumes" Nov 21 20:39:03 crc kubenswrapper[4727]: I1121 20:39:03.514854 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da25dd6f-c2ed-4b21-b03b-89652deba65e" path="/var/lib/kubelet/pods/da25dd6f-c2ed-4b21-b03b-89652deba65e/volumes" Nov 21 20:39:17 crc kubenswrapper[4727]: I1121 20:39:17.988576 4727 generic.go:334] "Generic (PLEG): container finished" podID="27de261d-6864-4d22-8b4a-9523d74fb4fc" containerID="5b1daa8c3252046a451ed641096083d79873ed58ff50b68f7fa61a80c170a7b0" exitCode=0 Nov 21 20:39:17 crc kubenswrapper[4727]: I1121 20:39:17.988684 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-l9spl" event={"ID":"27de261d-6864-4d22-8b4a-9523d74fb4fc","Type":"ContainerDied","Data":"5b1daa8c3252046a451ed641096083d79873ed58ff50b68f7fa61a80c170a7b0"} Nov 21 20:39:19 crc kubenswrapper[4727]: I1121 20:39:19.580790 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-l9spl" Nov 21 20:39:19 crc kubenswrapper[4727]: I1121 20:39:19.619757 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27de261d-6864-4d22-8b4a-9523d74fb4fc-inventory\") pod \"27de261d-6864-4d22-8b4a-9523d74fb4fc\" (UID: \"27de261d-6864-4d22-8b4a-9523d74fb4fc\") " Nov 21 20:39:19 crc kubenswrapper[4727]: I1121 20:39:19.619894 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvqnd\" (UniqueName: \"kubernetes.io/projected/27de261d-6864-4d22-8b4a-9523d74fb4fc-kube-api-access-tvqnd\") pod \"27de261d-6864-4d22-8b4a-9523d74fb4fc\" (UID: \"27de261d-6864-4d22-8b4a-9523d74fb4fc\") " Nov 21 20:39:19 crc kubenswrapper[4727]: I1121 20:39:19.620068 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/27de261d-6864-4d22-8b4a-9523d74fb4fc-ssh-key\") pod \"27de261d-6864-4d22-8b4a-9523d74fb4fc\" (UID: \"27de261d-6864-4d22-8b4a-9523d74fb4fc\") " Nov 21 20:39:19 crc kubenswrapper[4727]: I1121 20:39:19.626853 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27de261d-6864-4d22-8b4a-9523d74fb4fc-kube-api-access-tvqnd" (OuterVolumeSpecName: "kube-api-access-tvqnd") pod "27de261d-6864-4d22-8b4a-9523d74fb4fc" (UID: "27de261d-6864-4d22-8b4a-9523d74fb4fc"). InnerVolumeSpecName "kube-api-access-tvqnd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:39:19 crc kubenswrapper[4727]: I1121 20:39:19.651661 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27de261d-6864-4d22-8b4a-9523d74fb4fc-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "27de261d-6864-4d22-8b4a-9523d74fb4fc" (UID: "27de261d-6864-4d22-8b4a-9523d74fb4fc"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:39:19 crc kubenswrapper[4727]: I1121 20:39:19.653873 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27de261d-6864-4d22-8b4a-9523d74fb4fc-inventory" (OuterVolumeSpecName: "inventory") pod "27de261d-6864-4d22-8b4a-9523d74fb4fc" (UID: "27de261d-6864-4d22-8b4a-9523d74fb4fc"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:39:19 crc kubenswrapper[4727]: I1121 20:39:19.722482 4727 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27de261d-6864-4d22-8b4a-9523d74fb4fc-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 20:39:19 crc kubenswrapper[4727]: I1121 20:39:19.722677 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvqnd\" (UniqueName: \"kubernetes.io/projected/27de261d-6864-4d22-8b4a-9523d74fb4fc-kube-api-access-tvqnd\") on node \"crc\" DevicePath \"\"" Nov 21 20:39:19 crc kubenswrapper[4727]: I1121 20:39:19.722776 4727 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/27de261d-6864-4d22-8b4a-9523d74fb4fc-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 20:39:20 crc kubenswrapper[4727]: I1121 20:39:20.017383 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-l9spl" event={"ID":"27de261d-6864-4d22-8b4a-9523d74fb4fc","Type":"ContainerDied","Data":"23c8065589a181c697ed283200086580f059ce7896498139882bcff6c5302e50"} Nov 21 20:39:20 crc kubenswrapper[4727]: I1121 20:39:20.017435 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23c8065589a181c697ed283200086580f059ce7896498139882bcff6c5302e50" Nov 21 20:39:20 crc kubenswrapper[4727]: I1121 20:39:20.017503 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-l9spl" Nov 21 20:39:20 crc kubenswrapper[4727]: I1121 20:39:20.033384 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-ncgt2"] Nov 21 20:39:20 crc kubenswrapper[4727]: I1121 20:39:20.061141 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-ncgt2"] Nov 21 20:39:20 crc kubenswrapper[4727]: I1121 20:39:20.100093 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-r8sfv"] Nov 21 20:39:20 crc kubenswrapper[4727]: E1121 20:39:20.102329 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27de261d-6864-4d22-8b4a-9523d74fb4fc" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 21 20:39:20 crc kubenswrapper[4727]: I1121 20:39:20.102528 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="27de261d-6864-4d22-8b4a-9523d74fb4fc" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 21 20:39:20 crc kubenswrapper[4727]: I1121 20:39:20.102880 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="27de261d-6864-4d22-8b4a-9523d74fb4fc" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 21 20:39:20 crc kubenswrapper[4727]: I1121 20:39:20.103742 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-r8sfv" Nov 21 20:39:20 crc kubenswrapper[4727]: I1121 20:39:20.106490 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 21 20:39:20 crc kubenswrapper[4727]: I1121 20:39:20.106763 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9h8bp" Nov 21 20:39:20 crc kubenswrapper[4727]: I1121 20:39:20.106857 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 20:39:20 crc kubenswrapper[4727]: I1121 20:39:20.108613 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 21 20:39:20 crc kubenswrapper[4727]: I1121 20:39:20.122020 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-r8sfv"] Nov 21 20:39:20 crc kubenswrapper[4727]: I1121 20:39:20.235398 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3e04e3f4-33d0-4dd0-84d9-6ef378cb2434-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-r8sfv\" (UID: \"3e04e3f4-33d0-4dd0-84d9-6ef378cb2434\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-r8sfv" Nov 21 20:39:20 crc kubenswrapper[4727]: I1121 20:39:20.235976 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddbjv\" (UniqueName: \"kubernetes.io/projected/3e04e3f4-33d0-4dd0-84d9-6ef378cb2434-kube-api-access-ddbjv\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-r8sfv\" (UID: \"3e04e3f4-33d0-4dd0-84d9-6ef378cb2434\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-r8sfv" Nov 21 20:39:20 crc kubenswrapper[4727]: I1121 20:39:20.236128 4727 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3e04e3f4-33d0-4dd0-84d9-6ef378cb2434-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-r8sfv\" (UID: \"3e04e3f4-33d0-4dd0-84d9-6ef378cb2434\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-r8sfv" Nov 21 20:39:20 crc kubenswrapper[4727]: I1121 20:39:20.338041 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddbjv\" (UniqueName: \"kubernetes.io/projected/3e04e3f4-33d0-4dd0-84d9-6ef378cb2434-kube-api-access-ddbjv\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-r8sfv\" (UID: \"3e04e3f4-33d0-4dd0-84d9-6ef378cb2434\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-r8sfv" Nov 21 20:39:20 crc kubenswrapper[4727]: I1121 20:39:20.338085 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3e04e3f4-33d0-4dd0-84d9-6ef378cb2434-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-r8sfv\" (UID: \"3e04e3f4-33d0-4dd0-84d9-6ef378cb2434\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-r8sfv" Nov 21 20:39:20 crc kubenswrapper[4727]: I1121 20:39:20.338114 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3e04e3f4-33d0-4dd0-84d9-6ef378cb2434-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-r8sfv\" (UID: \"3e04e3f4-33d0-4dd0-84d9-6ef378cb2434\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-r8sfv" Nov 21 20:39:20 crc kubenswrapper[4727]: I1121 20:39:20.345114 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3e04e3f4-33d0-4dd0-84d9-6ef378cb2434-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-r8sfv\" (UID: 
\"3e04e3f4-33d0-4dd0-84d9-6ef378cb2434\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-r8sfv" Nov 21 20:39:20 crc kubenswrapper[4727]: I1121 20:39:20.347543 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3e04e3f4-33d0-4dd0-84d9-6ef378cb2434-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-r8sfv\" (UID: \"3e04e3f4-33d0-4dd0-84d9-6ef378cb2434\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-r8sfv" Nov 21 20:39:20 crc kubenswrapper[4727]: I1121 20:39:20.357359 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddbjv\" (UniqueName: \"kubernetes.io/projected/3e04e3f4-33d0-4dd0-84d9-6ef378cb2434-kube-api-access-ddbjv\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-r8sfv\" (UID: \"3e04e3f4-33d0-4dd0-84d9-6ef378cb2434\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-r8sfv" Nov 21 20:39:20 crc kubenswrapper[4727]: I1121 20:39:20.429908 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-r8sfv" Nov 21 20:39:20 crc kubenswrapper[4727]: I1121 20:39:20.977750 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-r8sfv"] Nov 21 20:39:21 crc kubenswrapper[4727]: I1121 20:39:21.033714 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-r8sfv" event={"ID":"3e04e3f4-33d0-4dd0-84d9-6ef378cb2434","Type":"ContainerStarted","Data":"ae99231644c7fba733b9485be82483ee105f418f0acdf5ed41c61041cbd83b2a"} Nov 21 20:39:21 crc kubenswrapper[4727]: I1121 20:39:21.036153 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-scscp"] Nov 21 20:39:21 crc kubenswrapper[4727]: I1121 20:39:21.053235 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-scscp"] Nov 21 20:39:21 crc kubenswrapper[4727]: I1121 20:39:21.514300 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="218dc4f4-1ad8-4106-a954-73e6b2e7359f" path="/var/lib/kubelet/pods/218dc4f4-1ad8-4106-a954-73e6b2e7359f/volumes" Nov 21 20:39:21 crc kubenswrapper[4727]: I1121 20:39:21.515030 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="515c1bfb-ed4e-4848-837d-aed1c1e5fd53" path="/var/lib/kubelet/pods/515c1bfb-ed4e-4848-837d-aed1c1e5fd53/volumes" Nov 21 20:39:22 crc kubenswrapper[4727]: I1121 20:39:22.049458 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-4257c"] Nov 21 20:39:22 crc kubenswrapper[4727]: I1121 20:39:22.065637 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-r8sfv" event={"ID":"3e04e3f4-33d0-4dd0-84d9-6ef378cb2434","Type":"ContainerStarted","Data":"ba5d90e031245f63c4876261e5d0354f3622650a5a3a2b70afd824ed322f2fa3"} Nov 21 20:39:22 crc kubenswrapper[4727]: I1121 
20:39:22.070885 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-4257c"] Nov 21 20:39:22 crc kubenswrapper[4727]: I1121 20:39:22.089249 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-r8sfv" podStartSLOduration=1.5610725140000001 podStartE2EDuration="2.089228183s" podCreationTimestamp="2025-11-21 20:39:20 +0000 UTC" firstStartedPulling="2025-11-21 20:39:20.984833925 +0000 UTC m=+1966.171018969" lastFinishedPulling="2025-11-21 20:39:21.512989584 +0000 UTC m=+1966.699174638" observedRunningTime="2025-11-21 20:39:22.083046214 +0000 UTC m=+1967.269231258" watchObservedRunningTime="2025-11-21 20:39:22.089228183 +0000 UTC m=+1967.275413227" Nov 21 20:39:23 crc kubenswrapper[4727]: I1121 20:39:23.512106 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6b71f3e-eabc-4628-ad77-e1e9144d0cda" path="/var/lib/kubelet/pods/a6b71f3e-eabc-4628-ad77-e1e9144d0cda/volumes" Nov 21 20:39:46 crc kubenswrapper[4727]: I1121 20:39:46.319245 4727 scope.go:117] "RemoveContainer" containerID="7c7a8e4daeeae261705248452ccd92d646faeb2c2ee62f5ac40d365ae6b50dc9" Nov 21 20:39:46 crc kubenswrapper[4727]: I1121 20:39:46.353696 4727 scope.go:117] "RemoveContainer" containerID="2e7233c23536b4233184ebbf682296735ecda51cfbd1316779174fbdfcd8bada" Nov 21 20:39:46 crc kubenswrapper[4727]: I1121 20:39:46.421491 4727 scope.go:117] "RemoveContainer" containerID="cebacfb63f66e9a6c0a415ccde1c9322911f3be1572378c0801d63bbe3e284a4" Nov 21 20:39:46 crc kubenswrapper[4727]: I1121 20:39:46.484327 4727 scope.go:117] "RemoveContainer" containerID="cf27b6bb31ea6c822f6a5c2c91e7939a02cbebe6e8c1067510284817a6373251" Nov 21 20:39:46 crc kubenswrapper[4727]: I1121 20:39:46.529529 4727 scope.go:117] "RemoveContainer" containerID="104a5a09a5deab1acf7fa3079e6784225a5aef1f98855e12798972f9483ca623" Nov 21 20:39:46 crc kubenswrapper[4727]: I1121 20:39:46.573191 4727 
scope.go:117] "RemoveContainer" containerID="fffad2d770048435343fd82a808677fb96ffe494032222d67fd5a582d6ad8bbd" Nov 21 20:40:07 crc kubenswrapper[4727]: I1121 20:40:07.043226 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-wtv6f"] Nov 21 20:40:07 crc kubenswrapper[4727]: I1121 20:40:07.055104 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-wtv6f"] Nov 21 20:40:07 crc kubenswrapper[4727]: I1121 20:40:07.519219 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84267ae4-f0e0-409a-b261-cc9344c4c47b" path="/var/lib/kubelet/pods/84267ae4-f0e0-409a-b261-cc9344c4c47b/volumes" Nov 21 20:40:10 crc kubenswrapper[4727]: I1121 20:40:10.591407 4727 generic.go:334] "Generic (PLEG): container finished" podID="3e04e3f4-33d0-4dd0-84d9-6ef378cb2434" containerID="ba5d90e031245f63c4876261e5d0354f3622650a5a3a2b70afd824ed322f2fa3" exitCode=0 Nov 21 20:40:10 crc kubenswrapper[4727]: I1121 20:40:10.591509 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-r8sfv" event={"ID":"3e04e3f4-33d0-4dd0-84d9-6ef378cb2434","Type":"ContainerDied","Data":"ba5d90e031245f63c4876261e5d0354f3622650a5a3a2b70afd824ed322f2fa3"} Nov 21 20:40:12 crc kubenswrapper[4727]: I1121 20:40:12.095345 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-r8sfv" Nov 21 20:40:12 crc kubenswrapper[4727]: I1121 20:40:12.262813 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3e04e3f4-33d0-4dd0-84d9-6ef378cb2434-ssh-key\") pod \"3e04e3f4-33d0-4dd0-84d9-6ef378cb2434\" (UID: \"3e04e3f4-33d0-4dd0-84d9-6ef378cb2434\") " Nov 21 20:40:12 crc kubenswrapper[4727]: I1121 20:40:12.263387 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3e04e3f4-33d0-4dd0-84d9-6ef378cb2434-inventory\") pod \"3e04e3f4-33d0-4dd0-84d9-6ef378cb2434\" (UID: \"3e04e3f4-33d0-4dd0-84d9-6ef378cb2434\") " Nov 21 20:40:12 crc kubenswrapper[4727]: I1121 20:40:12.263676 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddbjv\" (UniqueName: \"kubernetes.io/projected/3e04e3f4-33d0-4dd0-84d9-6ef378cb2434-kube-api-access-ddbjv\") pod \"3e04e3f4-33d0-4dd0-84d9-6ef378cb2434\" (UID: \"3e04e3f4-33d0-4dd0-84d9-6ef378cb2434\") " Nov 21 20:40:12 crc kubenswrapper[4727]: I1121 20:40:12.268940 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e04e3f4-33d0-4dd0-84d9-6ef378cb2434-kube-api-access-ddbjv" (OuterVolumeSpecName: "kube-api-access-ddbjv") pod "3e04e3f4-33d0-4dd0-84d9-6ef378cb2434" (UID: "3e04e3f4-33d0-4dd0-84d9-6ef378cb2434"). InnerVolumeSpecName "kube-api-access-ddbjv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:40:12 crc kubenswrapper[4727]: I1121 20:40:12.298586 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e04e3f4-33d0-4dd0-84d9-6ef378cb2434-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "3e04e3f4-33d0-4dd0-84d9-6ef378cb2434" (UID: "3e04e3f4-33d0-4dd0-84d9-6ef378cb2434"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:40:12 crc kubenswrapper[4727]: I1121 20:40:12.330408 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e04e3f4-33d0-4dd0-84d9-6ef378cb2434-inventory" (OuterVolumeSpecName: "inventory") pod "3e04e3f4-33d0-4dd0-84d9-6ef378cb2434" (UID: "3e04e3f4-33d0-4dd0-84d9-6ef378cb2434"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:40:12 crc kubenswrapper[4727]: I1121 20:40:12.366389 4727 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3e04e3f4-33d0-4dd0-84d9-6ef378cb2434-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 20:40:12 crc kubenswrapper[4727]: I1121 20:40:12.366436 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddbjv\" (UniqueName: \"kubernetes.io/projected/3e04e3f4-33d0-4dd0-84d9-6ef378cb2434-kube-api-access-ddbjv\") on node \"crc\" DevicePath \"\"" Nov 21 20:40:12 crc kubenswrapper[4727]: I1121 20:40:12.366452 4727 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3e04e3f4-33d0-4dd0-84d9-6ef378cb2434-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 20:40:12 crc kubenswrapper[4727]: I1121 20:40:12.617233 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-r8sfv" event={"ID":"3e04e3f4-33d0-4dd0-84d9-6ef378cb2434","Type":"ContainerDied","Data":"ae99231644c7fba733b9485be82483ee105f418f0acdf5ed41c61041cbd83b2a"} Nov 21 20:40:12 crc kubenswrapper[4727]: I1121 20:40:12.617282 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae99231644c7fba733b9485be82483ee105f418f0acdf5ed41c61041cbd83b2a" Nov 21 20:40:12 crc kubenswrapper[4727]: I1121 20:40:12.617347 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-r8sfv" Nov 21 20:40:12 crc kubenswrapper[4727]: I1121 20:40:12.736103 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-fwp4w"] Nov 21 20:40:12 crc kubenswrapper[4727]: E1121 20:40:12.736616 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e04e3f4-33d0-4dd0-84d9-6ef378cb2434" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 21 20:40:12 crc kubenswrapper[4727]: I1121 20:40:12.736637 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e04e3f4-33d0-4dd0-84d9-6ef378cb2434" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 21 20:40:12 crc kubenswrapper[4727]: I1121 20:40:12.736948 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e04e3f4-33d0-4dd0-84d9-6ef378cb2434" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 21 20:40:12 crc kubenswrapper[4727]: I1121 20:40:12.737804 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-fwp4w" Nov 21 20:40:12 crc kubenswrapper[4727]: I1121 20:40:12.741043 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 21 20:40:12 crc kubenswrapper[4727]: I1121 20:40:12.741372 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9h8bp" Nov 21 20:40:12 crc kubenswrapper[4727]: I1121 20:40:12.741619 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 21 20:40:12 crc kubenswrapper[4727]: I1121 20:40:12.741644 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 20:40:12 crc kubenswrapper[4727]: I1121 20:40:12.750444 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-fwp4w"] Nov 21 20:40:12 crc kubenswrapper[4727]: I1121 20:40:12.888066 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9nfk\" (UniqueName: \"kubernetes.io/projected/0aa83d35-00bc-4acd-ab0f-1c153bd7130b-kube-api-access-j9nfk\") pod \"ssh-known-hosts-edpm-deployment-fwp4w\" (UID: \"0aa83d35-00bc-4acd-ab0f-1c153bd7130b\") " pod="openstack/ssh-known-hosts-edpm-deployment-fwp4w" Nov 21 20:40:12 crc kubenswrapper[4727]: I1121 20:40:12.888586 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0aa83d35-00bc-4acd-ab0f-1c153bd7130b-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-fwp4w\" (UID: \"0aa83d35-00bc-4acd-ab0f-1c153bd7130b\") " pod="openstack/ssh-known-hosts-edpm-deployment-fwp4w" Nov 21 20:40:12 crc kubenswrapper[4727]: I1121 20:40:12.889825 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/0aa83d35-00bc-4acd-ab0f-1c153bd7130b-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-fwp4w\" (UID: \"0aa83d35-00bc-4acd-ab0f-1c153bd7130b\") " pod="openstack/ssh-known-hosts-edpm-deployment-fwp4w" Nov 21 20:40:12 crc kubenswrapper[4727]: I1121 20:40:12.991426 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9nfk\" (UniqueName: \"kubernetes.io/projected/0aa83d35-00bc-4acd-ab0f-1c153bd7130b-kube-api-access-j9nfk\") pod \"ssh-known-hosts-edpm-deployment-fwp4w\" (UID: \"0aa83d35-00bc-4acd-ab0f-1c153bd7130b\") " pod="openstack/ssh-known-hosts-edpm-deployment-fwp4w" Nov 21 20:40:12 crc kubenswrapper[4727]: I1121 20:40:12.991522 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0aa83d35-00bc-4acd-ab0f-1c153bd7130b-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-fwp4w\" (UID: \"0aa83d35-00bc-4acd-ab0f-1c153bd7130b\") " pod="openstack/ssh-known-hosts-edpm-deployment-fwp4w" Nov 21 20:40:12 crc kubenswrapper[4727]: I1121 20:40:12.991602 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/0aa83d35-00bc-4acd-ab0f-1c153bd7130b-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-fwp4w\" (UID: \"0aa83d35-00bc-4acd-ab0f-1c153bd7130b\") " pod="openstack/ssh-known-hosts-edpm-deployment-fwp4w" Nov 21 20:40:12 crc kubenswrapper[4727]: I1121 20:40:12.996731 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/0aa83d35-00bc-4acd-ab0f-1c153bd7130b-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-fwp4w\" (UID: \"0aa83d35-00bc-4acd-ab0f-1c153bd7130b\") " pod="openstack/ssh-known-hosts-edpm-deployment-fwp4w" Nov 21 20:40:12 crc 
kubenswrapper[4727]: I1121 20:40:12.996778 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0aa83d35-00bc-4acd-ab0f-1c153bd7130b-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-fwp4w\" (UID: \"0aa83d35-00bc-4acd-ab0f-1c153bd7130b\") " pod="openstack/ssh-known-hosts-edpm-deployment-fwp4w" Nov 21 20:40:13 crc kubenswrapper[4727]: I1121 20:40:13.007914 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9nfk\" (UniqueName: \"kubernetes.io/projected/0aa83d35-00bc-4acd-ab0f-1c153bd7130b-kube-api-access-j9nfk\") pod \"ssh-known-hosts-edpm-deployment-fwp4w\" (UID: \"0aa83d35-00bc-4acd-ab0f-1c153bd7130b\") " pod="openstack/ssh-known-hosts-edpm-deployment-fwp4w" Nov 21 20:40:13 crc kubenswrapper[4727]: I1121 20:40:13.081586 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-fwp4w" Nov 21 20:40:13 crc kubenswrapper[4727]: I1121 20:40:13.670585 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-fwp4w"] Nov 21 20:40:14 crc kubenswrapper[4727]: I1121 20:40:14.640435 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-fwp4w" event={"ID":"0aa83d35-00bc-4acd-ab0f-1c153bd7130b","Type":"ContainerStarted","Data":"262f5adfebae1504c183b73d7476d6ccb52eb1ebb4967b4661608cf37c55f4fc"} Nov 21 20:40:14 crc kubenswrapper[4727]: I1121 20:40:14.640731 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-fwp4w" event={"ID":"0aa83d35-00bc-4acd-ab0f-1c153bd7130b","Type":"ContainerStarted","Data":"1df5a24c3e36e7a96beffa41aab25f9813cd6499fa0fb880214c73206acb2c1e"} Nov 21 20:40:14 crc kubenswrapper[4727]: I1121 20:40:14.667468 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ssh-known-hosts-edpm-deployment-fwp4w" podStartSLOduration=2.273092784 podStartE2EDuration="2.667444527s" podCreationTimestamp="2025-11-21 20:40:12 +0000 UTC" firstStartedPulling="2025-11-21 20:40:13.675353287 +0000 UTC m=+2018.861538331" lastFinishedPulling="2025-11-21 20:40:14.06970503 +0000 UTC m=+2019.255890074" observedRunningTime="2025-11-21 20:40:14.652939717 +0000 UTC m=+2019.839124761" watchObservedRunningTime="2025-11-21 20:40:14.667444527 +0000 UTC m=+2019.853629591" Nov 21 20:40:21 crc kubenswrapper[4727]: I1121 20:40:21.718626 4727 generic.go:334] "Generic (PLEG): container finished" podID="0aa83d35-00bc-4acd-ab0f-1c153bd7130b" containerID="262f5adfebae1504c183b73d7476d6ccb52eb1ebb4967b4661608cf37c55f4fc" exitCode=0 Nov 21 20:40:21 crc kubenswrapper[4727]: I1121 20:40:21.718774 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-fwp4w" event={"ID":"0aa83d35-00bc-4acd-ab0f-1c153bd7130b","Type":"ContainerDied","Data":"262f5adfebae1504c183b73d7476d6ccb52eb1ebb4967b4661608cf37c55f4fc"} Nov 21 20:40:23 crc kubenswrapper[4727]: I1121 20:40:23.276689 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-fwp4w" Nov 21 20:40:23 crc kubenswrapper[4727]: I1121 20:40:23.464452 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0aa83d35-00bc-4acd-ab0f-1c153bd7130b-ssh-key-openstack-edpm-ipam\") pod \"0aa83d35-00bc-4acd-ab0f-1c153bd7130b\" (UID: \"0aa83d35-00bc-4acd-ab0f-1c153bd7130b\") " Nov 21 20:40:23 crc kubenswrapper[4727]: I1121 20:40:23.464565 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9nfk\" (UniqueName: \"kubernetes.io/projected/0aa83d35-00bc-4acd-ab0f-1c153bd7130b-kube-api-access-j9nfk\") pod \"0aa83d35-00bc-4acd-ab0f-1c153bd7130b\" (UID: \"0aa83d35-00bc-4acd-ab0f-1c153bd7130b\") " Nov 21 20:40:23 crc kubenswrapper[4727]: I1121 20:40:23.464678 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/0aa83d35-00bc-4acd-ab0f-1c153bd7130b-inventory-0\") pod \"0aa83d35-00bc-4acd-ab0f-1c153bd7130b\" (UID: \"0aa83d35-00bc-4acd-ab0f-1c153bd7130b\") " Nov 21 20:40:23 crc kubenswrapper[4727]: I1121 20:40:23.479134 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0aa83d35-00bc-4acd-ab0f-1c153bd7130b-kube-api-access-j9nfk" (OuterVolumeSpecName: "kube-api-access-j9nfk") pod "0aa83d35-00bc-4acd-ab0f-1c153bd7130b" (UID: "0aa83d35-00bc-4acd-ab0f-1c153bd7130b"). InnerVolumeSpecName "kube-api-access-j9nfk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:40:23 crc kubenswrapper[4727]: I1121 20:40:23.506991 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0aa83d35-00bc-4acd-ab0f-1c153bd7130b-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "0aa83d35-00bc-4acd-ab0f-1c153bd7130b" (UID: "0aa83d35-00bc-4acd-ab0f-1c153bd7130b"). 
InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:40:23 crc kubenswrapper[4727]: I1121 20:40:23.543773 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0aa83d35-00bc-4acd-ab0f-1c153bd7130b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0aa83d35-00bc-4acd-ab0f-1c153bd7130b" (UID: "0aa83d35-00bc-4acd-ab0f-1c153bd7130b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:40:23 crc kubenswrapper[4727]: I1121 20:40:23.567648 4727 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0aa83d35-00bc-4acd-ab0f-1c153bd7130b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 21 20:40:23 crc kubenswrapper[4727]: I1121 20:40:23.567690 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9nfk\" (UniqueName: \"kubernetes.io/projected/0aa83d35-00bc-4acd-ab0f-1c153bd7130b-kube-api-access-j9nfk\") on node \"crc\" DevicePath \"\"" Nov 21 20:40:23 crc kubenswrapper[4727]: I1121 20:40:23.567704 4727 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/0aa83d35-00bc-4acd-ab0f-1c153bd7130b-inventory-0\") on node \"crc\" DevicePath \"\"" Nov 21 20:40:23 crc kubenswrapper[4727]: I1121 20:40:23.746196 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-fwp4w" event={"ID":"0aa83d35-00bc-4acd-ab0f-1c153bd7130b","Type":"ContainerDied","Data":"1df5a24c3e36e7a96beffa41aab25f9813cd6499fa0fb880214c73206acb2c1e"} Nov 21 20:40:23 crc kubenswrapper[4727]: I1121 20:40:23.746236 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1df5a24c3e36e7a96beffa41aab25f9813cd6499fa0fb880214c73206acb2c1e" Nov 21 20:40:23 crc kubenswrapper[4727]: I1121 20:40:23.746273 
4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-fwp4w" Nov 21 20:40:23 crc kubenswrapper[4727]: I1121 20:40:23.859982 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-79n78"] Nov 21 20:40:23 crc kubenswrapper[4727]: E1121 20:40:23.860536 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0aa83d35-00bc-4acd-ab0f-1c153bd7130b" containerName="ssh-known-hosts-edpm-deployment" Nov 21 20:40:23 crc kubenswrapper[4727]: I1121 20:40:23.860560 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="0aa83d35-00bc-4acd-ab0f-1c153bd7130b" containerName="ssh-known-hosts-edpm-deployment" Nov 21 20:40:23 crc kubenswrapper[4727]: I1121 20:40:23.860863 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="0aa83d35-00bc-4acd-ab0f-1c153bd7130b" containerName="ssh-known-hosts-edpm-deployment" Nov 21 20:40:23 crc kubenswrapper[4727]: I1121 20:40:23.861772 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-79n78" Nov 21 20:40:23 crc kubenswrapper[4727]: I1121 20:40:23.864874 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9h8bp" Nov 21 20:40:23 crc kubenswrapper[4727]: I1121 20:40:23.864932 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 21 20:40:23 crc kubenswrapper[4727]: I1121 20:40:23.865121 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 20:40:23 crc kubenswrapper[4727]: I1121 20:40:23.869490 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 21 20:40:23 crc kubenswrapper[4727]: I1121 20:40:23.871502 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-79n78"] Nov 21 20:40:23 crc kubenswrapper[4727]: I1121 20:40:23.981085 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7ce3b44d-00b3-4cf9-bc3f-325ffb4df768-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-79n78\" (UID: \"7ce3b44d-00b3-4cf9-bc3f-325ffb4df768\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-79n78" Nov 21 20:40:23 crc kubenswrapper[4727]: I1121 20:40:23.981377 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ce3b44d-00b3-4cf9-bc3f-325ffb4df768-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-79n78\" (UID: \"7ce3b44d-00b3-4cf9-bc3f-325ffb4df768\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-79n78" Nov 21 20:40:23 crc kubenswrapper[4727]: I1121 20:40:23.981537 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwlg7\" (UniqueName: \"kubernetes.io/projected/7ce3b44d-00b3-4cf9-bc3f-325ffb4df768-kube-api-access-qwlg7\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-79n78\" (UID: \"7ce3b44d-00b3-4cf9-bc3f-325ffb4df768\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-79n78" Nov 21 20:40:24 crc kubenswrapper[4727]: I1121 20:40:24.084122 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7ce3b44d-00b3-4cf9-bc3f-325ffb4df768-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-79n78\" (UID: \"7ce3b44d-00b3-4cf9-bc3f-325ffb4df768\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-79n78" Nov 21 20:40:24 crc kubenswrapper[4727]: I1121 20:40:24.084426 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ce3b44d-00b3-4cf9-bc3f-325ffb4df768-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-79n78\" (UID: \"7ce3b44d-00b3-4cf9-bc3f-325ffb4df768\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-79n78" Nov 21 20:40:24 crc kubenswrapper[4727]: I1121 20:40:24.084574 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwlg7\" (UniqueName: \"kubernetes.io/projected/7ce3b44d-00b3-4cf9-bc3f-325ffb4df768-kube-api-access-qwlg7\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-79n78\" (UID: \"7ce3b44d-00b3-4cf9-bc3f-325ffb4df768\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-79n78" Nov 21 20:40:24 crc kubenswrapper[4727]: I1121 20:40:24.088388 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7ce3b44d-00b3-4cf9-bc3f-325ffb4df768-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-79n78\" (UID: \"7ce3b44d-00b3-4cf9-bc3f-325ffb4df768\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-79n78" Nov 21 20:40:24 crc kubenswrapper[4727]: I1121 20:40:24.088573 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ce3b44d-00b3-4cf9-bc3f-325ffb4df768-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-79n78\" (UID: \"7ce3b44d-00b3-4cf9-bc3f-325ffb4df768\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-79n78" Nov 21 20:40:24 crc kubenswrapper[4727]: I1121 20:40:24.100332 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwlg7\" (UniqueName: \"kubernetes.io/projected/7ce3b44d-00b3-4cf9-bc3f-325ffb4df768-kube-api-access-qwlg7\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-79n78\" (UID: \"7ce3b44d-00b3-4cf9-bc3f-325ffb4df768\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-79n78" Nov 21 20:40:24 crc kubenswrapper[4727]: I1121 20:40:24.183294 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-79n78" Nov 21 20:40:24 crc kubenswrapper[4727]: I1121 20:40:24.678333 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-79n78"] Nov 21 20:40:24 crc kubenswrapper[4727]: I1121 20:40:24.760755 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-79n78" event={"ID":"7ce3b44d-00b3-4cf9-bc3f-325ffb4df768","Type":"ContainerStarted","Data":"336f76074ecf652792848a39ffc94d30158df0f6688d2a343362f756aec190a8"} Nov 21 20:40:25 crc kubenswrapper[4727]: I1121 20:40:25.773268 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-79n78" event={"ID":"7ce3b44d-00b3-4cf9-bc3f-325ffb4df768","Type":"ContainerStarted","Data":"87d17c3e60bd56fbb272093508dc409efc0ba39611f119d72fc27c23da764526"} Nov 21 20:40:25 crc kubenswrapper[4727]: I1121 20:40:25.792293 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-79n78" podStartSLOduration=2.356288961 podStartE2EDuration="2.792273899s" podCreationTimestamp="2025-11-21 20:40:23 +0000 UTC" firstStartedPulling="2025-11-21 20:40:24.686827446 +0000 UTC m=+2029.873012490" lastFinishedPulling="2025-11-21 20:40:25.122812384 +0000 UTC m=+2030.308997428" observedRunningTime="2025-11-21 20:40:25.785345022 +0000 UTC m=+2030.971530066" watchObservedRunningTime="2025-11-21 20:40:25.792273899 +0000 UTC m=+2030.978458943" Nov 21 20:40:33 crc kubenswrapper[4727]: I1121 20:40:33.863482 4727 generic.go:334] "Generic (PLEG): container finished" podID="7ce3b44d-00b3-4cf9-bc3f-325ffb4df768" containerID="87d17c3e60bd56fbb272093508dc409efc0ba39611f119d72fc27c23da764526" exitCode=0 Nov 21 20:40:33 crc kubenswrapper[4727]: I1121 20:40:33.863815 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-79n78" event={"ID":"7ce3b44d-00b3-4cf9-bc3f-325ffb4df768","Type":"ContainerDied","Data":"87d17c3e60bd56fbb272093508dc409efc0ba39611f119d72fc27c23da764526"} Nov 21 20:40:35 crc kubenswrapper[4727]: I1121 20:40:35.355485 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-79n78" Nov 21 20:40:35 crc kubenswrapper[4727]: I1121 20:40:35.463061 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ce3b44d-00b3-4cf9-bc3f-325ffb4df768-inventory\") pod \"7ce3b44d-00b3-4cf9-bc3f-325ffb4df768\" (UID: \"7ce3b44d-00b3-4cf9-bc3f-325ffb4df768\") " Nov 21 20:40:35 crc kubenswrapper[4727]: I1121 20:40:35.463431 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwlg7\" (UniqueName: \"kubernetes.io/projected/7ce3b44d-00b3-4cf9-bc3f-325ffb4df768-kube-api-access-qwlg7\") pod \"7ce3b44d-00b3-4cf9-bc3f-325ffb4df768\" (UID: \"7ce3b44d-00b3-4cf9-bc3f-325ffb4df768\") " Nov 21 20:40:35 crc kubenswrapper[4727]: I1121 20:40:35.464337 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7ce3b44d-00b3-4cf9-bc3f-325ffb4df768-ssh-key\") pod \"7ce3b44d-00b3-4cf9-bc3f-325ffb4df768\" (UID: \"7ce3b44d-00b3-4cf9-bc3f-325ffb4df768\") " Nov 21 20:40:35 crc kubenswrapper[4727]: I1121 20:40:35.470331 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ce3b44d-00b3-4cf9-bc3f-325ffb4df768-kube-api-access-qwlg7" (OuterVolumeSpecName: "kube-api-access-qwlg7") pod "7ce3b44d-00b3-4cf9-bc3f-325ffb4df768" (UID: "7ce3b44d-00b3-4cf9-bc3f-325ffb4df768"). InnerVolumeSpecName "kube-api-access-qwlg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:40:35 crc kubenswrapper[4727]: I1121 20:40:35.494101 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ce3b44d-00b3-4cf9-bc3f-325ffb4df768-inventory" (OuterVolumeSpecName: "inventory") pod "7ce3b44d-00b3-4cf9-bc3f-325ffb4df768" (UID: "7ce3b44d-00b3-4cf9-bc3f-325ffb4df768"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:40:35 crc kubenswrapper[4727]: I1121 20:40:35.523664 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ce3b44d-00b3-4cf9-bc3f-325ffb4df768-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "7ce3b44d-00b3-4cf9-bc3f-325ffb4df768" (UID: "7ce3b44d-00b3-4cf9-bc3f-325ffb4df768"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:40:35 crc kubenswrapper[4727]: I1121 20:40:35.567515 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwlg7\" (UniqueName: \"kubernetes.io/projected/7ce3b44d-00b3-4cf9-bc3f-325ffb4df768-kube-api-access-qwlg7\") on node \"crc\" DevicePath \"\"" Nov 21 20:40:35 crc kubenswrapper[4727]: I1121 20:40:35.567548 4727 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7ce3b44d-00b3-4cf9-bc3f-325ffb4df768-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 20:40:35 crc kubenswrapper[4727]: I1121 20:40:35.567558 4727 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ce3b44d-00b3-4cf9-bc3f-325ffb4df768-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 20:40:35 crc kubenswrapper[4727]: I1121 20:40:35.892073 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-79n78" 
event={"ID":"7ce3b44d-00b3-4cf9-bc3f-325ffb4df768","Type":"ContainerDied","Data":"336f76074ecf652792848a39ffc94d30158df0f6688d2a343362f756aec190a8"} Nov 21 20:40:35 crc kubenswrapper[4727]: I1121 20:40:35.892369 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="336f76074ecf652792848a39ffc94d30158df0f6688d2a343362f756aec190a8" Nov 21 20:40:35 crc kubenswrapper[4727]: I1121 20:40:35.892189 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-79n78" Nov 21 20:40:35 crc kubenswrapper[4727]: I1121 20:40:35.967947 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-trxzs"] Nov 21 20:40:35 crc kubenswrapper[4727]: E1121 20:40:35.968676 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ce3b44d-00b3-4cf9-bc3f-325ffb4df768" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 21 20:40:35 crc kubenswrapper[4727]: I1121 20:40:35.968701 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ce3b44d-00b3-4cf9-bc3f-325ffb4df768" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 21 20:40:35 crc kubenswrapper[4727]: I1121 20:40:35.968904 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ce3b44d-00b3-4cf9-bc3f-325ffb4df768" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 21 20:40:35 crc kubenswrapper[4727]: I1121 20:40:35.969706 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-trxzs" Nov 21 20:40:35 crc kubenswrapper[4727]: I1121 20:40:35.974945 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 20:40:35 crc kubenswrapper[4727]: I1121 20:40:35.975295 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 21 20:40:35 crc kubenswrapper[4727]: I1121 20:40:35.975489 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9h8bp" Nov 21 20:40:35 crc kubenswrapper[4727]: I1121 20:40:35.975680 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 21 20:40:35 crc kubenswrapper[4727]: I1121 20:40:35.994377 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-trxzs"] Nov 21 20:40:36 crc kubenswrapper[4727]: I1121 20:40:36.077712 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3dd3247d-dccc-453a-8bf5-d967039f82e5-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-trxzs\" (UID: \"3dd3247d-dccc-453a-8bf5-d967039f82e5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-trxzs" Nov 21 20:40:36 crc kubenswrapper[4727]: I1121 20:40:36.077783 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c85sb\" (UniqueName: \"kubernetes.io/projected/3dd3247d-dccc-453a-8bf5-d967039f82e5-kube-api-access-c85sb\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-trxzs\" (UID: \"3dd3247d-dccc-453a-8bf5-d967039f82e5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-trxzs" Nov 21 20:40:36 crc kubenswrapper[4727]: I1121 20:40:36.077838 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3dd3247d-dccc-453a-8bf5-d967039f82e5-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-trxzs\" (UID: \"3dd3247d-dccc-453a-8bf5-d967039f82e5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-trxzs" Nov 21 20:40:36 crc kubenswrapper[4727]: I1121 20:40:36.179555 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c85sb\" (UniqueName: \"kubernetes.io/projected/3dd3247d-dccc-453a-8bf5-d967039f82e5-kube-api-access-c85sb\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-trxzs\" (UID: \"3dd3247d-dccc-453a-8bf5-d967039f82e5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-trxzs" Nov 21 20:40:36 crc kubenswrapper[4727]: I1121 20:40:36.179640 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3dd3247d-dccc-453a-8bf5-d967039f82e5-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-trxzs\" (UID: \"3dd3247d-dccc-453a-8bf5-d967039f82e5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-trxzs" Nov 21 20:40:36 crc kubenswrapper[4727]: I1121 20:40:36.179812 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3dd3247d-dccc-453a-8bf5-d967039f82e5-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-trxzs\" (UID: \"3dd3247d-dccc-453a-8bf5-d967039f82e5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-trxzs" Nov 21 20:40:36 crc kubenswrapper[4727]: I1121 20:40:36.184347 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3dd3247d-dccc-453a-8bf5-d967039f82e5-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-trxzs\" (UID: \"3dd3247d-dccc-453a-8bf5-d967039f82e5\") " 
pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-trxzs" Nov 21 20:40:36 crc kubenswrapper[4727]: I1121 20:40:36.184663 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3dd3247d-dccc-453a-8bf5-d967039f82e5-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-trxzs\" (UID: \"3dd3247d-dccc-453a-8bf5-d967039f82e5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-trxzs" Nov 21 20:40:36 crc kubenswrapper[4727]: I1121 20:40:36.197703 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c85sb\" (UniqueName: \"kubernetes.io/projected/3dd3247d-dccc-453a-8bf5-d967039f82e5-kube-api-access-c85sb\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-trxzs\" (UID: \"3dd3247d-dccc-453a-8bf5-d967039f82e5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-trxzs" Nov 21 20:40:36 crc kubenswrapper[4727]: I1121 20:40:36.306697 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-trxzs" Nov 21 20:40:36 crc kubenswrapper[4727]: I1121 20:40:36.851914 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-trxzs"] Nov 21 20:40:36 crc kubenswrapper[4727]: I1121 20:40:36.928722 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-trxzs" event={"ID":"3dd3247d-dccc-453a-8bf5-d967039f82e5","Type":"ContainerStarted","Data":"8af1d104c1df8987db3950a077d765c39658f47898bda3cc7d115a666e99a926"} Nov 21 20:40:37 crc kubenswrapper[4727]: I1121 20:40:37.937981 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-trxzs" event={"ID":"3dd3247d-dccc-453a-8bf5-d967039f82e5","Type":"ContainerStarted","Data":"34580cc279f9e789a00a4c1fcde4312a1376372d2e7b4cca87b4eff36d3b0e47"} Nov 21 20:40:37 crc kubenswrapper[4727]: I1121 20:40:37.959693 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-trxzs" podStartSLOduration=2.511731723 podStartE2EDuration="2.959666109s" podCreationTimestamp="2025-11-21 20:40:35 +0000 UTC" firstStartedPulling="2025-11-21 20:40:36.854330148 +0000 UTC m=+2042.040515222" lastFinishedPulling="2025-11-21 20:40:37.302264564 +0000 UTC m=+2042.488449608" observedRunningTime="2025-11-21 20:40:37.95227294 +0000 UTC m=+2043.138457984" watchObservedRunningTime="2025-11-21 20:40:37.959666109 +0000 UTC m=+2043.145851173" Nov 21 20:40:43 crc kubenswrapper[4727]: I1121 20:40:43.335255 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 20:40:43 crc kubenswrapper[4727]: 
I1121 20:40:43.335851 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 20:40:46 crc kubenswrapper[4727]: I1121 20:40:46.735252 4727 scope.go:117] "RemoveContainer" containerID="832c9b9c75b623f0eff4096044448f6cd1876795d47a0563d1d16af0e6361337" Nov 21 20:40:47 crc kubenswrapper[4727]: I1121 20:40:47.048147 4727 generic.go:334] "Generic (PLEG): container finished" podID="3dd3247d-dccc-453a-8bf5-d967039f82e5" containerID="34580cc279f9e789a00a4c1fcde4312a1376372d2e7b4cca87b4eff36d3b0e47" exitCode=0 Nov 21 20:40:47 crc kubenswrapper[4727]: I1121 20:40:47.048197 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-trxzs" event={"ID":"3dd3247d-dccc-453a-8bf5-d967039f82e5","Type":"ContainerDied","Data":"34580cc279f9e789a00a4c1fcde4312a1376372d2e7b4cca87b4eff36d3b0e47"} Nov 21 20:40:48 crc kubenswrapper[4727]: I1121 20:40:48.574302 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-trxzs" Nov 21 20:40:48 crc kubenswrapper[4727]: I1121 20:40:48.670689 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3dd3247d-dccc-453a-8bf5-d967039f82e5-inventory\") pod \"3dd3247d-dccc-453a-8bf5-d967039f82e5\" (UID: \"3dd3247d-dccc-453a-8bf5-d967039f82e5\") " Nov 21 20:40:48 crc kubenswrapper[4727]: I1121 20:40:48.670863 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c85sb\" (UniqueName: \"kubernetes.io/projected/3dd3247d-dccc-453a-8bf5-d967039f82e5-kube-api-access-c85sb\") pod \"3dd3247d-dccc-453a-8bf5-d967039f82e5\" (UID: \"3dd3247d-dccc-453a-8bf5-d967039f82e5\") " Nov 21 20:40:48 crc kubenswrapper[4727]: I1121 20:40:48.670954 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3dd3247d-dccc-453a-8bf5-d967039f82e5-ssh-key\") pod \"3dd3247d-dccc-453a-8bf5-d967039f82e5\" (UID: \"3dd3247d-dccc-453a-8bf5-d967039f82e5\") " Nov 21 20:40:48 crc kubenswrapper[4727]: I1121 20:40:48.676506 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3dd3247d-dccc-453a-8bf5-d967039f82e5-kube-api-access-c85sb" (OuterVolumeSpecName: "kube-api-access-c85sb") pod "3dd3247d-dccc-453a-8bf5-d967039f82e5" (UID: "3dd3247d-dccc-453a-8bf5-d967039f82e5"). InnerVolumeSpecName "kube-api-access-c85sb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:40:48 crc kubenswrapper[4727]: I1121 20:40:48.704150 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3dd3247d-dccc-453a-8bf5-d967039f82e5-inventory" (OuterVolumeSpecName: "inventory") pod "3dd3247d-dccc-453a-8bf5-d967039f82e5" (UID: "3dd3247d-dccc-453a-8bf5-d967039f82e5"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:40:48 crc kubenswrapper[4727]: I1121 20:40:48.706768 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3dd3247d-dccc-453a-8bf5-d967039f82e5-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "3dd3247d-dccc-453a-8bf5-d967039f82e5" (UID: "3dd3247d-dccc-453a-8bf5-d967039f82e5"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:40:48 crc kubenswrapper[4727]: I1121 20:40:48.774377 4727 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3dd3247d-dccc-453a-8bf5-d967039f82e5-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 20:40:48 crc kubenswrapper[4727]: I1121 20:40:48.774416 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c85sb\" (UniqueName: \"kubernetes.io/projected/3dd3247d-dccc-453a-8bf5-d967039f82e5-kube-api-access-c85sb\") on node \"crc\" DevicePath \"\"" Nov 21 20:40:48 crc kubenswrapper[4727]: I1121 20:40:48.774434 4727 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3dd3247d-dccc-453a-8bf5-d967039f82e5-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 20:40:49 crc kubenswrapper[4727]: I1121 20:40:49.068062 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-trxzs" event={"ID":"3dd3247d-dccc-453a-8bf5-d967039f82e5","Type":"ContainerDied","Data":"8af1d104c1df8987db3950a077d765c39658f47898bda3cc7d115a666e99a926"} Nov 21 20:40:49 crc kubenswrapper[4727]: I1121 20:40:49.068150 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8af1d104c1df8987db3950a077d765c39658f47898bda3cc7d115a666e99a926" Nov 21 20:40:49 crc kubenswrapper[4727]: I1121 20:40:49.068234 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-trxzs" Nov 21 20:40:49 crc kubenswrapper[4727]: I1121 20:40:49.155880 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kffp8"] Nov 21 20:40:49 crc kubenswrapper[4727]: E1121 20:40:49.156387 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3dd3247d-dccc-453a-8bf5-d967039f82e5" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 21 20:40:49 crc kubenswrapper[4727]: I1121 20:40:49.156405 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dd3247d-dccc-453a-8bf5-d967039f82e5" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 21 20:40:49 crc kubenswrapper[4727]: I1121 20:40:49.156642 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="3dd3247d-dccc-453a-8bf5-d967039f82e5" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 21 20:40:49 crc kubenswrapper[4727]: I1121 20:40:49.157441 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kffp8" Nov 21 20:40:49 crc kubenswrapper[4727]: I1121 20:40:49.159496 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Nov 21 20:40:49 crc kubenswrapper[4727]: I1121 20:40:49.159977 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 21 20:40:49 crc kubenswrapper[4727]: I1121 20:40:49.161644 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 21 20:40:49 crc kubenswrapper[4727]: I1121 20:40:49.162762 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 20:40:49 crc kubenswrapper[4727]: I1121 20:40:49.163121 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" Nov 21 20:40:49 crc kubenswrapper[4727]: I1121 20:40:49.163287 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Nov 21 20:40:49 crc kubenswrapper[4727]: I1121 20:40:49.163488 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Nov 21 20:40:49 crc kubenswrapper[4727]: I1121 20:40:49.165746 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9h8bp" Nov 21 20:40:49 crc kubenswrapper[4727]: I1121 20:40:49.168366 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Nov 21 20:40:49 crc kubenswrapper[4727]: I1121 20:40:49.171105 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kffp8"] Nov 21 20:40:49 crc 
kubenswrapper[4727]: I1121 20:40:49.286298 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njd9b\" (UniqueName: \"kubernetes.io/projected/8f6ac347-e773-4986-bf2b-54a1bb00047f-kube-api-access-njd9b\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kffp8\" (UID: \"8f6ac347-e773-4986-bf2b-54a1bb00047f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kffp8" Nov 21 20:40:49 crc kubenswrapper[4727]: I1121 20:40:49.286654 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f6ac347-e773-4986-bf2b-54a1bb00047f-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kffp8\" (UID: \"8f6ac347-e773-4986-bf2b-54a1bb00047f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kffp8" Nov 21 20:40:49 crc kubenswrapper[4727]: I1121 20:40:49.286779 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f6ac347-e773-4986-bf2b-54a1bb00047f-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kffp8\" (UID: \"8f6ac347-e773-4986-bf2b-54a1bb00047f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kffp8" Nov 21 20:40:49 crc kubenswrapper[4727]: I1121 20:40:49.286938 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/8f6ac347-e773-4986-bf2b-54a1bb00047f-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kffp8\" (UID: \"8f6ac347-e773-4986-bf2b-54a1bb00047f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kffp8" Nov 21 20:40:49 crc kubenswrapper[4727]: I1121 
20:40:49.287051 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f6ac347-e773-4986-bf2b-54a1bb00047f-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kffp8\" (UID: \"8f6ac347-e773-4986-bf2b-54a1bb00047f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kffp8" Nov 21 20:40:49 crc kubenswrapper[4727]: I1121 20:40:49.287241 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8f6ac347-e773-4986-bf2b-54a1bb00047f-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kffp8\" (UID: \"8f6ac347-e773-4986-bf2b-54a1bb00047f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kffp8" Nov 21 20:40:49 crc kubenswrapper[4727]: I1121 20:40:49.287373 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f6ac347-e773-4986-bf2b-54a1bb00047f-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kffp8\" (UID: \"8f6ac347-e773-4986-bf2b-54a1bb00047f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kffp8" Nov 21 20:40:49 crc kubenswrapper[4727]: I1121 20:40:49.287465 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/8f6ac347-e773-4986-bf2b-54a1bb00047f-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kffp8\" (UID: \"8f6ac347-e773-4986-bf2b-54a1bb00047f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kffp8" Nov 21 20:40:49 crc kubenswrapper[4727]: I1121 20:40:49.287594 4727 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f6ac347-e773-4986-bf2b-54a1bb00047f-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kffp8\" (UID: \"8f6ac347-e773-4986-bf2b-54a1bb00047f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kffp8" Nov 21 20:40:49 crc kubenswrapper[4727]: I1121 20:40:49.287686 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f6ac347-e773-4986-bf2b-54a1bb00047f-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kffp8\" (UID: \"8f6ac347-e773-4986-bf2b-54a1bb00047f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kffp8" Nov 21 20:40:49 crc kubenswrapper[4727]: I1121 20:40:49.287806 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8f6ac347-e773-4986-bf2b-54a1bb00047f-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kffp8\" (UID: \"8f6ac347-e773-4986-bf2b-54a1bb00047f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kffp8" Nov 21 20:40:49 crc kubenswrapper[4727]: I1121 20:40:49.287914 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/8f6ac347-e773-4986-bf2b-54a1bb00047f-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kffp8\" (UID: \"8f6ac347-e773-4986-bf2b-54a1bb00047f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kffp8" Nov 21 20:40:49 crc kubenswrapper[4727]: I1121 20:40:49.288096 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f6ac347-e773-4986-bf2b-54a1bb00047f-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kffp8\" (UID: \"8f6ac347-e773-4986-bf2b-54a1bb00047f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kffp8" Nov 21 20:40:49 crc kubenswrapper[4727]: I1121 20:40:49.288190 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/8f6ac347-e773-4986-bf2b-54a1bb00047f-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kffp8\" (UID: \"8f6ac347-e773-4986-bf2b-54a1bb00047f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kffp8" Nov 21 20:40:49 crc kubenswrapper[4727]: I1121 20:40:49.288330 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f6ac347-e773-4986-bf2b-54a1bb00047f-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kffp8\" (UID: \"8f6ac347-e773-4986-bf2b-54a1bb00047f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kffp8" Nov 21 20:40:49 crc kubenswrapper[4727]: I1121 20:40:49.288483 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/8f6ac347-e773-4986-bf2b-54a1bb00047f-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kffp8\" (UID: \"8f6ac347-e773-4986-bf2b-54a1bb00047f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kffp8" Nov 21 20:40:49 crc kubenswrapper[4727]: I1121 
20:40:49.391071 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f6ac347-e773-4986-bf2b-54a1bb00047f-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kffp8\" (UID: \"8f6ac347-e773-4986-bf2b-54a1bb00047f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kffp8" Nov 21 20:40:49 crc kubenswrapper[4727]: I1121 20:40:49.391130 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f6ac347-e773-4986-bf2b-54a1bb00047f-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kffp8\" (UID: \"8f6ac347-e773-4986-bf2b-54a1bb00047f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kffp8" Nov 21 20:40:49 crc kubenswrapper[4727]: I1121 20:40:49.391193 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/8f6ac347-e773-4986-bf2b-54a1bb00047f-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kffp8\" (UID: \"8f6ac347-e773-4986-bf2b-54a1bb00047f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kffp8" Nov 21 20:40:49 crc kubenswrapper[4727]: I1121 20:40:49.391232 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f6ac347-e773-4986-bf2b-54a1bb00047f-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kffp8\" (UID: \"8f6ac347-e773-4986-bf2b-54a1bb00047f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kffp8" Nov 21 20:40:49 crc kubenswrapper[4727]: I1121 20:40:49.391283 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"inventory\" (UniqueName: \"kubernetes.io/secret/8f6ac347-e773-4986-bf2b-54a1bb00047f-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kffp8\" (UID: \"8f6ac347-e773-4986-bf2b-54a1bb00047f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kffp8" Nov 21 20:40:49 crc kubenswrapper[4727]: I1121 20:40:49.391309 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f6ac347-e773-4986-bf2b-54a1bb00047f-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kffp8\" (UID: \"8f6ac347-e773-4986-bf2b-54a1bb00047f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kffp8" Nov 21 20:40:49 crc kubenswrapper[4727]: I1121 20:40:49.391329 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/8f6ac347-e773-4986-bf2b-54a1bb00047f-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kffp8\" (UID: \"8f6ac347-e773-4986-bf2b-54a1bb00047f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kffp8" Nov 21 20:40:49 crc kubenswrapper[4727]: I1121 20:40:49.391359 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f6ac347-e773-4986-bf2b-54a1bb00047f-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kffp8\" (UID: \"8f6ac347-e773-4986-bf2b-54a1bb00047f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kffp8" Nov 21 20:40:49 crc kubenswrapper[4727]: I1121 20:40:49.391378 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8f6ac347-e773-4986-bf2b-54a1bb00047f-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kffp8\" (UID: \"8f6ac347-e773-4986-bf2b-54a1bb00047f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kffp8" Nov 21 20:40:49 crc kubenswrapper[4727]: I1121 20:40:49.391406 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8f6ac347-e773-4986-bf2b-54a1bb00047f-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kffp8\" (UID: \"8f6ac347-e773-4986-bf2b-54a1bb00047f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kffp8" Nov 21 20:40:49 crc kubenswrapper[4727]: I1121 20:40:49.391427 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/8f6ac347-e773-4986-bf2b-54a1bb00047f-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kffp8\" (UID: \"8f6ac347-e773-4986-bf2b-54a1bb00047f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kffp8" Nov 21 20:40:49 crc kubenswrapper[4727]: I1121 20:40:49.391472 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f6ac347-e773-4986-bf2b-54a1bb00047f-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kffp8\" (UID: \"8f6ac347-e773-4986-bf2b-54a1bb00047f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kffp8" Nov 21 20:40:49 crc kubenswrapper[4727]: I1121 20:40:49.391490 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/8f6ac347-e773-4986-bf2b-54a1bb00047f-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kffp8\" (UID: \"8f6ac347-e773-4986-bf2b-54a1bb00047f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kffp8" Nov 21 20:40:49 crc kubenswrapper[4727]: I1121 20:40:49.391513 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f6ac347-e773-4986-bf2b-54a1bb00047f-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kffp8\" (UID: \"8f6ac347-e773-4986-bf2b-54a1bb00047f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kffp8" Nov 21 20:40:49 crc kubenswrapper[4727]: I1121 20:40:49.391555 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/8f6ac347-e773-4986-bf2b-54a1bb00047f-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kffp8\" (UID: \"8f6ac347-e773-4986-bf2b-54a1bb00047f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kffp8" Nov 21 20:40:49 crc kubenswrapper[4727]: I1121 20:40:49.391576 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njd9b\" (UniqueName: \"kubernetes.io/projected/8f6ac347-e773-4986-bf2b-54a1bb00047f-kube-api-access-njd9b\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kffp8\" (UID: \"8f6ac347-e773-4986-bf2b-54a1bb00047f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kffp8" Nov 21 20:40:49 crc kubenswrapper[4727]: I1121 20:40:49.395640 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8f6ac347-e773-4986-bf2b-54a1bb00047f-inventory\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-kffp8\" (UID: \"8f6ac347-e773-4986-bf2b-54a1bb00047f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kffp8" Nov 21 20:40:49 crc kubenswrapper[4727]: I1121 20:40:49.396775 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f6ac347-e773-4986-bf2b-54a1bb00047f-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kffp8\" (UID: \"8f6ac347-e773-4986-bf2b-54a1bb00047f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kffp8" Nov 21 20:40:49 crc kubenswrapper[4727]: I1121 20:40:49.396870 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/8f6ac347-e773-4986-bf2b-54a1bb00047f-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kffp8\" (UID: \"8f6ac347-e773-4986-bf2b-54a1bb00047f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kffp8" Nov 21 20:40:49 crc kubenswrapper[4727]: I1121 20:40:49.396919 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8f6ac347-e773-4986-bf2b-54a1bb00047f-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kffp8\" (UID: \"8f6ac347-e773-4986-bf2b-54a1bb00047f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kffp8" Nov 21 20:40:49 crc kubenswrapper[4727]: I1121 20:40:49.397075 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/8f6ac347-e773-4986-bf2b-54a1bb00047f-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kffp8\" (UID: 
\"8f6ac347-e773-4986-bf2b-54a1bb00047f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kffp8" Nov 21 20:40:49 crc kubenswrapper[4727]: I1121 20:40:49.398287 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f6ac347-e773-4986-bf2b-54a1bb00047f-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kffp8\" (UID: \"8f6ac347-e773-4986-bf2b-54a1bb00047f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kffp8" Nov 21 20:40:49 crc kubenswrapper[4727]: I1121 20:40:49.398309 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/8f6ac347-e773-4986-bf2b-54a1bb00047f-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kffp8\" (UID: \"8f6ac347-e773-4986-bf2b-54a1bb00047f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kffp8" Nov 21 20:40:49 crc kubenswrapper[4727]: I1121 20:40:49.399330 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f6ac347-e773-4986-bf2b-54a1bb00047f-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kffp8\" (UID: \"8f6ac347-e773-4986-bf2b-54a1bb00047f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kffp8" Nov 21 20:40:49 crc kubenswrapper[4727]: I1121 20:40:49.412516 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f6ac347-e773-4986-bf2b-54a1bb00047f-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kffp8\" (UID: \"8f6ac347-e773-4986-bf2b-54a1bb00047f\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kffp8" Nov 21 20:40:49 crc kubenswrapper[4727]: I1121 20:40:49.412794 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f6ac347-e773-4986-bf2b-54a1bb00047f-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kffp8\" (UID: \"8f6ac347-e773-4986-bf2b-54a1bb00047f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kffp8" Nov 21 20:40:49 crc kubenswrapper[4727]: I1121 20:40:49.414592 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f6ac347-e773-4986-bf2b-54a1bb00047f-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kffp8\" (UID: \"8f6ac347-e773-4986-bf2b-54a1bb00047f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kffp8" Nov 21 20:40:49 crc kubenswrapper[4727]: I1121 20:40:49.414624 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/8f6ac347-e773-4986-bf2b-54a1bb00047f-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kffp8\" (UID: \"8f6ac347-e773-4986-bf2b-54a1bb00047f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kffp8" Nov 21 20:40:49 crc kubenswrapper[4727]: I1121 20:40:49.415162 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f6ac347-e773-4986-bf2b-54a1bb00047f-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kffp8\" (UID: \"8f6ac347-e773-4986-bf2b-54a1bb00047f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kffp8" Nov 21 20:40:49 crc kubenswrapper[4727]: I1121 
20:40:49.415165 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njd9b\" (UniqueName: \"kubernetes.io/projected/8f6ac347-e773-4986-bf2b-54a1bb00047f-kube-api-access-njd9b\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kffp8\" (UID: \"8f6ac347-e773-4986-bf2b-54a1bb00047f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kffp8" Nov 21 20:40:49 crc kubenswrapper[4727]: I1121 20:40:49.426947 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f6ac347-e773-4986-bf2b-54a1bb00047f-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kffp8\" (UID: \"8f6ac347-e773-4986-bf2b-54a1bb00047f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kffp8" Nov 21 20:40:49 crc kubenswrapper[4727]: I1121 20:40:49.429745 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/8f6ac347-e773-4986-bf2b-54a1bb00047f-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kffp8\" (UID: \"8f6ac347-e773-4986-bf2b-54a1bb00047f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kffp8" Nov 21 20:40:49 crc kubenswrapper[4727]: I1121 20:40:49.478086 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kffp8" Nov 21 20:40:50 crc kubenswrapper[4727]: I1121 20:40:50.059892 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kffp8"] Nov 21 20:40:50 crc kubenswrapper[4727]: I1121 20:40:50.086630 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kffp8" event={"ID":"8f6ac347-e773-4986-bf2b-54a1bb00047f","Type":"ContainerStarted","Data":"b8cafcbe899266ebcd65b955fcb81b2d56d4d78ea3036bf38a7e81f24bae1483"} Nov 21 20:40:51 crc kubenswrapper[4727]: I1121 20:40:51.098608 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kffp8" event={"ID":"8f6ac347-e773-4986-bf2b-54a1bb00047f","Type":"ContainerStarted","Data":"9f4ea2e04aa1a54dde7cfe564385da21bfbd7c92d87d24274aa611ca42b23994"} Nov 21 20:40:51 crc kubenswrapper[4727]: I1121 20:40:51.127384 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kffp8" podStartSLOduration=1.695842627 podStartE2EDuration="2.127364387s" podCreationTimestamp="2025-11-21 20:40:49 +0000 UTC" firstStartedPulling="2025-11-21 20:40:50.072616857 +0000 UTC m=+2055.258801901" lastFinishedPulling="2025-11-21 20:40:50.504138617 +0000 UTC m=+2055.690323661" observedRunningTime="2025-11-21 20:40:51.125488032 +0000 UTC m=+2056.311673086" watchObservedRunningTime="2025-11-21 20:40:51.127364387 +0000 UTC m=+2056.313549421" Nov 21 20:41:02 crc kubenswrapper[4727]: I1121 20:41:02.073425 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-z2ktm"] Nov 21 20:41:02 crc kubenswrapper[4727]: I1121 20:41:02.076949 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-z2ktm" Nov 21 20:41:02 crc kubenswrapper[4727]: I1121 20:41:02.083757 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z2ktm"] Nov 21 20:41:02 crc kubenswrapper[4727]: I1121 20:41:02.192698 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1a0eb65-e5d5-4568-a3d5-8b69c9912970-catalog-content\") pod \"community-operators-z2ktm\" (UID: \"d1a0eb65-e5d5-4568-a3d5-8b69c9912970\") " pod="openshift-marketplace/community-operators-z2ktm" Nov 21 20:41:02 crc kubenswrapper[4727]: I1121 20:41:02.192794 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b42xg\" (UniqueName: \"kubernetes.io/projected/d1a0eb65-e5d5-4568-a3d5-8b69c9912970-kube-api-access-b42xg\") pod \"community-operators-z2ktm\" (UID: \"d1a0eb65-e5d5-4568-a3d5-8b69c9912970\") " pod="openshift-marketplace/community-operators-z2ktm" Nov 21 20:41:02 crc kubenswrapper[4727]: I1121 20:41:02.192883 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1a0eb65-e5d5-4568-a3d5-8b69c9912970-utilities\") pod \"community-operators-z2ktm\" (UID: \"d1a0eb65-e5d5-4568-a3d5-8b69c9912970\") " pod="openshift-marketplace/community-operators-z2ktm" Nov 21 20:41:02 crc kubenswrapper[4727]: I1121 20:41:02.296354 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1a0eb65-e5d5-4568-a3d5-8b69c9912970-catalog-content\") pod \"community-operators-z2ktm\" (UID: \"d1a0eb65-e5d5-4568-a3d5-8b69c9912970\") " pod="openshift-marketplace/community-operators-z2ktm" Nov 21 20:41:02 crc kubenswrapper[4727]: I1121 20:41:02.296475 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-b42xg\" (UniqueName: \"kubernetes.io/projected/d1a0eb65-e5d5-4568-a3d5-8b69c9912970-kube-api-access-b42xg\") pod \"community-operators-z2ktm\" (UID: \"d1a0eb65-e5d5-4568-a3d5-8b69c9912970\") " pod="openshift-marketplace/community-operators-z2ktm" Nov 21 20:41:02 crc kubenswrapper[4727]: I1121 20:41:02.296607 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1a0eb65-e5d5-4568-a3d5-8b69c9912970-utilities\") pod \"community-operators-z2ktm\" (UID: \"d1a0eb65-e5d5-4568-a3d5-8b69c9912970\") " pod="openshift-marketplace/community-operators-z2ktm" Nov 21 20:41:02 crc kubenswrapper[4727]: I1121 20:41:02.297237 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1a0eb65-e5d5-4568-a3d5-8b69c9912970-catalog-content\") pod \"community-operators-z2ktm\" (UID: \"d1a0eb65-e5d5-4568-a3d5-8b69c9912970\") " pod="openshift-marketplace/community-operators-z2ktm" Nov 21 20:41:02 crc kubenswrapper[4727]: I1121 20:41:02.297279 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1a0eb65-e5d5-4568-a3d5-8b69c9912970-utilities\") pod \"community-operators-z2ktm\" (UID: \"d1a0eb65-e5d5-4568-a3d5-8b69c9912970\") " pod="openshift-marketplace/community-operators-z2ktm" Nov 21 20:41:02 crc kubenswrapper[4727]: I1121 20:41:02.319700 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b42xg\" (UniqueName: \"kubernetes.io/projected/d1a0eb65-e5d5-4568-a3d5-8b69c9912970-kube-api-access-b42xg\") pod \"community-operators-z2ktm\" (UID: \"d1a0eb65-e5d5-4568-a3d5-8b69c9912970\") " pod="openshift-marketplace/community-operators-z2ktm" Nov 21 20:41:02 crc kubenswrapper[4727]: I1121 20:41:02.409875 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-z2ktm" Nov 21 20:41:03 crc kubenswrapper[4727]: I1121 20:41:03.069614 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z2ktm"] Nov 21 20:41:03 crc kubenswrapper[4727]: I1121 20:41:03.226344 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z2ktm" event={"ID":"d1a0eb65-e5d5-4568-a3d5-8b69c9912970","Type":"ContainerStarted","Data":"64b6b4e93cfba5b32832a44cb4b9f6acea22035b7f2228db8785da24770f3d76"} Nov 21 20:41:04 crc kubenswrapper[4727]: I1121 20:41:04.248800 4727 generic.go:334] "Generic (PLEG): container finished" podID="d1a0eb65-e5d5-4568-a3d5-8b69c9912970" containerID="80372914b9fccc8de8e76b9879c85cbf35bf09760c63045db3bef967897197af" exitCode=0 Nov 21 20:41:04 crc kubenswrapper[4727]: I1121 20:41:04.249082 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z2ktm" event={"ID":"d1a0eb65-e5d5-4568-a3d5-8b69c9912970","Type":"ContainerDied","Data":"80372914b9fccc8de8e76b9879c85cbf35bf09760c63045db3bef967897197af"} Nov 21 20:41:05 crc kubenswrapper[4727]: I1121 20:41:05.274314 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z2ktm" event={"ID":"d1a0eb65-e5d5-4568-a3d5-8b69c9912970","Type":"ContainerStarted","Data":"411a5e30fe46e2704897c9460446e6f563e8d5f013e5896f52ceb108520d2a4f"} Nov 21 20:41:07 crc kubenswrapper[4727]: I1121 20:41:07.296064 4727 generic.go:334] "Generic (PLEG): container finished" podID="d1a0eb65-e5d5-4568-a3d5-8b69c9912970" containerID="411a5e30fe46e2704897c9460446e6f563e8d5f013e5896f52ceb108520d2a4f" exitCode=0 Nov 21 20:41:07 crc kubenswrapper[4727]: I1121 20:41:07.296152 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z2ktm" 
event={"ID":"d1a0eb65-e5d5-4568-a3d5-8b69c9912970","Type":"ContainerDied","Data":"411a5e30fe46e2704897c9460446e6f563e8d5f013e5896f52ceb108520d2a4f"} Nov 21 20:41:08 crc kubenswrapper[4727]: I1121 20:41:08.309154 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z2ktm" event={"ID":"d1a0eb65-e5d5-4568-a3d5-8b69c9912970","Type":"ContainerStarted","Data":"96bee21b3633a9845a9d7d1a73f60064a2725770991527bc67b19cba75fcd6b7"} Nov 21 20:41:08 crc kubenswrapper[4727]: I1121 20:41:08.326414 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-z2ktm" podStartSLOduration=2.908009378 podStartE2EDuration="6.32639062s" podCreationTimestamp="2025-11-21 20:41:02 +0000 UTC" firstStartedPulling="2025-11-21 20:41:04.255256736 +0000 UTC m=+2069.441441810" lastFinishedPulling="2025-11-21 20:41:07.673638008 +0000 UTC m=+2072.859823052" observedRunningTime="2025-11-21 20:41:08.326039542 +0000 UTC m=+2073.512224586" watchObservedRunningTime="2025-11-21 20:41:08.32639062 +0000 UTC m=+2073.512575664" Nov 21 20:41:12 crc kubenswrapper[4727]: I1121 20:41:12.410860 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-z2ktm" Nov 21 20:41:12 crc kubenswrapper[4727]: I1121 20:41:12.411319 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-z2ktm" Nov 21 20:41:12 crc kubenswrapper[4727]: I1121 20:41:12.473038 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-z2ktm" Nov 21 20:41:13 crc kubenswrapper[4727]: I1121 20:41:13.335098 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Nov 21 20:41:13 crc kubenswrapper[4727]: I1121 20:41:13.335460 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 20:41:13 crc kubenswrapper[4727]: I1121 20:41:13.412973 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-z2ktm" Nov 21 20:41:13 crc kubenswrapper[4727]: I1121 20:41:13.462404 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z2ktm"] Nov 21 20:41:15 crc kubenswrapper[4727]: I1121 20:41:15.386935 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-z2ktm" podUID="d1a0eb65-e5d5-4568-a3d5-8b69c9912970" containerName="registry-server" containerID="cri-o://96bee21b3633a9845a9d7d1a73f60064a2725770991527bc67b19cba75fcd6b7" gracePeriod=2 Nov 21 20:41:15 crc kubenswrapper[4727]: I1121 20:41:15.964003 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-z2ktm" Nov 21 20:41:16 crc kubenswrapper[4727]: I1121 20:41:16.150533 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b42xg\" (UniqueName: \"kubernetes.io/projected/d1a0eb65-e5d5-4568-a3d5-8b69c9912970-kube-api-access-b42xg\") pod \"d1a0eb65-e5d5-4568-a3d5-8b69c9912970\" (UID: \"d1a0eb65-e5d5-4568-a3d5-8b69c9912970\") " Nov 21 20:41:16 crc kubenswrapper[4727]: I1121 20:41:16.151110 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1a0eb65-e5d5-4568-a3d5-8b69c9912970-catalog-content\") pod \"d1a0eb65-e5d5-4568-a3d5-8b69c9912970\" (UID: \"d1a0eb65-e5d5-4568-a3d5-8b69c9912970\") " Nov 21 20:41:16 crc kubenswrapper[4727]: I1121 20:41:16.151163 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1a0eb65-e5d5-4568-a3d5-8b69c9912970-utilities\") pod \"d1a0eb65-e5d5-4568-a3d5-8b69c9912970\" (UID: \"d1a0eb65-e5d5-4568-a3d5-8b69c9912970\") " Nov 21 20:41:16 crc kubenswrapper[4727]: I1121 20:41:16.152220 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1a0eb65-e5d5-4568-a3d5-8b69c9912970-utilities" (OuterVolumeSpecName: "utilities") pod "d1a0eb65-e5d5-4568-a3d5-8b69c9912970" (UID: "d1a0eb65-e5d5-4568-a3d5-8b69c9912970"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:41:16 crc kubenswrapper[4727]: I1121 20:41:16.156662 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1a0eb65-e5d5-4568-a3d5-8b69c9912970-kube-api-access-b42xg" (OuterVolumeSpecName: "kube-api-access-b42xg") pod "d1a0eb65-e5d5-4568-a3d5-8b69c9912970" (UID: "d1a0eb65-e5d5-4568-a3d5-8b69c9912970"). InnerVolumeSpecName "kube-api-access-b42xg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:41:16 crc kubenswrapper[4727]: I1121 20:41:16.211368 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1a0eb65-e5d5-4568-a3d5-8b69c9912970-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d1a0eb65-e5d5-4568-a3d5-8b69c9912970" (UID: "d1a0eb65-e5d5-4568-a3d5-8b69c9912970"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:41:16 crc kubenswrapper[4727]: I1121 20:41:16.254002 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b42xg\" (UniqueName: \"kubernetes.io/projected/d1a0eb65-e5d5-4568-a3d5-8b69c9912970-kube-api-access-b42xg\") on node \"crc\" DevicePath \"\"" Nov 21 20:41:16 crc kubenswrapper[4727]: I1121 20:41:16.254270 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1a0eb65-e5d5-4568-a3d5-8b69c9912970-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 20:41:16 crc kubenswrapper[4727]: I1121 20:41:16.254347 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1a0eb65-e5d5-4568-a3d5-8b69c9912970-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 20:41:16 crc kubenswrapper[4727]: I1121 20:41:16.401631 4727 generic.go:334] "Generic (PLEG): container finished" podID="d1a0eb65-e5d5-4568-a3d5-8b69c9912970" containerID="96bee21b3633a9845a9d7d1a73f60064a2725770991527bc67b19cba75fcd6b7" exitCode=0 Nov 21 20:41:16 crc kubenswrapper[4727]: I1121 20:41:16.401703 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-z2ktm" Nov 21 20:41:16 crc kubenswrapper[4727]: I1121 20:41:16.401702 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z2ktm" event={"ID":"d1a0eb65-e5d5-4568-a3d5-8b69c9912970","Type":"ContainerDied","Data":"96bee21b3633a9845a9d7d1a73f60064a2725770991527bc67b19cba75fcd6b7"} Nov 21 20:41:16 crc kubenswrapper[4727]: I1121 20:41:16.402711 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z2ktm" event={"ID":"d1a0eb65-e5d5-4568-a3d5-8b69c9912970","Type":"ContainerDied","Data":"64b6b4e93cfba5b32832a44cb4b9f6acea22035b7f2228db8785da24770f3d76"} Nov 21 20:41:16 crc kubenswrapper[4727]: I1121 20:41:16.402735 4727 scope.go:117] "RemoveContainer" containerID="96bee21b3633a9845a9d7d1a73f60064a2725770991527bc67b19cba75fcd6b7" Nov 21 20:41:16 crc kubenswrapper[4727]: I1121 20:41:16.426423 4727 scope.go:117] "RemoveContainer" containerID="411a5e30fe46e2704897c9460446e6f563e8d5f013e5896f52ceb108520d2a4f" Nov 21 20:41:16 crc kubenswrapper[4727]: I1121 20:41:16.443520 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z2ktm"] Nov 21 20:41:16 crc kubenswrapper[4727]: I1121 20:41:16.454833 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-z2ktm"] Nov 21 20:41:16 crc kubenswrapper[4727]: I1121 20:41:16.473217 4727 scope.go:117] "RemoveContainer" containerID="80372914b9fccc8de8e76b9879c85cbf35bf09760c63045db3bef967897197af" Nov 21 20:41:16 crc kubenswrapper[4727]: I1121 20:41:16.502880 4727 scope.go:117] "RemoveContainer" containerID="96bee21b3633a9845a9d7d1a73f60064a2725770991527bc67b19cba75fcd6b7" Nov 21 20:41:16 crc kubenswrapper[4727]: E1121 20:41:16.504311 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"96bee21b3633a9845a9d7d1a73f60064a2725770991527bc67b19cba75fcd6b7\": container with ID starting with 96bee21b3633a9845a9d7d1a73f60064a2725770991527bc67b19cba75fcd6b7 not found: ID does not exist" containerID="96bee21b3633a9845a9d7d1a73f60064a2725770991527bc67b19cba75fcd6b7" Nov 21 20:41:16 crc kubenswrapper[4727]: I1121 20:41:16.504346 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96bee21b3633a9845a9d7d1a73f60064a2725770991527bc67b19cba75fcd6b7"} err="failed to get container status \"96bee21b3633a9845a9d7d1a73f60064a2725770991527bc67b19cba75fcd6b7\": rpc error: code = NotFound desc = could not find container \"96bee21b3633a9845a9d7d1a73f60064a2725770991527bc67b19cba75fcd6b7\": container with ID starting with 96bee21b3633a9845a9d7d1a73f60064a2725770991527bc67b19cba75fcd6b7 not found: ID does not exist" Nov 21 20:41:16 crc kubenswrapper[4727]: I1121 20:41:16.504372 4727 scope.go:117] "RemoveContainer" containerID="411a5e30fe46e2704897c9460446e6f563e8d5f013e5896f52ceb108520d2a4f" Nov 21 20:41:16 crc kubenswrapper[4727]: E1121 20:41:16.504703 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"411a5e30fe46e2704897c9460446e6f563e8d5f013e5896f52ceb108520d2a4f\": container with ID starting with 411a5e30fe46e2704897c9460446e6f563e8d5f013e5896f52ceb108520d2a4f not found: ID does not exist" containerID="411a5e30fe46e2704897c9460446e6f563e8d5f013e5896f52ceb108520d2a4f" Nov 21 20:41:16 crc kubenswrapper[4727]: I1121 20:41:16.504743 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"411a5e30fe46e2704897c9460446e6f563e8d5f013e5896f52ceb108520d2a4f"} err="failed to get container status \"411a5e30fe46e2704897c9460446e6f563e8d5f013e5896f52ceb108520d2a4f\": rpc error: code = NotFound desc = could not find container \"411a5e30fe46e2704897c9460446e6f563e8d5f013e5896f52ceb108520d2a4f\": container with ID 
starting with 411a5e30fe46e2704897c9460446e6f563e8d5f013e5896f52ceb108520d2a4f not found: ID does not exist" Nov 21 20:41:16 crc kubenswrapper[4727]: I1121 20:41:16.504769 4727 scope.go:117] "RemoveContainer" containerID="80372914b9fccc8de8e76b9879c85cbf35bf09760c63045db3bef967897197af" Nov 21 20:41:16 crc kubenswrapper[4727]: E1121 20:41:16.505134 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80372914b9fccc8de8e76b9879c85cbf35bf09760c63045db3bef967897197af\": container with ID starting with 80372914b9fccc8de8e76b9879c85cbf35bf09760c63045db3bef967897197af not found: ID does not exist" containerID="80372914b9fccc8de8e76b9879c85cbf35bf09760c63045db3bef967897197af" Nov 21 20:41:16 crc kubenswrapper[4727]: I1121 20:41:16.505152 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80372914b9fccc8de8e76b9879c85cbf35bf09760c63045db3bef967897197af"} err="failed to get container status \"80372914b9fccc8de8e76b9879c85cbf35bf09760c63045db3bef967897197af\": rpc error: code = NotFound desc = could not find container \"80372914b9fccc8de8e76b9879c85cbf35bf09760c63045db3bef967897197af\": container with ID starting with 80372914b9fccc8de8e76b9879c85cbf35bf09760c63045db3bef967897197af not found: ID does not exist" Nov 21 20:41:17 crc kubenswrapper[4727]: I1121 20:41:17.513399 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1a0eb65-e5d5-4568-a3d5-8b69c9912970" path="/var/lib/kubelet/pods/d1a0eb65-e5d5-4568-a3d5-8b69c9912970/volumes" Nov 21 20:41:35 crc kubenswrapper[4727]: I1121 20:41:35.631768 4727 generic.go:334] "Generic (PLEG): container finished" podID="8f6ac347-e773-4986-bf2b-54a1bb00047f" containerID="9f4ea2e04aa1a54dde7cfe564385da21bfbd7c92d87d24274aa611ca42b23994" exitCode=0 Nov 21 20:41:35 crc kubenswrapper[4727]: I1121 20:41:35.632405 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kffp8" event={"ID":"8f6ac347-e773-4986-bf2b-54a1bb00047f","Type":"ContainerDied","Data":"9f4ea2e04aa1a54dde7cfe564385da21bfbd7c92d87d24274aa611ca42b23994"} Nov 21 20:41:37 crc kubenswrapper[4727]: I1121 20:41:37.209370 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kffp8" Nov 21 20:41:37 crc kubenswrapper[4727]: I1121 20:41:37.355172 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8f6ac347-e773-4986-bf2b-54a1bb00047f-ssh-key\") pod \"8f6ac347-e773-4986-bf2b-54a1bb00047f\" (UID: \"8f6ac347-e773-4986-bf2b-54a1bb00047f\") " Nov 21 20:41:37 crc kubenswrapper[4727]: I1121 20:41:37.355393 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/8f6ac347-e773-4986-bf2b-54a1bb00047f-openstack-edpm-ipam-ovn-default-certs-0\") pod \"8f6ac347-e773-4986-bf2b-54a1bb00047f\" (UID: \"8f6ac347-e773-4986-bf2b-54a1bb00047f\") " Nov 21 20:41:37 crc kubenswrapper[4727]: I1121 20:41:37.355477 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/8f6ac347-e773-4986-bf2b-54a1bb00047f-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"8f6ac347-e773-4986-bf2b-54a1bb00047f\" (UID: \"8f6ac347-e773-4986-bf2b-54a1bb00047f\") " Nov 21 20:41:37 crc kubenswrapper[4727]: I1121 20:41:37.355504 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njd9b\" (UniqueName: \"kubernetes.io/projected/8f6ac347-e773-4986-bf2b-54a1bb00047f-kube-api-access-njd9b\") pod \"8f6ac347-e773-4986-bf2b-54a1bb00047f\" (UID: \"8f6ac347-e773-4986-bf2b-54a1bb00047f\") " 
Nov 21 20:41:37 crc kubenswrapper[4727]: I1121 20:41:37.355562 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f6ac347-e773-4986-bf2b-54a1bb00047f-bootstrap-combined-ca-bundle\") pod \"8f6ac347-e773-4986-bf2b-54a1bb00047f\" (UID: \"8f6ac347-e773-4986-bf2b-54a1bb00047f\") " Nov 21 20:41:37 crc kubenswrapper[4727]: I1121 20:41:37.355582 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f6ac347-e773-4986-bf2b-54a1bb00047f-libvirt-combined-ca-bundle\") pod \"8f6ac347-e773-4986-bf2b-54a1bb00047f\" (UID: \"8f6ac347-e773-4986-bf2b-54a1bb00047f\") " Nov 21 20:41:37 crc kubenswrapper[4727]: I1121 20:41:37.355628 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f6ac347-e773-4986-bf2b-54a1bb00047f-neutron-metadata-combined-ca-bundle\") pod \"8f6ac347-e773-4986-bf2b-54a1bb00047f\" (UID: \"8f6ac347-e773-4986-bf2b-54a1bb00047f\") " Nov 21 20:41:37 crc kubenswrapper[4727]: I1121 20:41:37.355664 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/8f6ac347-e773-4986-bf2b-54a1bb00047f-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"8f6ac347-e773-4986-bf2b-54a1bb00047f\" (UID: \"8f6ac347-e773-4986-bf2b-54a1bb00047f\") " Nov 21 20:41:37 crc kubenswrapper[4727]: I1121 20:41:37.355716 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f6ac347-e773-4986-bf2b-54a1bb00047f-repo-setup-combined-ca-bundle\") pod \"8f6ac347-e773-4986-bf2b-54a1bb00047f\" (UID: \"8f6ac347-e773-4986-bf2b-54a1bb00047f\") " Nov 21 20:41:37 crc kubenswrapper[4727]: I1121 
20:41:37.355771 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8f6ac347-e773-4986-bf2b-54a1bb00047f-inventory\") pod \"8f6ac347-e773-4986-bf2b-54a1bb00047f\" (UID: \"8f6ac347-e773-4986-bf2b-54a1bb00047f\") " Nov 21 20:41:37 crc kubenswrapper[4727]: I1121 20:41:37.355787 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/8f6ac347-e773-4986-bf2b-54a1bb00047f-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"8f6ac347-e773-4986-bf2b-54a1bb00047f\" (UID: \"8f6ac347-e773-4986-bf2b-54a1bb00047f\") " Nov 21 20:41:37 crc kubenswrapper[4727]: I1121 20:41:37.355821 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f6ac347-e773-4986-bf2b-54a1bb00047f-telemetry-power-monitoring-combined-ca-bundle\") pod \"8f6ac347-e773-4986-bf2b-54a1bb00047f\" (UID: \"8f6ac347-e773-4986-bf2b-54a1bb00047f\") " Nov 21 20:41:37 crc kubenswrapper[4727]: I1121 20:41:37.355872 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f6ac347-e773-4986-bf2b-54a1bb00047f-nova-combined-ca-bundle\") pod \"8f6ac347-e773-4986-bf2b-54a1bb00047f\" (UID: \"8f6ac347-e773-4986-bf2b-54a1bb00047f\") " Nov 21 20:41:37 crc kubenswrapper[4727]: I1121 20:41:37.355929 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f6ac347-e773-4986-bf2b-54a1bb00047f-telemetry-combined-ca-bundle\") pod \"8f6ac347-e773-4986-bf2b-54a1bb00047f\" (UID: \"8f6ac347-e773-4986-bf2b-54a1bb00047f\") " Nov 21 20:41:37 crc kubenswrapper[4727]: I1121 20:41:37.355985 4727 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f6ac347-e773-4986-bf2b-54a1bb00047f-ovn-combined-ca-bundle\") pod \"8f6ac347-e773-4986-bf2b-54a1bb00047f\" (UID: \"8f6ac347-e773-4986-bf2b-54a1bb00047f\") " Nov 21 20:41:37 crc kubenswrapper[4727]: I1121 20:41:37.356016 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/8f6ac347-e773-4986-bf2b-54a1bb00047f-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"8f6ac347-e773-4986-bf2b-54a1bb00047f\" (UID: \"8f6ac347-e773-4986-bf2b-54a1bb00047f\") " Nov 21 20:41:37 crc kubenswrapper[4727]: I1121 20:41:37.365363 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f6ac347-e773-4986-bf2b-54a1bb00047f-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "8f6ac347-e773-4986-bf2b-54a1bb00047f" (UID: "8f6ac347-e773-4986-bf2b-54a1bb00047f"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:41:37 crc kubenswrapper[4727]: I1121 20:41:37.365712 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f6ac347-e773-4986-bf2b-54a1bb00047f-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "8f6ac347-e773-4986-bf2b-54a1bb00047f" (UID: "8f6ac347-e773-4986-bf2b-54a1bb00047f"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:41:37 crc kubenswrapper[4727]: I1121 20:41:37.367919 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f6ac347-e773-4986-bf2b-54a1bb00047f-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "8f6ac347-e773-4986-bf2b-54a1bb00047f" (UID: "8f6ac347-e773-4986-bf2b-54a1bb00047f"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:41:37 crc kubenswrapper[4727]: I1121 20:41:37.368059 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f6ac347-e773-4986-bf2b-54a1bb00047f-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "8f6ac347-e773-4986-bf2b-54a1bb00047f" (UID: "8f6ac347-e773-4986-bf2b-54a1bb00047f"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:41:37 crc kubenswrapper[4727]: I1121 20:41:37.368174 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f6ac347-e773-4986-bf2b-54a1bb00047f-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "8f6ac347-e773-4986-bf2b-54a1bb00047f" (UID: "8f6ac347-e773-4986-bf2b-54a1bb00047f"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:41:37 crc kubenswrapper[4727]: I1121 20:41:37.368371 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f6ac347-e773-4986-bf2b-54a1bb00047f-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "8f6ac347-e773-4986-bf2b-54a1bb00047f" (UID: "8f6ac347-e773-4986-bf2b-54a1bb00047f"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:41:37 crc kubenswrapper[4727]: I1121 20:41:37.371394 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f6ac347-e773-4986-bf2b-54a1bb00047f-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "8f6ac347-e773-4986-bf2b-54a1bb00047f" (UID: "8f6ac347-e773-4986-bf2b-54a1bb00047f"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:41:37 crc kubenswrapper[4727]: I1121 20:41:37.372419 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f6ac347-e773-4986-bf2b-54a1bb00047f-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "8f6ac347-e773-4986-bf2b-54a1bb00047f" (UID: "8f6ac347-e773-4986-bf2b-54a1bb00047f"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:41:37 crc kubenswrapper[4727]: I1121 20:41:37.372479 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f6ac347-e773-4986-bf2b-54a1bb00047f-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "8f6ac347-e773-4986-bf2b-54a1bb00047f" (UID: "8f6ac347-e773-4986-bf2b-54a1bb00047f"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:41:37 crc kubenswrapper[4727]: I1121 20:41:37.372710 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f6ac347-e773-4986-bf2b-54a1bb00047f-kube-api-access-njd9b" (OuterVolumeSpecName: "kube-api-access-njd9b") pod "8f6ac347-e773-4986-bf2b-54a1bb00047f" (UID: "8f6ac347-e773-4986-bf2b-54a1bb00047f"). InnerVolumeSpecName "kube-api-access-njd9b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:41:37 crc kubenswrapper[4727]: I1121 20:41:37.373830 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f6ac347-e773-4986-bf2b-54a1bb00047f-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "8f6ac347-e773-4986-bf2b-54a1bb00047f" (UID: "8f6ac347-e773-4986-bf2b-54a1bb00047f"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:41:37 crc kubenswrapper[4727]: I1121 20:41:37.374156 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f6ac347-e773-4986-bf2b-54a1bb00047f-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "8f6ac347-e773-4986-bf2b-54a1bb00047f" (UID: "8f6ac347-e773-4986-bf2b-54a1bb00047f"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:41:37 crc kubenswrapper[4727]: I1121 20:41:37.374639 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f6ac347-e773-4986-bf2b-54a1bb00047f-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0") pod "8f6ac347-e773-4986-bf2b-54a1bb00047f" (UID: "8f6ac347-e773-4986-bf2b-54a1bb00047f"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:41:37 crc kubenswrapper[4727]: I1121 20:41:37.392492 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f6ac347-e773-4986-bf2b-54a1bb00047f-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "8f6ac347-e773-4986-bf2b-54a1bb00047f" (UID: "8f6ac347-e773-4986-bf2b-54a1bb00047f"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:41:37 crc kubenswrapper[4727]: I1121 20:41:37.409624 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f6ac347-e773-4986-bf2b-54a1bb00047f-inventory" (OuterVolumeSpecName: "inventory") pod "8f6ac347-e773-4986-bf2b-54a1bb00047f" (UID: "8f6ac347-e773-4986-bf2b-54a1bb00047f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:41:37 crc kubenswrapper[4727]: I1121 20:41:37.419297 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f6ac347-e773-4986-bf2b-54a1bb00047f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "8f6ac347-e773-4986-bf2b-54a1bb00047f" (UID: "8f6ac347-e773-4986-bf2b-54a1bb00047f"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:41:37 crc kubenswrapper[4727]: I1121 20:41:37.458362 4727 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f6ac347-e773-4986-bf2b-54a1bb00047f-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:41:37 crc kubenswrapper[4727]: I1121 20:41:37.458397 4727 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f6ac347-e773-4986-bf2b-54a1bb00047f-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:41:37 crc kubenswrapper[4727]: I1121 20:41:37.458407 4727 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f6ac347-e773-4986-bf2b-54a1bb00047f-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:41:37 crc kubenswrapper[4727]: I1121 20:41:37.458416 4727 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f6ac347-e773-4986-bf2b-54a1bb00047f-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:41:37 crc kubenswrapper[4727]: I1121 20:41:37.458425 4727 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/8f6ac347-e773-4986-bf2b-54a1bb00047f-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 21 20:41:37 crc kubenswrapper[4727]: I1121 20:41:37.458434 4727 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8f6ac347-e773-4986-bf2b-54a1bb00047f-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 20:41:37 crc kubenswrapper[4727]: I1121 20:41:37.458443 4727 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/8f6ac347-e773-4986-bf2b-54a1bb00047f-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 21 20:41:37 crc kubenswrapper[4727]: I1121 20:41:37.458453 4727 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/8f6ac347-e773-4986-bf2b-54a1bb00047f-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 21 20:41:37 crc kubenswrapper[4727]: I1121 20:41:37.458463 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njd9b\" (UniqueName: \"kubernetes.io/projected/8f6ac347-e773-4986-bf2b-54a1bb00047f-kube-api-access-njd9b\") on node \"crc\" DevicePath \"\"" Nov 21 20:41:37 crc kubenswrapper[4727]: I1121 20:41:37.458471 4727 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f6ac347-e773-4986-bf2b-54a1bb00047f-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:41:37 crc kubenswrapper[4727]: I1121 20:41:37.458481 4727 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f6ac347-e773-4986-bf2b-54a1bb00047f-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:41:37 crc kubenswrapper[4727]: I1121 20:41:37.458489 4727 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f6ac347-e773-4986-bf2b-54a1bb00047f-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:41:37 crc kubenswrapper[4727]: I1121 20:41:37.458500 4727 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/8f6ac347-e773-4986-bf2b-54a1bb00047f-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" 
Nov 21 20:41:37 crc kubenswrapper[4727]: I1121 20:41:37.458511 4727 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f6ac347-e773-4986-bf2b-54a1bb00047f-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:41:37 crc kubenswrapper[4727]: I1121 20:41:37.458519 4727 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8f6ac347-e773-4986-bf2b-54a1bb00047f-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 20:41:37 crc kubenswrapper[4727]: I1121 20:41:37.458530 4727 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/8f6ac347-e773-4986-bf2b-54a1bb00047f-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 21 20:41:37 crc kubenswrapper[4727]: I1121 20:41:37.653445 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kffp8" event={"ID":"8f6ac347-e773-4986-bf2b-54a1bb00047f","Type":"ContainerDied","Data":"b8cafcbe899266ebcd65b955fcb81b2d56d4d78ea3036bf38a7e81f24bae1483"} Nov 21 20:41:37 crc kubenswrapper[4727]: I1121 20:41:37.653780 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8cafcbe899266ebcd65b955fcb81b2d56d4d78ea3036bf38a7e81f24bae1483" Nov 21 20:41:37 crc kubenswrapper[4727]: I1121 20:41:37.653525 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kffp8" Nov 21 20:41:37 crc kubenswrapper[4727]: I1121 20:41:37.843148 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-7667c"] Nov 21 20:41:37 crc kubenswrapper[4727]: E1121 20:41:37.843684 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1a0eb65-e5d5-4568-a3d5-8b69c9912970" containerName="extract-content" Nov 21 20:41:37 crc kubenswrapper[4727]: I1121 20:41:37.843702 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1a0eb65-e5d5-4568-a3d5-8b69c9912970" containerName="extract-content" Nov 21 20:41:37 crc kubenswrapper[4727]: E1121 20:41:37.843734 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f6ac347-e773-4986-bf2b-54a1bb00047f" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 21 20:41:37 crc kubenswrapper[4727]: I1121 20:41:37.843742 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f6ac347-e773-4986-bf2b-54a1bb00047f" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 21 20:41:37 crc kubenswrapper[4727]: E1121 20:41:37.843755 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1a0eb65-e5d5-4568-a3d5-8b69c9912970" containerName="extract-utilities" Nov 21 20:41:37 crc kubenswrapper[4727]: I1121 20:41:37.843761 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1a0eb65-e5d5-4568-a3d5-8b69c9912970" containerName="extract-utilities" Nov 21 20:41:37 crc kubenswrapper[4727]: E1121 20:41:37.843788 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1a0eb65-e5d5-4568-a3d5-8b69c9912970" containerName="registry-server" Nov 21 20:41:37 crc kubenswrapper[4727]: I1121 20:41:37.843794 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1a0eb65-e5d5-4568-a3d5-8b69c9912970" containerName="registry-server" Nov 21 20:41:37 crc kubenswrapper[4727]: I1121 20:41:37.844033 
4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f6ac347-e773-4986-bf2b-54a1bb00047f" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 21 20:41:37 crc kubenswrapper[4727]: I1121 20:41:37.844056 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1a0eb65-e5d5-4568-a3d5-8b69c9912970" containerName="registry-server" Nov 21 20:41:37 crc kubenswrapper[4727]: I1121 20:41:37.844816 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7667c" Nov 21 20:41:37 crc kubenswrapper[4727]: I1121 20:41:37.847699 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Nov 21 20:41:37 crc kubenswrapper[4727]: I1121 20:41:37.847850 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 21 20:41:37 crc kubenswrapper[4727]: I1121 20:41:37.847976 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 21 20:41:37 crc kubenswrapper[4727]: I1121 20:41:37.849601 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 20:41:37 crc kubenswrapper[4727]: I1121 20:41:37.850128 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9h8bp" Nov 21 20:41:37 crc kubenswrapper[4727]: I1121 20:41:37.859942 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-7667c"] Nov 21 20:41:37 crc kubenswrapper[4727]: I1121 20:41:37.972180 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06a92e2a-daad-494e-93df-d9c943c574d3-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7667c\" (UID: 
\"06a92e2a-daad-494e-93df-d9c943c574d3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7667c" Nov 21 20:41:37 crc kubenswrapper[4727]: I1121 20:41:37.972280 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tqnw\" (UniqueName: \"kubernetes.io/projected/06a92e2a-daad-494e-93df-d9c943c574d3-kube-api-access-4tqnw\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7667c\" (UID: \"06a92e2a-daad-494e-93df-d9c943c574d3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7667c" Nov 21 20:41:37 crc kubenswrapper[4727]: I1121 20:41:37.972378 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/06a92e2a-daad-494e-93df-d9c943c574d3-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7667c\" (UID: \"06a92e2a-daad-494e-93df-d9c943c574d3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7667c" Nov 21 20:41:37 crc kubenswrapper[4727]: I1121 20:41:37.972430 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/06a92e2a-daad-494e-93df-d9c943c574d3-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7667c\" (UID: \"06a92e2a-daad-494e-93df-d9c943c574d3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7667c" Nov 21 20:41:37 crc kubenswrapper[4727]: I1121 20:41:37.972500 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/06a92e2a-daad-494e-93df-d9c943c574d3-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7667c\" (UID: \"06a92e2a-daad-494e-93df-d9c943c574d3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7667c" Nov 21 20:41:38 crc kubenswrapper[4727]: I1121 20:41:38.074625 4727 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06a92e2a-daad-494e-93df-d9c943c574d3-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7667c\" (UID: \"06a92e2a-daad-494e-93df-d9c943c574d3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7667c" Nov 21 20:41:38 crc kubenswrapper[4727]: I1121 20:41:38.074729 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tqnw\" (UniqueName: \"kubernetes.io/projected/06a92e2a-daad-494e-93df-d9c943c574d3-kube-api-access-4tqnw\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7667c\" (UID: \"06a92e2a-daad-494e-93df-d9c943c574d3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7667c" Nov 21 20:41:38 crc kubenswrapper[4727]: I1121 20:41:38.074775 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/06a92e2a-daad-494e-93df-d9c943c574d3-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7667c\" (UID: \"06a92e2a-daad-494e-93df-d9c943c574d3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7667c" Nov 21 20:41:38 crc kubenswrapper[4727]: I1121 20:41:38.074833 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/06a92e2a-daad-494e-93df-d9c943c574d3-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7667c\" (UID: \"06a92e2a-daad-494e-93df-d9c943c574d3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7667c" Nov 21 20:41:38 crc kubenswrapper[4727]: I1121 20:41:38.074924 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/06a92e2a-daad-494e-93df-d9c943c574d3-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7667c\" (UID: \"06a92e2a-daad-494e-93df-d9c943c574d3\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7667c" Nov 21 20:41:38 crc kubenswrapper[4727]: I1121 20:41:38.076037 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/06a92e2a-daad-494e-93df-d9c943c574d3-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7667c\" (UID: \"06a92e2a-daad-494e-93df-d9c943c574d3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7667c" Nov 21 20:41:38 crc kubenswrapper[4727]: I1121 20:41:38.081896 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/06a92e2a-daad-494e-93df-d9c943c574d3-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7667c\" (UID: \"06a92e2a-daad-494e-93df-d9c943c574d3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7667c" Nov 21 20:41:38 crc kubenswrapper[4727]: I1121 20:41:38.082423 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06a92e2a-daad-494e-93df-d9c943c574d3-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7667c\" (UID: \"06a92e2a-daad-494e-93df-d9c943c574d3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7667c" Nov 21 20:41:38 crc kubenswrapper[4727]: I1121 20:41:38.083932 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/06a92e2a-daad-494e-93df-d9c943c574d3-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7667c\" (UID: \"06a92e2a-daad-494e-93df-d9c943c574d3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7667c" Nov 21 20:41:38 crc kubenswrapper[4727]: I1121 20:41:38.107313 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tqnw\" (UniqueName: \"kubernetes.io/projected/06a92e2a-daad-494e-93df-d9c943c574d3-kube-api-access-4tqnw\") pod 
\"ovn-edpm-deployment-openstack-edpm-ipam-7667c\" (UID: \"06a92e2a-daad-494e-93df-d9c943c574d3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7667c" Nov 21 20:41:38 crc kubenswrapper[4727]: I1121 20:41:38.173683 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7667c" Nov 21 20:41:38 crc kubenswrapper[4727]: I1121 20:41:38.726443 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-7667c"] Nov 21 20:41:39 crc kubenswrapper[4727]: I1121 20:41:39.685238 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7667c" event={"ID":"06a92e2a-daad-494e-93df-d9c943c574d3","Type":"ContainerStarted","Data":"f53f5b01d059b546cdf63d778dfb58fb7fcb3bf40792f71645f2a6f84cf7844b"} Nov 21 20:41:39 crc kubenswrapper[4727]: I1121 20:41:39.685793 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7667c" event={"ID":"06a92e2a-daad-494e-93df-d9c943c574d3","Type":"ContainerStarted","Data":"fc06cf72791f9f3daf9cb8645eac67efc6dd2c2eab990d5189544154098a7a48"} Nov 21 20:41:39 crc kubenswrapper[4727]: I1121 20:41:39.705001 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7667c" podStartSLOduration=2.238693918 podStartE2EDuration="2.704984524s" podCreationTimestamp="2025-11-21 20:41:37 +0000 UTC" firstStartedPulling="2025-11-21 20:41:38.717265814 +0000 UTC m=+2103.903450868" lastFinishedPulling="2025-11-21 20:41:39.18355643 +0000 UTC m=+2104.369741474" observedRunningTime="2025-11-21 20:41:39.703098609 +0000 UTC m=+2104.889283653" watchObservedRunningTime="2025-11-21 20:41:39.704984524 +0000 UTC m=+2104.891169568" Nov 21 20:41:43 crc kubenswrapper[4727]: I1121 20:41:43.335104 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 20:41:43 crc kubenswrapper[4727]: I1121 20:41:43.335540 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 20:41:43 crc kubenswrapper[4727]: I1121 20:41:43.335605 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" Nov 21 20:41:43 crc kubenswrapper[4727]: I1121 20:41:43.337099 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4261fd6f25fa69562f5c96f8f8cb06e32873a62c894efb833ccfecb95710b418"} pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 20:41:43 crc kubenswrapper[4727]: I1121 20:41:43.337281 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" containerID="cri-o://4261fd6f25fa69562f5c96f8f8cb06e32873a62c894efb833ccfecb95710b418" gracePeriod=600 Nov 21 20:41:43 crc kubenswrapper[4727]: I1121 20:41:43.727210 4727 generic.go:334] "Generic (PLEG): container finished" podID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerID="4261fd6f25fa69562f5c96f8f8cb06e32873a62c894efb833ccfecb95710b418" exitCode=0 Nov 21 20:41:43 crc kubenswrapper[4727]: I1121 20:41:43.727296 4727 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" event={"ID":"b58aef8f-f223-47d8-a2e6-4a80aeeeec42","Type":"ContainerDied","Data":"4261fd6f25fa69562f5c96f8f8cb06e32873a62c894efb833ccfecb95710b418"} Nov 21 20:41:43 crc kubenswrapper[4727]: I1121 20:41:43.727676 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" event={"ID":"b58aef8f-f223-47d8-a2e6-4a80aeeeec42","Type":"ContainerStarted","Data":"897f2adda85608a5dbb7a817daa3de6c7e34ae3f7f02b518fee2c34299b91df0"} Nov 21 20:41:43 crc kubenswrapper[4727]: I1121 20:41:43.727699 4727 scope.go:117] "RemoveContainer" containerID="586a1c0353caf0aeb822b92be481b941c0fdb34b453b8251021ce7365988e0c9" Nov 21 20:42:39 crc kubenswrapper[4727]: I1121 20:42:39.390624 4727 generic.go:334] "Generic (PLEG): container finished" podID="06a92e2a-daad-494e-93df-d9c943c574d3" containerID="f53f5b01d059b546cdf63d778dfb58fb7fcb3bf40792f71645f2a6f84cf7844b" exitCode=0 Nov 21 20:42:39 crc kubenswrapper[4727]: I1121 20:42:39.390730 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7667c" event={"ID":"06a92e2a-daad-494e-93df-d9c943c574d3","Type":"ContainerDied","Data":"f53f5b01d059b546cdf63d778dfb58fb7fcb3bf40792f71645f2a6f84cf7844b"} Nov 21 20:42:40 crc kubenswrapper[4727]: I1121 20:42:40.867774 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7667c" Nov 21 20:42:41 crc kubenswrapper[4727]: I1121 20:42:41.068649 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/06a92e2a-daad-494e-93df-d9c943c574d3-ovncontroller-config-0\") pod \"06a92e2a-daad-494e-93df-d9c943c574d3\" (UID: \"06a92e2a-daad-494e-93df-d9c943c574d3\") " Nov 21 20:42:41 crc kubenswrapper[4727]: I1121 20:42:41.068693 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4tqnw\" (UniqueName: \"kubernetes.io/projected/06a92e2a-daad-494e-93df-d9c943c574d3-kube-api-access-4tqnw\") pod \"06a92e2a-daad-494e-93df-d9c943c574d3\" (UID: \"06a92e2a-daad-494e-93df-d9c943c574d3\") " Nov 21 20:42:41 crc kubenswrapper[4727]: I1121 20:42:41.068766 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/06a92e2a-daad-494e-93df-d9c943c574d3-ssh-key\") pod \"06a92e2a-daad-494e-93df-d9c943c574d3\" (UID: \"06a92e2a-daad-494e-93df-d9c943c574d3\") " Nov 21 20:42:41 crc kubenswrapper[4727]: I1121 20:42:41.068820 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06a92e2a-daad-494e-93df-d9c943c574d3-ovn-combined-ca-bundle\") pod \"06a92e2a-daad-494e-93df-d9c943c574d3\" (UID: \"06a92e2a-daad-494e-93df-d9c943c574d3\") " Nov 21 20:42:41 crc kubenswrapper[4727]: I1121 20:42:41.068937 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/06a92e2a-daad-494e-93df-d9c943c574d3-inventory\") pod \"06a92e2a-daad-494e-93df-d9c943c574d3\" (UID: \"06a92e2a-daad-494e-93df-d9c943c574d3\") " Nov 21 20:42:41 crc kubenswrapper[4727]: I1121 20:42:41.081321 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/projected/06a92e2a-daad-494e-93df-d9c943c574d3-kube-api-access-4tqnw" (OuterVolumeSpecName: "kube-api-access-4tqnw") pod "06a92e2a-daad-494e-93df-d9c943c574d3" (UID: "06a92e2a-daad-494e-93df-d9c943c574d3"). InnerVolumeSpecName "kube-api-access-4tqnw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:42:41 crc kubenswrapper[4727]: I1121 20:42:41.083031 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06a92e2a-daad-494e-93df-d9c943c574d3-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "06a92e2a-daad-494e-93df-d9c943c574d3" (UID: "06a92e2a-daad-494e-93df-d9c943c574d3"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:42:41 crc kubenswrapper[4727]: I1121 20:42:41.105224 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06a92e2a-daad-494e-93df-d9c943c574d3-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "06a92e2a-daad-494e-93df-d9c943c574d3" (UID: "06a92e2a-daad-494e-93df-d9c943c574d3"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:42:41 crc kubenswrapper[4727]: I1121 20:42:41.110187 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06a92e2a-daad-494e-93df-d9c943c574d3-inventory" (OuterVolumeSpecName: "inventory") pod "06a92e2a-daad-494e-93df-d9c943c574d3" (UID: "06a92e2a-daad-494e-93df-d9c943c574d3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:42:41 crc kubenswrapper[4727]: I1121 20:42:41.120171 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06a92e2a-daad-494e-93df-d9c943c574d3-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "06a92e2a-daad-494e-93df-d9c943c574d3" (UID: "06a92e2a-daad-494e-93df-d9c943c574d3"). 
InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:42:41 crc kubenswrapper[4727]: I1121 20:42:41.186135 4727 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/06a92e2a-daad-494e-93df-d9c943c574d3-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 20:42:41 crc kubenswrapper[4727]: I1121 20:42:41.186553 4727 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06a92e2a-daad-494e-93df-d9c943c574d3-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:42:41 crc kubenswrapper[4727]: I1121 20:42:41.186572 4727 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/06a92e2a-daad-494e-93df-d9c943c574d3-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 20:42:41 crc kubenswrapper[4727]: I1121 20:42:41.186594 4727 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/06a92e2a-daad-494e-93df-d9c943c574d3-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Nov 21 20:42:41 crc kubenswrapper[4727]: I1121 20:42:41.186606 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4tqnw\" (UniqueName: \"kubernetes.io/projected/06a92e2a-daad-494e-93df-d9c943c574d3-kube-api-access-4tqnw\") on node \"crc\" DevicePath \"\"" Nov 21 20:42:41 crc kubenswrapper[4727]: I1121 20:42:41.413075 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7667c" event={"ID":"06a92e2a-daad-494e-93df-d9c943c574d3","Type":"ContainerDied","Data":"fc06cf72791f9f3daf9cb8645eac67efc6dd2c2eab990d5189544154098a7a48"} Nov 21 20:42:41 crc kubenswrapper[4727]: I1121 20:42:41.413117 4727 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="fc06cf72791f9f3daf9cb8645eac67efc6dd2c2eab990d5189544154098a7a48" Nov 21 20:42:41 crc kubenswrapper[4727]: I1121 20:42:41.413171 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7667c" Nov 21 20:42:41 crc kubenswrapper[4727]: I1121 20:42:41.516094 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rk54k"] Nov 21 20:42:41 crc kubenswrapper[4727]: E1121 20:42:41.516619 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06a92e2a-daad-494e-93df-d9c943c574d3" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 21 20:42:41 crc kubenswrapper[4727]: I1121 20:42:41.516641 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="06a92e2a-daad-494e-93df-d9c943c574d3" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 21 20:42:41 crc kubenswrapper[4727]: I1121 20:42:41.516893 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="06a92e2a-daad-494e-93df-d9c943c574d3" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 21 20:42:41 crc kubenswrapper[4727]: I1121 20:42:41.517861 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rk54k" Nov 21 20:42:41 crc kubenswrapper[4727]: I1121 20:42:41.522278 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9h8bp" Nov 21 20:42:41 crc kubenswrapper[4727]: I1121 20:42:41.522587 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 20:42:41 crc kubenswrapper[4727]: I1121 20:42:41.522773 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Nov 21 20:42:41 crc kubenswrapper[4727]: I1121 20:42:41.522941 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 21 20:42:41 crc kubenswrapper[4727]: I1121 20:42:41.524390 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Nov 21 20:42:41 crc kubenswrapper[4727]: I1121 20:42:41.526951 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 21 20:42:41 crc kubenswrapper[4727]: I1121 20:42:41.538669 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rk54k"] Nov 21 20:42:41 crc kubenswrapper[4727]: I1121 20:42:41.597093 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/945a8b1a-2ae7-449a-b7b5-206b435b2d19-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-rk54k\" (UID: \"945a8b1a-2ae7-449a-b7b5-206b435b2d19\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rk54k" Nov 21 20:42:41 crc kubenswrapper[4727]: I1121 20:42:41.597155 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/945a8b1a-2ae7-449a-b7b5-206b435b2d19-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-rk54k\" (UID: \"945a8b1a-2ae7-449a-b7b5-206b435b2d19\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rk54k" Nov 21 20:42:41 crc kubenswrapper[4727]: I1121 20:42:41.597253 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/945a8b1a-2ae7-449a-b7b5-206b435b2d19-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-rk54k\" (UID: \"945a8b1a-2ae7-449a-b7b5-206b435b2d19\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rk54k" Nov 21 20:42:41 crc kubenswrapper[4727]: I1121 20:42:41.597373 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tx9g\" (UniqueName: \"kubernetes.io/projected/945a8b1a-2ae7-449a-b7b5-206b435b2d19-kube-api-access-5tx9g\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-rk54k\" (UID: \"945a8b1a-2ae7-449a-b7b5-206b435b2d19\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rk54k" Nov 21 20:42:41 crc kubenswrapper[4727]: I1121 20:42:41.597447 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/945a8b1a-2ae7-449a-b7b5-206b435b2d19-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-rk54k\" (UID: \"945a8b1a-2ae7-449a-b7b5-206b435b2d19\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rk54k" Nov 21 20:42:41 crc kubenswrapper[4727]: I1121 20:42:41.597572 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/945a8b1a-2ae7-449a-b7b5-206b435b2d19-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-rk54k\" (UID: \"945a8b1a-2ae7-449a-b7b5-206b435b2d19\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rk54k" Nov 21 20:42:41 crc kubenswrapper[4727]: I1121 20:42:41.699515 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tx9g\" (UniqueName: \"kubernetes.io/projected/945a8b1a-2ae7-449a-b7b5-206b435b2d19-kube-api-access-5tx9g\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-rk54k\" (UID: \"945a8b1a-2ae7-449a-b7b5-206b435b2d19\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rk54k" Nov 21 20:42:41 crc kubenswrapper[4727]: I1121 20:42:41.699915 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/945a8b1a-2ae7-449a-b7b5-206b435b2d19-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-rk54k\" (UID: \"945a8b1a-2ae7-449a-b7b5-206b435b2d19\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rk54k" Nov 21 20:42:41 crc kubenswrapper[4727]: I1121 20:42:41.700101 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/945a8b1a-2ae7-449a-b7b5-206b435b2d19-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-rk54k\" (UID: \"945a8b1a-2ae7-449a-b7b5-206b435b2d19\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rk54k" Nov 21 20:42:41 crc kubenswrapper[4727]: I1121 20:42:41.700204 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/945a8b1a-2ae7-449a-b7b5-206b435b2d19-inventory\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-rk54k\" (UID: \"945a8b1a-2ae7-449a-b7b5-206b435b2d19\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rk54k" Nov 21 20:42:41 crc kubenswrapper[4727]: I1121 20:42:41.700374 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/945a8b1a-2ae7-449a-b7b5-206b435b2d19-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-rk54k\" (UID: \"945a8b1a-2ae7-449a-b7b5-206b435b2d19\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rk54k" Nov 21 20:42:41 crc kubenswrapper[4727]: I1121 20:42:41.700587 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/945a8b1a-2ae7-449a-b7b5-206b435b2d19-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-rk54k\" (UID: \"945a8b1a-2ae7-449a-b7b5-206b435b2d19\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rk54k" Nov 21 20:42:41 crc kubenswrapper[4727]: I1121 20:42:41.704324 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/945a8b1a-2ae7-449a-b7b5-206b435b2d19-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-rk54k\" (UID: \"945a8b1a-2ae7-449a-b7b5-206b435b2d19\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rk54k" Nov 21 20:42:41 crc kubenswrapper[4727]: I1121 20:42:41.704729 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/945a8b1a-2ae7-449a-b7b5-206b435b2d19-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-rk54k\" (UID: \"945a8b1a-2ae7-449a-b7b5-206b435b2d19\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rk54k" Nov 21 20:42:41 crc kubenswrapper[4727]: I1121 20:42:41.706396 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/945a8b1a-2ae7-449a-b7b5-206b435b2d19-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-rk54k\" (UID: \"945a8b1a-2ae7-449a-b7b5-206b435b2d19\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rk54k" Nov 21 20:42:41 crc kubenswrapper[4727]: I1121 20:42:41.706702 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/945a8b1a-2ae7-449a-b7b5-206b435b2d19-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-rk54k\" (UID: \"945a8b1a-2ae7-449a-b7b5-206b435b2d19\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rk54k" Nov 21 20:42:41 crc kubenswrapper[4727]: I1121 20:42:41.709408 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/945a8b1a-2ae7-449a-b7b5-206b435b2d19-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-rk54k\" (UID: \"945a8b1a-2ae7-449a-b7b5-206b435b2d19\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rk54k" Nov 21 20:42:41 crc kubenswrapper[4727]: I1121 20:42:41.715902 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tx9g\" (UniqueName: \"kubernetes.io/projected/945a8b1a-2ae7-449a-b7b5-206b435b2d19-kube-api-access-5tx9g\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-rk54k\" (UID: \"945a8b1a-2ae7-449a-b7b5-206b435b2d19\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rk54k" Nov 21 20:42:41 crc kubenswrapper[4727]: I1121 20:42:41.840881 4727 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rk54k" Nov 21 20:42:42 crc kubenswrapper[4727]: I1121 20:42:42.445382 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rk54k"] Nov 21 20:42:42 crc kubenswrapper[4727]: I1121 20:42:42.449036 4727 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 21 20:42:43 crc kubenswrapper[4727]: I1121 20:42:43.435508 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rk54k" event={"ID":"945a8b1a-2ae7-449a-b7b5-206b435b2d19","Type":"ContainerStarted","Data":"e97e4c81a265e8767a214c642f7aa198adbd1e732ccee744736b3633406d77f2"} Nov 21 20:42:43 crc kubenswrapper[4727]: I1121 20:42:43.435791 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rk54k" event={"ID":"945a8b1a-2ae7-449a-b7b5-206b435b2d19","Type":"ContainerStarted","Data":"9188111b12bc2dd92deb0ec432323597e8013dedc6d70b039ae987dfad2b71ad"} Nov 21 20:42:43 crc kubenswrapper[4727]: I1121 20:42:43.468737 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rk54k" podStartSLOduration=2.00418133 podStartE2EDuration="2.468713283s" podCreationTimestamp="2025-11-21 20:42:41 +0000 UTC" firstStartedPulling="2025-11-21 20:42:42.448701019 +0000 UTC m=+2167.634886073" lastFinishedPulling="2025-11-21 20:42:42.913232962 +0000 UTC m=+2168.099418026" observedRunningTime="2025-11-21 20:42:43.455223976 +0000 UTC m=+2168.641409020" watchObservedRunningTime="2025-11-21 20:42:43.468713283 +0000 UTC m=+2168.654898337" Nov 21 20:43:28 crc kubenswrapper[4727]: I1121 20:43:28.948482 4727 generic.go:334] "Generic (PLEG): container finished" podID="945a8b1a-2ae7-449a-b7b5-206b435b2d19" 
containerID="e97e4c81a265e8767a214c642f7aa198adbd1e732ccee744736b3633406d77f2" exitCode=0 Nov 21 20:43:28 crc kubenswrapper[4727]: I1121 20:43:28.948586 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rk54k" event={"ID":"945a8b1a-2ae7-449a-b7b5-206b435b2d19","Type":"ContainerDied","Data":"e97e4c81a265e8767a214c642f7aa198adbd1e732ccee744736b3633406d77f2"} Nov 21 20:43:30 crc kubenswrapper[4727]: I1121 20:43:30.445160 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rk54k" Nov 21 20:43:30 crc kubenswrapper[4727]: I1121 20:43:30.515652 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/945a8b1a-2ae7-449a-b7b5-206b435b2d19-nova-metadata-neutron-config-0\") pod \"945a8b1a-2ae7-449a-b7b5-206b435b2d19\" (UID: \"945a8b1a-2ae7-449a-b7b5-206b435b2d19\") " Nov 21 20:43:30 crc kubenswrapper[4727]: I1121 20:43:30.515975 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/945a8b1a-2ae7-449a-b7b5-206b435b2d19-neutron-metadata-combined-ca-bundle\") pod \"945a8b1a-2ae7-449a-b7b5-206b435b2d19\" (UID: \"945a8b1a-2ae7-449a-b7b5-206b435b2d19\") " Nov 21 20:43:30 crc kubenswrapper[4727]: I1121 20:43:30.516011 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/945a8b1a-2ae7-449a-b7b5-206b435b2d19-inventory\") pod \"945a8b1a-2ae7-449a-b7b5-206b435b2d19\" (UID: \"945a8b1a-2ae7-449a-b7b5-206b435b2d19\") " Nov 21 20:43:30 crc kubenswrapper[4727]: I1121 20:43:30.516087 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/945a8b1a-2ae7-449a-b7b5-206b435b2d19-neutron-ovn-metadata-agent-neutron-config-0\") pod \"945a8b1a-2ae7-449a-b7b5-206b435b2d19\" (UID: \"945a8b1a-2ae7-449a-b7b5-206b435b2d19\") " Nov 21 20:43:30 crc kubenswrapper[4727]: I1121 20:43:30.516128 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/945a8b1a-2ae7-449a-b7b5-206b435b2d19-ssh-key\") pod \"945a8b1a-2ae7-449a-b7b5-206b435b2d19\" (UID: \"945a8b1a-2ae7-449a-b7b5-206b435b2d19\") " Nov 21 20:43:30 crc kubenswrapper[4727]: I1121 20:43:30.516181 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5tx9g\" (UniqueName: \"kubernetes.io/projected/945a8b1a-2ae7-449a-b7b5-206b435b2d19-kube-api-access-5tx9g\") pod \"945a8b1a-2ae7-449a-b7b5-206b435b2d19\" (UID: \"945a8b1a-2ae7-449a-b7b5-206b435b2d19\") " Nov 21 20:43:30 crc kubenswrapper[4727]: I1121 20:43:30.522997 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/945a8b1a-2ae7-449a-b7b5-206b435b2d19-kube-api-access-5tx9g" (OuterVolumeSpecName: "kube-api-access-5tx9g") pod "945a8b1a-2ae7-449a-b7b5-206b435b2d19" (UID: "945a8b1a-2ae7-449a-b7b5-206b435b2d19"). InnerVolumeSpecName "kube-api-access-5tx9g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:43:30 crc kubenswrapper[4727]: I1121 20:43:30.544809 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/945a8b1a-2ae7-449a-b7b5-206b435b2d19-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "945a8b1a-2ae7-449a-b7b5-206b435b2d19" (UID: "945a8b1a-2ae7-449a-b7b5-206b435b2d19"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:43:30 crc kubenswrapper[4727]: I1121 20:43:30.552722 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/945a8b1a-2ae7-449a-b7b5-206b435b2d19-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "945a8b1a-2ae7-449a-b7b5-206b435b2d19" (UID: "945a8b1a-2ae7-449a-b7b5-206b435b2d19"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:43:30 crc kubenswrapper[4727]: I1121 20:43:30.561307 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/945a8b1a-2ae7-449a-b7b5-206b435b2d19-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "945a8b1a-2ae7-449a-b7b5-206b435b2d19" (UID: "945a8b1a-2ae7-449a-b7b5-206b435b2d19"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:43:30 crc kubenswrapper[4727]: I1121 20:43:30.561553 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/945a8b1a-2ae7-449a-b7b5-206b435b2d19-inventory" (OuterVolumeSpecName: "inventory") pod "945a8b1a-2ae7-449a-b7b5-206b435b2d19" (UID: "945a8b1a-2ae7-449a-b7b5-206b435b2d19"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:43:30 crc kubenswrapper[4727]: I1121 20:43:30.594382 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/945a8b1a-2ae7-449a-b7b5-206b435b2d19-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "945a8b1a-2ae7-449a-b7b5-206b435b2d19" (UID: "945a8b1a-2ae7-449a-b7b5-206b435b2d19"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:43:30 crc kubenswrapper[4727]: I1121 20:43:30.620114 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5tx9g\" (UniqueName: \"kubernetes.io/projected/945a8b1a-2ae7-449a-b7b5-206b435b2d19-kube-api-access-5tx9g\") on node \"crc\" DevicePath \"\"" Nov 21 20:43:30 crc kubenswrapper[4727]: I1121 20:43:30.620182 4727 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/945a8b1a-2ae7-449a-b7b5-206b435b2d19-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 21 20:43:30 crc kubenswrapper[4727]: I1121 20:43:30.620197 4727 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/945a8b1a-2ae7-449a-b7b5-206b435b2d19-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:43:30 crc kubenswrapper[4727]: I1121 20:43:30.620214 4727 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/945a8b1a-2ae7-449a-b7b5-206b435b2d19-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 20:43:30 crc kubenswrapper[4727]: I1121 20:43:30.620229 4727 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/945a8b1a-2ae7-449a-b7b5-206b435b2d19-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 21 20:43:30 crc kubenswrapper[4727]: I1121 20:43:30.620244 4727 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/945a8b1a-2ae7-449a-b7b5-206b435b2d19-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 20:43:30 crc kubenswrapper[4727]: I1121 20:43:30.973820 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rk54k" 
event={"ID":"945a8b1a-2ae7-449a-b7b5-206b435b2d19","Type":"ContainerDied","Data":"9188111b12bc2dd92deb0ec432323597e8013dedc6d70b039ae987dfad2b71ad"} Nov 21 20:43:30 crc kubenswrapper[4727]: I1121 20:43:30.973861 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9188111b12bc2dd92deb0ec432323597e8013dedc6d70b039ae987dfad2b71ad" Nov 21 20:43:30 crc kubenswrapper[4727]: I1121 20:43:30.973984 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-rk54k" Nov 21 20:43:31 crc kubenswrapper[4727]: I1121 20:43:31.075299 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-42g5g"] Nov 21 20:43:31 crc kubenswrapper[4727]: E1121 20:43:31.076124 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="945a8b1a-2ae7-449a-b7b5-206b435b2d19" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 21 20:43:31 crc kubenswrapper[4727]: I1121 20:43:31.076160 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="945a8b1a-2ae7-449a-b7b5-206b435b2d19" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 21 20:43:31 crc kubenswrapper[4727]: I1121 20:43:31.076645 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="945a8b1a-2ae7-449a-b7b5-206b435b2d19" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 21 20:43:31 crc kubenswrapper[4727]: I1121 20:43:31.078112 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-42g5g" Nov 21 20:43:31 crc kubenswrapper[4727]: I1121 20:43:31.083318 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 20:43:31 crc kubenswrapper[4727]: I1121 20:43:31.083667 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Nov 21 20:43:31 crc kubenswrapper[4727]: I1121 20:43:31.083905 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 21 20:43:31 crc kubenswrapper[4727]: I1121 20:43:31.084216 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 21 20:43:31 crc kubenswrapper[4727]: I1121 20:43:31.084422 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9h8bp" Nov 21 20:43:31 crc kubenswrapper[4727]: I1121 20:43:31.089173 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-42g5g"] Nov 21 20:43:31 crc kubenswrapper[4727]: I1121 20:43:31.133323 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5w9np\" (UniqueName: \"kubernetes.io/projected/b1fa34fd-2887-416b-a02a-79424f936670-kube-api-access-5w9np\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-42g5g\" (UID: \"b1fa34fd-2887-416b-a02a-79424f936670\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-42g5g" Nov 21 20:43:31 crc kubenswrapper[4727]: I1121 20:43:31.133549 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/b1fa34fd-2887-416b-a02a-79424f936670-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-42g5g\" (UID: 
\"b1fa34fd-2887-416b-a02a-79424f936670\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-42g5g" Nov 21 20:43:31 crc kubenswrapper[4727]: I1121 20:43:31.133591 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b1fa34fd-2887-416b-a02a-79424f936670-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-42g5g\" (UID: \"b1fa34fd-2887-416b-a02a-79424f936670\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-42g5g" Nov 21 20:43:31 crc kubenswrapper[4727]: I1121 20:43:31.133786 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b1fa34fd-2887-416b-a02a-79424f936670-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-42g5g\" (UID: \"b1fa34fd-2887-416b-a02a-79424f936670\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-42g5g" Nov 21 20:43:31 crc kubenswrapper[4727]: I1121 20:43:31.133941 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1fa34fd-2887-416b-a02a-79424f936670-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-42g5g\" (UID: \"b1fa34fd-2887-416b-a02a-79424f936670\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-42g5g" Nov 21 20:43:31 crc kubenswrapper[4727]: I1121 20:43:31.236048 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/b1fa34fd-2887-416b-a02a-79424f936670-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-42g5g\" (UID: \"b1fa34fd-2887-416b-a02a-79424f936670\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-42g5g" Nov 21 20:43:31 crc kubenswrapper[4727]: I1121 20:43:31.236123 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b1fa34fd-2887-416b-a02a-79424f936670-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-42g5g\" (UID: \"b1fa34fd-2887-416b-a02a-79424f936670\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-42g5g" Nov 21 20:43:31 crc kubenswrapper[4727]: I1121 20:43:31.236194 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b1fa34fd-2887-416b-a02a-79424f936670-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-42g5g\" (UID: \"b1fa34fd-2887-416b-a02a-79424f936670\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-42g5g" Nov 21 20:43:31 crc kubenswrapper[4727]: I1121 20:43:31.236270 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1fa34fd-2887-416b-a02a-79424f936670-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-42g5g\" (UID: \"b1fa34fd-2887-416b-a02a-79424f936670\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-42g5g" Nov 21 20:43:31 crc kubenswrapper[4727]: I1121 20:43:31.236318 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5w9np\" (UniqueName: \"kubernetes.io/projected/b1fa34fd-2887-416b-a02a-79424f936670-kube-api-access-5w9np\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-42g5g\" (UID: \"b1fa34fd-2887-416b-a02a-79424f936670\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-42g5g" Nov 21 20:43:31 crc kubenswrapper[4727]: I1121 20:43:31.241982 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1fa34fd-2887-416b-a02a-79424f936670-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-42g5g\" (UID: 
\"b1fa34fd-2887-416b-a02a-79424f936670\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-42g5g" Nov 21 20:43:31 crc kubenswrapper[4727]: I1121 20:43:31.242144 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/b1fa34fd-2887-416b-a02a-79424f936670-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-42g5g\" (UID: \"b1fa34fd-2887-416b-a02a-79424f936670\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-42g5g" Nov 21 20:43:31 crc kubenswrapper[4727]: I1121 20:43:31.242203 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b1fa34fd-2887-416b-a02a-79424f936670-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-42g5g\" (UID: \"b1fa34fd-2887-416b-a02a-79424f936670\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-42g5g" Nov 21 20:43:31 crc kubenswrapper[4727]: I1121 20:43:31.242428 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b1fa34fd-2887-416b-a02a-79424f936670-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-42g5g\" (UID: \"b1fa34fd-2887-416b-a02a-79424f936670\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-42g5g" Nov 21 20:43:31 crc kubenswrapper[4727]: I1121 20:43:31.261396 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5w9np\" (UniqueName: \"kubernetes.io/projected/b1fa34fd-2887-416b-a02a-79424f936670-kube-api-access-5w9np\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-42g5g\" (UID: \"b1fa34fd-2887-416b-a02a-79424f936670\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-42g5g" Nov 21 20:43:31 crc kubenswrapper[4727]: I1121 20:43:31.414789 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-42g5g" Nov 21 20:43:31 crc kubenswrapper[4727]: I1121 20:43:31.961510 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-42g5g"] Nov 21 20:43:31 crc kubenswrapper[4727]: W1121 20:43:31.967212 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb1fa34fd_2887_416b_a02a_79424f936670.slice/crio-714c86ea97ddfbf14669a99465f71e42645e96dd8aa9a4a27d1165ef0e57fb1a WatchSource:0}: Error finding container 714c86ea97ddfbf14669a99465f71e42645e96dd8aa9a4a27d1165ef0e57fb1a: Status 404 returned error can't find the container with id 714c86ea97ddfbf14669a99465f71e42645e96dd8aa9a4a27d1165ef0e57fb1a Nov 21 20:43:31 crc kubenswrapper[4727]: I1121 20:43:31.985408 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-42g5g" event={"ID":"b1fa34fd-2887-416b-a02a-79424f936670","Type":"ContainerStarted","Data":"714c86ea97ddfbf14669a99465f71e42645e96dd8aa9a4a27d1165ef0e57fb1a"} Nov 21 20:43:32 crc kubenswrapper[4727]: I1121 20:43:32.997868 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-42g5g" event={"ID":"b1fa34fd-2887-416b-a02a-79424f936670","Type":"ContainerStarted","Data":"b6d86fe1200f5afca693a74fc23b0572d4da74a2ec44f20eaa1792803f304d27"} Nov 21 20:43:33 crc kubenswrapper[4727]: I1121 20:43:33.025812 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-42g5g" podStartSLOduration=1.599032528 podStartE2EDuration="2.025790145s" podCreationTimestamp="2025-11-21 20:43:31 +0000 UTC" firstStartedPulling="2025-11-21 20:43:31.96992091 +0000 UTC m=+2217.156105954" lastFinishedPulling="2025-11-21 20:43:32.396678527 +0000 UTC m=+2217.582863571" 
observedRunningTime="2025-11-21 20:43:33.013748133 +0000 UTC m=+2218.199933187" watchObservedRunningTime="2025-11-21 20:43:33.025790145 +0000 UTC m=+2218.211975179" Nov 21 20:43:43 crc kubenswrapper[4727]: I1121 20:43:43.335410 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 20:43:43 crc kubenswrapper[4727]: I1121 20:43:43.335983 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 20:44:13 crc kubenswrapper[4727]: I1121 20:44:13.335714 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 20:44:13 crc kubenswrapper[4727]: I1121 20:44:13.336392 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 20:44:20 crc kubenswrapper[4727]: I1121 20:44:20.007541 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vt74w"] Nov 21 20:44:20 crc kubenswrapper[4727]: I1121 20:44:20.010928 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vt74w" Nov 21 20:44:20 crc kubenswrapper[4727]: I1121 20:44:20.027063 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vt74w"] Nov 21 20:44:20 crc kubenswrapper[4727]: I1121 20:44:20.117056 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a746e2f-c277-4d70-b5fc-c187339e4171-catalog-content\") pod \"redhat-marketplace-vt74w\" (UID: \"5a746e2f-c277-4d70-b5fc-c187339e4171\") " pod="openshift-marketplace/redhat-marketplace-vt74w" Nov 21 20:44:20 crc kubenswrapper[4727]: I1121 20:44:20.117471 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a746e2f-c277-4d70-b5fc-c187339e4171-utilities\") pod \"redhat-marketplace-vt74w\" (UID: \"5a746e2f-c277-4d70-b5fc-c187339e4171\") " pod="openshift-marketplace/redhat-marketplace-vt74w" Nov 21 20:44:20 crc kubenswrapper[4727]: I1121 20:44:20.117652 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwp8x\" (UniqueName: \"kubernetes.io/projected/5a746e2f-c277-4d70-b5fc-c187339e4171-kube-api-access-hwp8x\") pod \"redhat-marketplace-vt74w\" (UID: \"5a746e2f-c277-4d70-b5fc-c187339e4171\") " pod="openshift-marketplace/redhat-marketplace-vt74w" Nov 21 20:44:20 crc kubenswrapper[4727]: I1121 20:44:20.219700 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a746e2f-c277-4d70-b5fc-c187339e4171-catalog-content\") pod \"redhat-marketplace-vt74w\" (UID: \"5a746e2f-c277-4d70-b5fc-c187339e4171\") " pod="openshift-marketplace/redhat-marketplace-vt74w" Nov 21 20:44:20 crc kubenswrapper[4727]: I1121 20:44:20.219777 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a746e2f-c277-4d70-b5fc-c187339e4171-utilities\") pod \"redhat-marketplace-vt74w\" (UID: \"5a746e2f-c277-4d70-b5fc-c187339e4171\") " pod="openshift-marketplace/redhat-marketplace-vt74w" Nov 21 20:44:20 crc kubenswrapper[4727]: I1121 20:44:20.219809 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwp8x\" (UniqueName: \"kubernetes.io/projected/5a746e2f-c277-4d70-b5fc-c187339e4171-kube-api-access-hwp8x\") pod \"redhat-marketplace-vt74w\" (UID: \"5a746e2f-c277-4d70-b5fc-c187339e4171\") " pod="openshift-marketplace/redhat-marketplace-vt74w" Nov 21 20:44:20 crc kubenswrapper[4727]: I1121 20:44:20.220514 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a746e2f-c277-4d70-b5fc-c187339e4171-utilities\") pod \"redhat-marketplace-vt74w\" (UID: \"5a746e2f-c277-4d70-b5fc-c187339e4171\") " pod="openshift-marketplace/redhat-marketplace-vt74w" Nov 21 20:44:20 crc kubenswrapper[4727]: I1121 20:44:20.220890 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a746e2f-c277-4d70-b5fc-c187339e4171-catalog-content\") pod \"redhat-marketplace-vt74w\" (UID: \"5a746e2f-c277-4d70-b5fc-c187339e4171\") " pod="openshift-marketplace/redhat-marketplace-vt74w" Nov 21 20:44:20 crc kubenswrapper[4727]: I1121 20:44:20.240464 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwp8x\" (UniqueName: \"kubernetes.io/projected/5a746e2f-c277-4d70-b5fc-c187339e4171-kube-api-access-hwp8x\") pod \"redhat-marketplace-vt74w\" (UID: \"5a746e2f-c277-4d70-b5fc-c187339e4171\") " pod="openshift-marketplace/redhat-marketplace-vt74w" Nov 21 20:44:20 crc kubenswrapper[4727]: I1121 20:44:20.363898 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vt74w" Nov 21 20:44:20 crc kubenswrapper[4727]: I1121 20:44:20.863932 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vt74w"] Nov 21 20:44:20 crc kubenswrapper[4727]: W1121 20:44:20.869139 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a746e2f_c277_4d70_b5fc_c187339e4171.slice/crio-4b8e3bbcdd5eab8e9580af4745c835e7b5878b8f0d74580718a47795ef5d1252 WatchSource:0}: Error finding container 4b8e3bbcdd5eab8e9580af4745c835e7b5878b8f0d74580718a47795ef5d1252: Status 404 returned error can't find the container with id 4b8e3bbcdd5eab8e9580af4745c835e7b5878b8f0d74580718a47795ef5d1252 Nov 21 20:44:21 crc kubenswrapper[4727]: I1121 20:44:21.519691 4727 generic.go:334] "Generic (PLEG): container finished" podID="5a746e2f-c277-4d70-b5fc-c187339e4171" containerID="3d51f482af35b8e091ea81d5f3b66faf26bced920ee69f9ffcc08eb71ee78abb" exitCode=0 Nov 21 20:44:21 crc kubenswrapper[4727]: I1121 20:44:21.519735 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vt74w" event={"ID":"5a746e2f-c277-4d70-b5fc-c187339e4171","Type":"ContainerDied","Data":"3d51f482af35b8e091ea81d5f3b66faf26bced920ee69f9ffcc08eb71ee78abb"} Nov 21 20:44:21 crc kubenswrapper[4727]: I1121 20:44:21.519764 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vt74w" event={"ID":"5a746e2f-c277-4d70-b5fc-c187339e4171","Type":"ContainerStarted","Data":"4b8e3bbcdd5eab8e9580af4745c835e7b5878b8f0d74580718a47795ef5d1252"} Nov 21 20:44:22 crc kubenswrapper[4727]: I1121 20:44:22.531107 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vt74w" 
event={"ID":"5a746e2f-c277-4d70-b5fc-c187339e4171","Type":"ContainerStarted","Data":"9fe27ddc9122997a38c3e52f249a83ec5ddc575986e37e7e6b268eb7dd727c3d"} Nov 21 20:44:23 crc kubenswrapper[4727]: I1121 20:44:23.544193 4727 generic.go:334] "Generic (PLEG): container finished" podID="5a746e2f-c277-4d70-b5fc-c187339e4171" containerID="9fe27ddc9122997a38c3e52f249a83ec5ddc575986e37e7e6b268eb7dd727c3d" exitCode=0 Nov 21 20:44:23 crc kubenswrapper[4727]: I1121 20:44:23.544280 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vt74w" event={"ID":"5a746e2f-c277-4d70-b5fc-c187339e4171","Type":"ContainerDied","Data":"9fe27ddc9122997a38c3e52f249a83ec5ddc575986e37e7e6b268eb7dd727c3d"} Nov 21 20:44:24 crc kubenswrapper[4727]: I1121 20:44:24.561687 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vt74w" event={"ID":"5a746e2f-c277-4d70-b5fc-c187339e4171","Type":"ContainerStarted","Data":"ffcb6b77ffb36404113dcf00885cbc16c61c52d968ff9471265f8c7089edffab"} Nov 21 20:44:24 crc kubenswrapper[4727]: I1121 20:44:24.594015 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vt74w" podStartSLOduration=3.136903553 podStartE2EDuration="5.593987965s" podCreationTimestamp="2025-11-21 20:44:19 +0000 UTC" firstStartedPulling="2025-11-21 20:44:21.521636852 +0000 UTC m=+2266.707821896" lastFinishedPulling="2025-11-21 20:44:23.978721254 +0000 UTC m=+2269.164906308" observedRunningTime="2025-11-21 20:44:24.581664796 +0000 UTC m=+2269.767849860" watchObservedRunningTime="2025-11-21 20:44:24.593987965 +0000 UTC m=+2269.780173019" Nov 21 20:44:30 crc kubenswrapper[4727]: I1121 20:44:30.364493 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vt74w" Nov 21 20:44:30 crc kubenswrapper[4727]: I1121 20:44:30.365732 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/redhat-marketplace-vt74w" Nov 21 20:44:30 crc kubenswrapper[4727]: I1121 20:44:30.436121 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vt74w" Nov 21 20:44:30 crc kubenswrapper[4727]: I1121 20:44:30.701023 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vt74w" Nov 21 20:44:32 crc kubenswrapper[4727]: I1121 20:44:32.188474 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vt74w"] Nov 21 20:44:32 crc kubenswrapper[4727]: I1121 20:44:32.664115 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vt74w" podUID="5a746e2f-c277-4d70-b5fc-c187339e4171" containerName="registry-server" containerID="cri-o://ffcb6b77ffb36404113dcf00885cbc16c61c52d968ff9471265f8c7089edffab" gracePeriod=2 Nov 21 20:44:33 crc kubenswrapper[4727]: I1121 20:44:33.180469 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vt74w" Nov 21 20:44:33 crc kubenswrapper[4727]: I1121 20:44:33.273174 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwp8x\" (UniqueName: \"kubernetes.io/projected/5a746e2f-c277-4d70-b5fc-c187339e4171-kube-api-access-hwp8x\") pod \"5a746e2f-c277-4d70-b5fc-c187339e4171\" (UID: \"5a746e2f-c277-4d70-b5fc-c187339e4171\") " Nov 21 20:44:33 crc kubenswrapper[4727]: I1121 20:44:33.273232 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a746e2f-c277-4d70-b5fc-c187339e4171-catalog-content\") pod \"5a746e2f-c277-4d70-b5fc-c187339e4171\" (UID: \"5a746e2f-c277-4d70-b5fc-c187339e4171\") " Nov 21 20:44:33 crc kubenswrapper[4727]: I1121 20:44:33.273495 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a746e2f-c277-4d70-b5fc-c187339e4171-utilities\") pod \"5a746e2f-c277-4d70-b5fc-c187339e4171\" (UID: \"5a746e2f-c277-4d70-b5fc-c187339e4171\") " Nov 21 20:44:33 crc kubenswrapper[4727]: I1121 20:44:33.277079 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a746e2f-c277-4d70-b5fc-c187339e4171-utilities" (OuterVolumeSpecName: "utilities") pod "5a746e2f-c277-4d70-b5fc-c187339e4171" (UID: "5a746e2f-c277-4d70-b5fc-c187339e4171"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:44:33 crc kubenswrapper[4727]: I1121 20:44:33.280357 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a746e2f-c277-4d70-b5fc-c187339e4171-kube-api-access-hwp8x" (OuterVolumeSpecName: "kube-api-access-hwp8x") pod "5a746e2f-c277-4d70-b5fc-c187339e4171" (UID: "5a746e2f-c277-4d70-b5fc-c187339e4171"). InnerVolumeSpecName "kube-api-access-hwp8x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:44:33 crc kubenswrapper[4727]: I1121 20:44:33.292043 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a746e2f-c277-4d70-b5fc-c187339e4171-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5a746e2f-c277-4d70-b5fc-c187339e4171" (UID: "5a746e2f-c277-4d70-b5fc-c187339e4171"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:44:33 crc kubenswrapper[4727]: I1121 20:44:33.378476 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwp8x\" (UniqueName: \"kubernetes.io/projected/5a746e2f-c277-4d70-b5fc-c187339e4171-kube-api-access-hwp8x\") on node \"crc\" DevicePath \"\"" Nov 21 20:44:33 crc kubenswrapper[4727]: I1121 20:44:33.378512 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a746e2f-c277-4d70-b5fc-c187339e4171-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 20:44:33 crc kubenswrapper[4727]: I1121 20:44:33.378523 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a746e2f-c277-4d70-b5fc-c187339e4171-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 20:44:33 crc kubenswrapper[4727]: I1121 20:44:33.678892 4727 generic.go:334] "Generic (PLEG): container finished" podID="5a746e2f-c277-4d70-b5fc-c187339e4171" containerID="ffcb6b77ffb36404113dcf00885cbc16c61c52d968ff9471265f8c7089edffab" exitCode=0 Nov 21 20:44:33 crc kubenswrapper[4727]: I1121 20:44:33.678995 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vt74w" Nov 21 20:44:33 crc kubenswrapper[4727]: I1121 20:44:33.679001 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vt74w" event={"ID":"5a746e2f-c277-4d70-b5fc-c187339e4171","Type":"ContainerDied","Data":"ffcb6b77ffb36404113dcf00885cbc16c61c52d968ff9471265f8c7089edffab"} Nov 21 20:44:33 crc kubenswrapper[4727]: I1121 20:44:33.679699 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vt74w" event={"ID":"5a746e2f-c277-4d70-b5fc-c187339e4171","Type":"ContainerDied","Data":"4b8e3bbcdd5eab8e9580af4745c835e7b5878b8f0d74580718a47795ef5d1252"} Nov 21 20:44:33 crc kubenswrapper[4727]: I1121 20:44:33.679735 4727 scope.go:117] "RemoveContainer" containerID="ffcb6b77ffb36404113dcf00885cbc16c61c52d968ff9471265f8c7089edffab" Nov 21 20:44:33 crc kubenswrapper[4727]: I1121 20:44:33.713162 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vt74w"] Nov 21 20:44:33 crc kubenswrapper[4727]: I1121 20:44:33.716905 4727 scope.go:117] "RemoveContainer" containerID="9fe27ddc9122997a38c3e52f249a83ec5ddc575986e37e7e6b268eb7dd727c3d" Nov 21 20:44:33 crc kubenswrapper[4727]: I1121 20:44:33.725247 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vt74w"] Nov 21 20:44:33 crc kubenswrapper[4727]: I1121 20:44:33.747900 4727 scope.go:117] "RemoveContainer" containerID="3d51f482af35b8e091ea81d5f3b66faf26bced920ee69f9ffcc08eb71ee78abb" Nov 21 20:44:33 crc kubenswrapper[4727]: I1121 20:44:33.835542 4727 scope.go:117] "RemoveContainer" containerID="ffcb6b77ffb36404113dcf00885cbc16c61c52d968ff9471265f8c7089edffab" Nov 21 20:44:33 crc kubenswrapper[4727]: E1121 20:44:33.836149 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"ffcb6b77ffb36404113dcf00885cbc16c61c52d968ff9471265f8c7089edffab\": container with ID starting with ffcb6b77ffb36404113dcf00885cbc16c61c52d968ff9471265f8c7089edffab not found: ID does not exist" containerID="ffcb6b77ffb36404113dcf00885cbc16c61c52d968ff9471265f8c7089edffab" Nov 21 20:44:33 crc kubenswrapper[4727]: I1121 20:44:33.836203 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffcb6b77ffb36404113dcf00885cbc16c61c52d968ff9471265f8c7089edffab"} err="failed to get container status \"ffcb6b77ffb36404113dcf00885cbc16c61c52d968ff9471265f8c7089edffab\": rpc error: code = NotFound desc = could not find container \"ffcb6b77ffb36404113dcf00885cbc16c61c52d968ff9471265f8c7089edffab\": container with ID starting with ffcb6b77ffb36404113dcf00885cbc16c61c52d968ff9471265f8c7089edffab not found: ID does not exist" Nov 21 20:44:33 crc kubenswrapper[4727]: I1121 20:44:33.836242 4727 scope.go:117] "RemoveContainer" containerID="9fe27ddc9122997a38c3e52f249a83ec5ddc575986e37e7e6b268eb7dd727c3d" Nov 21 20:44:33 crc kubenswrapper[4727]: E1121 20:44:33.836600 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fe27ddc9122997a38c3e52f249a83ec5ddc575986e37e7e6b268eb7dd727c3d\": container with ID starting with 9fe27ddc9122997a38c3e52f249a83ec5ddc575986e37e7e6b268eb7dd727c3d not found: ID does not exist" containerID="9fe27ddc9122997a38c3e52f249a83ec5ddc575986e37e7e6b268eb7dd727c3d" Nov 21 20:44:33 crc kubenswrapper[4727]: I1121 20:44:33.836735 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fe27ddc9122997a38c3e52f249a83ec5ddc575986e37e7e6b268eb7dd727c3d"} err="failed to get container status \"9fe27ddc9122997a38c3e52f249a83ec5ddc575986e37e7e6b268eb7dd727c3d\": rpc error: code = NotFound desc = could not find container \"9fe27ddc9122997a38c3e52f249a83ec5ddc575986e37e7e6b268eb7dd727c3d\": container with ID 
starting with 9fe27ddc9122997a38c3e52f249a83ec5ddc575986e37e7e6b268eb7dd727c3d not found: ID does not exist" Nov 21 20:44:33 crc kubenswrapper[4727]: I1121 20:44:33.836843 4727 scope.go:117] "RemoveContainer" containerID="3d51f482af35b8e091ea81d5f3b66faf26bced920ee69f9ffcc08eb71ee78abb" Nov 21 20:44:33 crc kubenswrapper[4727]: E1121 20:44:33.837316 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d51f482af35b8e091ea81d5f3b66faf26bced920ee69f9ffcc08eb71ee78abb\": container with ID starting with 3d51f482af35b8e091ea81d5f3b66faf26bced920ee69f9ffcc08eb71ee78abb not found: ID does not exist" containerID="3d51f482af35b8e091ea81d5f3b66faf26bced920ee69f9ffcc08eb71ee78abb" Nov 21 20:44:33 crc kubenswrapper[4727]: I1121 20:44:33.837350 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d51f482af35b8e091ea81d5f3b66faf26bced920ee69f9ffcc08eb71ee78abb"} err="failed to get container status \"3d51f482af35b8e091ea81d5f3b66faf26bced920ee69f9ffcc08eb71ee78abb\": rpc error: code = NotFound desc = could not find container \"3d51f482af35b8e091ea81d5f3b66faf26bced920ee69f9ffcc08eb71ee78abb\": container with ID starting with 3d51f482af35b8e091ea81d5f3b66faf26bced920ee69f9ffcc08eb71ee78abb not found: ID does not exist" Nov 21 20:44:35 crc kubenswrapper[4727]: I1121 20:44:35.522887 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a746e2f-c277-4d70-b5fc-c187339e4171" path="/var/lib/kubelet/pods/5a746e2f-c277-4d70-b5fc-c187339e4171/volumes" Nov 21 20:44:43 crc kubenswrapper[4727]: I1121 20:44:43.336025 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 20:44:43 crc kubenswrapper[4727]: I1121 
20:44:43.336677 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 20:44:43 crc kubenswrapper[4727]: I1121 20:44:43.336745 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" Nov 21 20:44:43 crc kubenswrapper[4727]: I1121 20:44:43.338106 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"897f2adda85608a5dbb7a817daa3de6c7e34ae3f7f02b518fee2c34299b91df0"} pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 20:44:43 crc kubenswrapper[4727]: I1121 20:44:43.338221 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" containerID="cri-o://897f2adda85608a5dbb7a817daa3de6c7e34ae3f7f02b518fee2c34299b91df0" gracePeriod=600 Nov 21 20:44:43 crc kubenswrapper[4727]: E1121 20:44:43.488313 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 20:44:43 crc kubenswrapper[4727]: I1121 20:44:43.818437 4727 generic.go:334] "Generic (PLEG): container finished" 
podID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerID="897f2adda85608a5dbb7a817daa3de6c7e34ae3f7f02b518fee2c34299b91df0" exitCode=0 Nov 21 20:44:43 crc kubenswrapper[4727]: I1121 20:44:43.818477 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" event={"ID":"b58aef8f-f223-47d8-a2e6-4a80aeeeec42","Type":"ContainerDied","Data":"897f2adda85608a5dbb7a817daa3de6c7e34ae3f7f02b518fee2c34299b91df0"} Nov 21 20:44:43 crc kubenswrapper[4727]: I1121 20:44:43.818829 4727 scope.go:117] "RemoveContainer" containerID="4261fd6f25fa69562f5c96f8f8cb06e32873a62c894efb833ccfecb95710b418" Nov 21 20:44:43 crc kubenswrapper[4727]: I1121 20:44:43.819912 4727 scope.go:117] "RemoveContainer" containerID="897f2adda85608a5dbb7a817daa3de6c7e34ae3f7f02b518fee2c34299b91df0" Nov 21 20:44:43 crc kubenswrapper[4727]: E1121 20:44:43.820485 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 20:44:56 crc kubenswrapper[4727]: I1121 20:44:56.500808 4727 scope.go:117] "RemoveContainer" containerID="897f2adda85608a5dbb7a817daa3de6c7e34ae3f7f02b518fee2c34299b91df0" Nov 21 20:44:56 crc kubenswrapper[4727]: E1121 20:44:56.502340 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 
20:45:00 crc kubenswrapper[4727]: I1121 20:45:00.143520 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395965-v8xnn"] Nov 21 20:45:00 crc kubenswrapper[4727]: E1121 20:45:00.144660 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a746e2f-c277-4d70-b5fc-c187339e4171" containerName="extract-utilities" Nov 21 20:45:00 crc kubenswrapper[4727]: I1121 20:45:00.144678 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a746e2f-c277-4d70-b5fc-c187339e4171" containerName="extract-utilities" Nov 21 20:45:00 crc kubenswrapper[4727]: E1121 20:45:00.144693 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a746e2f-c277-4d70-b5fc-c187339e4171" containerName="registry-server" Nov 21 20:45:00 crc kubenswrapper[4727]: I1121 20:45:00.144701 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a746e2f-c277-4d70-b5fc-c187339e4171" containerName="registry-server" Nov 21 20:45:00 crc kubenswrapper[4727]: E1121 20:45:00.144721 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a746e2f-c277-4d70-b5fc-c187339e4171" containerName="extract-content" Nov 21 20:45:00 crc kubenswrapper[4727]: I1121 20:45:00.144728 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a746e2f-c277-4d70-b5fc-c187339e4171" containerName="extract-content" Nov 21 20:45:00 crc kubenswrapper[4727]: I1121 20:45:00.145038 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a746e2f-c277-4d70-b5fc-c187339e4171" containerName="registry-server" Nov 21 20:45:00 crc kubenswrapper[4727]: I1121 20:45:00.145873 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395965-v8xnn" Nov 21 20:45:00 crc kubenswrapper[4727]: I1121 20:45:00.151225 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 21 20:45:00 crc kubenswrapper[4727]: I1121 20:45:00.152613 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 21 20:45:00 crc kubenswrapper[4727]: I1121 20:45:00.159907 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395965-v8xnn"] Nov 21 20:45:00 crc kubenswrapper[4727]: I1121 20:45:00.230747 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjjx8\" (UniqueName: \"kubernetes.io/projected/81419a0f-aa23-4559-b050-c4f71ab63409-kube-api-access-cjjx8\") pod \"collect-profiles-29395965-v8xnn\" (UID: \"81419a0f-aa23-4559-b050-c4f71ab63409\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395965-v8xnn" Nov 21 20:45:00 crc kubenswrapper[4727]: I1121 20:45:00.230993 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/81419a0f-aa23-4559-b050-c4f71ab63409-config-volume\") pod \"collect-profiles-29395965-v8xnn\" (UID: \"81419a0f-aa23-4559-b050-c4f71ab63409\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395965-v8xnn" Nov 21 20:45:00 crc kubenswrapper[4727]: I1121 20:45:00.231206 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/81419a0f-aa23-4559-b050-c4f71ab63409-secret-volume\") pod \"collect-profiles-29395965-v8xnn\" (UID: \"81419a0f-aa23-4559-b050-c4f71ab63409\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29395965-v8xnn" Nov 21 20:45:00 crc kubenswrapper[4727]: I1121 20:45:00.336325 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjjx8\" (UniqueName: \"kubernetes.io/projected/81419a0f-aa23-4559-b050-c4f71ab63409-kube-api-access-cjjx8\") pod \"collect-profiles-29395965-v8xnn\" (UID: \"81419a0f-aa23-4559-b050-c4f71ab63409\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395965-v8xnn" Nov 21 20:45:00 crc kubenswrapper[4727]: I1121 20:45:00.336517 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/81419a0f-aa23-4559-b050-c4f71ab63409-config-volume\") pod \"collect-profiles-29395965-v8xnn\" (UID: \"81419a0f-aa23-4559-b050-c4f71ab63409\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395965-v8xnn" Nov 21 20:45:00 crc kubenswrapper[4727]: I1121 20:45:00.337558 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/81419a0f-aa23-4559-b050-c4f71ab63409-secret-volume\") pod \"collect-profiles-29395965-v8xnn\" (UID: \"81419a0f-aa23-4559-b050-c4f71ab63409\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395965-v8xnn" Nov 21 20:45:00 crc kubenswrapper[4727]: I1121 20:45:00.341192 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/81419a0f-aa23-4559-b050-c4f71ab63409-config-volume\") pod \"collect-profiles-29395965-v8xnn\" (UID: \"81419a0f-aa23-4559-b050-c4f71ab63409\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395965-v8xnn" Nov 21 20:45:00 crc kubenswrapper[4727]: I1121 20:45:00.347084 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/81419a0f-aa23-4559-b050-c4f71ab63409-secret-volume\") pod \"collect-profiles-29395965-v8xnn\" (UID: \"81419a0f-aa23-4559-b050-c4f71ab63409\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395965-v8xnn" Nov 21 20:45:00 crc kubenswrapper[4727]: I1121 20:45:00.364744 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjjx8\" (UniqueName: \"kubernetes.io/projected/81419a0f-aa23-4559-b050-c4f71ab63409-kube-api-access-cjjx8\") pod \"collect-profiles-29395965-v8xnn\" (UID: \"81419a0f-aa23-4559-b050-c4f71ab63409\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395965-v8xnn" Nov 21 20:45:00 crc kubenswrapper[4727]: I1121 20:45:00.475114 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395965-v8xnn" Nov 21 20:45:00 crc kubenswrapper[4727]: I1121 20:45:00.935486 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395965-v8xnn"] Nov 21 20:45:01 crc kubenswrapper[4727]: I1121 20:45:01.039049 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395965-v8xnn" event={"ID":"81419a0f-aa23-4559-b050-c4f71ab63409","Type":"ContainerStarted","Data":"50718b00ac8ea0d179f4c097549ea7859ab5d9f43a2e9435237e696faef4b581"} Nov 21 20:45:02 crc kubenswrapper[4727]: I1121 20:45:02.049582 4727 generic.go:334] "Generic (PLEG): container finished" podID="81419a0f-aa23-4559-b050-c4f71ab63409" containerID="323331ab81ebc0887d64bf4ab610ddf4241a9396b2472e3ad021f91a16698847" exitCode=0 Nov 21 20:45:02 crc kubenswrapper[4727]: I1121 20:45:02.049679 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395965-v8xnn" 
event={"ID":"81419a0f-aa23-4559-b050-c4f71ab63409","Type":"ContainerDied","Data":"323331ab81ebc0887d64bf4ab610ddf4241a9396b2472e3ad021f91a16698847"} Nov 21 20:45:03 crc kubenswrapper[4727]: I1121 20:45:03.451679 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395965-v8xnn" Nov 21 20:45:03 crc kubenswrapper[4727]: I1121 20:45:03.622540 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjjx8\" (UniqueName: \"kubernetes.io/projected/81419a0f-aa23-4559-b050-c4f71ab63409-kube-api-access-cjjx8\") pod \"81419a0f-aa23-4559-b050-c4f71ab63409\" (UID: \"81419a0f-aa23-4559-b050-c4f71ab63409\") " Nov 21 20:45:03 crc kubenswrapper[4727]: I1121 20:45:03.622604 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/81419a0f-aa23-4559-b050-c4f71ab63409-secret-volume\") pod \"81419a0f-aa23-4559-b050-c4f71ab63409\" (UID: \"81419a0f-aa23-4559-b050-c4f71ab63409\") " Nov 21 20:45:03 crc kubenswrapper[4727]: I1121 20:45:03.622625 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/81419a0f-aa23-4559-b050-c4f71ab63409-config-volume\") pod \"81419a0f-aa23-4559-b050-c4f71ab63409\" (UID: \"81419a0f-aa23-4559-b050-c4f71ab63409\") " Nov 21 20:45:03 crc kubenswrapper[4727]: I1121 20:45:03.623770 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81419a0f-aa23-4559-b050-c4f71ab63409-config-volume" (OuterVolumeSpecName: "config-volume") pod "81419a0f-aa23-4559-b050-c4f71ab63409" (UID: "81419a0f-aa23-4559-b050-c4f71ab63409"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:45:03 crc kubenswrapper[4727]: I1121 20:45:03.632072 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81419a0f-aa23-4559-b050-c4f71ab63409-kube-api-access-cjjx8" (OuterVolumeSpecName: "kube-api-access-cjjx8") pod "81419a0f-aa23-4559-b050-c4f71ab63409" (UID: "81419a0f-aa23-4559-b050-c4f71ab63409"). InnerVolumeSpecName "kube-api-access-cjjx8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:45:03 crc kubenswrapper[4727]: I1121 20:45:03.632183 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81419a0f-aa23-4559-b050-c4f71ab63409-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "81419a0f-aa23-4559-b050-c4f71ab63409" (UID: "81419a0f-aa23-4559-b050-c4f71ab63409"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:45:03 crc kubenswrapper[4727]: I1121 20:45:03.725514 4727 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/81419a0f-aa23-4559-b050-c4f71ab63409-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 21 20:45:03 crc kubenswrapper[4727]: I1121 20:45:03.725553 4727 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/81419a0f-aa23-4559-b050-c4f71ab63409-config-volume\") on node \"crc\" DevicePath \"\"" Nov 21 20:45:03 crc kubenswrapper[4727]: I1121 20:45:03.725564 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjjx8\" (UniqueName: \"kubernetes.io/projected/81419a0f-aa23-4559-b050-c4f71ab63409-kube-api-access-cjjx8\") on node \"crc\" DevicePath \"\"" Nov 21 20:45:04 crc kubenswrapper[4727]: I1121 20:45:04.077437 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395965-v8xnn" 
event={"ID":"81419a0f-aa23-4559-b050-c4f71ab63409","Type":"ContainerDied","Data":"50718b00ac8ea0d179f4c097549ea7859ab5d9f43a2e9435237e696faef4b581"} Nov 21 20:45:04 crc kubenswrapper[4727]: I1121 20:45:04.077489 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50718b00ac8ea0d179f4c097549ea7859ab5d9f43a2e9435237e696faef4b581" Nov 21 20:45:04 crc kubenswrapper[4727]: I1121 20:45:04.077500 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395965-v8xnn" Nov 21 20:45:04 crc kubenswrapper[4727]: I1121 20:45:04.541299 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395920-d94jg"] Nov 21 20:45:04 crc kubenswrapper[4727]: I1121 20:45:04.551725 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395920-d94jg"] Nov 21 20:45:05 crc kubenswrapper[4727]: I1121 20:45:05.514689 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9f7a69e-e7d1-4048-8263-52cfefbc90d5" path="/var/lib/kubelet/pods/d9f7a69e-e7d1-4048-8263-52cfefbc90d5/volumes" Nov 21 20:45:07 crc kubenswrapper[4727]: I1121 20:45:07.499436 4727 scope.go:117] "RemoveContainer" containerID="897f2adda85608a5dbb7a817daa3de6c7e34ae3f7f02b518fee2c34299b91df0" Nov 21 20:45:07 crc kubenswrapper[4727]: E1121 20:45:07.500366 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 20:45:18 crc kubenswrapper[4727]: I1121 20:45:18.499930 4727 scope.go:117] "RemoveContainer" 
containerID="897f2adda85608a5dbb7a817daa3de6c7e34ae3f7f02b518fee2c34299b91df0" Nov 21 20:45:18 crc kubenswrapper[4727]: E1121 20:45:18.501400 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 20:45:32 crc kubenswrapper[4727]: I1121 20:45:32.499598 4727 scope.go:117] "RemoveContainer" containerID="897f2adda85608a5dbb7a817daa3de6c7e34ae3f7f02b518fee2c34299b91df0" Nov 21 20:45:32 crc kubenswrapper[4727]: E1121 20:45:32.500471 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 20:45:43 crc kubenswrapper[4727]: I1121 20:45:43.500062 4727 scope.go:117] "RemoveContainer" containerID="897f2adda85608a5dbb7a817daa3de6c7e34ae3f7f02b518fee2c34299b91df0" Nov 21 20:45:43 crc kubenswrapper[4727]: E1121 20:45:43.501165 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 20:45:47 crc kubenswrapper[4727]: I1121 20:45:47.003258 4727 scope.go:117] 
"RemoveContainer" containerID="5a3e523b6cd9a2cb7bc0497a09157dcb8ebd0e88e784e1b16c41e9322d2ce3af" Nov 21 20:45:55 crc kubenswrapper[4727]: I1121 20:45:55.507585 4727 scope.go:117] "RemoveContainer" containerID="897f2adda85608a5dbb7a817daa3de6c7e34ae3f7f02b518fee2c34299b91df0" Nov 21 20:45:55 crc kubenswrapper[4727]: E1121 20:45:55.508394 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 20:46:09 crc kubenswrapper[4727]: I1121 20:46:09.501208 4727 scope.go:117] "RemoveContainer" containerID="897f2adda85608a5dbb7a817daa3de6c7e34ae3f7f02b518fee2c34299b91df0" Nov 21 20:46:09 crc kubenswrapper[4727]: E1121 20:46:09.502081 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 20:46:24 crc kubenswrapper[4727]: I1121 20:46:24.499254 4727 scope.go:117] "RemoveContainer" containerID="897f2adda85608a5dbb7a817daa3de6c7e34ae3f7f02b518fee2c34299b91df0" Nov 21 20:46:24 crc kubenswrapper[4727]: E1121 20:46:24.499999 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 20:46:26 crc kubenswrapper[4727]: I1121 20:46:26.830548 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-z26sk"] Nov 21 20:46:26 crc kubenswrapper[4727]: E1121 20:46:26.832747 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81419a0f-aa23-4559-b050-c4f71ab63409" containerName="collect-profiles" Nov 21 20:46:26 crc kubenswrapper[4727]: I1121 20:46:26.832783 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="81419a0f-aa23-4559-b050-c4f71ab63409" containerName="collect-profiles" Nov 21 20:46:26 crc kubenswrapper[4727]: I1121 20:46:26.833387 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="81419a0f-aa23-4559-b050-c4f71ab63409" containerName="collect-profiles" Nov 21 20:46:26 crc kubenswrapper[4727]: I1121 20:46:26.837390 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z26sk" Nov 21 20:46:26 crc kubenswrapper[4727]: I1121 20:46:26.841498 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z26sk"] Nov 21 20:46:26 crc kubenswrapper[4727]: I1121 20:46:26.967359 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24660323-baa8-4e69-b88f-67dce0e4f8f5-catalog-content\") pod \"redhat-operators-z26sk\" (UID: \"24660323-baa8-4e69-b88f-67dce0e4f8f5\") " pod="openshift-marketplace/redhat-operators-z26sk" Nov 21 20:46:26 crc kubenswrapper[4727]: I1121 20:46:26.967478 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b8gt\" (UniqueName: \"kubernetes.io/projected/24660323-baa8-4e69-b88f-67dce0e4f8f5-kube-api-access-2b8gt\") pod \"redhat-operators-z26sk\" (UID: 
\"24660323-baa8-4e69-b88f-67dce0e4f8f5\") " pod="openshift-marketplace/redhat-operators-z26sk" Nov 21 20:46:26 crc kubenswrapper[4727]: I1121 20:46:26.967525 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24660323-baa8-4e69-b88f-67dce0e4f8f5-utilities\") pod \"redhat-operators-z26sk\" (UID: \"24660323-baa8-4e69-b88f-67dce0e4f8f5\") " pod="openshift-marketplace/redhat-operators-z26sk" Nov 21 20:46:27 crc kubenswrapper[4727]: I1121 20:46:27.069191 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24660323-baa8-4e69-b88f-67dce0e4f8f5-catalog-content\") pod \"redhat-operators-z26sk\" (UID: \"24660323-baa8-4e69-b88f-67dce0e4f8f5\") " pod="openshift-marketplace/redhat-operators-z26sk" Nov 21 20:46:27 crc kubenswrapper[4727]: I1121 20:46:27.069297 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2b8gt\" (UniqueName: \"kubernetes.io/projected/24660323-baa8-4e69-b88f-67dce0e4f8f5-kube-api-access-2b8gt\") pod \"redhat-operators-z26sk\" (UID: \"24660323-baa8-4e69-b88f-67dce0e4f8f5\") " pod="openshift-marketplace/redhat-operators-z26sk" Nov 21 20:46:27 crc kubenswrapper[4727]: I1121 20:46:27.069338 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24660323-baa8-4e69-b88f-67dce0e4f8f5-utilities\") pod \"redhat-operators-z26sk\" (UID: \"24660323-baa8-4e69-b88f-67dce0e4f8f5\") " pod="openshift-marketplace/redhat-operators-z26sk" Nov 21 20:46:27 crc kubenswrapper[4727]: I1121 20:46:27.069693 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24660323-baa8-4e69-b88f-67dce0e4f8f5-catalog-content\") pod \"redhat-operators-z26sk\" (UID: \"24660323-baa8-4e69-b88f-67dce0e4f8f5\") " 
pod="openshift-marketplace/redhat-operators-z26sk" Nov 21 20:46:27 crc kubenswrapper[4727]: I1121 20:46:27.070088 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24660323-baa8-4e69-b88f-67dce0e4f8f5-utilities\") pod \"redhat-operators-z26sk\" (UID: \"24660323-baa8-4e69-b88f-67dce0e4f8f5\") " pod="openshift-marketplace/redhat-operators-z26sk" Nov 21 20:46:27 crc kubenswrapper[4727]: I1121 20:46:27.104181 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2b8gt\" (UniqueName: \"kubernetes.io/projected/24660323-baa8-4e69-b88f-67dce0e4f8f5-kube-api-access-2b8gt\") pod \"redhat-operators-z26sk\" (UID: \"24660323-baa8-4e69-b88f-67dce0e4f8f5\") " pod="openshift-marketplace/redhat-operators-z26sk" Nov 21 20:46:27 crc kubenswrapper[4727]: I1121 20:46:27.175801 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z26sk" Nov 21 20:46:27 crc kubenswrapper[4727]: I1121 20:46:27.727589 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z26sk"] Nov 21 20:46:28 crc kubenswrapper[4727]: I1121 20:46:28.008834 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z26sk" event={"ID":"24660323-baa8-4e69-b88f-67dce0e4f8f5","Type":"ContainerStarted","Data":"d1b92dfad5cabac65419448c797087eb41bafe5adaa046714cc68f5ef65e4bf0"} Nov 21 20:46:28 crc kubenswrapper[4727]: I1121 20:46:28.009302 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z26sk" event={"ID":"24660323-baa8-4e69-b88f-67dce0e4f8f5","Type":"ContainerStarted","Data":"13ce9d5fcc0b72476f07ae1dd527a5a70d0a6a0267c4f5726e4cf3c262d94c36"} Nov 21 20:46:29 crc kubenswrapper[4727]: I1121 20:46:29.022976 4727 generic.go:334] "Generic (PLEG): container finished" podID="24660323-baa8-4e69-b88f-67dce0e4f8f5" 
containerID="d1b92dfad5cabac65419448c797087eb41bafe5adaa046714cc68f5ef65e4bf0" exitCode=0 Nov 21 20:46:29 crc kubenswrapper[4727]: I1121 20:46:29.023429 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z26sk" event={"ID":"24660323-baa8-4e69-b88f-67dce0e4f8f5","Type":"ContainerDied","Data":"d1b92dfad5cabac65419448c797087eb41bafe5adaa046714cc68f5ef65e4bf0"} Nov 21 20:46:30 crc kubenswrapper[4727]: I1121 20:46:30.035557 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z26sk" event={"ID":"24660323-baa8-4e69-b88f-67dce0e4f8f5","Type":"ContainerStarted","Data":"748fad69133d91d679305b418e7b9748a35670952511fbe8508d392a2e4aa98f"} Nov 21 20:46:33 crc kubenswrapper[4727]: I1121 20:46:33.067066 4727 generic.go:334] "Generic (PLEG): container finished" podID="24660323-baa8-4e69-b88f-67dce0e4f8f5" containerID="748fad69133d91d679305b418e7b9748a35670952511fbe8508d392a2e4aa98f" exitCode=0 Nov 21 20:46:33 crc kubenswrapper[4727]: I1121 20:46:33.067120 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z26sk" event={"ID":"24660323-baa8-4e69-b88f-67dce0e4f8f5","Type":"ContainerDied","Data":"748fad69133d91d679305b418e7b9748a35670952511fbe8508d392a2e4aa98f"} Nov 21 20:46:34 crc kubenswrapper[4727]: I1121 20:46:34.080560 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z26sk" event={"ID":"24660323-baa8-4e69-b88f-67dce0e4f8f5","Type":"ContainerStarted","Data":"f83557ae25f3c64a0daaf62ac5f19fdc3905aa1ff0a668a2f53ab9610d3abf3f"} Nov 21 20:46:34 crc kubenswrapper[4727]: I1121 20:46:34.110548 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-z26sk" podStartSLOduration=2.655032526 podStartE2EDuration="8.110529691s" podCreationTimestamp="2025-11-21 20:46:26 +0000 UTC" firstStartedPulling="2025-11-21 20:46:28.013012708 +0000 UTC 
m=+2393.199197762" lastFinishedPulling="2025-11-21 20:46:33.468509883 +0000 UTC m=+2398.654694927" observedRunningTime="2025-11-21 20:46:34.1088317 +0000 UTC m=+2399.295016744" watchObservedRunningTime="2025-11-21 20:46:34.110529691 +0000 UTC m=+2399.296714735" Nov 21 20:46:35 crc kubenswrapper[4727]: I1121 20:46:35.509943 4727 scope.go:117] "RemoveContainer" containerID="897f2adda85608a5dbb7a817daa3de6c7e34ae3f7f02b518fee2c34299b91df0" Nov 21 20:46:35 crc kubenswrapper[4727]: E1121 20:46:35.510604 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 20:46:37 crc kubenswrapper[4727]: I1121 20:46:37.176953 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-z26sk" Nov 21 20:46:37 crc kubenswrapper[4727]: I1121 20:46:37.177513 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-z26sk" Nov 21 20:46:38 crc kubenswrapper[4727]: I1121 20:46:38.223420 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-z26sk" podUID="24660323-baa8-4e69-b88f-67dce0e4f8f5" containerName="registry-server" probeResult="failure" output=< Nov 21 20:46:38 crc kubenswrapper[4727]: timeout: failed to connect service ":50051" within 1s Nov 21 20:46:38 crc kubenswrapper[4727]: > Nov 21 20:46:43 crc kubenswrapper[4727]: I1121 20:46:43.951307 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5zw8r"] Nov 21 20:46:43 crc kubenswrapper[4727]: I1121 20:46:43.955045 4727 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/certified-operators-5zw8r" Nov 21 20:46:43 crc kubenswrapper[4727]: I1121 20:46:43.965378 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5zw8r"] Nov 21 20:46:44 crc kubenswrapper[4727]: I1121 20:46:44.008065 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d09f972-8d1c-4c9c-9fb9-c8c72b2f1a29-catalog-content\") pod \"certified-operators-5zw8r\" (UID: \"9d09f972-8d1c-4c9c-9fb9-c8c72b2f1a29\") " pod="openshift-marketplace/certified-operators-5zw8r" Nov 21 20:46:44 crc kubenswrapper[4727]: I1121 20:46:44.008113 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6xd7\" (UniqueName: \"kubernetes.io/projected/9d09f972-8d1c-4c9c-9fb9-c8c72b2f1a29-kube-api-access-m6xd7\") pod \"certified-operators-5zw8r\" (UID: \"9d09f972-8d1c-4c9c-9fb9-c8c72b2f1a29\") " pod="openshift-marketplace/certified-operators-5zw8r" Nov 21 20:46:44 crc kubenswrapper[4727]: I1121 20:46:44.008193 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d09f972-8d1c-4c9c-9fb9-c8c72b2f1a29-utilities\") pod \"certified-operators-5zw8r\" (UID: \"9d09f972-8d1c-4c9c-9fb9-c8c72b2f1a29\") " pod="openshift-marketplace/certified-operators-5zw8r" Nov 21 20:46:44 crc kubenswrapper[4727]: I1121 20:46:44.110264 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d09f972-8d1c-4c9c-9fb9-c8c72b2f1a29-catalog-content\") pod \"certified-operators-5zw8r\" (UID: \"9d09f972-8d1c-4c9c-9fb9-c8c72b2f1a29\") " pod="openshift-marketplace/certified-operators-5zw8r" Nov 21 20:46:44 crc kubenswrapper[4727]: I1121 20:46:44.110333 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-m6xd7\" (UniqueName: \"kubernetes.io/projected/9d09f972-8d1c-4c9c-9fb9-c8c72b2f1a29-kube-api-access-m6xd7\") pod \"certified-operators-5zw8r\" (UID: \"9d09f972-8d1c-4c9c-9fb9-c8c72b2f1a29\") " pod="openshift-marketplace/certified-operators-5zw8r" Nov 21 20:46:44 crc kubenswrapper[4727]: I1121 20:46:44.110471 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d09f972-8d1c-4c9c-9fb9-c8c72b2f1a29-utilities\") pod \"certified-operators-5zw8r\" (UID: \"9d09f972-8d1c-4c9c-9fb9-c8c72b2f1a29\") " pod="openshift-marketplace/certified-operators-5zw8r" Nov 21 20:46:44 crc kubenswrapper[4727]: I1121 20:46:44.110811 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d09f972-8d1c-4c9c-9fb9-c8c72b2f1a29-catalog-content\") pod \"certified-operators-5zw8r\" (UID: \"9d09f972-8d1c-4c9c-9fb9-c8c72b2f1a29\") " pod="openshift-marketplace/certified-operators-5zw8r" Nov 21 20:46:44 crc kubenswrapper[4727]: I1121 20:46:44.111084 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d09f972-8d1c-4c9c-9fb9-c8c72b2f1a29-utilities\") pod \"certified-operators-5zw8r\" (UID: \"9d09f972-8d1c-4c9c-9fb9-c8c72b2f1a29\") " pod="openshift-marketplace/certified-operators-5zw8r" Nov 21 20:46:44 crc kubenswrapper[4727]: I1121 20:46:44.128914 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6xd7\" (UniqueName: \"kubernetes.io/projected/9d09f972-8d1c-4c9c-9fb9-c8c72b2f1a29-kube-api-access-m6xd7\") pod \"certified-operators-5zw8r\" (UID: \"9d09f972-8d1c-4c9c-9fb9-c8c72b2f1a29\") " pod="openshift-marketplace/certified-operators-5zw8r" Nov 21 20:46:44 crc kubenswrapper[4727]: I1121 20:46:44.279589 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5zw8r" Nov 21 20:46:44 crc kubenswrapper[4727]: I1121 20:46:44.789552 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5zw8r"] Nov 21 20:46:45 crc kubenswrapper[4727]: I1121 20:46:45.188619 4727 generic.go:334] "Generic (PLEG): container finished" podID="9d09f972-8d1c-4c9c-9fb9-c8c72b2f1a29" containerID="66d204fcdbe5e20f1654aa761690000fad3bbe6487067af8a736d45b7bb27730" exitCode=0 Nov 21 20:46:45 crc kubenswrapper[4727]: I1121 20:46:45.188847 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5zw8r" event={"ID":"9d09f972-8d1c-4c9c-9fb9-c8c72b2f1a29","Type":"ContainerDied","Data":"66d204fcdbe5e20f1654aa761690000fad3bbe6487067af8a736d45b7bb27730"} Nov 21 20:46:45 crc kubenswrapper[4727]: I1121 20:46:45.189019 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5zw8r" event={"ID":"9d09f972-8d1c-4c9c-9fb9-c8c72b2f1a29","Type":"ContainerStarted","Data":"e3c39cc34bf8f62400ff1e445ec1ccc85d581cd51db5fbc668acdb8c0998b075"} Nov 21 20:46:46 crc kubenswrapper[4727]: I1121 20:46:46.202152 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5zw8r" event={"ID":"9d09f972-8d1c-4c9c-9fb9-c8c72b2f1a29","Type":"ContainerStarted","Data":"edf22e7f5ae7a148c7c9360622da7e7e2e96b8f9c21ccca301d7e42b6cde7b42"} Nov 21 20:46:46 crc kubenswrapper[4727]: I1121 20:46:46.518559 4727 scope.go:117] "RemoveContainer" containerID="897f2adda85608a5dbb7a817daa3de6c7e34ae3f7f02b518fee2c34299b91df0" Nov 21 20:46:46 crc kubenswrapper[4727]: E1121 20:46:46.519361 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 20:46:47 crc kubenswrapper[4727]: I1121 20:46:47.241917 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-z26sk" Nov 21 20:46:47 crc kubenswrapper[4727]: I1121 20:46:47.302595 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-z26sk" Nov 21 20:46:48 crc kubenswrapper[4727]: I1121 20:46:48.224660 4727 generic.go:334] "Generic (PLEG): container finished" podID="9d09f972-8d1c-4c9c-9fb9-c8c72b2f1a29" containerID="edf22e7f5ae7a148c7c9360622da7e7e2e96b8f9c21ccca301d7e42b6cde7b42" exitCode=0 Nov 21 20:46:48 crc kubenswrapper[4727]: I1121 20:46:48.224705 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5zw8r" event={"ID":"9d09f972-8d1c-4c9c-9fb9-c8c72b2f1a29","Type":"ContainerDied","Data":"edf22e7f5ae7a148c7c9360622da7e7e2e96b8f9c21ccca301d7e42b6cde7b42"} Nov 21 20:46:49 crc kubenswrapper[4727]: I1121 20:46:49.247140 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5zw8r" event={"ID":"9d09f972-8d1c-4c9c-9fb9-c8c72b2f1a29","Type":"ContainerStarted","Data":"9bdb12843b61eda0c151065d6e4dfadd62f3ac6013685e798527affa621650bb"} Nov 21 20:46:49 crc kubenswrapper[4727]: I1121 20:46:49.277823 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5zw8r" podStartSLOduration=2.8006965729999997 podStartE2EDuration="6.277805045s" podCreationTimestamp="2025-11-21 20:46:43 +0000 UTC" firstStartedPulling="2025-11-21 20:46:45.191766587 +0000 UTC m=+2410.377951631" lastFinishedPulling="2025-11-21 20:46:48.668875059 +0000 UTC m=+2413.855060103" 
observedRunningTime="2025-11-21 20:46:49.275233833 +0000 UTC m=+2414.461418887" watchObservedRunningTime="2025-11-21 20:46:49.277805045 +0000 UTC m=+2414.463990079" Nov 21 20:46:49 crc kubenswrapper[4727]: I1121 20:46:49.587414 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z26sk"] Nov 21 20:46:49 crc kubenswrapper[4727]: I1121 20:46:49.587616 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-z26sk" podUID="24660323-baa8-4e69-b88f-67dce0e4f8f5" containerName="registry-server" containerID="cri-o://f83557ae25f3c64a0daaf62ac5f19fdc3905aa1ff0a668a2f53ab9610d3abf3f" gracePeriod=2 Nov 21 20:46:50 crc kubenswrapper[4727]: I1121 20:46:50.122922 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z26sk" Nov 21 20:46:50 crc kubenswrapper[4727]: I1121 20:46:50.257564 4727 generic.go:334] "Generic (PLEG): container finished" podID="24660323-baa8-4e69-b88f-67dce0e4f8f5" containerID="f83557ae25f3c64a0daaf62ac5f19fdc3905aa1ff0a668a2f53ab9610d3abf3f" exitCode=0 Nov 21 20:46:50 crc kubenswrapper[4727]: I1121 20:46:50.257766 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z26sk" event={"ID":"24660323-baa8-4e69-b88f-67dce0e4f8f5","Type":"ContainerDied","Data":"f83557ae25f3c64a0daaf62ac5f19fdc3905aa1ff0a668a2f53ab9610d3abf3f"} Nov 21 20:46:50 crc kubenswrapper[4727]: I1121 20:46:50.257832 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z26sk" event={"ID":"24660323-baa8-4e69-b88f-67dce0e4f8f5","Type":"ContainerDied","Data":"13ce9d5fcc0b72476f07ae1dd527a5a70d0a6a0267c4f5726e4cf3c262d94c36"} Nov 21 20:46:50 crc kubenswrapper[4727]: I1121 20:46:50.257854 4727 scope.go:117] "RemoveContainer" containerID="f83557ae25f3c64a0daaf62ac5f19fdc3905aa1ff0a668a2f53ab9610d3abf3f" Nov 21 20:46:50 crc 
kubenswrapper[4727]: I1121 20:46:50.257882 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z26sk" Nov 21 20:46:50 crc kubenswrapper[4727]: I1121 20:46:50.283601 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24660323-baa8-4e69-b88f-67dce0e4f8f5-catalog-content\") pod \"24660323-baa8-4e69-b88f-67dce0e4f8f5\" (UID: \"24660323-baa8-4e69-b88f-67dce0e4f8f5\") " Nov 21 20:46:50 crc kubenswrapper[4727]: I1121 20:46:50.283728 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2b8gt\" (UniqueName: \"kubernetes.io/projected/24660323-baa8-4e69-b88f-67dce0e4f8f5-kube-api-access-2b8gt\") pod \"24660323-baa8-4e69-b88f-67dce0e4f8f5\" (UID: \"24660323-baa8-4e69-b88f-67dce0e4f8f5\") " Nov 21 20:46:50 crc kubenswrapper[4727]: I1121 20:46:50.283769 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24660323-baa8-4e69-b88f-67dce0e4f8f5-utilities\") pod \"24660323-baa8-4e69-b88f-67dce0e4f8f5\" (UID: \"24660323-baa8-4e69-b88f-67dce0e4f8f5\") " Nov 21 20:46:50 crc kubenswrapper[4727]: I1121 20:46:50.283936 4727 scope.go:117] "RemoveContainer" containerID="748fad69133d91d679305b418e7b9748a35670952511fbe8508d392a2e4aa98f" Nov 21 20:46:50 crc kubenswrapper[4727]: I1121 20:46:50.285119 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24660323-baa8-4e69-b88f-67dce0e4f8f5-utilities" (OuterVolumeSpecName: "utilities") pod "24660323-baa8-4e69-b88f-67dce0e4f8f5" (UID: "24660323-baa8-4e69-b88f-67dce0e4f8f5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:46:50 crc kubenswrapper[4727]: I1121 20:46:50.290113 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24660323-baa8-4e69-b88f-67dce0e4f8f5-kube-api-access-2b8gt" (OuterVolumeSpecName: "kube-api-access-2b8gt") pod "24660323-baa8-4e69-b88f-67dce0e4f8f5" (UID: "24660323-baa8-4e69-b88f-67dce0e4f8f5"). InnerVolumeSpecName "kube-api-access-2b8gt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:46:50 crc kubenswrapper[4727]: I1121 20:46:50.355204 4727 scope.go:117] "RemoveContainer" containerID="d1b92dfad5cabac65419448c797087eb41bafe5adaa046714cc68f5ef65e4bf0" Nov 21 20:46:50 crc kubenswrapper[4727]: I1121 20:46:50.379272 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24660323-baa8-4e69-b88f-67dce0e4f8f5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "24660323-baa8-4e69-b88f-67dce0e4f8f5" (UID: "24660323-baa8-4e69-b88f-67dce0e4f8f5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:46:50 crc kubenswrapper[4727]: I1121 20:46:50.386146 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24660323-baa8-4e69-b88f-67dce0e4f8f5-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 20:46:50 crc kubenswrapper[4727]: I1121 20:46:50.386184 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2b8gt\" (UniqueName: \"kubernetes.io/projected/24660323-baa8-4e69-b88f-67dce0e4f8f5-kube-api-access-2b8gt\") on node \"crc\" DevicePath \"\"" Nov 21 20:46:50 crc kubenswrapper[4727]: I1121 20:46:50.386196 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24660323-baa8-4e69-b88f-67dce0e4f8f5-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 20:46:50 crc kubenswrapper[4727]: I1121 20:46:50.398973 4727 scope.go:117] "RemoveContainer" containerID="f83557ae25f3c64a0daaf62ac5f19fdc3905aa1ff0a668a2f53ab9610d3abf3f" Nov 21 20:46:50 crc kubenswrapper[4727]: E1121 20:46:50.399410 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f83557ae25f3c64a0daaf62ac5f19fdc3905aa1ff0a668a2f53ab9610d3abf3f\": container with ID starting with f83557ae25f3c64a0daaf62ac5f19fdc3905aa1ff0a668a2f53ab9610d3abf3f not found: ID does not exist" containerID="f83557ae25f3c64a0daaf62ac5f19fdc3905aa1ff0a668a2f53ab9610d3abf3f" Nov 21 20:46:50 crc kubenswrapper[4727]: I1121 20:46:50.399446 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f83557ae25f3c64a0daaf62ac5f19fdc3905aa1ff0a668a2f53ab9610d3abf3f"} err="failed to get container status \"f83557ae25f3c64a0daaf62ac5f19fdc3905aa1ff0a668a2f53ab9610d3abf3f\": rpc error: code = NotFound desc = could not find container \"f83557ae25f3c64a0daaf62ac5f19fdc3905aa1ff0a668a2f53ab9610d3abf3f\": container with ID 
starting with f83557ae25f3c64a0daaf62ac5f19fdc3905aa1ff0a668a2f53ab9610d3abf3f not found: ID does not exist" Nov 21 20:46:50 crc kubenswrapper[4727]: I1121 20:46:50.399471 4727 scope.go:117] "RemoveContainer" containerID="748fad69133d91d679305b418e7b9748a35670952511fbe8508d392a2e4aa98f" Nov 21 20:46:50 crc kubenswrapper[4727]: E1121 20:46:50.399860 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"748fad69133d91d679305b418e7b9748a35670952511fbe8508d392a2e4aa98f\": container with ID starting with 748fad69133d91d679305b418e7b9748a35670952511fbe8508d392a2e4aa98f not found: ID does not exist" containerID="748fad69133d91d679305b418e7b9748a35670952511fbe8508d392a2e4aa98f" Nov 21 20:46:50 crc kubenswrapper[4727]: I1121 20:46:50.399905 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"748fad69133d91d679305b418e7b9748a35670952511fbe8508d392a2e4aa98f"} err="failed to get container status \"748fad69133d91d679305b418e7b9748a35670952511fbe8508d392a2e4aa98f\": rpc error: code = NotFound desc = could not find container \"748fad69133d91d679305b418e7b9748a35670952511fbe8508d392a2e4aa98f\": container with ID starting with 748fad69133d91d679305b418e7b9748a35670952511fbe8508d392a2e4aa98f not found: ID does not exist" Nov 21 20:46:50 crc kubenswrapper[4727]: I1121 20:46:50.399927 4727 scope.go:117] "RemoveContainer" containerID="d1b92dfad5cabac65419448c797087eb41bafe5adaa046714cc68f5ef65e4bf0" Nov 21 20:46:50 crc kubenswrapper[4727]: E1121 20:46:50.400281 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1b92dfad5cabac65419448c797087eb41bafe5adaa046714cc68f5ef65e4bf0\": container with ID starting with d1b92dfad5cabac65419448c797087eb41bafe5adaa046714cc68f5ef65e4bf0 not found: ID does not exist" containerID="d1b92dfad5cabac65419448c797087eb41bafe5adaa046714cc68f5ef65e4bf0" Nov 21 
20:46:50 crc kubenswrapper[4727]: I1121 20:46:50.400310 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1b92dfad5cabac65419448c797087eb41bafe5adaa046714cc68f5ef65e4bf0"} err="failed to get container status \"d1b92dfad5cabac65419448c797087eb41bafe5adaa046714cc68f5ef65e4bf0\": rpc error: code = NotFound desc = could not find container \"d1b92dfad5cabac65419448c797087eb41bafe5adaa046714cc68f5ef65e4bf0\": container with ID starting with d1b92dfad5cabac65419448c797087eb41bafe5adaa046714cc68f5ef65e4bf0 not found: ID does not exist" Nov 21 20:46:50 crc kubenswrapper[4727]: I1121 20:46:50.591107 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z26sk"] Nov 21 20:46:50 crc kubenswrapper[4727]: I1121 20:46:50.599883 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-z26sk"] Nov 21 20:46:51 crc kubenswrapper[4727]: I1121 20:46:51.520740 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24660323-baa8-4e69-b88f-67dce0e4f8f5" path="/var/lib/kubelet/pods/24660323-baa8-4e69-b88f-67dce0e4f8f5/volumes" Nov 21 20:46:54 crc kubenswrapper[4727]: I1121 20:46:54.280406 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5zw8r" Nov 21 20:46:54 crc kubenswrapper[4727]: I1121 20:46:54.281642 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5zw8r" Nov 21 20:46:54 crc kubenswrapper[4727]: I1121 20:46:54.339788 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5zw8r" Nov 21 20:46:54 crc kubenswrapper[4727]: I1121 20:46:54.392972 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5zw8r" Nov 21 20:46:55 crc kubenswrapper[4727]: I1121 20:46:55.536852 
4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5zw8r"] Nov 21 20:46:56 crc kubenswrapper[4727]: I1121 20:46:56.331520 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5zw8r" podUID="9d09f972-8d1c-4c9c-9fb9-c8c72b2f1a29" containerName="registry-server" containerID="cri-o://9bdb12843b61eda0c151065d6e4dfadd62f3ac6013685e798527affa621650bb" gracePeriod=2 Nov 21 20:46:56 crc kubenswrapper[4727]: I1121 20:46:56.950140 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5zw8r" Nov 21 20:46:57 crc kubenswrapper[4727]: I1121 20:46:57.145218 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6xd7\" (UniqueName: \"kubernetes.io/projected/9d09f972-8d1c-4c9c-9fb9-c8c72b2f1a29-kube-api-access-m6xd7\") pod \"9d09f972-8d1c-4c9c-9fb9-c8c72b2f1a29\" (UID: \"9d09f972-8d1c-4c9c-9fb9-c8c72b2f1a29\") " Nov 21 20:46:57 crc kubenswrapper[4727]: I1121 20:46:57.145301 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d09f972-8d1c-4c9c-9fb9-c8c72b2f1a29-utilities\") pod \"9d09f972-8d1c-4c9c-9fb9-c8c72b2f1a29\" (UID: \"9d09f972-8d1c-4c9c-9fb9-c8c72b2f1a29\") " Nov 21 20:46:57 crc kubenswrapper[4727]: I1121 20:46:57.145344 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d09f972-8d1c-4c9c-9fb9-c8c72b2f1a29-catalog-content\") pod \"9d09f972-8d1c-4c9c-9fb9-c8c72b2f1a29\" (UID: \"9d09f972-8d1c-4c9c-9fb9-c8c72b2f1a29\") " Nov 21 20:46:57 crc kubenswrapper[4727]: I1121 20:46:57.146363 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d09f972-8d1c-4c9c-9fb9-c8c72b2f1a29-utilities" (OuterVolumeSpecName: "utilities") pod 
"9d09f972-8d1c-4c9c-9fb9-c8c72b2f1a29" (UID: "9d09f972-8d1c-4c9c-9fb9-c8c72b2f1a29"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:46:57 crc kubenswrapper[4727]: I1121 20:46:57.150989 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d09f972-8d1c-4c9c-9fb9-c8c72b2f1a29-kube-api-access-m6xd7" (OuterVolumeSpecName: "kube-api-access-m6xd7") pod "9d09f972-8d1c-4c9c-9fb9-c8c72b2f1a29" (UID: "9d09f972-8d1c-4c9c-9fb9-c8c72b2f1a29"). InnerVolumeSpecName "kube-api-access-m6xd7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:46:57 crc kubenswrapper[4727]: I1121 20:46:57.199863 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d09f972-8d1c-4c9c-9fb9-c8c72b2f1a29-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9d09f972-8d1c-4c9c-9fb9-c8c72b2f1a29" (UID: "9d09f972-8d1c-4c9c-9fb9-c8c72b2f1a29"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:46:57 crc kubenswrapper[4727]: I1121 20:46:57.248241 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6xd7\" (UniqueName: \"kubernetes.io/projected/9d09f972-8d1c-4c9c-9fb9-c8c72b2f1a29-kube-api-access-m6xd7\") on node \"crc\" DevicePath \"\"" Nov 21 20:46:57 crc kubenswrapper[4727]: I1121 20:46:57.248277 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d09f972-8d1c-4c9c-9fb9-c8c72b2f1a29-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 20:46:57 crc kubenswrapper[4727]: I1121 20:46:57.248291 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d09f972-8d1c-4c9c-9fb9-c8c72b2f1a29-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 20:46:57 crc kubenswrapper[4727]: I1121 20:46:57.344070 4727 generic.go:334] "Generic (PLEG): container finished" podID="9d09f972-8d1c-4c9c-9fb9-c8c72b2f1a29" containerID="9bdb12843b61eda0c151065d6e4dfadd62f3ac6013685e798527affa621650bb" exitCode=0 Nov 21 20:46:57 crc kubenswrapper[4727]: I1121 20:46:57.344112 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5zw8r" event={"ID":"9d09f972-8d1c-4c9c-9fb9-c8c72b2f1a29","Type":"ContainerDied","Data":"9bdb12843b61eda0c151065d6e4dfadd62f3ac6013685e798527affa621650bb"} Nov 21 20:46:57 crc kubenswrapper[4727]: I1121 20:46:57.344137 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5zw8r" event={"ID":"9d09f972-8d1c-4c9c-9fb9-c8c72b2f1a29","Type":"ContainerDied","Data":"e3c39cc34bf8f62400ff1e445ec1ccc85d581cd51db5fbc668acdb8c0998b075"} Nov 21 20:46:57 crc kubenswrapper[4727]: I1121 20:46:57.344154 4727 scope.go:117] "RemoveContainer" containerID="9bdb12843b61eda0c151065d6e4dfadd62f3ac6013685e798527affa621650bb" Nov 21 20:46:57 crc kubenswrapper[4727]: I1121 
20:46:57.344273 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5zw8r" Nov 21 20:46:57 crc kubenswrapper[4727]: I1121 20:46:57.381630 4727 scope.go:117] "RemoveContainer" containerID="edf22e7f5ae7a148c7c9360622da7e7e2e96b8f9c21ccca301d7e42b6cde7b42" Nov 21 20:46:57 crc kubenswrapper[4727]: I1121 20:46:57.390010 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5zw8r"] Nov 21 20:46:57 crc kubenswrapper[4727]: I1121 20:46:57.400067 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5zw8r"] Nov 21 20:46:57 crc kubenswrapper[4727]: I1121 20:46:57.409150 4727 scope.go:117] "RemoveContainer" containerID="66d204fcdbe5e20f1654aa761690000fad3bbe6487067af8a736d45b7bb27730" Nov 21 20:46:57 crc kubenswrapper[4727]: I1121 20:46:57.476629 4727 scope.go:117] "RemoveContainer" containerID="9bdb12843b61eda0c151065d6e4dfadd62f3ac6013685e798527affa621650bb" Nov 21 20:46:57 crc kubenswrapper[4727]: E1121 20:46:57.477382 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bdb12843b61eda0c151065d6e4dfadd62f3ac6013685e798527affa621650bb\": container with ID starting with 9bdb12843b61eda0c151065d6e4dfadd62f3ac6013685e798527affa621650bb not found: ID does not exist" containerID="9bdb12843b61eda0c151065d6e4dfadd62f3ac6013685e798527affa621650bb" Nov 21 20:46:57 crc kubenswrapper[4727]: I1121 20:46:57.477422 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bdb12843b61eda0c151065d6e4dfadd62f3ac6013685e798527affa621650bb"} err="failed to get container status \"9bdb12843b61eda0c151065d6e4dfadd62f3ac6013685e798527affa621650bb\": rpc error: code = NotFound desc = could not find container \"9bdb12843b61eda0c151065d6e4dfadd62f3ac6013685e798527affa621650bb\": container with ID starting with 
9bdb12843b61eda0c151065d6e4dfadd62f3ac6013685e798527affa621650bb not found: ID does not exist" Nov 21 20:46:57 crc kubenswrapper[4727]: I1121 20:46:57.477453 4727 scope.go:117] "RemoveContainer" containerID="edf22e7f5ae7a148c7c9360622da7e7e2e96b8f9c21ccca301d7e42b6cde7b42" Nov 21 20:46:57 crc kubenswrapper[4727]: E1121 20:46:57.477975 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"edf22e7f5ae7a148c7c9360622da7e7e2e96b8f9c21ccca301d7e42b6cde7b42\": container with ID starting with edf22e7f5ae7a148c7c9360622da7e7e2e96b8f9c21ccca301d7e42b6cde7b42 not found: ID does not exist" containerID="edf22e7f5ae7a148c7c9360622da7e7e2e96b8f9c21ccca301d7e42b6cde7b42" Nov 21 20:46:57 crc kubenswrapper[4727]: I1121 20:46:57.478018 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edf22e7f5ae7a148c7c9360622da7e7e2e96b8f9c21ccca301d7e42b6cde7b42"} err="failed to get container status \"edf22e7f5ae7a148c7c9360622da7e7e2e96b8f9c21ccca301d7e42b6cde7b42\": rpc error: code = NotFound desc = could not find container \"edf22e7f5ae7a148c7c9360622da7e7e2e96b8f9c21ccca301d7e42b6cde7b42\": container with ID starting with edf22e7f5ae7a148c7c9360622da7e7e2e96b8f9c21ccca301d7e42b6cde7b42 not found: ID does not exist" Nov 21 20:46:57 crc kubenswrapper[4727]: I1121 20:46:57.478039 4727 scope.go:117] "RemoveContainer" containerID="66d204fcdbe5e20f1654aa761690000fad3bbe6487067af8a736d45b7bb27730" Nov 21 20:46:57 crc kubenswrapper[4727]: E1121 20:46:57.478289 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66d204fcdbe5e20f1654aa761690000fad3bbe6487067af8a736d45b7bb27730\": container with ID starting with 66d204fcdbe5e20f1654aa761690000fad3bbe6487067af8a736d45b7bb27730 not found: ID does not exist" containerID="66d204fcdbe5e20f1654aa761690000fad3bbe6487067af8a736d45b7bb27730" Nov 21 20:46:57 crc 
kubenswrapper[4727]: I1121 20:46:57.478381 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66d204fcdbe5e20f1654aa761690000fad3bbe6487067af8a736d45b7bb27730"} err="failed to get container status \"66d204fcdbe5e20f1654aa761690000fad3bbe6487067af8a736d45b7bb27730\": rpc error: code = NotFound desc = could not find container \"66d204fcdbe5e20f1654aa761690000fad3bbe6487067af8a736d45b7bb27730\": container with ID starting with 66d204fcdbe5e20f1654aa761690000fad3bbe6487067af8a736d45b7bb27730 not found: ID does not exist" Nov 21 20:46:57 crc kubenswrapper[4727]: I1121 20:46:57.512098 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d09f972-8d1c-4c9c-9fb9-c8c72b2f1a29" path="/var/lib/kubelet/pods/9d09f972-8d1c-4c9c-9fb9-c8c72b2f1a29/volumes" Nov 21 20:47:01 crc kubenswrapper[4727]: I1121 20:47:01.500493 4727 scope.go:117] "RemoveContainer" containerID="897f2adda85608a5dbb7a817daa3de6c7e34ae3f7f02b518fee2c34299b91df0" Nov 21 20:47:01 crc kubenswrapper[4727]: E1121 20:47:01.501544 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 20:47:13 crc kubenswrapper[4727]: I1121 20:47:13.499042 4727 scope.go:117] "RemoveContainer" containerID="897f2adda85608a5dbb7a817daa3de6c7e34ae3f7f02b518fee2c34299b91df0" Nov 21 20:47:13 crc kubenswrapper[4727]: E1121 20:47:13.500424 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 20:47:27 crc kubenswrapper[4727]: I1121 20:47:27.499197 4727 scope.go:117] "RemoveContainer" containerID="897f2adda85608a5dbb7a817daa3de6c7e34ae3f7f02b518fee2c34299b91df0" Nov 21 20:47:27 crc kubenswrapper[4727]: E1121 20:47:27.500148 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 20:47:30 crc kubenswrapper[4727]: I1121 20:47:30.775900 4727 generic.go:334] "Generic (PLEG): container finished" podID="b1fa34fd-2887-416b-a02a-79424f936670" containerID="b6d86fe1200f5afca693a74fc23b0572d4da74a2ec44f20eaa1792803f304d27" exitCode=0 Nov 21 20:47:30 crc kubenswrapper[4727]: I1121 20:47:30.775992 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-42g5g" event={"ID":"b1fa34fd-2887-416b-a02a-79424f936670","Type":"ContainerDied","Data":"b6d86fe1200f5afca693a74fc23b0572d4da74a2ec44f20eaa1792803f304d27"} Nov 21 20:47:32 crc kubenswrapper[4727]: I1121 20:47:32.220346 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-42g5g" Nov 21 20:47:32 crc kubenswrapper[4727]: I1121 20:47:32.256515 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5w9np\" (UniqueName: \"kubernetes.io/projected/b1fa34fd-2887-416b-a02a-79424f936670-kube-api-access-5w9np\") pod \"b1fa34fd-2887-416b-a02a-79424f936670\" (UID: \"b1fa34fd-2887-416b-a02a-79424f936670\") " Nov 21 20:47:32 crc kubenswrapper[4727]: I1121 20:47:32.256876 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1fa34fd-2887-416b-a02a-79424f936670-libvirt-combined-ca-bundle\") pod \"b1fa34fd-2887-416b-a02a-79424f936670\" (UID: \"b1fa34fd-2887-416b-a02a-79424f936670\") " Nov 21 20:47:32 crc kubenswrapper[4727]: I1121 20:47:32.256941 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/b1fa34fd-2887-416b-a02a-79424f936670-libvirt-secret-0\") pod \"b1fa34fd-2887-416b-a02a-79424f936670\" (UID: \"b1fa34fd-2887-416b-a02a-79424f936670\") " Nov 21 20:47:32 crc kubenswrapper[4727]: I1121 20:47:32.256988 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b1fa34fd-2887-416b-a02a-79424f936670-inventory\") pod \"b1fa34fd-2887-416b-a02a-79424f936670\" (UID: \"b1fa34fd-2887-416b-a02a-79424f936670\") " Nov 21 20:47:32 crc kubenswrapper[4727]: I1121 20:47:32.257050 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b1fa34fd-2887-416b-a02a-79424f936670-ssh-key\") pod \"b1fa34fd-2887-416b-a02a-79424f936670\" (UID: \"b1fa34fd-2887-416b-a02a-79424f936670\") " Nov 21 20:47:32 crc kubenswrapper[4727]: I1121 20:47:32.269132 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/b1fa34fd-2887-416b-a02a-79424f936670-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "b1fa34fd-2887-416b-a02a-79424f936670" (UID: "b1fa34fd-2887-416b-a02a-79424f936670"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:47:32 crc kubenswrapper[4727]: I1121 20:47:32.269164 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1fa34fd-2887-416b-a02a-79424f936670-kube-api-access-5w9np" (OuterVolumeSpecName: "kube-api-access-5w9np") pod "b1fa34fd-2887-416b-a02a-79424f936670" (UID: "b1fa34fd-2887-416b-a02a-79424f936670"). InnerVolumeSpecName "kube-api-access-5w9np". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:47:32 crc kubenswrapper[4727]: I1121 20:47:32.292688 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1fa34fd-2887-416b-a02a-79424f936670-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b1fa34fd-2887-416b-a02a-79424f936670" (UID: "b1fa34fd-2887-416b-a02a-79424f936670"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:47:32 crc kubenswrapper[4727]: I1121 20:47:32.294450 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1fa34fd-2887-416b-a02a-79424f936670-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "b1fa34fd-2887-416b-a02a-79424f936670" (UID: "b1fa34fd-2887-416b-a02a-79424f936670"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:47:32 crc kubenswrapper[4727]: I1121 20:47:32.296676 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1fa34fd-2887-416b-a02a-79424f936670-inventory" (OuterVolumeSpecName: "inventory") pod "b1fa34fd-2887-416b-a02a-79424f936670" (UID: "b1fa34fd-2887-416b-a02a-79424f936670"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:47:32 crc kubenswrapper[4727]: I1121 20:47:32.359796 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5w9np\" (UniqueName: \"kubernetes.io/projected/b1fa34fd-2887-416b-a02a-79424f936670-kube-api-access-5w9np\") on node \"crc\" DevicePath \"\"" Nov 21 20:47:32 crc kubenswrapper[4727]: I1121 20:47:32.360901 4727 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1fa34fd-2887-416b-a02a-79424f936670-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:47:32 crc kubenswrapper[4727]: I1121 20:47:32.360929 4727 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/b1fa34fd-2887-416b-a02a-79424f936670-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Nov 21 20:47:32 crc kubenswrapper[4727]: I1121 20:47:32.360945 4727 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b1fa34fd-2887-416b-a02a-79424f936670-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 20:47:32 crc kubenswrapper[4727]: I1121 20:47:32.360975 4727 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b1fa34fd-2887-416b-a02a-79424f936670-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 20:47:32 crc kubenswrapper[4727]: I1121 20:47:32.798111 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-42g5g" event={"ID":"b1fa34fd-2887-416b-a02a-79424f936670","Type":"ContainerDied","Data":"714c86ea97ddfbf14669a99465f71e42645e96dd8aa9a4a27d1165ef0e57fb1a"} Nov 21 20:47:32 crc kubenswrapper[4727]: I1121 20:47:32.798155 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="714c86ea97ddfbf14669a99465f71e42645e96dd8aa9a4a27d1165ef0e57fb1a" Nov 21 20:47:32 
crc kubenswrapper[4727]: I1121 20:47:32.798206 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-42g5g" Nov 21 20:47:32 crc kubenswrapper[4727]: I1121 20:47:32.901492 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-6k6pn"] Nov 21 20:47:32 crc kubenswrapper[4727]: E1121 20:47:32.902450 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d09f972-8d1c-4c9c-9fb9-c8c72b2f1a29" containerName="extract-content" Nov 21 20:47:32 crc kubenswrapper[4727]: I1121 20:47:32.902470 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d09f972-8d1c-4c9c-9fb9-c8c72b2f1a29" containerName="extract-content" Nov 21 20:47:32 crc kubenswrapper[4727]: E1121 20:47:32.902497 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d09f972-8d1c-4c9c-9fb9-c8c72b2f1a29" containerName="registry-server" Nov 21 20:47:32 crc kubenswrapper[4727]: I1121 20:47:32.902504 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d09f972-8d1c-4c9c-9fb9-c8c72b2f1a29" containerName="registry-server" Nov 21 20:47:32 crc kubenswrapper[4727]: E1121 20:47:32.902518 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24660323-baa8-4e69-b88f-67dce0e4f8f5" containerName="registry-server" Nov 21 20:47:32 crc kubenswrapper[4727]: I1121 20:47:32.902525 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="24660323-baa8-4e69-b88f-67dce0e4f8f5" containerName="registry-server" Nov 21 20:47:32 crc kubenswrapper[4727]: E1121 20:47:32.902559 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d09f972-8d1c-4c9c-9fb9-c8c72b2f1a29" containerName="extract-utilities" Nov 21 20:47:32 crc kubenswrapper[4727]: I1121 20:47:32.902565 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d09f972-8d1c-4c9c-9fb9-c8c72b2f1a29" containerName="extract-utilities" Nov 21 20:47:32 crc 
kubenswrapper[4727]: E1121 20:47:32.902586 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24660323-baa8-4e69-b88f-67dce0e4f8f5" containerName="extract-utilities" Nov 21 20:47:32 crc kubenswrapper[4727]: I1121 20:47:32.902592 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="24660323-baa8-4e69-b88f-67dce0e4f8f5" containerName="extract-utilities" Nov 21 20:47:32 crc kubenswrapper[4727]: E1121 20:47:32.902608 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24660323-baa8-4e69-b88f-67dce0e4f8f5" containerName="extract-content" Nov 21 20:47:32 crc kubenswrapper[4727]: I1121 20:47:32.902617 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="24660323-baa8-4e69-b88f-67dce0e4f8f5" containerName="extract-content" Nov 21 20:47:32 crc kubenswrapper[4727]: E1121 20:47:32.902635 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1fa34fd-2887-416b-a02a-79424f936670" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 21 20:47:32 crc kubenswrapper[4727]: I1121 20:47:32.902642 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1fa34fd-2887-416b-a02a-79424f936670" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 21 20:47:32 crc kubenswrapper[4727]: I1121 20:47:32.903105 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1fa34fd-2887-416b-a02a-79424f936670" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 21 20:47:32 crc kubenswrapper[4727]: I1121 20:47:32.903134 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d09f972-8d1c-4c9c-9fb9-c8c72b2f1a29" containerName="registry-server" Nov 21 20:47:32 crc kubenswrapper[4727]: I1121 20:47:32.903159 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="24660323-baa8-4e69-b88f-67dce0e4f8f5" containerName="registry-server" Nov 21 20:47:32 crc kubenswrapper[4727]: I1121 20:47:32.904407 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6k6pn" Nov 21 20:47:32 crc kubenswrapper[4727]: I1121 20:47:32.909520 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 21 20:47:32 crc kubenswrapper[4727]: I1121 20:47:32.909568 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Nov 21 20:47:32 crc kubenswrapper[4727]: I1121 20:47:32.910827 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 20:47:32 crc kubenswrapper[4727]: I1121 20:47:32.912948 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Nov 21 20:47:32 crc kubenswrapper[4727]: I1121 20:47:32.914197 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Nov 21 20:47:32 crc kubenswrapper[4727]: I1121 20:47:32.916665 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9h8bp" Nov 21 20:47:32 crc kubenswrapper[4727]: I1121 20:47:32.917426 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 21 20:47:32 crc kubenswrapper[4727]: I1121 20:47:32.937768 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-6k6pn"] Nov 21 20:47:32 crc kubenswrapper[4727]: I1121 20:47:32.974194 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/1265cd44-fdf4-434d-855e-375dcbb70601-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6k6pn\" (UID: \"1265cd44-fdf4-434d-855e-375dcbb70601\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6k6pn" Nov 21 20:47:32 crc kubenswrapper[4727]: 
I1121 20:47:32.974269 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhnz4\" (UniqueName: \"kubernetes.io/projected/1265cd44-fdf4-434d-855e-375dcbb70601-kube-api-access-lhnz4\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6k6pn\" (UID: \"1265cd44-fdf4-434d-855e-375dcbb70601\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6k6pn" Nov 21 20:47:32 crc kubenswrapper[4727]: I1121 20:47:32.974329 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1265cd44-fdf4-434d-855e-375dcbb70601-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6k6pn\" (UID: \"1265cd44-fdf4-434d-855e-375dcbb70601\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6k6pn" Nov 21 20:47:32 crc kubenswrapper[4727]: I1121 20:47:32.974378 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1265cd44-fdf4-434d-855e-375dcbb70601-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6k6pn\" (UID: \"1265cd44-fdf4-434d-855e-375dcbb70601\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6k6pn" Nov 21 20:47:32 crc kubenswrapper[4727]: I1121 20:47:32.974412 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/1265cd44-fdf4-434d-855e-375dcbb70601-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6k6pn\" (UID: \"1265cd44-fdf4-434d-855e-375dcbb70601\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6k6pn" Nov 21 20:47:32 crc kubenswrapper[4727]: I1121 20:47:32.974437 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: 
\"kubernetes.io/secret/1265cd44-fdf4-434d-855e-375dcbb70601-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6k6pn\" (UID: \"1265cd44-fdf4-434d-855e-375dcbb70601\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6k6pn" Nov 21 20:47:32 crc kubenswrapper[4727]: I1121 20:47:32.974459 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1265cd44-fdf4-434d-855e-375dcbb70601-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6k6pn\" (UID: \"1265cd44-fdf4-434d-855e-375dcbb70601\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6k6pn" Nov 21 20:47:32 crc kubenswrapper[4727]: I1121 20:47:32.974501 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/1265cd44-fdf4-434d-855e-375dcbb70601-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6k6pn\" (UID: \"1265cd44-fdf4-434d-855e-375dcbb70601\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6k6pn" Nov 21 20:47:32 crc kubenswrapper[4727]: I1121 20:47:32.974522 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/1265cd44-fdf4-434d-855e-375dcbb70601-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6k6pn\" (UID: \"1265cd44-fdf4-434d-855e-375dcbb70601\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6k6pn" Nov 21 20:47:33 crc kubenswrapper[4727]: I1121 20:47:33.076228 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhnz4\" (UniqueName: \"kubernetes.io/projected/1265cd44-fdf4-434d-855e-375dcbb70601-kube-api-access-lhnz4\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6k6pn\" (UID: 
\"1265cd44-fdf4-434d-855e-375dcbb70601\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6k6pn" Nov 21 20:47:33 crc kubenswrapper[4727]: I1121 20:47:33.076317 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1265cd44-fdf4-434d-855e-375dcbb70601-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6k6pn\" (UID: \"1265cd44-fdf4-434d-855e-375dcbb70601\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6k6pn" Nov 21 20:47:33 crc kubenswrapper[4727]: I1121 20:47:33.076397 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1265cd44-fdf4-434d-855e-375dcbb70601-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6k6pn\" (UID: \"1265cd44-fdf4-434d-855e-375dcbb70601\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6k6pn" Nov 21 20:47:33 crc kubenswrapper[4727]: I1121 20:47:33.076449 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/1265cd44-fdf4-434d-855e-375dcbb70601-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6k6pn\" (UID: \"1265cd44-fdf4-434d-855e-375dcbb70601\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6k6pn" Nov 21 20:47:33 crc kubenswrapper[4727]: I1121 20:47:33.076480 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/1265cd44-fdf4-434d-855e-375dcbb70601-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6k6pn\" (UID: \"1265cd44-fdf4-434d-855e-375dcbb70601\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6k6pn" Nov 21 20:47:33 crc kubenswrapper[4727]: I1121 20:47:33.076512 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1265cd44-fdf4-434d-855e-375dcbb70601-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6k6pn\" (UID: \"1265cd44-fdf4-434d-855e-375dcbb70601\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6k6pn" Nov 21 20:47:33 crc kubenswrapper[4727]: I1121 20:47:33.076565 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/1265cd44-fdf4-434d-855e-375dcbb70601-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6k6pn\" (UID: \"1265cd44-fdf4-434d-855e-375dcbb70601\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6k6pn" Nov 21 20:47:33 crc kubenswrapper[4727]: I1121 20:47:33.076591 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/1265cd44-fdf4-434d-855e-375dcbb70601-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6k6pn\" (UID: \"1265cd44-fdf4-434d-855e-375dcbb70601\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6k6pn" Nov 21 20:47:33 crc kubenswrapper[4727]: I1121 20:47:33.076684 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/1265cd44-fdf4-434d-855e-375dcbb70601-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6k6pn\" (UID: \"1265cd44-fdf4-434d-855e-375dcbb70601\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6k6pn" Nov 21 20:47:33 crc kubenswrapper[4727]: I1121 20:47:33.078055 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/1265cd44-fdf4-434d-855e-375dcbb70601-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6k6pn\" (UID: \"1265cd44-fdf4-434d-855e-375dcbb70601\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6k6pn" Nov 21 20:47:33 crc kubenswrapper[4727]: I1121 20:47:33.080493 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1265cd44-fdf4-434d-855e-375dcbb70601-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6k6pn\" (UID: \"1265cd44-fdf4-434d-855e-375dcbb70601\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6k6pn" Nov 21 20:47:33 crc kubenswrapper[4727]: I1121 20:47:33.081086 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/1265cd44-fdf4-434d-855e-375dcbb70601-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6k6pn\" (UID: \"1265cd44-fdf4-434d-855e-375dcbb70601\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6k6pn" Nov 21 20:47:33 crc kubenswrapper[4727]: I1121 20:47:33.081376 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1265cd44-fdf4-434d-855e-375dcbb70601-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6k6pn\" (UID: \"1265cd44-fdf4-434d-855e-375dcbb70601\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6k6pn" Nov 21 20:47:33 crc kubenswrapper[4727]: I1121 20:47:33.082562 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/1265cd44-fdf4-434d-855e-375dcbb70601-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6k6pn\" (UID: \"1265cd44-fdf4-434d-855e-375dcbb70601\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6k6pn" Nov 21 20:47:33 crc kubenswrapper[4727]: I1121 20:47:33.082625 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: 
\"kubernetes.io/secret/1265cd44-fdf4-434d-855e-375dcbb70601-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6k6pn\" (UID: \"1265cd44-fdf4-434d-855e-375dcbb70601\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6k6pn" Nov 21 20:47:33 crc kubenswrapper[4727]: I1121 20:47:33.082688 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/1265cd44-fdf4-434d-855e-375dcbb70601-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6k6pn\" (UID: \"1265cd44-fdf4-434d-855e-375dcbb70601\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6k6pn" Nov 21 20:47:33 crc kubenswrapper[4727]: I1121 20:47:33.083081 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1265cd44-fdf4-434d-855e-375dcbb70601-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6k6pn\" (UID: \"1265cd44-fdf4-434d-855e-375dcbb70601\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6k6pn" Nov 21 20:47:33 crc kubenswrapper[4727]: I1121 20:47:33.094598 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhnz4\" (UniqueName: \"kubernetes.io/projected/1265cd44-fdf4-434d-855e-375dcbb70601-kube-api-access-lhnz4\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6k6pn\" (UID: \"1265cd44-fdf4-434d-855e-375dcbb70601\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6k6pn" Nov 21 20:47:33 crc kubenswrapper[4727]: I1121 20:47:33.238220 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6k6pn" Nov 21 20:47:33 crc kubenswrapper[4727]: I1121 20:47:33.795216 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-6k6pn"] Nov 21 20:47:33 crc kubenswrapper[4727]: W1121 20:47:33.805402 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1265cd44_fdf4_434d_855e_375dcbb70601.slice/crio-5086e1488919a7e66bd03aa42c971a6510000d1bf1f49439d49317c548bcc089 WatchSource:0}: Error finding container 5086e1488919a7e66bd03aa42c971a6510000d1bf1f49439d49317c548bcc089: Status 404 returned error can't find the container with id 5086e1488919a7e66bd03aa42c971a6510000d1bf1f49439d49317c548bcc089 Nov 21 20:47:34 crc kubenswrapper[4727]: I1121 20:47:34.818715 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6k6pn" event={"ID":"1265cd44-fdf4-434d-855e-375dcbb70601","Type":"ContainerStarted","Data":"74f1cd80469d06feaabcf9819cc30a66f260d3174f11f38b4d7aa01cfb80ee31"} Nov 21 20:47:34 crc kubenswrapper[4727]: I1121 20:47:34.819022 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6k6pn" event={"ID":"1265cd44-fdf4-434d-855e-375dcbb70601","Type":"ContainerStarted","Data":"5086e1488919a7e66bd03aa42c971a6510000d1bf1f49439d49317c548bcc089"} Nov 21 20:47:34 crc kubenswrapper[4727]: I1121 20:47:34.860499 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6k6pn" podStartSLOduration=2.451052809 podStartE2EDuration="2.86046456s" podCreationTimestamp="2025-11-21 20:47:32 +0000 UTC" firstStartedPulling="2025-11-21 20:47:33.809684036 +0000 UTC m=+2458.995869070" lastFinishedPulling="2025-11-21 20:47:34.219095767 +0000 UTC m=+2459.405280821" observedRunningTime="2025-11-21 
20:47:34.835072064 +0000 UTC m=+2460.021257108" watchObservedRunningTime="2025-11-21 20:47:34.86046456 +0000 UTC m=+2460.046649614" Nov 21 20:47:41 crc kubenswrapper[4727]: I1121 20:47:41.501039 4727 scope.go:117] "RemoveContainer" containerID="897f2adda85608a5dbb7a817daa3de6c7e34ae3f7f02b518fee2c34299b91df0" Nov 21 20:47:41 crc kubenswrapper[4727]: E1121 20:47:41.502169 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 20:47:53 crc kubenswrapper[4727]: I1121 20:47:53.500009 4727 scope.go:117] "RemoveContainer" containerID="897f2adda85608a5dbb7a817daa3de6c7e34ae3f7f02b518fee2c34299b91df0" Nov 21 20:47:53 crc kubenswrapper[4727]: E1121 20:47:53.501373 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 20:48:05 crc kubenswrapper[4727]: I1121 20:48:05.507856 4727 scope.go:117] "RemoveContainer" containerID="897f2adda85608a5dbb7a817daa3de6c7e34ae3f7f02b518fee2c34299b91df0" Nov 21 20:48:05 crc kubenswrapper[4727]: E1121 20:48:05.508874 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 20:48:18 crc kubenswrapper[4727]: I1121 20:48:18.499609 4727 scope.go:117] "RemoveContainer" containerID="897f2adda85608a5dbb7a817daa3de6c7e34ae3f7f02b518fee2c34299b91df0" Nov 21 20:48:18 crc kubenswrapper[4727]: E1121 20:48:18.500256 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 20:48:31 crc kubenswrapper[4727]: I1121 20:48:31.498838 4727 scope.go:117] "RemoveContainer" containerID="897f2adda85608a5dbb7a817daa3de6c7e34ae3f7f02b518fee2c34299b91df0" Nov 21 20:48:31 crc kubenswrapper[4727]: E1121 20:48:31.500557 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 20:48:43 crc kubenswrapper[4727]: I1121 20:48:43.499786 4727 scope.go:117] "RemoveContainer" containerID="897f2adda85608a5dbb7a817daa3de6c7e34ae3f7f02b518fee2c34299b91df0" Nov 21 20:48:43 crc kubenswrapper[4727]: E1121 20:48:43.500941 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 20:48:54 crc kubenswrapper[4727]: I1121 20:48:54.499931 4727 scope.go:117] "RemoveContainer" containerID="897f2adda85608a5dbb7a817daa3de6c7e34ae3f7f02b518fee2c34299b91df0" Nov 21 20:48:54 crc kubenswrapper[4727]: E1121 20:48:54.500770 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 20:49:06 crc kubenswrapper[4727]: I1121 20:49:06.499732 4727 scope.go:117] "RemoveContainer" containerID="897f2adda85608a5dbb7a817daa3de6c7e34ae3f7f02b518fee2c34299b91df0" Nov 21 20:49:06 crc kubenswrapper[4727]: E1121 20:49:06.500665 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 20:49:21 crc kubenswrapper[4727]: I1121 20:49:21.499581 4727 scope.go:117] "RemoveContainer" containerID="897f2adda85608a5dbb7a817daa3de6c7e34ae3f7f02b518fee2c34299b91df0" Nov 21 20:49:21 crc kubenswrapper[4727]: E1121 20:49:21.500736 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 20:49:36 crc kubenswrapper[4727]: I1121 20:49:36.499932 4727 scope.go:117] "RemoveContainer" containerID="897f2adda85608a5dbb7a817daa3de6c7e34ae3f7f02b518fee2c34299b91df0" Nov 21 20:49:36 crc kubenswrapper[4727]: E1121 20:49:36.501539 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 20:49:48 crc kubenswrapper[4727]: I1121 20:49:48.500580 4727 scope.go:117] "RemoveContainer" containerID="897f2adda85608a5dbb7a817daa3de6c7e34ae3f7f02b518fee2c34299b91df0" Nov 21 20:49:49 crc kubenswrapper[4727]: I1121 20:49:49.412350 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" event={"ID":"b58aef8f-f223-47d8-a2e6-4a80aeeeec42","Type":"ContainerStarted","Data":"879bff46486447abf99a62cf06cf8872baaf779ffea76353295d51f5353388c2"} Nov 21 20:50:09 crc kubenswrapper[4727]: I1121 20:50:09.630951 4727 generic.go:334] "Generic (PLEG): container finished" podID="1265cd44-fdf4-434d-855e-375dcbb70601" containerID="74f1cd80469d06feaabcf9819cc30a66f260d3174f11f38b4d7aa01cfb80ee31" exitCode=0 Nov 21 20:50:09 crc kubenswrapper[4727]: I1121 20:50:09.631066 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6k6pn" 
event={"ID":"1265cd44-fdf4-434d-855e-375dcbb70601","Type":"ContainerDied","Data":"74f1cd80469d06feaabcf9819cc30a66f260d3174f11f38b4d7aa01cfb80ee31"} Nov 21 20:50:11 crc kubenswrapper[4727]: I1121 20:50:11.084995 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6k6pn" Nov 21 20:50:11 crc kubenswrapper[4727]: I1121 20:50:11.125526 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/1265cd44-fdf4-434d-855e-375dcbb70601-nova-cell1-compute-config-0\") pod \"1265cd44-fdf4-434d-855e-375dcbb70601\" (UID: \"1265cd44-fdf4-434d-855e-375dcbb70601\") " Nov 21 20:50:11 crc kubenswrapper[4727]: I1121 20:50:11.125593 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1265cd44-fdf4-434d-855e-375dcbb70601-ssh-key\") pod \"1265cd44-fdf4-434d-855e-375dcbb70601\" (UID: \"1265cd44-fdf4-434d-855e-375dcbb70601\") " Nov 21 20:50:11 crc kubenswrapper[4727]: I1121 20:50:11.125643 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/1265cd44-fdf4-434d-855e-375dcbb70601-nova-extra-config-0\") pod \"1265cd44-fdf4-434d-855e-375dcbb70601\" (UID: \"1265cd44-fdf4-434d-855e-375dcbb70601\") " Nov 21 20:50:11 crc kubenswrapper[4727]: I1121 20:50:11.125676 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1265cd44-fdf4-434d-855e-375dcbb70601-nova-combined-ca-bundle\") pod \"1265cd44-fdf4-434d-855e-375dcbb70601\" (UID: \"1265cd44-fdf4-434d-855e-375dcbb70601\") " Nov 21 20:50:11 crc kubenswrapper[4727]: I1121 20:50:11.125724 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: 
\"kubernetes.io/secret/1265cd44-fdf4-434d-855e-375dcbb70601-nova-cell1-compute-config-1\") pod \"1265cd44-fdf4-434d-855e-375dcbb70601\" (UID: \"1265cd44-fdf4-434d-855e-375dcbb70601\") " Nov 21 20:50:11 crc kubenswrapper[4727]: I1121 20:50:11.125789 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhnz4\" (UniqueName: \"kubernetes.io/projected/1265cd44-fdf4-434d-855e-375dcbb70601-kube-api-access-lhnz4\") pod \"1265cd44-fdf4-434d-855e-375dcbb70601\" (UID: \"1265cd44-fdf4-434d-855e-375dcbb70601\") " Nov 21 20:50:11 crc kubenswrapper[4727]: I1121 20:50:11.125814 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1265cd44-fdf4-434d-855e-375dcbb70601-inventory\") pod \"1265cd44-fdf4-434d-855e-375dcbb70601\" (UID: \"1265cd44-fdf4-434d-855e-375dcbb70601\") " Nov 21 20:50:11 crc kubenswrapper[4727]: I1121 20:50:11.125840 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/1265cd44-fdf4-434d-855e-375dcbb70601-nova-migration-ssh-key-1\") pod \"1265cd44-fdf4-434d-855e-375dcbb70601\" (UID: \"1265cd44-fdf4-434d-855e-375dcbb70601\") " Nov 21 20:50:11 crc kubenswrapper[4727]: I1121 20:50:11.125921 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/1265cd44-fdf4-434d-855e-375dcbb70601-nova-migration-ssh-key-0\") pod \"1265cd44-fdf4-434d-855e-375dcbb70601\" (UID: \"1265cd44-fdf4-434d-855e-375dcbb70601\") " Nov 21 20:50:11 crc kubenswrapper[4727]: I1121 20:50:11.133084 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1265cd44-fdf4-434d-855e-375dcbb70601-kube-api-access-lhnz4" (OuterVolumeSpecName: "kube-api-access-lhnz4") pod "1265cd44-fdf4-434d-855e-375dcbb70601" (UID: "1265cd44-fdf4-434d-855e-375dcbb70601"). 
InnerVolumeSpecName "kube-api-access-lhnz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:50:11 crc kubenswrapper[4727]: I1121 20:50:11.137252 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1265cd44-fdf4-434d-855e-375dcbb70601-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "1265cd44-fdf4-434d-855e-375dcbb70601" (UID: "1265cd44-fdf4-434d-855e-375dcbb70601"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:50:11 crc kubenswrapper[4727]: I1121 20:50:11.172299 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1265cd44-fdf4-434d-855e-375dcbb70601-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "1265cd44-fdf4-434d-855e-375dcbb70601" (UID: "1265cd44-fdf4-434d-855e-375dcbb70601"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:50:11 crc kubenswrapper[4727]: I1121 20:50:11.177199 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1265cd44-fdf4-434d-855e-375dcbb70601-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "1265cd44-fdf4-434d-855e-375dcbb70601" (UID: "1265cd44-fdf4-434d-855e-375dcbb70601"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:50:11 crc kubenswrapper[4727]: I1121 20:50:11.182680 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1265cd44-fdf4-434d-855e-375dcbb70601-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "1265cd44-fdf4-434d-855e-375dcbb70601" (UID: "1265cd44-fdf4-434d-855e-375dcbb70601"). InnerVolumeSpecName "nova-extra-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 20:50:11 crc kubenswrapper[4727]: I1121 20:50:11.194152 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1265cd44-fdf4-434d-855e-375dcbb70601-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "1265cd44-fdf4-434d-855e-375dcbb70601" (UID: "1265cd44-fdf4-434d-855e-375dcbb70601"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:50:11 crc kubenswrapper[4727]: I1121 20:50:11.198040 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1265cd44-fdf4-434d-855e-375dcbb70601-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "1265cd44-fdf4-434d-855e-375dcbb70601" (UID: "1265cd44-fdf4-434d-855e-375dcbb70601"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:50:11 crc kubenswrapper[4727]: I1121 20:50:11.212161 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1265cd44-fdf4-434d-855e-375dcbb70601-inventory" (OuterVolumeSpecName: "inventory") pod "1265cd44-fdf4-434d-855e-375dcbb70601" (UID: "1265cd44-fdf4-434d-855e-375dcbb70601"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:50:11 crc kubenswrapper[4727]: I1121 20:50:11.223263 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1265cd44-fdf4-434d-855e-375dcbb70601-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "1265cd44-fdf4-434d-855e-375dcbb70601" (UID: "1265cd44-fdf4-434d-855e-375dcbb70601"). InnerVolumeSpecName "nova-migration-ssh-key-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:50:11 crc kubenswrapper[4727]: I1121 20:50:11.229222 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhnz4\" (UniqueName: \"kubernetes.io/projected/1265cd44-fdf4-434d-855e-375dcbb70601-kube-api-access-lhnz4\") on node \"crc\" DevicePath \"\"" Nov 21 20:50:11 crc kubenswrapper[4727]: I1121 20:50:11.229258 4727 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1265cd44-fdf4-434d-855e-375dcbb70601-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 20:50:11 crc kubenswrapper[4727]: I1121 20:50:11.229269 4727 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/1265cd44-fdf4-434d-855e-375dcbb70601-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Nov 21 20:50:11 crc kubenswrapper[4727]: I1121 20:50:11.229280 4727 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/1265cd44-fdf4-434d-855e-375dcbb70601-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Nov 21 20:50:11 crc kubenswrapper[4727]: I1121 20:50:11.229289 4727 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/1265cd44-fdf4-434d-855e-375dcbb70601-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Nov 21 20:50:11 crc kubenswrapper[4727]: I1121 20:50:11.229298 4727 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1265cd44-fdf4-434d-855e-375dcbb70601-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 20:50:11 crc kubenswrapper[4727]: I1121 20:50:11.229309 4727 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/1265cd44-fdf4-434d-855e-375dcbb70601-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Nov 21 20:50:11 crc 
kubenswrapper[4727]: I1121 20:50:11.229317 4727 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1265cd44-fdf4-434d-855e-375dcbb70601-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:50:11 crc kubenswrapper[4727]: I1121 20:50:11.229328 4727 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/1265cd44-fdf4-434d-855e-375dcbb70601-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Nov 21 20:50:11 crc kubenswrapper[4727]: I1121 20:50:11.651094 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6k6pn" event={"ID":"1265cd44-fdf4-434d-855e-375dcbb70601","Type":"ContainerDied","Data":"5086e1488919a7e66bd03aa42c971a6510000d1bf1f49439d49317c548bcc089"} Nov 21 20:50:11 crc kubenswrapper[4727]: I1121 20:50:11.651154 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5086e1488919a7e66bd03aa42c971a6510000d1bf1f49439d49317c548bcc089" Nov 21 20:50:11 crc kubenswrapper[4727]: I1121 20:50:11.651172 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6k6pn" Nov 21 20:50:11 crc kubenswrapper[4727]: I1121 20:50:11.767113 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9kf4v"] Nov 21 20:50:11 crc kubenswrapper[4727]: E1121 20:50:11.767675 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1265cd44-fdf4-434d-855e-375dcbb70601" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 21 20:50:11 crc kubenswrapper[4727]: I1121 20:50:11.767696 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="1265cd44-fdf4-434d-855e-375dcbb70601" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 21 20:50:11 crc kubenswrapper[4727]: I1121 20:50:11.767940 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="1265cd44-fdf4-434d-855e-375dcbb70601" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 21 20:50:11 crc kubenswrapper[4727]: I1121 20:50:11.768796 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9kf4v" Nov 21 20:50:11 crc kubenswrapper[4727]: I1121 20:50:11.771630 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 20:50:11 crc kubenswrapper[4727]: I1121 20:50:11.771839 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 21 20:50:11 crc kubenswrapper[4727]: I1121 20:50:11.772006 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 21 20:50:11 crc kubenswrapper[4727]: I1121 20:50:11.772428 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9h8bp" Nov 21 20:50:11 crc kubenswrapper[4727]: I1121 20:50:11.773385 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Nov 21 20:50:11 crc kubenswrapper[4727]: I1121 20:50:11.780199 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9kf4v"] Nov 21 20:50:11 crc kubenswrapper[4727]: I1121 20:50:11.841666 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf9v2\" (UniqueName: \"kubernetes.io/projected/43b699c5-7ed9-4603-aa3c-a8d1092571ff-kube-api-access-cf9v2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9kf4v\" (UID: \"43b699c5-7ed9-4603-aa3c-a8d1092571ff\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9kf4v" Nov 21 20:50:11 crc kubenswrapper[4727]: I1121 20:50:11.841768 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/43b699c5-7ed9-4603-aa3c-a8d1092571ff-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9kf4v\" (UID: 
\"43b699c5-7ed9-4603-aa3c-a8d1092571ff\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9kf4v" Nov 21 20:50:11 crc kubenswrapper[4727]: I1121 20:50:11.841832 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/43b699c5-7ed9-4603-aa3c-a8d1092571ff-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9kf4v\" (UID: \"43b699c5-7ed9-4603-aa3c-a8d1092571ff\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9kf4v" Nov 21 20:50:11 crc kubenswrapper[4727]: I1121 20:50:11.841856 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/43b699c5-7ed9-4603-aa3c-a8d1092571ff-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9kf4v\" (UID: \"43b699c5-7ed9-4603-aa3c-a8d1092571ff\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9kf4v" Nov 21 20:50:11 crc kubenswrapper[4727]: I1121 20:50:11.841987 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/43b699c5-7ed9-4603-aa3c-a8d1092571ff-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9kf4v\" (UID: \"43b699c5-7ed9-4603-aa3c-a8d1092571ff\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9kf4v" Nov 21 20:50:11 crc kubenswrapper[4727]: I1121 20:50:11.842045 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/43b699c5-7ed9-4603-aa3c-a8d1092571ff-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9kf4v\" (UID: \"43b699c5-7ed9-4603-aa3c-a8d1092571ff\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9kf4v" Nov 21 20:50:11 crc kubenswrapper[4727]: I1121 20:50:11.842225 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43b699c5-7ed9-4603-aa3c-a8d1092571ff-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9kf4v\" (UID: \"43b699c5-7ed9-4603-aa3c-a8d1092571ff\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9kf4v" Nov 21 20:50:11 crc kubenswrapper[4727]: I1121 20:50:11.944200 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cf9v2\" (UniqueName: \"kubernetes.io/projected/43b699c5-7ed9-4603-aa3c-a8d1092571ff-kube-api-access-cf9v2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9kf4v\" (UID: \"43b699c5-7ed9-4603-aa3c-a8d1092571ff\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9kf4v" Nov 21 20:50:11 crc kubenswrapper[4727]: I1121 20:50:11.944537 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/43b699c5-7ed9-4603-aa3c-a8d1092571ff-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9kf4v\" (UID: \"43b699c5-7ed9-4603-aa3c-a8d1092571ff\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9kf4v" Nov 21 20:50:11 crc kubenswrapper[4727]: I1121 20:50:11.944581 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/43b699c5-7ed9-4603-aa3c-a8d1092571ff-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9kf4v\" (UID: \"43b699c5-7ed9-4603-aa3c-a8d1092571ff\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9kf4v" Nov 21 20:50:11 crc kubenswrapper[4727]: I1121 20:50:11.944599 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/43b699c5-7ed9-4603-aa3c-a8d1092571ff-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9kf4v\" (UID: \"43b699c5-7ed9-4603-aa3c-a8d1092571ff\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9kf4v" Nov 21 20:50:11 crc kubenswrapper[4727]: I1121 20:50:11.944642 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/43b699c5-7ed9-4603-aa3c-a8d1092571ff-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9kf4v\" (UID: \"43b699c5-7ed9-4603-aa3c-a8d1092571ff\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9kf4v" Nov 21 20:50:11 crc kubenswrapper[4727]: I1121 20:50:11.944672 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/43b699c5-7ed9-4603-aa3c-a8d1092571ff-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9kf4v\" (UID: \"43b699c5-7ed9-4603-aa3c-a8d1092571ff\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9kf4v" Nov 21 20:50:11 crc kubenswrapper[4727]: I1121 20:50:11.944747 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43b699c5-7ed9-4603-aa3c-a8d1092571ff-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9kf4v\" (UID: \"43b699c5-7ed9-4603-aa3c-a8d1092571ff\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9kf4v" Nov 21 20:50:11 crc kubenswrapper[4727]: I1121 20:50:11.949901 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: 
\"kubernetes.io/secret/43b699c5-7ed9-4603-aa3c-a8d1092571ff-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9kf4v\" (UID: \"43b699c5-7ed9-4603-aa3c-a8d1092571ff\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9kf4v" Nov 21 20:50:11 crc kubenswrapper[4727]: I1121 20:50:11.949941 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/43b699c5-7ed9-4603-aa3c-a8d1092571ff-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9kf4v\" (UID: \"43b699c5-7ed9-4603-aa3c-a8d1092571ff\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9kf4v" Nov 21 20:50:11 crc kubenswrapper[4727]: I1121 20:50:11.955387 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/43b699c5-7ed9-4603-aa3c-a8d1092571ff-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9kf4v\" (UID: \"43b699c5-7ed9-4603-aa3c-a8d1092571ff\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9kf4v" Nov 21 20:50:11 crc kubenswrapper[4727]: I1121 20:50:11.955467 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/43b699c5-7ed9-4603-aa3c-a8d1092571ff-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9kf4v\" (UID: \"43b699c5-7ed9-4603-aa3c-a8d1092571ff\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9kf4v" Nov 21 20:50:11 crc kubenswrapper[4727]: I1121 20:50:11.956126 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43b699c5-7ed9-4603-aa3c-a8d1092571ff-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9kf4v\" (UID: \"43b699c5-7ed9-4603-aa3c-a8d1092571ff\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9kf4v" Nov 21 20:50:11 crc kubenswrapper[4727]: I1121 20:50:11.956509 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/43b699c5-7ed9-4603-aa3c-a8d1092571ff-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9kf4v\" (UID: \"43b699c5-7ed9-4603-aa3c-a8d1092571ff\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9kf4v" Nov 21 20:50:11 crc kubenswrapper[4727]: I1121 20:50:11.964151 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cf9v2\" (UniqueName: \"kubernetes.io/projected/43b699c5-7ed9-4603-aa3c-a8d1092571ff-kube-api-access-cf9v2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9kf4v\" (UID: \"43b699c5-7ed9-4603-aa3c-a8d1092571ff\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9kf4v" Nov 21 20:50:12 crc kubenswrapper[4727]: I1121 20:50:12.086605 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9kf4v" Nov 21 20:50:12 crc kubenswrapper[4727]: I1121 20:50:12.654208 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9kf4v"] Nov 21 20:50:12 crc kubenswrapper[4727]: I1121 20:50:12.664482 4727 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 21 20:50:13 crc kubenswrapper[4727]: I1121 20:50:13.671394 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9kf4v" event={"ID":"43b699c5-7ed9-4603-aa3c-a8d1092571ff","Type":"ContainerStarted","Data":"d4bbc6870ff073f399ab5f2be5d7fce9b2f9a890cff032ea42849123bd2968f0"} Nov 21 20:50:13 crc kubenswrapper[4727]: I1121 20:50:13.671946 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9kf4v" event={"ID":"43b699c5-7ed9-4603-aa3c-a8d1092571ff","Type":"ContainerStarted","Data":"6150a45a2ce365d3db15006728c6702e45a7d65d34cf337847cd65de95c82dbb"} Nov 21 20:50:13 crc kubenswrapper[4727]: I1121 20:50:13.702806 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9kf4v" podStartSLOduration=2.2867045040000002 podStartE2EDuration="2.702790884s" podCreationTimestamp="2025-11-21 20:50:11 +0000 UTC" firstStartedPulling="2025-11-21 20:50:12.664202149 +0000 UTC m=+2617.850387193" lastFinishedPulling="2025-11-21 20:50:13.080288529 +0000 UTC m=+2618.266473573" observedRunningTime="2025-11-21 20:50:13.696421291 +0000 UTC m=+2618.882606335" watchObservedRunningTime="2025-11-21 20:50:13.702790884 +0000 UTC m=+2618.888975918" Nov 21 20:52:13 crc kubenswrapper[4727]: I1121 20:52:13.335767 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 20:52:13 crc kubenswrapper[4727]: I1121 20:52:13.336488 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 20:52:24 crc kubenswrapper[4727]: I1121 20:52:24.128623 4727 generic.go:334] "Generic (PLEG): container finished" podID="43b699c5-7ed9-4603-aa3c-a8d1092571ff" containerID="d4bbc6870ff073f399ab5f2be5d7fce9b2f9a890cff032ea42849123bd2968f0" exitCode=0 Nov 21 20:52:24 crc kubenswrapper[4727]: I1121 20:52:24.128668 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9kf4v" event={"ID":"43b699c5-7ed9-4603-aa3c-a8d1092571ff","Type":"ContainerDied","Data":"d4bbc6870ff073f399ab5f2be5d7fce9b2f9a890cff032ea42849123bd2968f0"} Nov 21 20:52:25 crc kubenswrapper[4727]: I1121 20:52:25.612631 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9kf4v" Nov 21 20:52:25 crc kubenswrapper[4727]: I1121 20:52:25.765839 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cf9v2\" (UniqueName: \"kubernetes.io/projected/43b699c5-7ed9-4603-aa3c-a8d1092571ff-kube-api-access-cf9v2\") pod \"43b699c5-7ed9-4603-aa3c-a8d1092571ff\" (UID: \"43b699c5-7ed9-4603-aa3c-a8d1092571ff\") " Nov 21 20:52:25 crc kubenswrapper[4727]: I1121 20:52:25.765887 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43b699c5-7ed9-4603-aa3c-a8d1092571ff-telemetry-combined-ca-bundle\") pod \"43b699c5-7ed9-4603-aa3c-a8d1092571ff\" (UID: \"43b699c5-7ed9-4603-aa3c-a8d1092571ff\") " Nov 21 20:52:25 crc kubenswrapper[4727]: I1121 20:52:25.766061 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/43b699c5-7ed9-4603-aa3c-a8d1092571ff-inventory\") pod \"43b699c5-7ed9-4603-aa3c-a8d1092571ff\" (UID: \"43b699c5-7ed9-4603-aa3c-a8d1092571ff\") " Nov 21 20:52:25 crc kubenswrapper[4727]: I1121 20:52:25.766699 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/43b699c5-7ed9-4603-aa3c-a8d1092571ff-ceilometer-compute-config-data-2\") pod \"43b699c5-7ed9-4603-aa3c-a8d1092571ff\" (UID: \"43b699c5-7ed9-4603-aa3c-a8d1092571ff\") " Nov 21 20:52:25 crc kubenswrapper[4727]: I1121 20:52:25.766812 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/43b699c5-7ed9-4603-aa3c-a8d1092571ff-ceilometer-compute-config-data-0\") pod \"43b699c5-7ed9-4603-aa3c-a8d1092571ff\" (UID: \"43b699c5-7ed9-4603-aa3c-a8d1092571ff\") " Nov 21 20:52:25 crc kubenswrapper[4727]: 
I1121 20:52:25.766923 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/43b699c5-7ed9-4603-aa3c-a8d1092571ff-ceilometer-compute-config-data-1\") pod \"43b699c5-7ed9-4603-aa3c-a8d1092571ff\" (UID: \"43b699c5-7ed9-4603-aa3c-a8d1092571ff\") " Nov 21 20:52:25 crc kubenswrapper[4727]: I1121 20:52:25.767008 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/43b699c5-7ed9-4603-aa3c-a8d1092571ff-ssh-key\") pod \"43b699c5-7ed9-4603-aa3c-a8d1092571ff\" (UID: \"43b699c5-7ed9-4603-aa3c-a8d1092571ff\") " Nov 21 20:52:25 crc kubenswrapper[4727]: I1121 20:52:25.771620 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43b699c5-7ed9-4603-aa3c-a8d1092571ff-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "43b699c5-7ed9-4603-aa3c-a8d1092571ff" (UID: "43b699c5-7ed9-4603-aa3c-a8d1092571ff"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:52:25 crc kubenswrapper[4727]: I1121 20:52:25.772112 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43b699c5-7ed9-4603-aa3c-a8d1092571ff-kube-api-access-cf9v2" (OuterVolumeSpecName: "kube-api-access-cf9v2") pod "43b699c5-7ed9-4603-aa3c-a8d1092571ff" (UID: "43b699c5-7ed9-4603-aa3c-a8d1092571ff"). InnerVolumeSpecName "kube-api-access-cf9v2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:52:25 crc kubenswrapper[4727]: I1121 20:52:25.799268 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43b699c5-7ed9-4603-aa3c-a8d1092571ff-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "43b699c5-7ed9-4603-aa3c-a8d1092571ff" (UID: "43b699c5-7ed9-4603-aa3c-a8d1092571ff"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:52:25 crc kubenswrapper[4727]: I1121 20:52:25.801105 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43b699c5-7ed9-4603-aa3c-a8d1092571ff-inventory" (OuterVolumeSpecName: "inventory") pod "43b699c5-7ed9-4603-aa3c-a8d1092571ff" (UID: "43b699c5-7ed9-4603-aa3c-a8d1092571ff"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:52:25 crc kubenswrapper[4727]: I1121 20:52:25.801124 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43b699c5-7ed9-4603-aa3c-a8d1092571ff-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "43b699c5-7ed9-4603-aa3c-a8d1092571ff" (UID: "43b699c5-7ed9-4603-aa3c-a8d1092571ff"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:52:25 crc kubenswrapper[4727]: I1121 20:52:25.802402 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43b699c5-7ed9-4603-aa3c-a8d1092571ff-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "43b699c5-7ed9-4603-aa3c-a8d1092571ff" (UID: "43b699c5-7ed9-4603-aa3c-a8d1092571ff"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:52:25 crc kubenswrapper[4727]: I1121 20:52:25.803251 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43b699c5-7ed9-4603-aa3c-a8d1092571ff-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "43b699c5-7ed9-4603-aa3c-a8d1092571ff" (UID: "43b699c5-7ed9-4603-aa3c-a8d1092571ff"). InnerVolumeSpecName "ceilometer-compute-config-data-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:52:25 crc kubenswrapper[4727]: I1121 20:52:25.868849 4727 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/43b699c5-7ed9-4603-aa3c-a8d1092571ff-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Nov 21 20:52:25 crc kubenswrapper[4727]: I1121 20:52:25.868891 4727 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/43b699c5-7ed9-4603-aa3c-a8d1092571ff-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Nov 21 20:52:25 crc kubenswrapper[4727]: I1121 20:52:25.868905 4727 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/43b699c5-7ed9-4603-aa3c-a8d1092571ff-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Nov 21 20:52:25 crc kubenswrapper[4727]: I1121 20:52:25.868920 4727 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/43b699c5-7ed9-4603-aa3c-a8d1092571ff-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 20:52:25 crc kubenswrapper[4727]: I1121 20:52:25.868934 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cf9v2\" (UniqueName: \"kubernetes.io/projected/43b699c5-7ed9-4603-aa3c-a8d1092571ff-kube-api-access-cf9v2\") on node \"crc\" DevicePath \"\"" Nov 21 20:52:25 crc kubenswrapper[4727]: I1121 20:52:25.868948 4727 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43b699c5-7ed9-4603-aa3c-a8d1092571ff-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:52:25 crc kubenswrapper[4727]: I1121 20:52:25.868979 4727 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/43b699c5-7ed9-4603-aa3c-a8d1092571ff-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 20:52:26 crc kubenswrapper[4727]: I1121 20:52:26.151091 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9kf4v" event={"ID":"43b699c5-7ed9-4603-aa3c-a8d1092571ff","Type":"ContainerDied","Data":"6150a45a2ce365d3db15006728c6702e45a7d65d34cf337847cd65de95c82dbb"} Nov 21 20:52:26 crc kubenswrapper[4727]: I1121 20:52:26.151123 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6150a45a2ce365d3db15006728c6702e45a7d65d34cf337847cd65de95c82dbb" Nov 21 20:52:26 crc kubenswrapper[4727]: I1121 20:52:26.151173 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9kf4v" Nov 21 20:52:26 crc kubenswrapper[4727]: I1121 20:52:26.248105 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ks88m"] Nov 21 20:52:26 crc kubenswrapper[4727]: E1121 20:52:26.248579 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43b699c5-7ed9-4603-aa3c-a8d1092571ff" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 21 20:52:26 crc kubenswrapper[4727]: I1121 20:52:26.248601 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="43b699c5-7ed9-4603-aa3c-a8d1092571ff" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 21 20:52:26 crc kubenswrapper[4727]: I1121 20:52:26.248871 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="43b699c5-7ed9-4603-aa3c-a8d1092571ff" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 21 20:52:26 crc kubenswrapper[4727]: I1121 20:52:26.250060 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ks88m" Nov 21 20:52:26 crc kubenswrapper[4727]: I1121 20:52:26.253319 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 20:52:26 crc kubenswrapper[4727]: I1121 20:52:26.253488 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-ipmi-config-data" Nov 21 20:52:26 crc kubenswrapper[4727]: I1121 20:52:26.253575 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 21 20:52:26 crc kubenswrapper[4727]: I1121 20:52:26.253653 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 21 20:52:26 crc kubenswrapper[4727]: I1121 20:52:26.254486 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9h8bp" Nov 21 20:52:26 crc kubenswrapper[4727]: I1121 20:52:26.262448 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ks88m"] Nov 21 20:52:26 crc kubenswrapper[4727]: I1121 20:52:26.381279 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/98365c22-40a3-40cd-95e7-8b7ff8e27c2f-ssh-key\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-ks88m\" (UID: \"98365c22-40a3-40cd-95e7-8b7ff8e27c2f\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ks88m" Nov 21 20:52:26 crc kubenswrapper[4727]: I1121 20:52:26.381328 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xbxd\" (UniqueName: \"kubernetes.io/projected/98365c22-40a3-40cd-95e7-8b7ff8e27c2f-kube-api-access-9xbxd\") pod 
\"telemetry-power-monitoring-edpm-deployment-openstack-edpm-ks88m\" (UID: \"98365c22-40a3-40cd-95e7-8b7ff8e27c2f\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ks88m" Nov 21 20:52:26 crc kubenswrapper[4727]: I1121 20:52:26.381455 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/98365c22-40a3-40cd-95e7-8b7ff8e27c2f-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-ks88m\" (UID: \"98365c22-40a3-40cd-95e7-8b7ff8e27c2f\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ks88m" Nov 21 20:52:26 crc kubenswrapper[4727]: I1121 20:52:26.381539 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98365c22-40a3-40cd-95e7-8b7ff8e27c2f-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-ks88m\" (UID: \"98365c22-40a3-40cd-95e7-8b7ff8e27c2f\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ks88m" Nov 21 20:52:26 crc kubenswrapper[4727]: I1121 20:52:26.381562 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/98365c22-40a3-40cd-95e7-8b7ff8e27c2f-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-ks88m\" (UID: \"98365c22-40a3-40cd-95e7-8b7ff8e27c2f\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ks88m" Nov 21 20:52:26 crc kubenswrapper[4727]: I1121 20:52:26.381590 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/98365c22-40a3-40cd-95e7-8b7ff8e27c2f-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-ks88m\" (UID: \"98365c22-40a3-40cd-95e7-8b7ff8e27c2f\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ks88m" Nov 21 20:52:26 crc kubenswrapper[4727]: I1121 20:52:26.381798 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/98365c22-40a3-40cd-95e7-8b7ff8e27c2f-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-ks88m\" (UID: \"98365c22-40a3-40cd-95e7-8b7ff8e27c2f\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ks88m" Nov 21 20:52:26 crc kubenswrapper[4727]: I1121 20:52:26.484430 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/98365c22-40a3-40cd-95e7-8b7ff8e27c2f-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-ks88m\" (UID: \"98365c22-40a3-40cd-95e7-8b7ff8e27c2f\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ks88m" Nov 21 20:52:26 crc kubenswrapper[4727]: I1121 20:52:26.484806 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98365c22-40a3-40cd-95e7-8b7ff8e27c2f-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-ks88m\" (UID: \"98365c22-40a3-40cd-95e7-8b7ff8e27c2f\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ks88m" Nov 21 20:52:26 crc kubenswrapper[4727]: I1121 20:52:26.484845 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: 
\"kubernetes.io/secret/98365c22-40a3-40cd-95e7-8b7ff8e27c2f-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-ks88m\" (UID: \"98365c22-40a3-40cd-95e7-8b7ff8e27c2f\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ks88m" Nov 21 20:52:26 crc kubenswrapper[4727]: I1121 20:52:26.484868 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/98365c22-40a3-40cd-95e7-8b7ff8e27c2f-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-ks88m\" (UID: \"98365c22-40a3-40cd-95e7-8b7ff8e27c2f\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ks88m" Nov 21 20:52:26 crc kubenswrapper[4727]: I1121 20:52:26.484933 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/98365c22-40a3-40cd-95e7-8b7ff8e27c2f-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-ks88m\" (UID: \"98365c22-40a3-40cd-95e7-8b7ff8e27c2f\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ks88m" Nov 21 20:52:26 crc kubenswrapper[4727]: I1121 20:52:26.485035 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/98365c22-40a3-40cd-95e7-8b7ff8e27c2f-ssh-key\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-ks88m\" (UID: \"98365c22-40a3-40cd-95e7-8b7ff8e27c2f\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ks88m" Nov 21 20:52:26 crc kubenswrapper[4727]: I1121 20:52:26.485072 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xbxd\" (UniqueName: \"kubernetes.io/projected/98365c22-40a3-40cd-95e7-8b7ff8e27c2f-kube-api-access-9xbxd\") pod 
\"telemetry-power-monitoring-edpm-deployment-openstack-edpm-ks88m\" (UID: \"98365c22-40a3-40cd-95e7-8b7ff8e27c2f\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ks88m" Nov 21 20:52:26 crc kubenswrapper[4727]: I1121 20:52:26.491154 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/98365c22-40a3-40cd-95e7-8b7ff8e27c2f-ssh-key\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-ks88m\" (UID: \"98365c22-40a3-40cd-95e7-8b7ff8e27c2f\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ks88m" Nov 21 20:52:26 crc kubenswrapper[4727]: I1121 20:52:26.491528 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/98365c22-40a3-40cd-95e7-8b7ff8e27c2f-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-ks88m\" (UID: \"98365c22-40a3-40cd-95e7-8b7ff8e27c2f\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ks88m" Nov 21 20:52:26 crc kubenswrapper[4727]: I1121 20:52:26.493878 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/98365c22-40a3-40cd-95e7-8b7ff8e27c2f-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-ks88m\" (UID: \"98365c22-40a3-40cd-95e7-8b7ff8e27c2f\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ks88m" Nov 21 20:52:26 crc kubenswrapper[4727]: I1121 20:52:26.504977 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/98365c22-40a3-40cd-95e7-8b7ff8e27c2f-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-ks88m\" (UID: \"98365c22-40a3-40cd-95e7-8b7ff8e27c2f\") " 
pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ks88m" Nov 21 20:52:26 crc kubenswrapper[4727]: I1121 20:52:26.505026 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98365c22-40a3-40cd-95e7-8b7ff8e27c2f-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-ks88m\" (UID: \"98365c22-40a3-40cd-95e7-8b7ff8e27c2f\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ks88m" Nov 21 20:52:26 crc kubenswrapper[4727]: I1121 20:52:26.505651 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/98365c22-40a3-40cd-95e7-8b7ff8e27c2f-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-ks88m\" (UID: \"98365c22-40a3-40cd-95e7-8b7ff8e27c2f\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ks88m" Nov 21 20:52:26 crc kubenswrapper[4727]: I1121 20:52:26.508315 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xbxd\" (UniqueName: \"kubernetes.io/projected/98365c22-40a3-40cd-95e7-8b7ff8e27c2f-kube-api-access-9xbxd\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-ks88m\" (UID: \"98365c22-40a3-40cd-95e7-8b7ff8e27c2f\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ks88m" Nov 21 20:52:26 crc kubenswrapper[4727]: I1121 20:52:26.576752 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ks88m" Nov 21 20:52:27 crc kubenswrapper[4727]: I1121 20:52:27.156613 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ks88m"] Nov 21 20:52:28 crc kubenswrapper[4727]: I1121 20:52:28.173837 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ks88m" event={"ID":"98365c22-40a3-40cd-95e7-8b7ff8e27c2f","Type":"ContainerStarted","Data":"9ce154b9334902aa9b8e754a95c6740a7dd112ad6787d8fdd9d489b5550f96e5"} Nov 21 20:52:28 crc kubenswrapper[4727]: I1121 20:52:28.174296 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ks88m" event={"ID":"98365c22-40a3-40cd-95e7-8b7ff8e27c2f","Type":"ContainerStarted","Data":"333f39f076e7f555cf199b45f9b0d7971587749f04f0c1ddabbe652e2a7e19d3"} Nov 21 20:52:43 crc kubenswrapper[4727]: I1121 20:52:43.335501 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 20:52:43 crc kubenswrapper[4727]: I1121 20:52:43.336198 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 20:53:13 crc kubenswrapper[4727]: I1121 20:53:13.335195 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 20:53:13 crc kubenswrapper[4727]: I1121 20:53:13.335770 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 20:53:13 crc kubenswrapper[4727]: I1121 20:53:13.335821 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" Nov 21 20:53:13 crc kubenswrapper[4727]: I1121 20:53:13.336690 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"879bff46486447abf99a62cf06cf8872baaf779ffea76353295d51f5353388c2"} pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 20:53:13 crc kubenswrapper[4727]: I1121 20:53:13.336745 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" containerID="cri-o://879bff46486447abf99a62cf06cf8872baaf779ffea76353295d51f5353388c2" gracePeriod=600 Nov 21 20:53:13 crc kubenswrapper[4727]: I1121 20:53:13.679307 4727 generic.go:334] "Generic (PLEG): container finished" podID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerID="879bff46486447abf99a62cf06cf8872baaf779ffea76353295d51f5353388c2" exitCode=0 Nov 21 20:53:13 crc kubenswrapper[4727]: I1121 20:53:13.679332 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" 
event={"ID":"b58aef8f-f223-47d8-a2e6-4a80aeeeec42","Type":"ContainerDied","Data":"879bff46486447abf99a62cf06cf8872baaf779ffea76353295d51f5353388c2"} Nov 21 20:53:13 crc kubenswrapper[4727]: I1121 20:53:13.679848 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" event={"ID":"b58aef8f-f223-47d8-a2e6-4a80aeeeec42","Type":"ContainerStarted","Data":"d44f8929ab437bc608c3191fbcd6c830600c599b713b271401f02377130c5307"} Nov 21 20:53:13 crc kubenswrapper[4727]: I1121 20:53:13.679872 4727 scope.go:117] "RemoveContainer" containerID="897f2adda85608a5dbb7a817daa3de6c7e34ae3f7f02b518fee2c34299b91df0" Nov 21 20:53:13 crc kubenswrapper[4727]: I1121 20:53:13.702203 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ks88m" podStartSLOduration=47.139024542 podStartE2EDuration="47.702180524s" podCreationTimestamp="2025-11-21 20:52:26 +0000 UTC" firstStartedPulling="2025-11-21 20:52:27.165006168 +0000 UTC m=+2752.351191212" lastFinishedPulling="2025-11-21 20:52:27.72816215 +0000 UTC m=+2752.914347194" observedRunningTime="2025-11-21 20:52:28.197445965 +0000 UTC m=+2753.383631009" watchObservedRunningTime="2025-11-21 20:53:13.702180524 +0000 UTC m=+2798.888365578" Nov 21 20:54:22 crc kubenswrapper[4727]: I1121 20:54:22.455004 4727 generic.go:334] "Generic (PLEG): container finished" podID="98365c22-40a3-40cd-95e7-8b7ff8e27c2f" containerID="9ce154b9334902aa9b8e754a95c6740a7dd112ad6787d8fdd9d489b5550f96e5" exitCode=0 Nov 21 20:54:22 crc kubenswrapper[4727]: I1121 20:54:22.455161 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ks88m" event={"ID":"98365c22-40a3-40cd-95e7-8b7ff8e27c2f","Type":"ContainerDied","Data":"9ce154b9334902aa9b8e754a95c6740a7dd112ad6787d8fdd9d489b5550f96e5"} Nov 21 20:54:23 crc kubenswrapper[4727]: I1121 20:54:23.942114 4727 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ks88m" Nov 21 20:54:24 crc kubenswrapper[4727]: I1121 20:54:24.052754 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/98365c22-40a3-40cd-95e7-8b7ff8e27c2f-inventory\") pod \"98365c22-40a3-40cd-95e7-8b7ff8e27c2f\" (UID: \"98365c22-40a3-40cd-95e7-8b7ff8e27c2f\") " Nov 21 20:54:24 crc kubenswrapper[4727]: I1121 20:54:24.052807 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/98365c22-40a3-40cd-95e7-8b7ff8e27c2f-ssh-key\") pod \"98365c22-40a3-40cd-95e7-8b7ff8e27c2f\" (UID: \"98365c22-40a3-40cd-95e7-8b7ff8e27c2f\") " Nov 21 20:54:24 crc kubenswrapper[4727]: I1121 20:54:24.053936 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/98365c22-40a3-40cd-95e7-8b7ff8e27c2f-ceilometer-ipmi-config-data-2\") pod \"98365c22-40a3-40cd-95e7-8b7ff8e27c2f\" (UID: \"98365c22-40a3-40cd-95e7-8b7ff8e27c2f\") " Nov 21 20:54:24 crc kubenswrapper[4727]: I1121 20:54:24.054209 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/98365c22-40a3-40cd-95e7-8b7ff8e27c2f-ceilometer-ipmi-config-data-1\") pod \"98365c22-40a3-40cd-95e7-8b7ff8e27c2f\" (UID: \"98365c22-40a3-40cd-95e7-8b7ff8e27c2f\") " Nov 21 20:54:24 crc kubenswrapper[4727]: I1121 20:54:24.054256 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98365c22-40a3-40cd-95e7-8b7ff8e27c2f-telemetry-power-monitoring-combined-ca-bundle\") pod \"98365c22-40a3-40cd-95e7-8b7ff8e27c2f\" (UID: 
\"98365c22-40a3-40cd-95e7-8b7ff8e27c2f\") " Nov 21 20:54:24 crc kubenswrapper[4727]: I1121 20:54:24.054292 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/98365c22-40a3-40cd-95e7-8b7ff8e27c2f-ceilometer-ipmi-config-data-0\") pod \"98365c22-40a3-40cd-95e7-8b7ff8e27c2f\" (UID: \"98365c22-40a3-40cd-95e7-8b7ff8e27c2f\") " Nov 21 20:54:24 crc kubenswrapper[4727]: I1121 20:54:24.054408 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xbxd\" (UniqueName: \"kubernetes.io/projected/98365c22-40a3-40cd-95e7-8b7ff8e27c2f-kube-api-access-9xbxd\") pod \"98365c22-40a3-40cd-95e7-8b7ff8e27c2f\" (UID: \"98365c22-40a3-40cd-95e7-8b7ff8e27c2f\") " Nov 21 20:54:24 crc kubenswrapper[4727]: I1121 20:54:24.065445 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98365c22-40a3-40cd-95e7-8b7ff8e27c2f-kube-api-access-9xbxd" (OuterVolumeSpecName: "kube-api-access-9xbxd") pod "98365c22-40a3-40cd-95e7-8b7ff8e27c2f" (UID: "98365c22-40a3-40cd-95e7-8b7ff8e27c2f"). InnerVolumeSpecName "kube-api-access-9xbxd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:54:24 crc kubenswrapper[4727]: I1121 20:54:24.068310 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98365c22-40a3-40cd-95e7-8b7ff8e27c2f-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "98365c22-40a3-40cd-95e7-8b7ff8e27c2f" (UID: "98365c22-40a3-40cd-95e7-8b7ff8e27c2f"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:54:24 crc kubenswrapper[4727]: I1121 20:54:24.086860 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98365c22-40a3-40cd-95e7-8b7ff8e27c2f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "98365c22-40a3-40cd-95e7-8b7ff8e27c2f" (UID: "98365c22-40a3-40cd-95e7-8b7ff8e27c2f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:54:24 crc kubenswrapper[4727]: I1121 20:54:24.087781 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98365c22-40a3-40cd-95e7-8b7ff8e27c2f-ceilometer-ipmi-config-data-0" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-0") pod "98365c22-40a3-40cd-95e7-8b7ff8e27c2f" (UID: "98365c22-40a3-40cd-95e7-8b7ff8e27c2f"). InnerVolumeSpecName "ceilometer-ipmi-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:54:24 crc kubenswrapper[4727]: I1121 20:54:24.090532 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98365c22-40a3-40cd-95e7-8b7ff8e27c2f-ceilometer-ipmi-config-data-1" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-1") pod "98365c22-40a3-40cd-95e7-8b7ff8e27c2f" (UID: "98365c22-40a3-40cd-95e7-8b7ff8e27c2f"). InnerVolumeSpecName "ceilometer-ipmi-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:54:24 crc kubenswrapper[4727]: I1121 20:54:24.094375 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98365c22-40a3-40cd-95e7-8b7ff8e27c2f-ceilometer-ipmi-config-data-2" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-2") pod "98365c22-40a3-40cd-95e7-8b7ff8e27c2f" (UID: "98365c22-40a3-40cd-95e7-8b7ff8e27c2f"). InnerVolumeSpecName "ceilometer-ipmi-config-data-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:54:24 crc kubenswrapper[4727]: I1121 20:54:24.094746 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98365c22-40a3-40cd-95e7-8b7ff8e27c2f-inventory" (OuterVolumeSpecName: "inventory") pod "98365c22-40a3-40cd-95e7-8b7ff8e27c2f" (UID: "98365c22-40a3-40cd-95e7-8b7ff8e27c2f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:54:24 crc kubenswrapper[4727]: I1121 20:54:24.157275 4727 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/98365c22-40a3-40cd-95e7-8b7ff8e27c2f-ceilometer-ipmi-config-data-2\") on node \"crc\" DevicePath \"\"" Nov 21 20:54:24 crc kubenswrapper[4727]: I1121 20:54:24.157326 4727 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/98365c22-40a3-40cd-95e7-8b7ff8e27c2f-ceilometer-ipmi-config-data-1\") on node \"crc\" DevicePath \"\"" Nov 21 20:54:24 crc kubenswrapper[4727]: I1121 20:54:24.157343 4727 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98365c22-40a3-40cd-95e7-8b7ff8e27c2f-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 20:54:24 crc kubenswrapper[4727]: I1121 20:54:24.157361 4727 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/98365c22-40a3-40cd-95e7-8b7ff8e27c2f-ceilometer-ipmi-config-data-0\") on node \"crc\" DevicePath \"\"" Nov 21 20:54:24 crc kubenswrapper[4727]: I1121 20:54:24.157376 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xbxd\" (UniqueName: \"kubernetes.io/projected/98365c22-40a3-40cd-95e7-8b7ff8e27c2f-kube-api-access-9xbxd\") on node \"crc\" DevicePath \"\"" Nov 21 20:54:24 crc 
kubenswrapper[4727]: I1121 20:54:24.157389 4727 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/98365c22-40a3-40cd-95e7-8b7ff8e27c2f-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 20:54:24 crc kubenswrapper[4727]: I1121 20:54:24.157400 4727 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/98365c22-40a3-40cd-95e7-8b7ff8e27c2f-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 20:54:24 crc kubenswrapper[4727]: I1121 20:54:24.615078 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ks88m" event={"ID":"98365c22-40a3-40cd-95e7-8b7ff8e27c2f","Type":"ContainerDied","Data":"333f39f076e7f555cf199b45f9b0d7971587749f04f0c1ddabbe652e2a7e19d3"} Nov 21 20:54:24 crc kubenswrapper[4727]: I1121 20:54:24.615126 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="333f39f076e7f555cf199b45f9b0d7971587749f04f0c1ddabbe652e2a7e19d3" Nov 21 20:54:24 crc kubenswrapper[4727]: I1121 20:54:24.615162 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ks88m" Nov 21 20:54:24 crc kubenswrapper[4727]: I1121 20:54:24.665825 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-g6mt8"] Nov 21 20:54:24 crc kubenswrapper[4727]: E1121 20:54:24.666324 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98365c22-40a3-40cd-95e7-8b7ff8e27c2f" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Nov 21 20:54:24 crc kubenswrapper[4727]: I1121 20:54:24.666343 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="98365c22-40a3-40cd-95e7-8b7ff8e27c2f" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Nov 21 20:54:24 crc kubenswrapper[4727]: I1121 20:54:24.666590 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="98365c22-40a3-40cd-95e7-8b7ff8e27c2f" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Nov 21 20:54:24 crc kubenswrapper[4727]: I1121 20:54:24.667372 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-g6mt8" Nov 21 20:54:24 crc kubenswrapper[4727]: I1121 20:54:24.670193 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"logging-compute-config-data" Nov 21 20:54:24 crc kubenswrapper[4727]: I1121 20:54:24.670498 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-9h8bp" Nov 21 20:54:24 crc kubenswrapper[4727]: I1121 20:54:24.670628 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 20:54:24 crc kubenswrapper[4727]: I1121 20:54:24.673796 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 21 20:54:24 crc kubenswrapper[4727]: I1121 20:54:24.674117 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 21 20:54:24 crc kubenswrapper[4727]: I1121 20:54:24.678136 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-g6mt8"] Nov 21 20:54:24 crc kubenswrapper[4727]: I1121 20:54:24.809354 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msj8b\" (UniqueName: \"kubernetes.io/projected/06a62eb8-65b3-4e96-8103-e9386bbca277-kube-api-access-msj8b\") pod \"logging-edpm-deployment-openstack-edpm-ipam-g6mt8\" (UID: \"06a62eb8-65b3-4e96-8103-e9386bbca277\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-g6mt8" Nov 21 20:54:24 crc kubenswrapper[4727]: I1121 20:54:24.809499 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/06a62eb8-65b3-4e96-8103-e9386bbca277-ssh-key\") pod \"logging-edpm-deployment-openstack-edpm-ipam-g6mt8\" (UID: \"06a62eb8-65b3-4e96-8103-e9386bbca277\") " 
pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-g6mt8" Nov 21 20:54:24 crc kubenswrapper[4727]: I1121 20:54:24.809648 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/06a62eb8-65b3-4e96-8103-e9386bbca277-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-g6mt8\" (UID: \"06a62eb8-65b3-4e96-8103-e9386bbca277\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-g6mt8" Nov 21 20:54:24 crc kubenswrapper[4727]: I1121 20:54:24.809799 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/06a62eb8-65b3-4e96-8103-e9386bbca277-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-g6mt8\" (UID: \"06a62eb8-65b3-4e96-8103-e9386bbca277\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-g6mt8" Nov 21 20:54:24 crc kubenswrapper[4727]: I1121 20:54:24.809922 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/06a62eb8-65b3-4e96-8103-e9386bbca277-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-g6mt8\" (UID: \"06a62eb8-65b3-4e96-8103-e9386bbca277\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-g6mt8" Nov 21 20:54:24 crc kubenswrapper[4727]: I1121 20:54:24.912155 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/06a62eb8-65b3-4e96-8103-e9386bbca277-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-g6mt8\" (UID: \"06a62eb8-65b3-4e96-8103-e9386bbca277\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-g6mt8" Nov 21 20:54:24 crc kubenswrapper[4727]: I1121 20:54:24.912353 4727 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msj8b\" (UniqueName: \"kubernetes.io/projected/06a62eb8-65b3-4e96-8103-e9386bbca277-kube-api-access-msj8b\") pod \"logging-edpm-deployment-openstack-edpm-ipam-g6mt8\" (UID: \"06a62eb8-65b3-4e96-8103-e9386bbca277\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-g6mt8" Nov 21 20:54:24 crc kubenswrapper[4727]: I1121 20:54:24.912385 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/06a62eb8-65b3-4e96-8103-e9386bbca277-ssh-key\") pod \"logging-edpm-deployment-openstack-edpm-ipam-g6mt8\" (UID: \"06a62eb8-65b3-4e96-8103-e9386bbca277\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-g6mt8" Nov 21 20:54:24 crc kubenswrapper[4727]: I1121 20:54:24.912427 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/06a62eb8-65b3-4e96-8103-e9386bbca277-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-g6mt8\" (UID: \"06a62eb8-65b3-4e96-8103-e9386bbca277\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-g6mt8" Nov 21 20:54:24 crc kubenswrapper[4727]: I1121 20:54:24.912502 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/06a62eb8-65b3-4e96-8103-e9386bbca277-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-g6mt8\" (UID: \"06a62eb8-65b3-4e96-8103-e9386bbca277\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-g6mt8" Nov 21 20:54:24 crc kubenswrapper[4727]: I1121 20:54:24.916239 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/06a62eb8-65b3-4e96-8103-e9386bbca277-ssh-key\") pod \"logging-edpm-deployment-openstack-edpm-ipam-g6mt8\" (UID: 
\"06a62eb8-65b3-4e96-8103-e9386bbca277\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-g6mt8" Nov 21 20:54:24 crc kubenswrapper[4727]: I1121 20:54:24.916894 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/06a62eb8-65b3-4e96-8103-e9386bbca277-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-g6mt8\" (UID: \"06a62eb8-65b3-4e96-8103-e9386bbca277\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-g6mt8" Nov 21 20:54:24 crc kubenswrapper[4727]: I1121 20:54:24.919129 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/06a62eb8-65b3-4e96-8103-e9386bbca277-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-g6mt8\" (UID: \"06a62eb8-65b3-4e96-8103-e9386bbca277\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-g6mt8" Nov 21 20:54:24 crc kubenswrapper[4727]: I1121 20:54:24.920113 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/06a62eb8-65b3-4e96-8103-e9386bbca277-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-g6mt8\" (UID: \"06a62eb8-65b3-4e96-8103-e9386bbca277\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-g6mt8" Nov 21 20:54:24 crc kubenswrapper[4727]: I1121 20:54:24.927855 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msj8b\" (UniqueName: \"kubernetes.io/projected/06a62eb8-65b3-4e96-8103-e9386bbca277-kube-api-access-msj8b\") pod \"logging-edpm-deployment-openstack-edpm-ipam-g6mt8\" (UID: \"06a62eb8-65b3-4e96-8103-e9386bbca277\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-g6mt8" Nov 21 20:54:24 crc kubenswrapper[4727]: I1121 20:54:24.985534 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-g6mt8" Nov 21 20:54:25 crc kubenswrapper[4727]: I1121 20:54:25.558073 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-g6mt8"] Nov 21 20:54:25 crc kubenswrapper[4727]: I1121 20:54:25.632065 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-g6mt8" event={"ID":"06a62eb8-65b3-4e96-8103-e9386bbca277","Type":"ContainerStarted","Data":"663d2324808b23b982ebd12a3e418c0f61b7dc0156e6b8c1a82ccc16245f5c1a"} Nov 21 20:54:26 crc kubenswrapper[4727]: I1121 20:54:26.658545 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-g6mt8" event={"ID":"06a62eb8-65b3-4e96-8103-e9386bbca277","Type":"ContainerStarted","Data":"ed943fe9e3c9085f69a999a784828f70b1d8b749691345a9b6b22965bcd807a2"} Nov 21 20:54:26 crc kubenswrapper[4727]: I1121 20:54:26.680324 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-g6mt8" podStartSLOduration=2.180726496 podStartE2EDuration="2.680301702s" podCreationTimestamp="2025-11-21 20:54:24 +0000 UTC" firstStartedPulling="2025-11-21 20:54:25.571814385 +0000 UTC m=+2870.757999439" lastFinishedPulling="2025-11-21 20:54:26.071389601 +0000 UTC m=+2871.257574645" observedRunningTime="2025-11-21 20:54:26.679713177 +0000 UTC m=+2871.865898231" watchObservedRunningTime="2025-11-21 20:54:26.680301702 +0000 UTC m=+2871.866486756" Nov 21 20:54:35 crc kubenswrapper[4727]: I1121 20:54:35.217005 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mr2pl"] Nov 21 20:54:35 crc kubenswrapper[4727]: I1121 20:54:35.222011 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mr2pl" Nov 21 20:54:35 crc kubenswrapper[4727]: I1121 20:54:35.246007 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mr2pl"] Nov 21 20:54:35 crc kubenswrapper[4727]: I1121 20:54:35.249341 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb913d00-607a-4f54-81fb-4589183e0e95-catalog-content\") pod \"community-operators-mr2pl\" (UID: \"fb913d00-607a-4f54-81fb-4589183e0e95\") " pod="openshift-marketplace/community-operators-mr2pl" Nov 21 20:54:35 crc kubenswrapper[4727]: I1121 20:54:35.249430 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hltqs\" (UniqueName: \"kubernetes.io/projected/fb913d00-607a-4f54-81fb-4589183e0e95-kube-api-access-hltqs\") pod \"community-operators-mr2pl\" (UID: \"fb913d00-607a-4f54-81fb-4589183e0e95\") " pod="openshift-marketplace/community-operators-mr2pl" Nov 21 20:54:35 crc kubenswrapper[4727]: I1121 20:54:35.249455 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb913d00-607a-4f54-81fb-4589183e0e95-utilities\") pod \"community-operators-mr2pl\" (UID: \"fb913d00-607a-4f54-81fb-4589183e0e95\") " pod="openshift-marketplace/community-operators-mr2pl" Nov 21 20:54:35 crc kubenswrapper[4727]: I1121 20:54:35.352807 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb913d00-607a-4f54-81fb-4589183e0e95-catalog-content\") pod \"community-operators-mr2pl\" (UID: \"fb913d00-607a-4f54-81fb-4589183e0e95\") " pod="openshift-marketplace/community-operators-mr2pl" Nov 21 20:54:35 crc kubenswrapper[4727]: I1121 20:54:35.353220 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-hltqs\" (UniqueName: \"kubernetes.io/projected/fb913d00-607a-4f54-81fb-4589183e0e95-kube-api-access-hltqs\") pod \"community-operators-mr2pl\" (UID: \"fb913d00-607a-4f54-81fb-4589183e0e95\") " pod="openshift-marketplace/community-operators-mr2pl" Nov 21 20:54:35 crc kubenswrapper[4727]: I1121 20:54:35.353257 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb913d00-607a-4f54-81fb-4589183e0e95-utilities\") pod \"community-operators-mr2pl\" (UID: \"fb913d00-607a-4f54-81fb-4589183e0e95\") " pod="openshift-marketplace/community-operators-mr2pl" Nov 21 20:54:35 crc kubenswrapper[4727]: I1121 20:54:35.353373 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb913d00-607a-4f54-81fb-4589183e0e95-catalog-content\") pod \"community-operators-mr2pl\" (UID: \"fb913d00-607a-4f54-81fb-4589183e0e95\") " pod="openshift-marketplace/community-operators-mr2pl" Nov 21 20:54:35 crc kubenswrapper[4727]: I1121 20:54:35.353763 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb913d00-607a-4f54-81fb-4589183e0e95-utilities\") pod \"community-operators-mr2pl\" (UID: \"fb913d00-607a-4f54-81fb-4589183e0e95\") " pod="openshift-marketplace/community-operators-mr2pl" Nov 21 20:54:35 crc kubenswrapper[4727]: I1121 20:54:35.377738 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hltqs\" (UniqueName: \"kubernetes.io/projected/fb913d00-607a-4f54-81fb-4589183e0e95-kube-api-access-hltqs\") pod \"community-operators-mr2pl\" (UID: \"fb913d00-607a-4f54-81fb-4589183e0e95\") " pod="openshift-marketplace/community-operators-mr2pl" Nov 21 20:54:35 crc kubenswrapper[4727]: I1121 20:54:35.541590 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mr2pl" Nov 21 20:54:35 crc kubenswrapper[4727]: I1121 20:54:35.836876 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bllfm"] Nov 21 20:54:35 crc kubenswrapper[4727]: I1121 20:54:35.841728 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bllfm" Nov 21 20:54:35 crc kubenswrapper[4727]: I1121 20:54:35.868211 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bllfm"] Nov 21 20:54:36 crc kubenswrapper[4727]: I1121 20:54:36.021852 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8885e970-9f7f-4dda-bf15-790c4012baf2-catalog-content\") pod \"redhat-marketplace-bllfm\" (UID: \"8885e970-9f7f-4dda-bf15-790c4012baf2\") " pod="openshift-marketplace/redhat-marketplace-bllfm" Nov 21 20:54:36 crc kubenswrapper[4727]: I1121 20:54:36.022265 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjdft\" (UniqueName: \"kubernetes.io/projected/8885e970-9f7f-4dda-bf15-790c4012baf2-kube-api-access-gjdft\") pod \"redhat-marketplace-bllfm\" (UID: \"8885e970-9f7f-4dda-bf15-790c4012baf2\") " pod="openshift-marketplace/redhat-marketplace-bllfm" Nov 21 20:54:36 crc kubenswrapper[4727]: I1121 20:54:36.022303 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8885e970-9f7f-4dda-bf15-790c4012baf2-utilities\") pod \"redhat-marketplace-bllfm\" (UID: \"8885e970-9f7f-4dda-bf15-790c4012baf2\") " pod="openshift-marketplace/redhat-marketplace-bllfm" Nov 21 20:54:36 crc kubenswrapper[4727]: I1121 20:54:36.124697 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8885e970-9f7f-4dda-bf15-790c4012baf2-catalog-content\") pod \"redhat-marketplace-bllfm\" (UID: \"8885e970-9f7f-4dda-bf15-790c4012baf2\") " pod="openshift-marketplace/redhat-marketplace-bllfm" Nov 21 20:54:36 crc kubenswrapper[4727]: I1121 20:54:36.125441 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjdft\" (UniqueName: \"kubernetes.io/projected/8885e970-9f7f-4dda-bf15-790c4012baf2-kube-api-access-gjdft\") pod \"redhat-marketplace-bllfm\" (UID: \"8885e970-9f7f-4dda-bf15-790c4012baf2\") " pod="openshift-marketplace/redhat-marketplace-bllfm" Nov 21 20:54:36 crc kubenswrapper[4727]: I1121 20:54:36.125352 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8885e970-9f7f-4dda-bf15-790c4012baf2-catalog-content\") pod \"redhat-marketplace-bllfm\" (UID: \"8885e970-9f7f-4dda-bf15-790c4012baf2\") " pod="openshift-marketplace/redhat-marketplace-bllfm" Nov 21 20:54:36 crc kubenswrapper[4727]: I1121 20:54:36.125832 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8885e970-9f7f-4dda-bf15-790c4012baf2-utilities\") pod \"redhat-marketplace-bllfm\" (UID: \"8885e970-9f7f-4dda-bf15-790c4012baf2\") " pod="openshift-marketplace/redhat-marketplace-bllfm" Nov 21 20:54:36 crc kubenswrapper[4727]: I1121 20:54:36.126322 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8885e970-9f7f-4dda-bf15-790c4012baf2-utilities\") pod \"redhat-marketplace-bllfm\" (UID: \"8885e970-9f7f-4dda-bf15-790c4012baf2\") " pod="openshift-marketplace/redhat-marketplace-bllfm" Nov 21 20:54:36 crc kubenswrapper[4727]: I1121 20:54:36.157309 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjdft\" (UniqueName: 
\"kubernetes.io/projected/8885e970-9f7f-4dda-bf15-790c4012baf2-kube-api-access-gjdft\") pod \"redhat-marketplace-bllfm\" (UID: \"8885e970-9f7f-4dda-bf15-790c4012baf2\") " pod="openshift-marketplace/redhat-marketplace-bllfm" Nov 21 20:54:36 crc kubenswrapper[4727]: I1121 20:54:36.182397 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bllfm" Nov 21 20:54:36 crc kubenswrapper[4727]: I1121 20:54:36.225712 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mr2pl"] Nov 21 20:54:36 crc kubenswrapper[4727]: I1121 20:54:36.672928 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bllfm"] Nov 21 20:54:36 crc kubenswrapper[4727]: I1121 20:54:36.759926 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bllfm" event={"ID":"8885e970-9f7f-4dda-bf15-790c4012baf2","Type":"ContainerStarted","Data":"b5f09d8c8da243d4e2e996412145f0d5c8395c4197c44dc4cf090babbe724eb7"} Nov 21 20:54:36 crc kubenswrapper[4727]: I1121 20:54:36.761858 4727 generic.go:334] "Generic (PLEG): container finished" podID="fb913d00-607a-4f54-81fb-4589183e0e95" containerID="b89f61e0062b8b24d75156ed7ab3851e89ad142d544954f3cf638ea2ee573a54" exitCode=0 Nov 21 20:54:36 crc kubenswrapper[4727]: I1121 20:54:36.761900 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mr2pl" event={"ID":"fb913d00-607a-4f54-81fb-4589183e0e95","Type":"ContainerDied","Data":"b89f61e0062b8b24d75156ed7ab3851e89ad142d544954f3cf638ea2ee573a54"} Nov 21 20:54:36 crc kubenswrapper[4727]: I1121 20:54:36.761929 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mr2pl" event={"ID":"fb913d00-607a-4f54-81fb-4589183e0e95","Type":"ContainerStarted","Data":"5a21c8db47c91c19028333e87aa973eeda7aa055b28767cc4a3beb696c7bba9b"} Nov 21 
20:54:37 crc kubenswrapper[4727]: I1121 20:54:37.782438 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mr2pl" event={"ID":"fb913d00-607a-4f54-81fb-4589183e0e95","Type":"ContainerStarted","Data":"9accb27c1d835045650b0a0baefda5d3264c545ec73b1de21746dacd150d43a1"} Nov 21 20:54:37 crc kubenswrapper[4727]: I1121 20:54:37.786016 4727 generic.go:334] "Generic (PLEG): container finished" podID="8885e970-9f7f-4dda-bf15-790c4012baf2" containerID="eab279dfd82e81fddd4ce32f43f56442ebf9b1b488ce7e2f3b9c5897ff234690" exitCode=0 Nov 21 20:54:37 crc kubenswrapper[4727]: I1121 20:54:37.786066 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bllfm" event={"ID":"8885e970-9f7f-4dda-bf15-790c4012baf2","Type":"ContainerDied","Data":"eab279dfd82e81fddd4ce32f43f56442ebf9b1b488ce7e2f3b9c5897ff234690"} Nov 21 20:54:38 crc kubenswrapper[4727]: I1121 20:54:38.797028 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bllfm" event={"ID":"8885e970-9f7f-4dda-bf15-790c4012baf2","Type":"ContainerStarted","Data":"a527e3bcc9223edbd20910ae0135c04264837284da0427315772de703c25f91b"} Nov 21 20:54:38 crc kubenswrapper[4727]: I1121 20:54:38.798877 4727 generic.go:334] "Generic (PLEG): container finished" podID="fb913d00-607a-4f54-81fb-4589183e0e95" containerID="9accb27c1d835045650b0a0baefda5d3264c545ec73b1de21746dacd150d43a1" exitCode=0 Nov 21 20:54:38 crc kubenswrapper[4727]: I1121 20:54:38.798917 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mr2pl" event={"ID":"fb913d00-607a-4f54-81fb-4589183e0e95","Type":"ContainerDied","Data":"9accb27c1d835045650b0a0baefda5d3264c545ec73b1de21746dacd150d43a1"} Nov 21 20:54:39 crc kubenswrapper[4727]: I1121 20:54:39.828607 4727 generic.go:334] "Generic (PLEG): container finished" podID="8885e970-9f7f-4dda-bf15-790c4012baf2" 
containerID="a527e3bcc9223edbd20910ae0135c04264837284da0427315772de703c25f91b" exitCode=0 Nov 21 20:54:39 crc kubenswrapper[4727]: I1121 20:54:39.829225 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bllfm" event={"ID":"8885e970-9f7f-4dda-bf15-790c4012baf2","Type":"ContainerDied","Data":"a527e3bcc9223edbd20910ae0135c04264837284da0427315772de703c25f91b"} Nov 21 20:54:39 crc kubenswrapper[4727]: I1121 20:54:39.836216 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mr2pl" event={"ID":"fb913d00-607a-4f54-81fb-4589183e0e95","Type":"ContainerStarted","Data":"9736a32ebcbd4f34cee42c56aef3659d43a40cf31fefa192e82739e3ea021949"} Nov 21 20:54:39 crc kubenswrapper[4727]: I1121 20:54:39.873674 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mr2pl" podStartSLOduration=2.45146532 podStartE2EDuration="4.8736453s" podCreationTimestamp="2025-11-21 20:54:35 +0000 UTC" firstStartedPulling="2025-11-21 20:54:36.763407471 +0000 UTC m=+2881.949592515" lastFinishedPulling="2025-11-21 20:54:39.185587451 +0000 UTC m=+2884.371772495" observedRunningTime="2025-11-21 20:54:39.867551283 +0000 UTC m=+2885.053736327" watchObservedRunningTime="2025-11-21 20:54:39.8736453 +0000 UTC m=+2885.059830354" Nov 21 20:54:40 crc kubenswrapper[4727]: I1121 20:54:40.849849 4727 generic.go:334] "Generic (PLEG): container finished" podID="06a62eb8-65b3-4e96-8103-e9386bbca277" containerID="ed943fe9e3c9085f69a999a784828f70b1d8b749691345a9b6b22965bcd807a2" exitCode=0 Nov 21 20:54:40 crc kubenswrapper[4727]: I1121 20:54:40.849937 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-g6mt8" event={"ID":"06a62eb8-65b3-4e96-8103-e9386bbca277","Type":"ContainerDied","Data":"ed943fe9e3c9085f69a999a784828f70b1d8b749691345a9b6b22965bcd807a2"} Nov 21 20:54:40 crc kubenswrapper[4727]: I1121 
20:54:40.854064 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bllfm" event={"ID":"8885e970-9f7f-4dda-bf15-790c4012baf2","Type":"ContainerStarted","Data":"00666762e1cd11d87a8e19f9ccefd4170305c1871b7066ab4eecbbbdcb27bc09"} Nov 21 20:54:40 crc kubenswrapper[4727]: I1121 20:54:40.888273 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bllfm" podStartSLOduration=3.217493519 podStartE2EDuration="5.888252453s" podCreationTimestamp="2025-11-21 20:54:35 +0000 UTC" firstStartedPulling="2025-11-21 20:54:37.79115401 +0000 UTC m=+2882.977339054" lastFinishedPulling="2025-11-21 20:54:40.461912954 +0000 UTC m=+2885.648097988" observedRunningTime="2025-11-21 20:54:40.879487742 +0000 UTC m=+2886.065672786" watchObservedRunningTime="2025-11-21 20:54:40.888252453 +0000 UTC m=+2886.074437497" Nov 21 20:54:42 crc kubenswrapper[4727]: I1121 20:54:42.451043 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-g6mt8" Nov 21 20:54:42 crc kubenswrapper[4727]: I1121 20:54:42.476752 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/06a62eb8-65b3-4e96-8103-e9386bbca277-logging-compute-config-data-1\") pod \"06a62eb8-65b3-4e96-8103-e9386bbca277\" (UID: \"06a62eb8-65b3-4e96-8103-e9386bbca277\") " Nov 21 20:54:42 crc kubenswrapper[4727]: I1121 20:54:42.476806 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/06a62eb8-65b3-4e96-8103-e9386bbca277-logging-compute-config-data-0\") pod \"06a62eb8-65b3-4e96-8103-e9386bbca277\" (UID: \"06a62eb8-65b3-4e96-8103-e9386bbca277\") " Nov 21 20:54:42 crc kubenswrapper[4727]: I1121 20:54:42.476875 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/06a62eb8-65b3-4e96-8103-e9386bbca277-inventory\") pod \"06a62eb8-65b3-4e96-8103-e9386bbca277\" (UID: \"06a62eb8-65b3-4e96-8103-e9386bbca277\") " Nov 21 20:54:42 crc kubenswrapper[4727]: I1121 20:54:42.477076 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-msj8b\" (UniqueName: \"kubernetes.io/projected/06a62eb8-65b3-4e96-8103-e9386bbca277-kube-api-access-msj8b\") pod \"06a62eb8-65b3-4e96-8103-e9386bbca277\" (UID: \"06a62eb8-65b3-4e96-8103-e9386bbca277\") " Nov 21 20:54:42 crc kubenswrapper[4727]: I1121 20:54:42.477125 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/06a62eb8-65b3-4e96-8103-e9386bbca277-ssh-key\") pod \"06a62eb8-65b3-4e96-8103-e9386bbca277\" (UID: \"06a62eb8-65b3-4e96-8103-e9386bbca277\") " Nov 21 20:54:42 crc kubenswrapper[4727]: I1121 20:54:42.485181 4727 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06a62eb8-65b3-4e96-8103-e9386bbca277-kube-api-access-msj8b" (OuterVolumeSpecName: "kube-api-access-msj8b") pod "06a62eb8-65b3-4e96-8103-e9386bbca277" (UID: "06a62eb8-65b3-4e96-8103-e9386bbca277"). InnerVolumeSpecName "kube-api-access-msj8b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:54:42 crc kubenswrapper[4727]: I1121 20:54:42.516658 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06a62eb8-65b3-4e96-8103-e9386bbca277-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "06a62eb8-65b3-4e96-8103-e9386bbca277" (UID: "06a62eb8-65b3-4e96-8103-e9386bbca277"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:54:42 crc kubenswrapper[4727]: I1121 20:54:42.518136 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06a62eb8-65b3-4e96-8103-e9386bbca277-logging-compute-config-data-1" (OuterVolumeSpecName: "logging-compute-config-data-1") pod "06a62eb8-65b3-4e96-8103-e9386bbca277" (UID: "06a62eb8-65b3-4e96-8103-e9386bbca277"). InnerVolumeSpecName "logging-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:54:42 crc kubenswrapper[4727]: I1121 20:54:42.528445 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06a62eb8-65b3-4e96-8103-e9386bbca277-inventory" (OuterVolumeSpecName: "inventory") pod "06a62eb8-65b3-4e96-8103-e9386bbca277" (UID: "06a62eb8-65b3-4e96-8103-e9386bbca277"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:54:42 crc kubenswrapper[4727]: I1121 20:54:42.538951 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06a62eb8-65b3-4e96-8103-e9386bbca277-logging-compute-config-data-0" (OuterVolumeSpecName: "logging-compute-config-data-0") pod "06a62eb8-65b3-4e96-8103-e9386bbca277" (UID: "06a62eb8-65b3-4e96-8103-e9386bbca277"). InnerVolumeSpecName "logging-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 20:54:42 crc kubenswrapper[4727]: I1121 20:54:42.579456 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-msj8b\" (UniqueName: \"kubernetes.io/projected/06a62eb8-65b3-4e96-8103-e9386bbca277-kube-api-access-msj8b\") on node \"crc\" DevicePath \"\"" Nov 21 20:54:42 crc kubenswrapper[4727]: I1121 20:54:42.579488 4727 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/06a62eb8-65b3-4e96-8103-e9386bbca277-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 20:54:42 crc kubenswrapper[4727]: I1121 20:54:42.579499 4727 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/06a62eb8-65b3-4e96-8103-e9386bbca277-logging-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Nov 21 20:54:42 crc kubenswrapper[4727]: I1121 20:54:42.579514 4727 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/06a62eb8-65b3-4e96-8103-e9386bbca277-logging-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Nov 21 20:54:42 crc kubenswrapper[4727]: I1121 20:54:42.579524 4727 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/06a62eb8-65b3-4e96-8103-e9386bbca277-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 20:54:42 crc kubenswrapper[4727]: I1121 20:54:42.874882 4727 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-g6mt8" event={"ID":"06a62eb8-65b3-4e96-8103-e9386bbca277","Type":"ContainerDied","Data":"663d2324808b23b982ebd12a3e418c0f61b7dc0156e6b8c1a82ccc16245f5c1a"} Nov 21 20:54:42 crc kubenswrapper[4727]: I1121 20:54:42.874917 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="663d2324808b23b982ebd12a3e418c0f61b7dc0156e6b8c1a82ccc16245f5c1a" Nov 21 20:54:42 crc kubenswrapper[4727]: I1121 20:54:42.874928 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-g6mt8" Nov 21 20:54:45 crc kubenswrapper[4727]: I1121 20:54:45.544449 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mr2pl" Nov 21 20:54:45 crc kubenswrapper[4727]: I1121 20:54:45.545639 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mr2pl" Nov 21 20:54:45 crc kubenswrapper[4727]: I1121 20:54:45.598880 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mr2pl" Nov 21 20:54:45 crc kubenswrapper[4727]: I1121 20:54:45.960075 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mr2pl" Nov 21 20:54:46 crc kubenswrapper[4727]: I1121 20:54:46.182843 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bllfm" Nov 21 20:54:46 crc kubenswrapper[4727]: I1121 20:54:46.183164 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bllfm" Nov 21 20:54:46 crc kubenswrapper[4727]: I1121 20:54:46.209753 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mr2pl"] Nov 21 
20:54:46 crc kubenswrapper[4727]: I1121 20:54:46.252025 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bllfm" Nov 21 20:54:46 crc kubenswrapper[4727]: I1121 20:54:46.983776 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bllfm" Nov 21 20:54:47 crc kubenswrapper[4727]: I1121 20:54:47.939730 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mr2pl" podUID="fb913d00-607a-4f54-81fb-4589183e0e95" containerName="registry-server" containerID="cri-o://9736a32ebcbd4f34cee42c56aef3659d43a40cf31fefa192e82739e3ea021949" gracePeriod=2 Nov 21 20:54:48 crc kubenswrapper[4727]: I1121 20:54:48.459638 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mr2pl" Nov 21 20:54:48 crc kubenswrapper[4727]: I1121 20:54:48.608243 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bllfm"] Nov 21 20:54:48 crc kubenswrapper[4727]: I1121 20:54:48.642600 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb913d00-607a-4f54-81fb-4589183e0e95-utilities\") pod \"fb913d00-607a-4f54-81fb-4589183e0e95\" (UID: \"fb913d00-607a-4f54-81fb-4589183e0e95\") " Nov 21 20:54:48 crc kubenswrapper[4727]: I1121 20:54:48.642928 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb913d00-607a-4f54-81fb-4589183e0e95-catalog-content\") pod \"fb913d00-607a-4f54-81fb-4589183e0e95\" (UID: \"fb913d00-607a-4f54-81fb-4589183e0e95\") " Nov 21 20:54:48 crc kubenswrapper[4727]: I1121 20:54:48.642984 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hltqs\" (UniqueName: 
\"kubernetes.io/projected/fb913d00-607a-4f54-81fb-4589183e0e95-kube-api-access-hltqs\") pod \"fb913d00-607a-4f54-81fb-4589183e0e95\" (UID: \"fb913d00-607a-4f54-81fb-4589183e0e95\") " Nov 21 20:54:48 crc kubenswrapper[4727]: I1121 20:54:48.643492 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb913d00-607a-4f54-81fb-4589183e0e95-utilities" (OuterVolumeSpecName: "utilities") pod "fb913d00-607a-4f54-81fb-4589183e0e95" (UID: "fb913d00-607a-4f54-81fb-4589183e0e95"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:54:48 crc kubenswrapper[4727]: I1121 20:54:48.643644 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb913d00-607a-4f54-81fb-4589183e0e95-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 20:54:48 crc kubenswrapper[4727]: I1121 20:54:48.650157 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb913d00-607a-4f54-81fb-4589183e0e95-kube-api-access-hltqs" (OuterVolumeSpecName: "kube-api-access-hltqs") pod "fb913d00-607a-4f54-81fb-4589183e0e95" (UID: "fb913d00-607a-4f54-81fb-4589183e0e95"). InnerVolumeSpecName "kube-api-access-hltqs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:54:48 crc kubenswrapper[4727]: I1121 20:54:48.687817 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb913d00-607a-4f54-81fb-4589183e0e95-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fb913d00-607a-4f54-81fb-4589183e0e95" (UID: "fb913d00-607a-4f54-81fb-4589183e0e95"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:54:48 crc kubenswrapper[4727]: I1121 20:54:48.745370 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb913d00-607a-4f54-81fb-4589183e0e95-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 20:54:48 crc kubenswrapper[4727]: I1121 20:54:48.745414 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hltqs\" (UniqueName: \"kubernetes.io/projected/fb913d00-607a-4f54-81fb-4589183e0e95-kube-api-access-hltqs\") on node \"crc\" DevicePath \"\"" Nov 21 20:54:48 crc kubenswrapper[4727]: I1121 20:54:48.950380 4727 generic.go:334] "Generic (PLEG): container finished" podID="fb913d00-607a-4f54-81fb-4589183e0e95" containerID="9736a32ebcbd4f34cee42c56aef3659d43a40cf31fefa192e82739e3ea021949" exitCode=0 Nov 21 20:54:48 crc kubenswrapper[4727]: I1121 20:54:48.950450 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mr2pl" Nov 21 20:54:48 crc kubenswrapper[4727]: I1121 20:54:48.950478 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mr2pl" event={"ID":"fb913d00-607a-4f54-81fb-4589183e0e95","Type":"ContainerDied","Data":"9736a32ebcbd4f34cee42c56aef3659d43a40cf31fefa192e82739e3ea021949"} Nov 21 20:54:48 crc kubenswrapper[4727]: I1121 20:54:48.950510 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mr2pl" event={"ID":"fb913d00-607a-4f54-81fb-4589183e0e95","Type":"ContainerDied","Data":"5a21c8db47c91c19028333e87aa973eeda7aa055b28767cc4a3beb696c7bba9b"} Nov 21 20:54:48 crc kubenswrapper[4727]: I1121 20:54:48.950527 4727 scope.go:117] "RemoveContainer" containerID="9736a32ebcbd4f34cee42c56aef3659d43a40cf31fefa192e82739e3ea021949" Nov 21 20:54:48 crc kubenswrapper[4727]: I1121 20:54:48.972232 4727 scope.go:117] "RemoveContainer" 
containerID="9accb27c1d835045650b0a0baefda5d3264c545ec73b1de21746dacd150d43a1" Nov 21 20:54:49 crc kubenswrapper[4727]: I1121 20:54:49.001012 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mr2pl"] Nov 21 20:54:49 crc kubenswrapper[4727]: I1121 20:54:49.013596 4727 scope.go:117] "RemoveContainer" containerID="b89f61e0062b8b24d75156ed7ab3851e89ad142d544954f3cf638ea2ee573a54" Nov 21 20:54:49 crc kubenswrapper[4727]: I1121 20:54:49.014226 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mr2pl"] Nov 21 20:54:49 crc kubenswrapper[4727]: I1121 20:54:49.069091 4727 scope.go:117] "RemoveContainer" containerID="9736a32ebcbd4f34cee42c56aef3659d43a40cf31fefa192e82739e3ea021949" Nov 21 20:54:49 crc kubenswrapper[4727]: E1121 20:54:49.070029 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9736a32ebcbd4f34cee42c56aef3659d43a40cf31fefa192e82739e3ea021949\": container with ID starting with 9736a32ebcbd4f34cee42c56aef3659d43a40cf31fefa192e82739e3ea021949 not found: ID does not exist" containerID="9736a32ebcbd4f34cee42c56aef3659d43a40cf31fefa192e82739e3ea021949" Nov 21 20:54:49 crc kubenswrapper[4727]: I1121 20:54:49.070093 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9736a32ebcbd4f34cee42c56aef3659d43a40cf31fefa192e82739e3ea021949"} err="failed to get container status \"9736a32ebcbd4f34cee42c56aef3659d43a40cf31fefa192e82739e3ea021949\": rpc error: code = NotFound desc = could not find container \"9736a32ebcbd4f34cee42c56aef3659d43a40cf31fefa192e82739e3ea021949\": container with ID starting with 9736a32ebcbd4f34cee42c56aef3659d43a40cf31fefa192e82739e3ea021949 not found: ID does not exist" Nov 21 20:54:49 crc kubenswrapper[4727]: I1121 20:54:49.070143 4727 scope.go:117] "RemoveContainer" 
containerID="9accb27c1d835045650b0a0baefda5d3264c545ec73b1de21746dacd150d43a1" Nov 21 20:54:49 crc kubenswrapper[4727]: E1121 20:54:49.070992 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9accb27c1d835045650b0a0baefda5d3264c545ec73b1de21746dacd150d43a1\": container with ID starting with 9accb27c1d835045650b0a0baefda5d3264c545ec73b1de21746dacd150d43a1 not found: ID does not exist" containerID="9accb27c1d835045650b0a0baefda5d3264c545ec73b1de21746dacd150d43a1" Nov 21 20:54:49 crc kubenswrapper[4727]: I1121 20:54:49.071066 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9accb27c1d835045650b0a0baefda5d3264c545ec73b1de21746dacd150d43a1"} err="failed to get container status \"9accb27c1d835045650b0a0baefda5d3264c545ec73b1de21746dacd150d43a1\": rpc error: code = NotFound desc = could not find container \"9accb27c1d835045650b0a0baefda5d3264c545ec73b1de21746dacd150d43a1\": container with ID starting with 9accb27c1d835045650b0a0baefda5d3264c545ec73b1de21746dacd150d43a1 not found: ID does not exist" Nov 21 20:54:49 crc kubenswrapper[4727]: I1121 20:54:49.071122 4727 scope.go:117] "RemoveContainer" containerID="b89f61e0062b8b24d75156ed7ab3851e89ad142d544954f3cf638ea2ee573a54" Nov 21 20:54:49 crc kubenswrapper[4727]: E1121 20:54:49.071683 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b89f61e0062b8b24d75156ed7ab3851e89ad142d544954f3cf638ea2ee573a54\": container with ID starting with b89f61e0062b8b24d75156ed7ab3851e89ad142d544954f3cf638ea2ee573a54 not found: ID does not exist" containerID="b89f61e0062b8b24d75156ed7ab3851e89ad142d544954f3cf638ea2ee573a54" Nov 21 20:54:49 crc kubenswrapper[4727]: I1121 20:54:49.071717 4727 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b89f61e0062b8b24d75156ed7ab3851e89ad142d544954f3cf638ea2ee573a54"} err="failed to get container status \"b89f61e0062b8b24d75156ed7ab3851e89ad142d544954f3cf638ea2ee573a54\": rpc error: code = NotFound desc = could not find container \"b89f61e0062b8b24d75156ed7ab3851e89ad142d544954f3cf638ea2ee573a54\": container with ID starting with b89f61e0062b8b24d75156ed7ab3851e89ad142d544954f3cf638ea2ee573a54 not found: ID does not exist" Nov 21 20:54:49 crc kubenswrapper[4727]: I1121 20:54:49.513243 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb913d00-607a-4f54-81fb-4589183e0e95" path="/var/lib/kubelet/pods/fb913d00-607a-4f54-81fb-4589183e0e95/volumes" Nov 21 20:54:49 crc kubenswrapper[4727]: I1121 20:54:49.967508 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bllfm" podUID="8885e970-9f7f-4dda-bf15-790c4012baf2" containerName="registry-server" containerID="cri-o://00666762e1cd11d87a8e19f9ccefd4170305c1871b7066ab4eecbbbdcb27bc09" gracePeriod=2 Nov 21 20:54:50 crc kubenswrapper[4727]: I1121 20:54:50.470928 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bllfm" Nov 21 20:54:50 crc kubenswrapper[4727]: I1121 20:54:50.600459 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjdft\" (UniqueName: \"kubernetes.io/projected/8885e970-9f7f-4dda-bf15-790c4012baf2-kube-api-access-gjdft\") pod \"8885e970-9f7f-4dda-bf15-790c4012baf2\" (UID: \"8885e970-9f7f-4dda-bf15-790c4012baf2\") " Nov 21 20:54:50 crc kubenswrapper[4727]: I1121 20:54:50.601653 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8885e970-9f7f-4dda-bf15-790c4012baf2-catalog-content\") pod \"8885e970-9f7f-4dda-bf15-790c4012baf2\" (UID: \"8885e970-9f7f-4dda-bf15-790c4012baf2\") " Nov 21 20:54:50 crc kubenswrapper[4727]: I1121 20:54:50.601857 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8885e970-9f7f-4dda-bf15-790c4012baf2-utilities\") pod \"8885e970-9f7f-4dda-bf15-790c4012baf2\" (UID: \"8885e970-9f7f-4dda-bf15-790c4012baf2\") " Nov 21 20:54:50 crc kubenswrapper[4727]: I1121 20:54:50.604170 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8885e970-9f7f-4dda-bf15-790c4012baf2-utilities" (OuterVolumeSpecName: "utilities") pod "8885e970-9f7f-4dda-bf15-790c4012baf2" (UID: "8885e970-9f7f-4dda-bf15-790c4012baf2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:54:50 crc kubenswrapper[4727]: I1121 20:54:50.636703 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8885e970-9f7f-4dda-bf15-790c4012baf2-kube-api-access-gjdft" (OuterVolumeSpecName: "kube-api-access-gjdft") pod "8885e970-9f7f-4dda-bf15-790c4012baf2" (UID: "8885e970-9f7f-4dda-bf15-790c4012baf2"). InnerVolumeSpecName "kube-api-access-gjdft". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:54:50 crc kubenswrapper[4727]: I1121 20:54:50.670283 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8885e970-9f7f-4dda-bf15-790c4012baf2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8885e970-9f7f-4dda-bf15-790c4012baf2" (UID: "8885e970-9f7f-4dda-bf15-790c4012baf2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:54:50 crc kubenswrapper[4727]: I1121 20:54:50.706366 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjdft\" (UniqueName: \"kubernetes.io/projected/8885e970-9f7f-4dda-bf15-790c4012baf2-kube-api-access-gjdft\") on node \"crc\" DevicePath \"\"" Nov 21 20:54:50 crc kubenswrapper[4727]: I1121 20:54:50.706597 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8885e970-9f7f-4dda-bf15-790c4012baf2-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 20:54:50 crc kubenswrapper[4727]: I1121 20:54:50.706663 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8885e970-9f7f-4dda-bf15-790c4012baf2-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 20:54:50 crc kubenswrapper[4727]: I1121 20:54:50.981658 4727 generic.go:334] "Generic (PLEG): container finished" podID="8885e970-9f7f-4dda-bf15-790c4012baf2" containerID="00666762e1cd11d87a8e19f9ccefd4170305c1871b7066ab4eecbbbdcb27bc09" exitCode=0 Nov 21 20:54:50 crc kubenswrapper[4727]: I1121 20:54:50.981715 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bllfm" event={"ID":"8885e970-9f7f-4dda-bf15-790c4012baf2","Type":"ContainerDied","Data":"00666762e1cd11d87a8e19f9ccefd4170305c1871b7066ab4eecbbbdcb27bc09"} Nov 21 20:54:50 crc kubenswrapper[4727]: I1121 20:54:50.981821 4727 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bllfm" Nov 21 20:54:50 crc kubenswrapper[4727]: I1121 20:54:50.982100 4727 scope.go:117] "RemoveContainer" containerID="00666762e1cd11d87a8e19f9ccefd4170305c1871b7066ab4eecbbbdcb27bc09" Nov 21 20:54:50 crc kubenswrapper[4727]: I1121 20:54:50.982006 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bllfm" event={"ID":"8885e970-9f7f-4dda-bf15-790c4012baf2","Type":"ContainerDied","Data":"b5f09d8c8da243d4e2e996412145f0d5c8395c4197c44dc4cf090babbe724eb7"} Nov 21 20:54:51 crc kubenswrapper[4727]: I1121 20:54:51.001126 4727 scope.go:117] "RemoveContainer" containerID="a527e3bcc9223edbd20910ae0135c04264837284da0427315772de703c25f91b" Nov 21 20:54:51 crc kubenswrapper[4727]: I1121 20:54:51.031727 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bllfm"] Nov 21 20:54:51 crc kubenswrapper[4727]: I1121 20:54:51.036327 4727 scope.go:117] "RemoveContainer" containerID="eab279dfd82e81fddd4ce32f43f56442ebf9b1b488ce7e2f3b9c5897ff234690" Nov 21 20:54:51 crc kubenswrapper[4727]: I1121 20:54:51.046043 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bllfm"] Nov 21 20:54:51 crc kubenswrapper[4727]: I1121 20:54:51.111370 4727 scope.go:117] "RemoveContainer" containerID="00666762e1cd11d87a8e19f9ccefd4170305c1871b7066ab4eecbbbdcb27bc09" Nov 21 20:54:51 crc kubenswrapper[4727]: E1121 20:54:51.111913 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00666762e1cd11d87a8e19f9ccefd4170305c1871b7066ab4eecbbbdcb27bc09\": container with ID starting with 00666762e1cd11d87a8e19f9ccefd4170305c1871b7066ab4eecbbbdcb27bc09 not found: ID does not exist" containerID="00666762e1cd11d87a8e19f9ccefd4170305c1871b7066ab4eecbbbdcb27bc09" Nov 21 20:54:51 crc kubenswrapper[4727]: I1121 20:54:51.111947 4727 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00666762e1cd11d87a8e19f9ccefd4170305c1871b7066ab4eecbbbdcb27bc09"} err="failed to get container status \"00666762e1cd11d87a8e19f9ccefd4170305c1871b7066ab4eecbbbdcb27bc09\": rpc error: code = NotFound desc = could not find container \"00666762e1cd11d87a8e19f9ccefd4170305c1871b7066ab4eecbbbdcb27bc09\": container with ID starting with 00666762e1cd11d87a8e19f9ccefd4170305c1871b7066ab4eecbbbdcb27bc09 not found: ID does not exist" Nov 21 20:54:51 crc kubenswrapper[4727]: I1121 20:54:51.111988 4727 scope.go:117] "RemoveContainer" containerID="a527e3bcc9223edbd20910ae0135c04264837284da0427315772de703c25f91b" Nov 21 20:54:51 crc kubenswrapper[4727]: E1121 20:54:51.112400 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a527e3bcc9223edbd20910ae0135c04264837284da0427315772de703c25f91b\": container with ID starting with a527e3bcc9223edbd20910ae0135c04264837284da0427315772de703c25f91b not found: ID does not exist" containerID="a527e3bcc9223edbd20910ae0135c04264837284da0427315772de703c25f91b" Nov 21 20:54:51 crc kubenswrapper[4727]: I1121 20:54:51.112426 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a527e3bcc9223edbd20910ae0135c04264837284da0427315772de703c25f91b"} err="failed to get container status \"a527e3bcc9223edbd20910ae0135c04264837284da0427315772de703c25f91b\": rpc error: code = NotFound desc = could not find container \"a527e3bcc9223edbd20910ae0135c04264837284da0427315772de703c25f91b\": container with ID starting with a527e3bcc9223edbd20910ae0135c04264837284da0427315772de703c25f91b not found: ID does not exist" Nov 21 20:54:51 crc kubenswrapper[4727]: I1121 20:54:51.112443 4727 scope.go:117] "RemoveContainer" containerID="eab279dfd82e81fddd4ce32f43f56442ebf9b1b488ce7e2f3b9c5897ff234690" Nov 21 20:54:51 crc kubenswrapper[4727]: E1121 
20:54:51.112735 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eab279dfd82e81fddd4ce32f43f56442ebf9b1b488ce7e2f3b9c5897ff234690\": container with ID starting with eab279dfd82e81fddd4ce32f43f56442ebf9b1b488ce7e2f3b9c5897ff234690 not found: ID does not exist" containerID="eab279dfd82e81fddd4ce32f43f56442ebf9b1b488ce7e2f3b9c5897ff234690" Nov 21 20:54:51 crc kubenswrapper[4727]: I1121 20:54:51.112801 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eab279dfd82e81fddd4ce32f43f56442ebf9b1b488ce7e2f3b9c5897ff234690"} err="failed to get container status \"eab279dfd82e81fddd4ce32f43f56442ebf9b1b488ce7e2f3b9c5897ff234690\": rpc error: code = NotFound desc = could not find container \"eab279dfd82e81fddd4ce32f43f56442ebf9b1b488ce7e2f3b9c5897ff234690\": container with ID starting with eab279dfd82e81fddd4ce32f43f56442ebf9b1b488ce7e2f3b9c5897ff234690 not found: ID does not exist" Nov 21 20:54:51 crc kubenswrapper[4727]: I1121 20:54:51.517752 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8885e970-9f7f-4dda-bf15-790c4012baf2" path="/var/lib/kubelet/pods/8885e970-9f7f-4dda-bf15-790c4012baf2/volumes" Nov 21 20:55:13 crc kubenswrapper[4727]: I1121 20:55:13.335628 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 20:55:13 crc kubenswrapper[4727]: I1121 20:55:13.336292 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Nov 21 20:55:43 crc kubenswrapper[4727]: I1121 20:55:43.335356 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 20:55:43 crc kubenswrapper[4727]: I1121 20:55:43.336038 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 20:56:13 crc kubenswrapper[4727]: I1121 20:56:13.335725 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 20:56:13 crc kubenswrapper[4727]: I1121 20:56:13.336224 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 20:56:13 crc kubenswrapper[4727]: I1121 20:56:13.336269 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" Nov 21 20:56:13 crc kubenswrapper[4727]: I1121 20:56:13.337217 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d44f8929ab437bc608c3191fbcd6c830600c599b713b271401f02377130c5307"} 
pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 20:56:13 crc kubenswrapper[4727]: I1121 20:56:13.337288 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" containerID="cri-o://d44f8929ab437bc608c3191fbcd6c830600c599b713b271401f02377130c5307" gracePeriod=600 Nov 21 20:56:13 crc kubenswrapper[4727]: E1121 20:56:13.465897 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 20:56:13 crc kubenswrapper[4727]: I1121 20:56:13.916599 4727 generic.go:334] "Generic (PLEG): container finished" podID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerID="d44f8929ab437bc608c3191fbcd6c830600c599b713b271401f02377130c5307" exitCode=0 Nov 21 20:56:13 crc kubenswrapper[4727]: I1121 20:56:13.916651 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" event={"ID":"b58aef8f-f223-47d8-a2e6-4a80aeeeec42","Type":"ContainerDied","Data":"d44f8929ab437bc608c3191fbcd6c830600c599b713b271401f02377130c5307"} Nov 21 20:56:13 crc kubenswrapper[4727]: I1121 20:56:13.916752 4727 scope.go:117] "RemoveContainer" containerID="879bff46486447abf99a62cf06cf8872baaf779ffea76353295d51f5353388c2" Nov 21 20:56:13 crc kubenswrapper[4727]: I1121 20:56:13.917591 4727 scope.go:117] "RemoveContainer" containerID="d44f8929ab437bc608c3191fbcd6c830600c599b713b271401f02377130c5307" Nov 
21 20:56:13 crc kubenswrapper[4727]: E1121 20:56:13.917980 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 20:56:26 crc kubenswrapper[4727]: I1121 20:56:26.499198 4727 scope.go:117] "RemoveContainer" containerID="d44f8929ab437bc608c3191fbcd6c830600c599b713b271401f02377130c5307" Nov 21 20:56:26 crc kubenswrapper[4727]: E1121 20:56:26.499925 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 20:56:39 crc kubenswrapper[4727]: I1121 20:56:39.499920 4727 scope.go:117] "RemoveContainer" containerID="d44f8929ab437bc608c3191fbcd6c830600c599b713b271401f02377130c5307" Nov 21 20:56:39 crc kubenswrapper[4727]: E1121 20:56:39.500745 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 20:56:47 crc kubenswrapper[4727]: I1121 20:56:47.669700 4727 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/certified-operators-pr699"] Nov 21 20:56:47 crc kubenswrapper[4727]: E1121 20:56:47.671845 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06a62eb8-65b3-4e96-8103-e9386bbca277" containerName="logging-edpm-deployment-openstack-edpm-ipam" Nov 21 20:56:47 crc kubenswrapper[4727]: I1121 20:56:47.671941 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="06a62eb8-65b3-4e96-8103-e9386bbca277" containerName="logging-edpm-deployment-openstack-edpm-ipam" Nov 21 20:56:47 crc kubenswrapper[4727]: E1121 20:56:47.672083 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8885e970-9f7f-4dda-bf15-790c4012baf2" containerName="registry-server" Nov 21 20:56:47 crc kubenswrapper[4727]: I1121 20:56:47.672160 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="8885e970-9f7f-4dda-bf15-790c4012baf2" containerName="registry-server" Nov 21 20:56:47 crc kubenswrapper[4727]: E1121 20:56:47.672245 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb913d00-607a-4f54-81fb-4589183e0e95" containerName="extract-content" Nov 21 20:56:47 crc kubenswrapper[4727]: I1121 20:56:47.672316 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb913d00-607a-4f54-81fb-4589183e0e95" containerName="extract-content" Nov 21 20:56:47 crc kubenswrapper[4727]: E1121 20:56:47.672396 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8885e970-9f7f-4dda-bf15-790c4012baf2" containerName="extract-utilities" Nov 21 20:56:47 crc kubenswrapper[4727]: I1121 20:56:47.672465 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="8885e970-9f7f-4dda-bf15-790c4012baf2" containerName="extract-utilities" Nov 21 20:56:47 crc kubenswrapper[4727]: E1121 20:56:47.672554 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8885e970-9f7f-4dda-bf15-790c4012baf2" containerName="extract-content" Nov 21 20:56:47 crc kubenswrapper[4727]: I1121 20:56:47.672642 4727 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="8885e970-9f7f-4dda-bf15-790c4012baf2" containerName="extract-content" Nov 21 20:56:47 crc kubenswrapper[4727]: E1121 20:56:47.672726 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb913d00-607a-4f54-81fb-4589183e0e95" containerName="registry-server" Nov 21 20:56:47 crc kubenswrapper[4727]: I1121 20:56:47.672799 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb913d00-607a-4f54-81fb-4589183e0e95" containerName="registry-server" Nov 21 20:56:47 crc kubenswrapper[4727]: E1121 20:56:47.672879 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb913d00-607a-4f54-81fb-4589183e0e95" containerName="extract-utilities" Nov 21 20:56:47 crc kubenswrapper[4727]: I1121 20:56:47.672949 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb913d00-607a-4f54-81fb-4589183e0e95" containerName="extract-utilities" Nov 21 20:56:47 crc kubenswrapper[4727]: I1121 20:56:47.673349 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="06a62eb8-65b3-4e96-8103-e9386bbca277" containerName="logging-edpm-deployment-openstack-edpm-ipam" Nov 21 20:56:47 crc kubenswrapper[4727]: I1121 20:56:47.673459 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="8885e970-9f7f-4dda-bf15-790c4012baf2" containerName="registry-server" Nov 21 20:56:47 crc kubenswrapper[4727]: I1121 20:56:47.673571 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb913d00-607a-4f54-81fb-4589183e0e95" containerName="registry-server" Nov 21 20:56:47 crc kubenswrapper[4727]: I1121 20:56:47.675773 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pr699" Nov 21 20:56:47 crc kubenswrapper[4727]: I1121 20:56:47.680693 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pr699"] Nov 21 20:56:47 crc kubenswrapper[4727]: I1121 20:56:47.792916 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qs2q\" (UniqueName: \"kubernetes.io/projected/e73608ae-190a-4c12-8bd1-d14a5d898e97-kube-api-access-9qs2q\") pod \"certified-operators-pr699\" (UID: \"e73608ae-190a-4c12-8bd1-d14a5d898e97\") " pod="openshift-marketplace/certified-operators-pr699" Nov 21 20:56:47 crc kubenswrapper[4727]: I1121 20:56:47.793061 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e73608ae-190a-4c12-8bd1-d14a5d898e97-catalog-content\") pod \"certified-operators-pr699\" (UID: \"e73608ae-190a-4c12-8bd1-d14a5d898e97\") " pod="openshift-marketplace/certified-operators-pr699" Nov 21 20:56:47 crc kubenswrapper[4727]: I1121 20:56:47.793142 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e73608ae-190a-4c12-8bd1-d14a5d898e97-utilities\") pod \"certified-operators-pr699\" (UID: \"e73608ae-190a-4c12-8bd1-d14a5d898e97\") " pod="openshift-marketplace/certified-operators-pr699" Nov 21 20:56:47 crc kubenswrapper[4727]: I1121 20:56:47.895707 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qs2q\" (UniqueName: \"kubernetes.io/projected/e73608ae-190a-4c12-8bd1-d14a5d898e97-kube-api-access-9qs2q\") pod \"certified-operators-pr699\" (UID: \"e73608ae-190a-4c12-8bd1-d14a5d898e97\") " pod="openshift-marketplace/certified-operators-pr699" Nov 21 20:56:47 crc kubenswrapper[4727]: I1121 20:56:47.895817 4727 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e73608ae-190a-4c12-8bd1-d14a5d898e97-catalog-content\") pod \"certified-operators-pr699\" (UID: \"e73608ae-190a-4c12-8bd1-d14a5d898e97\") " pod="openshift-marketplace/certified-operators-pr699" Nov 21 20:56:47 crc kubenswrapper[4727]: I1121 20:56:47.895909 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e73608ae-190a-4c12-8bd1-d14a5d898e97-utilities\") pod \"certified-operators-pr699\" (UID: \"e73608ae-190a-4c12-8bd1-d14a5d898e97\") " pod="openshift-marketplace/certified-operators-pr699" Nov 21 20:56:47 crc kubenswrapper[4727]: I1121 20:56:47.896428 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e73608ae-190a-4c12-8bd1-d14a5d898e97-catalog-content\") pod \"certified-operators-pr699\" (UID: \"e73608ae-190a-4c12-8bd1-d14a5d898e97\") " pod="openshift-marketplace/certified-operators-pr699" Nov 21 20:56:47 crc kubenswrapper[4727]: I1121 20:56:47.896443 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e73608ae-190a-4c12-8bd1-d14a5d898e97-utilities\") pod \"certified-operators-pr699\" (UID: \"e73608ae-190a-4c12-8bd1-d14a5d898e97\") " pod="openshift-marketplace/certified-operators-pr699" Nov 21 20:56:47 crc kubenswrapper[4727]: I1121 20:56:47.916044 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qs2q\" (UniqueName: \"kubernetes.io/projected/e73608ae-190a-4c12-8bd1-d14a5d898e97-kube-api-access-9qs2q\") pod \"certified-operators-pr699\" (UID: \"e73608ae-190a-4c12-8bd1-d14a5d898e97\") " pod="openshift-marketplace/certified-operators-pr699" Nov 21 20:56:48 crc kubenswrapper[4727]: I1121 20:56:48.000625 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pr699" Nov 21 20:56:48 crc kubenswrapper[4727]: I1121 20:56:48.600777 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pr699"] Nov 21 20:56:49 crc kubenswrapper[4727]: I1121 20:56:49.504532 4727 generic.go:334] "Generic (PLEG): container finished" podID="e73608ae-190a-4c12-8bd1-d14a5d898e97" containerID="685c2c70fcb0f7a5be4337587d67af499ca891397d399c60467ef8d2128ed9fb" exitCode=0 Nov 21 20:56:49 crc kubenswrapper[4727]: I1121 20:56:49.508338 4727 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 21 20:56:49 crc kubenswrapper[4727]: I1121 20:56:49.543747 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pr699" event={"ID":"e73608ae-190a-4c12-8bd1-d14a5d898e97","Type":"ContainerDied","Data":"685c2c70fcb0f7a5be4337587d67af499ca891397d399c60467ef8d2128ed9fb"} Nov 21 20:56:49 crc kubenswrapper[4727]: I1121 20:56:49.543793 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pr699" event={"ID":"e73608ae-190a-4c12-8bd1-d14a5d898e97","Type":"ContainerStarted","Data":"36e57150a07819b6d5998d0c21a6c0ddcc88c121dbb65526185be4e0e7c1b91b"} Nov 21 20:56:50 crc kubenswrapper[4727]: I1121 20:56:50.515813 4727 generic.go:334] "Generic (PLEG): container finished" podID="e73608ae-190a-4c12-8bd1-d14a5d898e97" containerID="22edf2b08ca015e5f2cc32b4a5c0cbcc3f050bc152aec4babe65f621d834edd6" exitCode=0 Nov 21 20:56:50 crc kubenswrapper[4727]: I1121 20:56:50.516187 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pr699" event={"ID":"e73608ae-190a-4c12-8bd1-d14a5d898e97","Type":"ContainerDied","Data":"22edf2b08ca015e5f2cc32b4a5c0cbcc3f050bc152aec4babe65f621d834edd6"} Nov 21 20:56:51 crc kubenswrapper[4727]: I1121 20:56:51.530834 4727 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-pr699" event={"ID":"e73608ae-190a-4c12-8bd1-d14a5d898e97","Type":"ContainerStarted","Data":"98f3d008cf71721386c22304c84e27bae10c4f23110fbf35af67bf9d0ffcf2fe"} Nov 21 20:56:55 crc kubenswrapper[4727]: I1121 20:56:55.510302 4727 scope.go:117] "RemoveContainer" containerID="d44f8929ab437bc608c3191fbcd6c830600c599b713b271401f02377130c5307" Nov 21 20:56:55 crc kubenswrapper[4727]: E1121 20:56:55.511124 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 20:56:58 crc kubenswrapper[4727]: I1121 20:56:58.001640 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pr699" Nov 21 20:56:58 crc kubenswrapper[4727]: I1121 20:56:58.002305 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pr699" Nov 21 20:56:58 crc kubenswrapper[4727]: I1121 20:56:58.050615 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pr699" Nov 21 20:56:58 crc kubenswrapper[4727]: I1121 20:56:58.078690 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pr699" podStartSLOduration=9.656797614 podStartE2EDuration="11.078670597s" podCreationTimestamp="2025-11-21 20:56:47 +0000 UTC" firstStartedPulling="2025-11-21 20:56:49.508069765 +0000 UTC m=+3014.694254809" lastFinishedPulling="2025-11-21 20:56:50.929942748 +0000 UTC m=+3016.116127792" observedRunningTime="2025-11-21 20:56:51.550860688 +0000 UTC 
m=+3016.737045742" watchObservedRunningTime="2025-11-21 20:56:58.078670597 +0000 UTC m=+3023.264855631" Nov 21 20:56:58 crc kubenswrapper[4727]: I1121 20:56:58.649616 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pr699" Nov 21 20:56:58 crc kubenswrapper[4727]: I1121 20:56:58.704484 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pr699"] Nov 21 20:57:00 crc kubenswrapper[4727]: I1121 20:57:00.654944 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pr699" podUID="e73608ae-190a-4c12-8bd1-d14a5d898e97" containerName="registry-server" containerID="cri-o://98f3d008cf71721386c22304c84e27bae10c4f23110fbf35af67bf9d0ffcf2fe" gracePeriod=2 Nov 21 20:57:01 crc kubenswrapper[4727]: I1121 20:57:01.666644 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pr699" Nov 21 20:57:01 crc kubenswrapper[4727]: I1121 20:57:01.703325 4727 generic.go:334] "Generic (PLEG): container finished" podID="e73608ae-190a-4c12-8bd1-d14a5d898e97" containerID="98f3d008cf71721386c22304c84e27bae10c4f23110fbf35af67bf9d0ffcf2fe" exitCode=0 Nov 21 20:57:01 crc kubenswrapper[4727]: I1121 20:57:01.703374 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pr699" event={"ID":"e73608ae-190a-4c12-8bd1-d14a5d898e97","Type":"ContainerDied","Data":"98f3d008cf71721386c22304c84e27bae10c4f23110fbf35af67bf9d0ffcf2fe"} Nov 21 20:57:01 crc kubenswrapper[4727]: I1121 20:57:01.703405 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pr699" event={"ID":"e73608ae-190a-4c12-8bd1-d14a5d898e97","Type":"ContainerDied","Data":"36e57150a07819b6d5998d0c21a6c0ddcc88c121dbb65526185be4e0e7c1b91b"} Nov 21 20:57:01 crc kubenswrapper[4727]: I1121 20:57:01.703422 
4727 scope.go:117] "RemoveContainer" containerID="98f3d008cf71721386c22304c84e27bae10c4f23110fbf35af67bf9d0ffcf2fe" Nov 21 20:57:01 crc kubenswrapper[4727]: I1121 20:57:01.703433 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pr699" Nov 21 20:57:01 crc kubenswrapper[4727]: I1121 20:57:01.739941 4727 scope.go:117] "RemoveContainer" containerID="22edf2b08ca015e5f2cc32b4a5c0cbcc3f050bc152aec4babe65f621d834edd6" Nov 21 20:57:01 crc kubenswrapper[4727]: I1121 20:57:01.759707 4727 scope.go:117] "RemoveContainer" containerID="685c2c70fcb0f7a5be4337587d67af499ca891397d399c60467ef8d2128ed9fb" Nov 21 20:57:01 crc kubenswrapper[4727]: I1121 20:57:01.773067 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e73608ae-190a-4c12-8bd1-d14a5d898e97-utilities\") pod \"e73608ae-190a-4c12-8bd1-d14a5d898e97\" (UID: \"e73608ae-190a-4c12-8bd1-d14a5d898e97\") " Nov 21 20:57:01 crc kubenswrapper[4727]: I1121 20:57:01.773497 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qs2q\" (UniqueName: \"kubernetes.io/projected/e73608ae-190a-4c12-8bd1-d14a5d898e97-kube-api-access-9qs2q\") pod \"e73608ae-190a-4c12-8bd1-d14a5d898e97\" (UID: \"e73608ae-190a-4c12-8bd1-d14a5d898e97\") " Nov 21 20:57:01 crc kubenswrapper[4727]: I1121 20:57:01.773597 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e73608ae-190a-4c12-8bd1-d14a5d898e97-catalog-content\") pod \"e73608ae-190a-4c12-8bd1-d14a5d898e97\" (UID: \"e73608ae-190a-4c12-8bd1-d14a5d898e97\") " Nov 21 20:57:01 crc kubenswrapper[4727]: I1121 20:57:01.774369 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e73608ae-190a-4c12-8bd1-d14a5d898e97-utilities" (OuterVolumeSpecName: "utilities") pod 
"e73608ae-190a-4c12-8bd1-d14a5d898e97" (UID: "e73608ae-190a-4c12-8bd1-d14a5d898e97"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:57:01 crc kubenswrapper[4727]: I1121 20:57:01.779507 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e73608ae-190a-4c12-8bd1-d14a5d898e97-kube-api-access-9qs2q" (OuterVolumeSpecName: "kube-api-access-9qs2q") pod "e73608ae-190a-4c12-8bd1-d14a5d898e97" (UID: "e73608ae-190a-4c12-8bd1-d14a5d898e97"). InnerVolumeSpecName "kube-api-access-9qs2q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:57:01 crc kubenswrapper[4727]: I1121 20:57:01.817455 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e73608ae-190a-4c12-8bd1-d14a5d898e97-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e73608ae-190a-4c12-8bd1-d14a5d898e97" (UID: "e73608ae-190a-4c12-8bd1-d14a5d898e97"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:57:01 crc kubenswrapper[4727]: I1121 20:57:01.865253 4727 scope.go:117] "RemoveContainer" containerID="98f3d008cf71721386c22304c84e27bae10c4f23110fbf35af67bf9d0ffcf2fe" Nov 21 20:57:01 crc kubenswrapper[4727]: E1121 20:57:01.865656 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98f3d008cf71721386c22304c84e27bae10c4f23110fbf35af67bf9d0ffcf2fe\": container with ID starting with 98f3d008cf71721386c22304c84e27bae10c4f23110fbf35af67bf9d0ffcf2fe not found: ID does not exist" containerID="98f3d008cf71721386c22304c84e27bae10c4f23110fbf35af67bf9d0ffcf2fe" Nov 21 20:57:01 crc kubenswrapper[4727]: I1121 20:57:01.865689 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98f3d008cf71721386c22304c84e27bae10c4f23110fbf35af67bf9d0ffcf2fe"} err="failed to get container status \"98f3d008cf71721386c22304c84e27bae10c4f23110fbf35af67bf9d0ffcf2fe\": rpc error: code = NotFound desc = could not find container \"98f3d008cf71721386c22304c84e27bae10c4f23110fbf35af67bf9d0ffcf2fe\": container with ID starting with 98f3d008cf71721386c22304c84e27bae10c4f23110fbf35af67bf9d0ffcf2fe not found: ID does not exist" Nov 21 20:57:01 crc kubenswrapper[4727]: I1121 20:57:01.865710 4727 scope.go:117] "RemoveContainer" containerID="22edf2b08ca015e5f2cc32b4a5c0cbcc3f050bc152aec4babe65f621d834edd6" Nov 21 20:57:01 crc kubenswrapper[4727]: E1121 20:57:01.866087 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22edf2b08ca015e5f2cc32b4a5c0cbcc3f050bc152aec4babe65f621d834edd6\": container with ID starting with 22edf2b08ca015e5f2cc32b4a5c0cbcc3f050bc152aec4babe65f621d834edd6 not found: ID does not exist" containerID="22edf2b08ca015e5f2cc32b4a5c0cbcc3f050bc152aec4babe65f621d834edd6" Nov 21 20:57:01 crc kubenswrapper[4727]: I1121 20:57:01.866111 
4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22edf2b08ca015e5f2cc32b4a5c0cbcc3f050bc152aec4babe65f621d834edd6"} err="failed to get container status \"22edf2b08ca015e5f2cc32b4a5c0cbcc3f050bc152aec4babe65f621d834edd6\": rpc error: code = NotFound desc = could not find container \"22edf2b08ca015e5f2cc32b4a5c0cbcc3f050bc152aec4babe65f621d834edd6\": container with ID starting with 22edf2b08ca015e5f2cc32b4a5c0cbcc3f050bc152aec4babe65f621d834edd6 not found: ID does not exist" Nov 21 20:57:01 crc kubenswrapper[4727]: I1121 20:57:01.866125 4727 scope.go:117] "RemoveContainer" containerID="685c2c70fcb0f7a5be4337587d67af499ca891397d399c60467ef8d2128ed9fb" Nov 21 20:57:01 crc kubenswrapper[4727]: E1121 20:57:01.866366 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"685c2c70fcb0f7a5be4337587d67af499ca891397d399c60467ef8d2128ed9fb\": container with ID starting with 685c2c70fcb0f7a5be4337587d67af499ca891397d399c60467ef8d2128ed9fb not found: ID does not exist" containerID="685c2c70fcb0f7a5be4337587d67af499ca891397d399c60467ef8d2128ed9fb" Nov 21 20:57:01 crc kubenswrapper[4727]: I1121 20:57:01.866388 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"685c2c70fcb0f7a5be4337587d67af499ca891397d399c60467ef8d2128ed9fb"} err="failed to get container status \"685c2c70fcb0f7a5be4337587d67af499ca891397d399c60467ef8d2128ed9fb\": rpc error: code = NotFound desc = could not find container \"685c2c70fcb0f7a5be4337587d67af499ca891397d399c60467ef8d2128ed9fb\": container with ID starting with 685c2c70fcb0f7a5be4337587d67af499ca891397d399c60467ef8d2128ed9fb not found: ID does not exist" Nov 21 20:57:01 crc kubenswrapper[4727]: I1121 20:57:01.876660 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e73608ae-190a-4c12-8bd1-d14a5d898e97-utilities\") on node 
\"crc\" DevicePath \"\"" Nov 21 20:57:01 crc kubenswrapper[4727]: I1121 20:57:01.876697 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qs2q\" (UniqueName: \"kubernetes.io/projected/e73608ae-190a-4c12-8bd1-d14a5d898e97-kube-api-access-9qs2q\") on node \"crc\" DevicePath \"\"" Nov 21 20:57:01 crc kubenswrapper[4727]: I1121 20:57:01.876712 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e73608ae-190a-4c12-8bd1-d14a5d898e97-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 20:57:02 crc kubenswrapper[4727]: I1121 20:57:02.037984 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pr699"] Nov 21 20:57:02 crc kubenswrapper[4727]: I1121 20:57:02.046974 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pr699"] Nov 21 20:57:03 crc kubenswrapper[4727]: I1121 20:57:03.515709 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e73608ae-190a-4c12-8bd1-d14a5d898e97" path="/var/lib/kubelet/pods/e73608ae-190a-4c12-8bd1-d14a5d898e97/volumes" Nov 21 20:57:08 crc kubenswrapper[4727]: I1121 20:57:08.499235 4727 scope.go:117] "RemoveContainer" containerID="d44f8929ab437bc608c3191fbcd6c830600c599b713b271401f02377130c5307" Nov 21 20:57:08 crc kubenswrapper[4727]: E1121 20:57:08.500062 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 20:57:21 crc kubenswrapper[4727]: I1121 20:57:21.500147 4727 scope.go:117] "RemoveContainer" 
containerID="d44f8929ab437bc608c3191fbcd6c830600c599b713b271401f02377130c5307" Nov 21 20:57:21 crc kubenswrapper[4727]: E1121 20:57:21.500912 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 20:57:36 crc kubenswrapper[4727]: I1121 20:57:36.500000 4727 scope.go:117] "RemoveContainer" containerID="d44f8929ab437bc608c3191fbcd6c830600c599b713b271401f02377130c5307" Nov 21 20:57:36 crc kubenswrapper[4727]: E1121 20:57:36.500807 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 20:57:47 crc kubenswrapper[4727]: I1121 20:57:47.499001 4727 scope.go:117] "RemoveContainer" containerID="d44f8929ab437bc608c3191fbcd6c830600c599b713b271401f02377130c5307" Nov 21 20:57:47 crc kubenswrapper[4727]: E1121 20:57:47.500778 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 20:57:48 crc kubenswrapper[4727]: I1121 20:57:48.614319 4727 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-z72g9"] Nov 21 20:57:48 crc kubenswrapper[4727]: E1121 20:57:48.615198 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e73608ae-190a-4c12-8bd1-d14a5d898e97" containerName="extract-content" Nov 21 20:57:48 crc kubenswrapper[4727]: I1121 20:57:48.615218 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="e73608ae-190a-4c12-8bd1-d14a5d898e97" containerName="extract-content" Nov 21 20:57:48 crc kubenswrapper[4727]: E1121 20:57:48.615404 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e73608ae-190a-4c12-8bd1-d14a5d898e97" containerName="extract-utilities" Nov 21 20:57:48 crc kubenswrapper[4727]: I1121 20:57:48.615412 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="e73608ae-190a-4c12-8bd1-d14a5d898e97" containerName="extract-utilities" Nov 21 20:57:48 crc kubenswrapper[4727]: E1121 20:57:48.615432 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e73608ae-190a-4c12-8bd1-d14a5d898e97" containerName="registry-server" Nov 21 20:57:48 crc kubenswrapper[4727]: I1121 20:57:48.615438 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="e73608ae-190a-4c12-8bd1-d14a5d898e97" containerName="registry-server" Nov 21 20:57:48 crc kubenswrapper[4727]: I1121 20:57:48.615673 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="e73608ae-190a-4c12-8bd1-d14a5d898e97" containerName="registry-server" Nov 21 20:57:48 crc kubenswrapper[4727]: I1121 20:57:48.617649 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z72g9" Nov 21 20:57:48 crc kubenswrapper[4727]: I1121 20:57:48.651818 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z72g9"] Nov 21 20:57:48 crc kubenswrapper[4727]: I1121 20:57:48.743459 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33edfef4-a019-4905-a6c7-f317cc636162-catalog-content\") pod \"redhat-operators-z72g9\" (UID: \"33edfef4-a019-4905-a6c7-f317cc636162\") " pod="openshift-marketplace/redhat-operators-z72g9" Nov 21 20:57:48 crc kubenswrapper[4727]: I1121 20:57:48.743525 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dqxh\" (UniqueName: \"kubernetes.io/projected/33edfef4-a019-4905-a6c7-f317cc636162-kube-api-access-4dqxh\") pod \"redhat-operators-z72g9\" (UID: \"33edfef4-a019-4905-a6c7-f317cc636162\") " pod="openshift-marketplace/redhat-operators-z72g9" Nov 21 20:57:48 crc kubenswrapper[4727]: I1121 20:57:48.743936 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33edfef4-a019-4905-a6c7-f317cc636162-utilities\") pod \"redhat-operators-z72g9\" (UID: \"33edfef4-a019-4905-a6c7-f317cc636162\") " pod="openshift-marketplace/redhat-operators-z72g9" Nov 21 20:57:48 crc kubenswrapper[4727]: I1121 20:57:48.845912 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33edfef4-a019-4905-a6c7-f317cc636162-utilities\") pod \"redhat-operators-z72g9\" (UID: \"33edfef4-a019-4905-a6c7-f317cc636162\") " pod="openshift-marketplace/redhat-operators-z72g9" Nov 21 20:57:48 crc kubenswrapper[4727]: I1121 20:57:48.846136 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33edfef4-a019-4905-a6c7-f317cc636162-catalog-content\") pod \"redhat-operators-z72g9\" (UID: \"33edfef4-a019-4905-a6c7-f317cc636162\") " pod="openshift-marketplace/redhat-operators-z72g9" Nov 21 20:57:48 crc kubenswrapper[4727]: I1121 20:57:48.846185 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dqxh\" (UniqueName: \"kubernetes.io/projected/33edfef4-a019-4905-a6c7-f317cc636162-kube-api-access-4dqxh\") pod \"redhat-operators-z72g9\" (UID: \"33edfef4-a019-4905-a6c7-f317cc636162\") " pod="openshift-marketplace/redhat-operators-z72g9" Nov 21 20:57:48 crc kubenswrapper[4727]: I1121 20:57:48.846557 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33edfef4-a019-4905-a6c7-f317cc636162-utilities\") pod \"redhat-operators-z72g9\" (UID: \"33edfef4-a019-4905-a6c7-f317cc636162\") " pod="openshift-marketplace/redhat-operators-z72g9" Nov 21 20:57:48 crc kubenswrapper[4727]: I1121 20:57:48.846560 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33edfef4-a019-4905-a6c7-f317cc636162-catalog-content\") pod \"redhat-operators-z72g9\" (UID: \"33edfef4-a019-4905-a6c7-f317cc636162\") " pod="openshift-marketplace/redhat-operators-z72g9" Nov 21 20:57:48 crc kubenswrapper[4727]: I1121 20:57:48.868090 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dqxh\" (UniqueName: \"kubernetes.io/projected/33edfef4-a019-4905-a6c7-f317cc636162-kube-api-access-4dqxh\") pod \"redhat-operators-z72g9\" (UID: \"33edfef4-a019-4905-a6c7-f317cc636162\") " pod="openshift-marketplace/redhat-operators-z72g9" Nov 21 20:57:48 crc kubenswrapper[4727]: I1121 20:57:48.949828 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z72g9" Nov 21 20:57:49 crc kubenswrapper[4727]: I1121 20:57:49.447866 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z72g9"] Nov 21 20:57:50 crc kubenswrapper[4727]: I1121 20:57:50.239060 4727 generic.go:334] "Generic (PLEG): container finished" podID="33edfef4-a019-4905-a6c7-f317cc636162" containerID="8e15a35760819f1b40302dfcd81bb40b34478c5e36077ef9b27ce1a164288cb1" exitCode=0 Nov 21 20:57:50 crc kubenswrapper[4727]: I1121 20:57:50.239234 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z72g9" event={"ID":"33edfef4-a019-4905-a6c7-f317cc636162","Type":"ContainerDied","Data":"8e15a35760819f1b40302dfcd81bb40b34478c5e36077ef9b27ce1a164288cb1"} Nov 21 20:57:50 crc kubenswrapper[4727]: I1121 20:57:50.239586 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z72g9" event={"ID":"33edfef4-a019-4905-a6c7-f317cc636162","Type":"ContainerStarted","Data":"e3a6f97c962d876ce338647dc50d6261db17304ac8c6832a03931182f204a2fb"} Nov 21 20:57:51 crc kubenswrapper[4727]: I1121 20:57:51.257197 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z72g9" event={"ID":"33edfef4-a019-4905-a6c7-f317cc636162","Type":"ContainerStarted","Data":"ce193a6522c2d5466082cfd4d3f0a8b5d5cdf387ef7494d953c849e8a1f71e91"} Nov 21 20:57:55 crc kubenswrapper[4727]: I1121 20:57:55.302383 4727 generic.go:334] "Generic (PLEG): container finished" podID="33edfef4-a019-4905-a6c7-f317cc636162" containerID="ce193a6522c2d5466082cfd4d3f0a8b5d5cdf387ef7494d953c849e8a1f71e91" exitCode=0 Nov 21 20:57:55 crc kubenswrapper[4727]: I1121 20:57:55.302444 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z72g9" 
event={"ID":"33edfef4-a019-4905-a6c7-f317cc636162","Type":"ContainerDied","Data":"ce193a6522c2d5466082cfd4d3f0a8b5d5cdf387ef7494d953c849e8a1f71e91"} Nov 21 20:57:56 crc kubenswrapper[4727]: I1121 20:57:56.315714 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z72g9" event={"ID":"33edfef4-a019-4905-a6c7-f317cc636162","Type":"ContainerStarted","Data":"5df2044634e9f8a2927a02273a74fa8c114cd045cb0d0490b3801f462b727163"} Nov 21 20:57:56 crc kubenswrapper[4727]: I1121 20:57:56.344738 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-z72g9" podStartSLOduration=2.7379752870000003 podStartE2EDuration="8.344719299s" podCreationTimestamp="2025-11-21 20:57:48 +0000 UTC" firstStartedPulling="2025-11-21 20:57:50.242581313 +0000 UTC m=+3075.428766367" lastFinishedPulling="2025-11-21 20:57:55.849325335 +0000 UTC m=+3081.035510379" observedRunningTime="2025-11-21 20:57:56.335297092 +0000 UTC m=+3081.521482136" watchObservedRunningTime="2025-11-21 20:57:56.344719299 +0000 UTC m=+3081.530904343" Nov 21 20:57:58 crc kubenswrapper[4727]: I1121 20:57:58.950627 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-z72g9" Nov 21 20:57:58 crc kubenswrapper[4727]: I1121 20:57:58.950880 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-z72g9" Nov 21 20:58:00 crc kubenswrapper[4727]: I1121 20:57:59.999495 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-z72g9" podUID="33edfef4-a019-4905-a6c7-f317cc636162" containerName="registry-server" probeResult="failure" output=< Nov 21 20:58:00 crc kubenswrapper[4727]: timeout: failed to connect service ":50051" within 1s Nov 21 20:58:00 crc kubenswrapper[4727]: > Nov 21 20:58:02 crc kubenswrapper[4727]: I1121 20:58:02.499375 4727 scope.go:117] "RemoveContainer" 
containerID="d44f8929ab437bc608c3191fbcd6c830600c599b713b271401f02377130c5307" Nov 21 20:58:02 crc kubenswrapper[4727]: E1121 20:58:02.500427 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 20:58:08 crc kubenswrapper[4727]: I1121 20:58:08.997733 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-z72g9" Nov 21 20:58:09 crc kubenswrapper[4727]: I1121 20:58:09.043439 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-z72g9" Nov 21 20:58:09 crc kubenswrapper[4727]: I1121 20:58:09.231840 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z72g9"] Nov 21 20:58:10 crc kubenswrapper[4727]: I1121 20:58:10.474228 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-z72g9" podUID="33edfef4-a019-4905-a6c7-f317cc636162" containerName="registry-server" containerID="cri-o://5df2044634e9f8a2927a02273a74fa8c114cd045cb0d0490b3801f462b727163" gracePeriod=2 Nov 21 20:58:10 crc kubenswrapper[4727]: I1121 20:58:10.973050 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z72g9" Nov 21 20:58:11 crc kubenswrapper[4727]: I1121 20:58:11.017134 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33edfef4-a019-4905-a6c7-f317cc636162-utilities\") pod \"33edfef4-a019-4905-a6c7-f317cc636162\" (UID: \"33edfef4-a019-4905-a6c7-f317cc636162\") " Nov 21 20:58:11 crc kubenswrapper[4727]: I1121 20:58:11.017296 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33edfef4-a019-4905-a6c7-f317cc636162-catalog-content\") pod \"33edfef4-a019-4905-a6c7-f317cc636162\" (UID: \"33edfef4-a019-4905-a6c7-f317cc636162\") " Nov 21 20:58:11 crc kubenswrapper[4727]: I1121 20:58:11.018030 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33edfef4-a019-4905-a6c7-f317cc636162-utilities" (OuterVolumeSpecName: "utilities") pod "33edfef4-a019-4905-a6c7-f317cc636162" (UID: "33edfef4-a019-4905-a6c7-f317cc636162"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:58:11 crc kubenswrapper[4727]: I1121 20:58:11.110508 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33edfef4-a019-4905-a6c7-f317cc636162-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "33edfef4-a019-4905-a6c7-f317cc636162" (UID: "33edfef4-a019-4905-a6c7-f317cc636162"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 20:58:11 crc kubenswrapper[4727]: I1121 20:58:11.119856 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dqxh\" (UniqueName: \"kubernetes.io/projected/33edfef4-a019-4905-a6c7-f317cc636162-kube-api-access-4dqxh\") pod \"33edfef4-a019-4905-a6c7-f317cc636162\" (UID: \"33edfef4-a019-4905-a6c7-f317cc636162\") " Nov 21 20:58:11 crc kubenswrapper[4727]: I1121 20:58:11.120551 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33edfef4-a019-4905-a6c7-f317cc636162-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 20:58:11 crc kubenswrapper[4727]: I1121 20:58:11.120571 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33edfef4-a019-4905-a6c7-f317cc636162-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 20:58:11 crc kubenswrapper[4727]: I1121 20:58:11.125841 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33edfef4-a019-4905-a6c7-f317cc636162-kube-api-access-4dqxh" (OuterVolumeSpecName: "kube-api-access-4dqxh") pod "33edfef4-a019-4905-a6c7-f317cc636162" (UID: "33edfef4-a019-4905-a6c7-f317cc636162"). InnerVolumeSpecName "kube-api-access-4dqxh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 20:58:11 crc kubenswrapper[4727]: I1121 20:58:11.222540 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dqxh\" (UniqueName: \"kubernetes.io/projected/33edfef4-a019-4905-a6c7-f317cc636162-kube-api-access-4dqxh\") on node \"crc\" DevicePath \"\"" Nov 21 20:58:11 crc kubenswrapper[4727]: I1121 20:58:11.486720 4727 generic.go:334] "Generic (PLEG): container finished" podID="33edfef4-a019-4905-a6c7-f317cc636162" containerID="5df2044634e9f8a2927a02273a74fa8c114cd045cb0d0490b3801f462b727163" exitCode=0 Nov 21 20:58:11 crc kubenswrapper[4727]: I1121 20:58:11.486782 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z72g9" event={"ID":"33edfef4-a019-4905-a6c7-f317cc636162","Type":"ContainerDied","Data":"5df2044634e9f8a2927a02273a74fa8c114cd045cb0d0490b3801f462b727163"} Nov 21 20:58:11 crc kubenswrapper[4727]: I1121 20:58:11.487060 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z72g9" event={"ID":"33edfef4-a019-4905-a6c7-f317cc636162","Type":"ContainerDied","Data":"e3a6f97c962d876ce338647dc50d6261db17304ac8c6832a03931182f204a2fb"} Nov 21 20:58:11 crc kubenswrapper[4727]: I1121 20:58:11.487083 4727 scope.go:117] "RemoveContainer" containerID="5df2044634e9f8a2927a02273a74fa8c114cd045cb0d0490b3801f462b727163" Nov 21 20:58:11 crc kubenswrapper[4727]: I1121 20:58:11.486802 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z72g9" Nov 21 20:58:11 crc kubenswrapper[4727]: I1121 20:58:11.529628 4727 scope.go:117] "RemoveContainer" containerID="ce193a6522c2d5466082cfd4d3f0a8b5d5cdf387ef7494d953c849e8a1f71e91" Nov 21 20:58:11 crc kubenswrapper[4727]: I1121 20:58:11.544273 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z72g9"] Nov 21 20:58:11 crc kubenswrapper[4727]: I1121 20:58:11.562597 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-z72g9"] Nov 21 20:58:11 crc kubenswrapper[4727]: I1121 20:58:11.565515 4727 scope.go:117] "RemoveContainer" containerID="8e15a35760819f1b40302dfcd81bb40b34478c5e36077ef9b27ce1a164288cb1" Nov 21 20:58:11 crc kubenswrapper[4727]: I1121 20:58:11.633732 4727 scope.go:117] "RemoveContainer" containerID="5df2044634e9f8a2927a02273a74fa8c114cd045cb0d0490b3801f462b727163" Nov 21 20:58:11 crc kubenswrapper[4727]: E1121 20:58:11.634681 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5df2044634e9f8a2927a02273a74fa8c114cd045cb0d0490b3801f462b727163\": container with ID starting with 5df2044634e9f8a2927a02273a74fa8c114cd045cb0d0490b3801f462b727163 not found: ID does not exist" containerID="5df2044634e9f8a2927a02273a74fa8c114cd045cb0d0490b3801f462b727163" Nov 21 20:58:11 crc kubenswrapper[4727]: I1121 20:58:11.634707 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5df2044634e9f8a2927a02273a74fa8c114cd045cb0d0490b3801f462b727163"} err="failed to get container status \"5df2044634e9f8a2927a02273a74fa8c114cd045cb0d0490b3801f462b727163\": rpc error: code = NotFound desc = could not find container \"5df2044634e9f8a2927a02273a74fa8c114cd045cb0d0490b3801f462b727163\": container with ID starting with 5df2044634e9f8a2927a02273a74fa8c114cd045cb0d0490b3801f462b727163 not found: ID does 
not exist" Nov 21 20:58:11 crc kubenswrapper[4727]: I1121 20:58:11.634728 4727 scope.go:117] "RemoveContainer" containerID="ce193a6522c2d5466082cfd4d3f0a8b5d5cdf387ef7494d953c849e8a1f71e91" Nov 21 20:58:11 crc kubenswrapper[4727]: E1121 20:58:11.635174 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce193a6522c2d5466082cfd4d3f0a8b5d5cdf387ef7494d953c849e8a1f71e91\": container with ID starting with ce193a6522c2d5466082cfd4d3f0a8b5d5cdf387ef7494d953c849e8a1f71e91 not found: ID does not exist" containerID="ce193a6522c2d5466082cfd4d3f0a8b5d5cdf387ef7494d953c849e8a1f71e91" Nov 21 20:58:11 crc kubenswrapper[4727]: I1121 20:58:11.635194 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce193a6522c2d5466082cfd4d3f0a8b5d5cdf387ef7494d953c849e8a1f71e91"} err="failed to get container status \"ce193a6522c2d5466082cfd4d3f0a8b5d5cdf387ef7494d953c849e8a1f71e91\": rpc error: code = NotFound desc = could not find container \"ce193a6522c2d5466082cfd4d3f0a8b5d5cdf387ef7494d953c849e8a1f71e91\": container with ID starting with ce193a6522c2d5466082cfd4d3f0a8b5d5cdf387ef7494d953c849e8a1f71e91 not found: ID does not exist" Nov 21 20:58:11 crc kubenswrapper[4727]: I1121 20:58:11.635206 4727 scope.go:117] "RemoveContainer" containerID="8e15a35760819f1b40302dfcd81bb40b34478c5e36077ef9b27ce1a164288cb1" Nov 21 20:58:11 crc kubenswrapper[4727]: E1121 20:58:11.635709 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e15a35760819f1b40302dfcd81bb40b34478c5e36077ef9b27ce1a164288cb1\": container with ID starting with 8e15a35760819f1b40302dfcd81bb40b34478c5e36077ef9b27ce1a164288cb1 not found: ID does not exist" containerID="8e15a35760819f1b40302dfcd81bb40b34478c5e36077ef9b27ce1a164288cb1" Nov 21 20:58:11 crc kubenswrapper[4727]: I1121 20:58:11.635752 4727 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e15a35760819f1b40302dfcd81bb40b34478c5e36077ef9b27ce1a164288cb1"} err="failed to get container status \"8e15a35760819f1b40302dfcd81bb40b34478c5e36077ef9b27ce1a164288cb1\": rpc error: code = NotFound desc = could not find container \"8e15a35760819f1b40302dfcd81bb40b34478c5e36077ef9b27ce1a164288cb1\": container with ID starting with 8e15a35760819f1b40302dfcd81bb40b34478c5e36077ef9b27ce1a164288cb1 not found: ID does not exist" Nov 21 20:58:13 crc kubenswrapper[4727]: I1121 20:58:13.499200 4727 scope.go:117] "RemoveContainer" containerID="d44f8929ab437bc608c3191fbcd6c830600c599b713b271401f02377130c5307" Nov 21 20:58:13 crc kubenswrapper[4727]: E1121 20:58:13.499809 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 20:58:13 crc kubenswrapper[4727]: I1121 20:58:13.521309 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33edfef4-a019-4905-a6c7-f317cc636162" path="/var/lib/kubelet/pods/33edfef4-a019-4905-a6c7-f317cc636162/volumes" Nov 21 20:58:28 crc kubenswrapper[4727]: I1121 20:58:28.499408 4727 scope.go:117] "RemoveContainer" containerID="d44f8929ab437bc608c3191fbcd6c830600c599b713b271401f02377130c5307" Nov 21 20:58:28 crc kubenswrapper[4727]: E1121 20:58:28.500810 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 20:58:42 crc kubenswrapper[4727]: I1121 20:58:42.499689 4727 scope.go:117] "RemoveContainer" containerID="d44f8929ab437bc608c3191fbcd6c830600c599b713b271401f02377130c5307" Nov 21 20:58:42 crc kubenswrapper[4727]: E1121 20:58:42.500500 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 20:58:54 crc kubenswrapper[4727]: I1121 20:58:54.499771 4727 scope.go:117] "RemoveContainer" containerID="d44f8929ab437bc608c3191fbcd6c830600c599b713b271401f02377130c5307" Nov 21 20:58:54 crc kubenswrapper[4727]: E1121 20:58:54.500880 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 20:59:07 crc kubenswrapper[4727]: I1121 20:59:07.501325 4727 scope.go:117] "RemoveContainer" containerID="d44f8929ab437bc608c3191fbcd6c830600c599b713b271401f02377130c5307" Nov 21 20:59:07 crc kubenswrapper[4727]: E1121 20:59:07.502788 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 20:59:20 crc kubenswrapper[4727]: I1121 20:59:20.499924 4727 scope.go:117] "RemoveContainer" containerID="d44f8929ab437bc608c3191fbcd6c830600c599b713b271401f02377130c5307" Nov 21 20:59:20 crc kubenswrapper[4727]: E1121 20:59:20.501722 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 20:59:32 crc kubenswrapper[4727]: I1121 20:59:32.502594 4727 scope.go:117] "RemoveContainer" containerID="d44f8929ab437bc608c3191fbcd6c830600c599b713b271401f02377130c5307" Nov 21 20:59:32 crc kubenswrapper[4727]: E1121 20:59:32.504359 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 20:59:45 crc kubenswrapper[4727]: I1121 20:59:45.510545 4727 scope.go:117] "RemoveContainer" containerID="d44f8929ab437bc608c3191fbcd6c830600c599b713b271401f02377130c5307" Nov 21 20:59:45 crc kubenswrapper[4727]: E1121 20:59:45.511717 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:00:00 crc kubenswrapper[4727]: I1121 21:00:00.172004 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395980-pbpds"] Nov 21 21:00:00 crc kubenswrapper[4727]: E1121 21:00:00.173094 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33edfef4-a019-4905-a6c7-f317cc636162" containerName="extract-content" Nov 21 21:00:00 crc kubenswrapper[4727]: I1121 21:00:00.173109 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="33edfef4-a019-4905-a6c7-f317cc636162" containerName="extract-content" Nov 21 21:00:00 crc kubenswrapper[4727]: E1121 21:00:00.173140 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33edfef4-a019-4905-a6c7-f317cc636162" containerName="registry-server" Nov 21 21:00:00 crc kubenswrapper[4727]: I1121 21:00:00.173146 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="33edfef4-a019-4905-a6c7-f317cc636162" containerName="registry-server" Nov 21 21:00:00 crc kubenswrapper[4727]: E1121 21:00:00.173178 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33edfef4-a019-4905-a6c7-f317cc636162" containerName="extract-utilities" Nov 21 21:00:00 crc kubenswrapper[4727]: I1121 21:00:00.173185 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="33edfef4-a019-4905-a6c7-f317cc636162" containerName="extract-utilities" Nov 21 21:00:00 crc kubenswrapper[4727]: I1121 21:00:00.173424 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="33edfef4-a019-4905-a6c7-f317cc636162" containerName="registry-server" Nov 21 21:00:00 crc kubenswrapper[4727]: I1121 21:00:00.174216 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395980-pbpds" Nov 21 21:00:00 crc kubenswrapper[4727]: I1121 21:00:00.177336 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 21 21:00:00 crc kubenswrapper[4727]: I1121 21:00:00.177358 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 21 21:00:00 crc kubenswrapper[4727]: I1121 21:00:00.189676 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395980-pbpds"] Nov 21 21:00:00 crc kubenswrapper[4727]: I1121 21:00:00.309146 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbcl2\" (UniqueName: \"kubernetes.io/projected/a6b39516-c80b-4586-ab1d-e97af786d86a-kube-api-access-mbcl2\") pod \"collect-profiles-29395980-pbpds\" (UID: \"a6b39516-c80b-4586-ab1d-e97af786d86a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395980-pbpds" Nov 21 21:00:00 crc kubenswrapper[4727]: I1121 21:00:00.309308 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a6b39516-c80b-4586-ab1d-e97af786d86a-config-volume\") pod \"collect-profiles-29395980-pbpds\" (UID: \"a6b39516-c80b-4586-ab1d-e97af786d86a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395980-pbpds" Nov 21 21:00:00 crc kubenswrapper[4727]: I1121 21:00:00.309378 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a6b39516-c80b-4586-ab1d-e97af786d86a-secret-volume\") pod \"collect-profiles-29395980-pbpds\" (UID: \"a6b39516-c80b-4586-ab1d-e97af786d86a\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29395980-pbpds" Nov 21 21:00:00 crc kubenswrapper[4727]: I1121 21:00:00.411375 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a6b39516-c80b-4586-ab1d-e97af786d86a-secret-volume\") pod \"collect-profiles-29395980-pbpds\" (UID: \"a6b39516-c80b-4586-ab1d-e97af786d86a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395980-pbpds" Nov 21 21:00:00 crc kubenswrapper[4727]: I1121 21:00:00.411527 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbcl2\" (UniqueName: \"kubernetes.io/projected/a6b39516-c80b-4586-ab1d-e97af786d86a-kube-api-access-mbcl2\") pod \"collect-profiles-29395980-pbpds\" (UID: \"a6b39516-c80b-4586-ab1d-e97af786d86a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395980-pbpds" Nov 21 21:00:00 crc kubenswrapper[4727]: I1121 21:00:00.411666 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a6b39516-c80b-4586-ab1d-e97af786d86a-config-volume\") pod \"collect-profiles-29395980-pbpds\" (UID: \"a6b39516-c80b-4586-ab1d-e97af786d86a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395980-pbpds" Nov 21 21:00:00 crc kubenswrapper[4727]: I1121 21:00:00.412560 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a6b39516-c80b-4586-ab1d-e97af786d86a-config-volume\") pod \"collect-profiles-29395980-pbpds\" (UID: \"a6b39516-c80b-4586-ab1d-e97af786d86a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395980-pbpds" Nov 21 21:00:00 crc kubenswrapper[4727]: I1121 21:00:00.425707 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/a6b39516-c80b-4586-ab1d-e97af786d86a-secret-volume\") pod \"collect-profiles-29395980-pbpds\" (UID: \"a6b39516-c80b-4586-ab1d-e97af786d86a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395980-pbpds" Nov 21 21:00:00 crc kubenswrapper[4727]: I1121 21:00:00.433031 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbcl2\" (UniqueName: \"kubernetes.io/projected/a6b39516-c80b-4586-ab1d-e97af786d86a-kube-api-access-mbcl2\") pod \"collect-profiles-29395980-pbpds\" (UID: \"a6b39516-c80b-4586-ab1d-e97af786d86a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395980-pbpds" Nov 21 21:00:00 crc kubenswrapper[4727]: I1121 21:00:00.499337 4727 scope.go:117] "RemoveContainer" containerID="d44f8929ab437bc608c3191fbcd6c830600c599b713b271401f02377130c5307" Nov 21 21:00:00 crc kubenswrapper[4727]: E1121 21:00:00.499660 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:00:00 crc kubenswrapper[4727]: I1121 21:00:00.503602 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395980-pbpds" Nov 21 21:00:00 crc kubenswrapper[4727]: I1121 21:00:00.968638 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395980-pbpds"] Nov 21 21:00:01 crc kubenswrapper[4727]: I1121 21:00:01.734597 4727 generic.go:334] "Generic (PLEG): container finished" podID="a6b39516-c80b-4586-ab1d-e97af786d86a" containerID="24173bc3534416e5d6c0f960bbd252cb3955b5209a7add7d6695535a375d79e1" exitCode=0 Nov 21 21:00:01 crc kubenswrapper[4727]: I1121 21:00:01.734747 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395980-pbpds" event={"ID":"a6b39516-c80b-4586-ab1d-e97af786d86a","Type":"ContainerDied","Data":"24173bc3534416e5d6c0f960bbd252cb3955b5209a7add7d6695535a375d79e1"} Nov 21 21:00:01 crc kubenswrapper[4727]: I1121 21:00:01.734901 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395980-pbpds" event={"ID":"a6b39516-c80b-4586-ab1d-e97af786d86a","Type":"ContainerStarted","Data":"5bff06b94c294d98e9657427db9940a25ac2176356cde11a3f8d18157345576c"} Nov 21 21:00:03 crc kubenswrapper[4727]: I1121 21:00:03.161319 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395980-pbpds" Nov 21 21:00:03 crc kubenswrapper[4727]: I1121 21:00:03.272318 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbcl2\" (UniqueName: \"kubernetes.io/projected/a6b39516-c80b-4586-ab1d-e97af786d86a-kube-api-access-mbcl2\") pod \"a6b39516-c80b-4586-ab1d-e97af786d86a\" (UID: \"a6b39516-c80b-4586-ab1d-e97af786d86a\") " Nov 21 21:00:03 crc kubenswrapper[4727]: I1121 21:00:03.272440 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a6b39516-c80b-4586-ab1d-e97af786d86a-secret-volume\") pod \"a6b39516-c80b-4586-ab1d-e97af786d86a\" (UID: \"a6b39516-c80b-4586-ab1d-e97af786d86a\") " Nov 21 21:00:03 crc kubenswrapper[4727]: I1121 21:00:03.272562 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a6b39516-c80b-4586-ab1d-e97af786d86a-config-volume\") pod \"a6b39516-c80b-4586-ab1d-e97af786d86a\" (UID: \"a6b39516-c80b-4586-ab1d-e97af786d86a\") " Nov 21 21:00:03 crc kubenswrapper[4727]: I1121 21:00:03.273162 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6b39516-c80b-4586-ab1d-e97af786d86a-config-volume" (OuterVolumeSpecName: "config-volume") pod "a6b39516-c80b-4586-ab1d-e97af786d86a" (UID: "a6b39516-c80b-4586-ab1d-e97af786d86a"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 21 21:00:03 crc kubenswrapper[4727]: I1121 21:00:03.273441 4727 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a6b39516-c80b-4586-ab1d-e97af786d86a-config-volume\") on node \"crc\" DevicePath \"\""
Nov 21 21:00:03 crc kubenswrapper[4727]: I1121 21:00:03.278443 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6b39516-c80b-4586-ab1d-e97af786d86a-kube-api-access-mbcl2" (OuterVolumeSpecName: "kube-api-access-mbcl2") pod "a6b39516-c80b-4586-ab1d-e97af786d86a" (UID: "a6b39516-c80b-4586-ab1d-e97af786d86a"). InnerVolumeSpecName "kube-api-access-mbcl2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 21:00:03 crc kubenswrapper[4727]: I1121 21:00:03.278557 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6b39516-c80b-4586-ab1d-e97af786d86a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a6b39516-c80b-4586-ab1d-e97af786d86a" (UID: "a6b39516-c80b-4586-ab1d-e97af786d86a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 21:00:03 crc kubenswrapper[4727]: I1121 21:00:03.375827 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mbcl2\" (UniqueName: \"kubernetes.io/projected/a6b39516-c80b-4586-ab1d-e97af786d86a-kube-api-access-mbcl2\") on node \"crc\" DevicePath \"\""
Nov 21 21:00:03 crc kubenswrapper[4727]: I1121 21:00:03.375868 4727 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a6b39516-c80b-4586-ab1d-e97af786d86a-secret-volume\") on node \"crc\" DevicePath \"\""
Nov 21 21:00:03 crc kubenswrapper[4727]: I1121 21:00:03.759971 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395980-pbpds" event={"ID":"a6b39516-c80b-4586-ab1d-e97af786d86a","Type":"ContainerDied","Data":"5bff06b94c294d98e9657427db9940a25ac2176356cde11a3f8d18157345576c"}
Nov 21 21:00:03 crc kubenswrapper[4727]: I1121 21:00:03.760014 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5bff06b94c294d98e9657427db9940a25ac2176356cde11a3f8d18157345576c"
Nov 21 21:00:03 crc kubenswrapper[4727]: I1121 21:00:03.760040 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395980-pbpds"
Nov 21 21:00:04 crc kubenswrapper[4727]: I1121 21:00:04.260190 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395935-8m9mp"]
Nov 21 21:00:04 crc kubenswrapper[4727]: I1121 21:00:04.275603 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395935-8m9mp"]
Nov 21 21:00:05 crc kubenswrapper[4727]: I1121 21:00:05.517104 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="455875e8-9e5e-4129-b084-a4f48b8def31" path="/var/lib/kubelet/pods/455875e8-9e5e-4129-b084-a4f48b8def31/volumes"
Nov 21 21:00:15 crc kubenswrapper[4727]: I1121 21:00:15.506132 4727 scope.go:117] "RemoveContainer" containerID="d44f8929ab437bc608c3191fbcd6c830600c599b713b271401f02377130c5307"
Nov 21 21:00:15 crc kubenswrapper[4727]: E1121 21:00:15.507017 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42"
Nov 21 21:00:30 crc kubenswrapper[4727]: I1121 21:00:30.499766 4727 scope.go:117] "RemoveContainer" containerID="d44f8929ab437bc608c3191fbcd6c830600c599b713b271401f02377130c5307"
Nov 21 21:00:30 crc kubenswrapper[4727]: E1121 21:00:30.507103 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42"
Nov 21 21:00:41 crc kubenswrapper[4727]: I1121 21:00:41.499405 4727 scope.go:117] "RemoveContainer" containerID="d44f8929ab437bc608c3191fbcd6c830600c599b713b271401f02377130c5307"
Nov 21 21:00:41 crc kubenswrapper[4727]: E1121 21:00:41.500180 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42"
Nov 21 21:00:41 crc kubenswrapper[4727]: E1121 21:00:41.966673 4727 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.179:58802->38.102.83.179:43311: write tcp 38.102.83.179:58802->38.102.83.179:43311: write: broken pipe
Nov 21 21:00:47 crc kubenswrapper[4727]: I1121 21:00:47.599587 4727 scope.go:117] "RemoveContainer" containerID="69f571e50f603390842540bc0b42f1436667183bdf982d2d4f86c787b0fe2bc0"
Nov 21 21:00:53 crc kubenswrapper[4727]: I1121 21:00:53.499773 4727 scope.go:117] "RemoveContainer" containerID="d44f8929ab437bc608c3191fbcd6c830600c599b713b271401f02377130c5307"
Nov 21 21:00:53 crc kubenswrapper[4727]: E1121 21:00:53.500845 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42"
Nov 21 21:01:00 crc kubenswrapper[4727]: I1121 21:01:00.150739 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29395981-6csjp"]
Nov 21 21:01:00 crc kubenswrapper[4727]: E1121 21:01:00.151619 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6b39516-c80b-4586-ab1d-e97af786d86a" containerName="collect-profiles"
Nov 21 21:01:00 crc kubenswrapper[4727]: I1121 21:01:00.151633 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6b39516-c80b-4586-ab1d-e97af786d86a" containerName="collect-profiles"
Nov 21 21:01:00 crc kubenswrapper[4727]: I1121 21:01:00.151934 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6b39516-c80b-4586-ab1d-e97af786d86a" containerName="collect-profiles"
Nov 21 21:01:00 crc kubenswrapper[4727]: I1121 21:01:00.152923 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29395981-6csjp"
Nov 21 21:01:00 crc kubenswrapper[4727]: I1121 21:01:00.169174 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29395981-6csjp"]
Nov 21 21:01:00 crc kubenswrapper[4727]: I1121 21:01:00.278491 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phj48\" (UniqueName: \"kubernetes.io/projected/e9fdb78b-ace5-4ba0-a791-deab9b88bc05-kube-api-access-phj48\") pod \"keystone-cron-29395981-6csjp\" (UID: \"e9fdb78b-ace5-4ba0-a791-deab9b88bc05\") " pod="openstack/keystone-cron-29395981-6csjp"
Nov 21 21:01:00 crc kubenswrapper[4727]: I1121 21:01:00.278587 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e9fdb78b-ace5-4ba0-a791-deab9b88bc05-fernet-keys\") pod \"keystone-cron-29395981-6csjp\" (UID: \"e9fdb78b-ace5-4ba0-a791-deab9b88bc05\") " pod="openstack/keystone-cron-29395981-6csjp"
Nov 21 21:01:00 crc kubenswrapper[4727]: I1121 21:01:00.278793 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9fdb78b-ace5-4ba0-a791-deab9b88bc05-config-data\") pod \"keystone-cron-29395981-6csjp\" (UID: \"e9fdb78b-ace5-4ba0-a791-deab9b88bc05\") " pod="openstack/keystone-cron-29395981-6csjp"
Nov 21 21:01:00 crc kubenswrapper[4727]: I1121 21:01:00.279286 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9fdb78b-ace5-4ba0-a791-deab9b88bc05-combined-ca-bundle\") pod \"keystone-cron-29395981-6csjp\" (UID: \"e9fdb78b-ace5-4ba0-a791-deab9b88bc05\") " pod="openstack/keystone-cron-29395981-6csjp"
Nov 21 21:01:00 crc kubenswrapper[4727]: I1121 21:01:00.381746 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9fdb78b-ace5-4ba0-a791-deab9b88bc05-combined-ca-bundle\") pod \"keystone-cron-29395981-6csjp\" (UID: \"e9fdb78b-ace5-4ba0-a791-deab9b88bc05\") " pod="openstack/keystone-cron-29395981-6csjp"
Nov 21 21:01:00 crc kubenswrapper[4727]: I1121 21:01:00.381880 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phj48\" (UniqueName: \"kubernetes.io/projected/e9fdb78b-ace5-4ba0-a791-deab9b88bc05-kube-api-access-phj48\") pod \"keystone-cron-29395981-6csjp\" (UID: \"e9fdb78b-ace5-4ba0-a791-deab9b88bc05\") " pod="openstack/keystone-cron-29395981-6csjp"
Nov 21 21:01:00 crc kubenswrapper[4727]: I1121 21:01:00.381929 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e9fdb78b-ace5-4ba0-a791-deab9b88bc05-fernet-keys\") pod \"keystone-cron-29395981-6csjp\" (UID: \"e9fdb78b-ace5-4ba0-a791-deab9b88bc05\") " pod="openstack/keystone-cron-29395981-6csjp"
Nov 21 21:01:00 crc kubenswrapper[4727]: I1121 21:01:00.381986 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9fdb78b-ace5-4ba0-a791-deab9b88bc05-config-data\") pod \"keystone-cron-29395981-6csjp\" (UID: \"e9fdb78b-ace5-4ba0-a791-deab9b88bc05\") " pod="openstack/keystone-cron-29395981-6csjp"
Nov 21 21:01:00 crc kubenswrapper[4727]: I1121 21:01:00.389193 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9fdb78b-ace5-4ba0-a791-deab9b88bc05-config-data\") pod \"keystone-cron-29395981-6csjp\" (UID: \"e9fdb78b-ace5-4ba0-a791-deab9b88bc05\") " pod="openstack/keystone-cron-29395981-6csjp"
Nov 21 21:01:00 crc kubenswrapper[4727]: I1121 21:01:00.389209 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e9fdb78b-ace5-4ba0-a791-deab9b88bc05-fernet-keys\") pod \"keystone-cron-29395981-6csjp\" (UID: \"e9fdb78b-ace5-4ba0-a791-deab9b88bc05\") " pod="openstack/keystone-cron-29395981-6csjp"
Nov 21 21:01:00 crc kubenswrapper[4727]: I1121 21:01:00.389250 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9fdb78b-ace5-4ba0-a791-deab9b88bc05-combined-ca-bundle\") pod \"keystone-cron-29395981-6csjp\" (UID: \"e9fdb78b-ace5-4ba0-a791-deab9b88bc05\") " pod="openstack/keystone-cron-29395981-6csjp"
Nov 21 21:01:00 crc kubenswrapper[4727]: I1121 21:01:00.410715 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phj48\" (UniqueName: \"kubernetes.io/projected/e9fdb78b-ace5-4ba0-a791-deab9b88bc05-kube-api-access-phj48\") pod \"keystone-cron-29395981-6csjp\" (UID: \"e9fdb78b-ace5-4ba0-a791-deab9b88bc05\") " pod="openstack/keystone-cron-29395981-6csjp"
Nov 21 21:01:00 crc kubenswrapper[4727]: I1121 21:01:00.485059 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29395981-6csjp"
Nov 21 21:01:00 crc kubenswrapper[4727]: I1121 21:01:00.959535 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29395981-6csjp"]
Nov 21 21:01:01 crc kubenswrapper[4727]: I1121 21:01:01.427441 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29395981-6csjp" event={"ID":"e9fdb78b-ace5-4ba0-a791-deab9b88bc05","Type":"ContainerStarted","Data":"bb55230faefe8846ceadd3cc0d95980ba325e8a9d6cc5c8bb44fff6d7bc66256"}
Nov 21 21:01:01 crc kubenswrapper[4727]: I1121 21:01:01.427896 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29395981-6csjp" event={"ID":"e9fdb78b-ace5-4ba0-a791-deab9b88bc05","Type":"ContainerStarted","Data":"014b9e2c36be67abed31ce5b072dedfd89aaa476ea734c42d46dc91839e76689"}
Nov 21 21:01:01 crc kubenswrapper[4727]: I1121 21:01:01.445595 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29395981-6csjp" podStartSLOduration=1.4455753439999999 podStartE2EDuration="1.445575344s" podCreationTimestamp="2025-11-21 21:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 21:01:01.442562242 +0000 UTC m=+3266.628747296" watchObservedRunningTime="2025-11-21 21:01:01.445575344 +0000 UTC m=+3266.631760378"
Nov 21 21:01:04 crc kubenswrapper[4727]: I1121 21:01:04.469433 4727 generic.go:334] "Generic (PLEG): container finished" podID="e9fdb78b-ace5-4ba0-a791-deab9b88bc05" containerID="bb55230faefe8846ceadd3cc0d95980ba325e8a9d6cc5c8bb44fff6d7bc66256" exitCode=0
Nov 21 21:01:04 crc kubenswrapper[4727]: I1121 21:01:04.469560 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29395981-6csjp" event={"ID":"e9fdb78b-ace5-4ba0-a791-deab9b88bc05","Type":"ContainerDied","Data":"bb55230faefe8846ceadd3cc0d95980ba325e8a9d6cc5c8bb44fff6d7bc66256"}
Nov 21 21:01:05 crc kubenswrapper[4727]: I1121 21:01:05.511033 4727 scope.go:117] "RemoveContainer" containerID="d44f8929ab437bc608c3191fbcd6c830600c599b713b271401f02377130c5307"
Nov 21 21:01:05 crc kubenswrapper[4727]: E1121 21:01:05.511801 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42"
Nov 21 21:01:05 crc kubenswrapper[4727]: I1121 21:01:05.907617 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29395981-6csjp"
Nov 21 21:01:06 crc kubenswrapper[4727]: I1121 21:01:06.035952 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9fdb78b-ace5-4ba0-a791-deab9b88bc05-config-data\") pod \"e9fdb78b-ace5-4ba0-a791-deab9b88bc05\" (UID: \"e9fdb78b-ace5-4ba0-a791-deab9b88bc05\") "
Nov 21 21:01:06 crc kubenswrapper[4727]: I1121 21:01:06.036491 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phj48\" (UniqueName: \"kubernetes.io/projected/e9fdb78b-ace5-4ba0-a791-deab9b88bc05-kube-api-access-phj48\") pod \"e9fdb78b-ace5-4ba0-a791-deab9b88bc05\" (UID: \"e9fdb78b-ace5-4ba0-a791-deab9b88bc05\") "
Nov 21 21:01:06 crc kubenswrapper[4727]: I1121 21:01:06.036677 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9fdb78b-ace5-4ba0-a791-deab9b88bc05-combined-ca-bundle\") pod \"e9fdb78b-ace5-4ba0-a791-deab9b88bc05\" (UID: \"e9fdb78b-ace5-4ba0-a791-deab9b88bc05\") "
Nov 21 21:01:06 crc kubenswrapper[4727]: I1121 21:01:06.036867 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e9fdb78b-ace5-4ba0-a791-deab9b88bc05-fernet-keys\") pod \"e9fdb78b-ace5-4ba0-a791-deab9b88bc05\" (UID: \"e9fdb78b-ace5-4ba0-a791-deab9b88bc05\") "
Nov 21 21:01:06 crc kubenswrapper[4727]: I1121 21:01:06.041747 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9fdb78b-ace5-4ba0-a791-deab9b88bc05-kube-api-access-phj48" (OuterVolumeSpecName: "kube-api-access-phj48") pod "e9fdb78b-ace5-4ba0-a791-deab9b88bc05" (UID: "e9fdb78b-ace5-4ba0-a791-deab9b88bc05"). InnerVolumeSpecName "kube-api-access-phj48". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 21:01:06 crc kubenswrapper[4727]: I1121 21:01:06.042269 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9fdb78b-ace5-4ba0-a791-deab9b88bc05-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "e9fdb78b-ace5-4ba0-a791-deab9b88bc05" (UID: "e9fdb78b-ace5-4ba0-a791-deab9b88bc05"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 21:01:06 crc kubenswrapper[4727]: I1121 21:01:06.068018 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9fdb78b-ace5-4ba0-a791-deab9b88bc05-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e9fdb78b-ace5-4ba0-a791-deab9b88bc05" (UID: "e9fdb78b-ace5-4ba0-a791-deab9b88bc05"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 21:01:06 crc kubenswrapper[4727]: I1121 21:01:06.109232 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9fdb78b-ace5-4ba0-a791-deab9b88bc05-config-data" (OuterVolumeSpecName: "config-data") pod "e9fdb78b-ace5-4ba0-a791-deab9b88bc05" (UID: "e9fdb78b-ace5-4ba0-a791-deab9b88bc05"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 21:01:06 crc kubenswrapper[4727]: I1121 21:01:06.140312 4727 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e9fdb78b-ace5-4ba0-a791-deab9b88bc05-fernet-keys\") on node \"crc\" DevicePath \"\""
Nov 21 21:01:06 crc kubenswrapper[4727]: I1121 21:01:06.140356 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9fdb78b-ace5-4ba0-a791-deab9b88bc05-config-data\") on node \"crc\" DevicePath \"\""
Nov 21 21:01:06 crc kubenswrapper[4727]: I1121 21:01:06.140371 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phj48\" (UniqueName: \"kubernetes.io/projected/e9fdb78b-ace5-4ba0-a791-deab9b88bc05-kube-api-access-phj48\") on node \"crc\" DevicePath \"\""
Nov 21 21:01:06 crc kubenswrapper[4727]: I1121 21:01:06.140384 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9fdb78b-ace5-4ba0-a791-deab9b88bc05-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 21 21:01:06 crc kubenswrapper[4727]: I1121 21:01:06.496016 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29395981-6csjp" event={"ID":"e9fdb78b-ace5-4ba0-a791-deab9b88bc05","Type":"ContainerDied","Data":"014b9e2c36be67abed31ce5b072dedfd89aaa476ea734c42d46dc91839e76689"}
Nov 21 21:01:06 crc kubenswrapper[4727]: I1121 21:01:06.496058 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="014b9e2c36be67abed31ce5b072dedfd89aaa476ea734c42d46dc91839e76689"
Nov 21 21:01:06 crc kubenswrapper[4727]: I1121 21:01:06.496085 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29395981-6csjp"
Nov 21 21:01:19 crc kubenswrapper[4727]: I1121 21:01:19.499674 4727 scope.go:117] "RemoveContainer" containerID="d44f8929ab437bc608c3191fbcd6c830600c599b713b271401f02377130c5307"
Nov 21 21:01:20 crc kubenswrapper[4727]: I1121 21:01:20.663110 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" event={"ID":"b58aef8f-f223-47d8-a2e6-4a80aeeeec42","Type":"ContainerStarted","Data":"76b34fbdc61da3eb45229b74b5fafab538e0c716c0c7ecd663bd2b1ee4f087d2"}
Nov 21 21:02:51 crc kubenswrapper[4727]: E1121 21:02:51.745708 4727 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.179:48458->38.102.83.179:43311: write tcp 38.102.83.179:48458->38.102.83.179:43311: write: broken pipe
Nov 21 21:03:43 crc kubenswrapper[4727]: I1121 21:03:43.335511 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 21 21:03:43 crc kubenswrapper[4727]: I1121 21:03:43.336187 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 21 21:04:13 crc kubenswrapper[4727]: I1121 21:04:13.335190 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 21 21:04:13 crc kubenswrapper[4727]: I1121 21:04:13.336238 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 21 21:04:43 crc kubenswrapper[4727]: I1121 21:04:43.335881 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 21 21:04:43 crc kubenswrapper[4727]: I1121 21:04:43.337085 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 21 21:04:43 crc kubenswrapper[4727]: I1121 21:04:43.337178 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk"
Nov 21 21:04:43 crc kubenswrapper[4727]: I1121 21:04:43.338714 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"76b34fbdc61da3eb45229b74b5fafab538e0c716c0c7ecd663bd2b1ee4f087d2"} pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 21 21:04:43 crc kubenswrapper[4727]: I1121 21:04:43.338816 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" containerID="cri-o://76b34fbdc61da3eb45229b74b5fafab538e0c716c0c7ecd663bd2b1ee4f087d2" gracePeriod=600
Nov 21 21:04:44 crc kubenswrapper[4727]: I1121 21:04:44.142470 4727 generic.go:334] "Generic (PLEG): container finished" podID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerID="76b34fbdc61da3eb45229b74b5fafab538e0c716c0c7ecd663bd2b1ee4f087d2" exitCode=0
Nov 21 21:04:44 crc kubenswrapper[4727]: I1121 21:04:44.142559 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" event={"ID":"b58aef8f-f223-47d8-a2e6-4a80aeeeec42","Type":"ContainerDied","Data":"76b34fbdc61da3eb45229b74b5fafab538e0c716c0c7ecd663bd2b1ee4f087d2"}
Nov 21 21:04:44 crc kubenswrapper[4727]: I1121 21:04:44.143003 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" event={"ID":"b58aef8f-f223-47d8-a2e6-4a80aeeeec42","Type":"ContainerStarted","Data":"988190cce55029fd82d61a3728f9a88539103123b868f6bdaa1276c71f10d2ea"}
Nov 21 21:04:44 crc kubenswrapper[4727]: I1121 21:04:44.143037 4727 scope.go:117] "RemoveContainer" containerID="d44f8929ab437bc608c3191fbcd6c830600c599b713b271401f02377130c5307"
Nov 21 21:05:02 crc kubenswrapper[4727]: I1121 21:05:02.532724 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dsw7n"]
Nov 21 21:05:02 crc kubenswrapper[4727]: E1121 21:05:02.535039 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9fdb78b-ace5-4ba0-a791-deab9b88bc05" containerName="keystone-cron"
Nov 21 21:05:02 crc kubenswrapper[4727]: I1121 21:05:02.535064 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9fdb78b-ace5-4ba0-a791-deab9b88bc05" containerName="keystone-cron"
Nov 21 21:05:02 crc kubenswrapper[4727]: I1121 21:05:02.535415 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9fdb78b-ace5-4ba0-a791-deab9b88bc05" containerName="keystone-cron"
Nov 21 21:05:02 crc kubenswrapper[4727]: I1121 21:05:02.538456 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dsw7n"
Nov 21 21:05:02 crc kubenswrapper[4727]: I1121 21:05:02.554702 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dsw7n"]
Nov 21 21:05:02 crc kubenswrapper[4727]: I1121 21:05:02.651360 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1de99b5-79b7-45c9-9fb1-11fc8fa0c49b-utilities\") pod \"redhat-marketplace-dsw7n\" (UID: \"c1de99b5-79b7-45c9-9fb1-11fc8fa0c49b\") " pod="openshift-marketplace/redhat-marketplace-dsw7n"
Nov 21 21:05:02 crc kubenswrapper[4727]: I1121 21:05:02.651678 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1de99b5-79b7-45c9-9fb1-11fc8fa0c49b-catalog-content\") pod \"redhat-marketplace-dsw7n\" (UID: \"c1de99b5-79b7-45c9-9fb1-11fc8fa0c49b\") " pod="openshift-marketplace/redhat-marketplace-dsw7n"
Nov 21 21:05:02 crc kubenswrapper[4727]: I1121 21:05:02.651720 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bsd8\" (UniqueName: \"kubernetes.io/projected/c1de99b5-79b7-45c9-9fb1-11fc8fa0c49b-kube-api-access-5bsd8\") pod \"redhat-marketplace-dsw7n\" (UID: \"c1de99b5-79b7-45c9-9fb1-11fc8fa0c49b\") " pod="openshift-marketplace/redhat-marketplace-dsw7n"
Nov 21 21:05:02 crc kubenswrapper[4727]: I1121 21:05:02.754936 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1de99b5-79b7-45c9-9fb1-11fc8fa0c49b-utilities\") pod \"redhat-marketplace-dsw7n\" (UID: \"c1de99b5-79b7-45c9-9fb1-11fc8fa0c49b\") " pod="openshift-marketplace/redhat-marketplace-dsw7n"
Nov 21 21:05:02 crc kubenswrapper[4727]: I1121 21:05:02.755058 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1de99b5-79b7-45c9-9fb1-11fc8fa0c49b-catalog-content\") pod \"redhat-marketplace-dsw7n\" (UID: \"c1de99b5-79b7-45c9-9fb1-11fc8fa0c49b\") " pod="openshift-marketplace/redhat-marketplace-dsw7n"
Nov 21 21:05:02 crc kubenswrapper[4727]: I1121 21:05:02.755089 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bsd8\" (UniqueName: \"kubernetes.io/projected/c1de99b5-79b7-45c9-9fb1-11fc8fa0c49b-kube-api-access-5bsd8\") pod \"redhat-marketplace-dsw7n\" (UID: \"c1de99b5-79b7-45c9-9fb1-11fc8fa0c49b\") " pod="openshift-marketplace/redhat-marketplace-dsw7n"
Nov 21 21:05:02 crc kubenswrapper[4727]: I1121 21:05:02.756044 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1de99b5-79b7-45c9-9fb1-11fc8fa0c49b-utilities\") pod \"redhat-marketplace-dsw7n\" (UID: \"c1de99b5-79b7-45c9-9fb1-11fc8fa0c49b\") " pod="openshift-marketplace/redhat-marketplace-dsw7n"
Nov 21 21:05:02 crc kubenswrapper[4727]: I1121 21:05:02.756181 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1de99b5-79b7-45c9-9fb1-11fc8fa0c49b-catalog-content\") pod \"redhat-marketplace-dsw7n\" (UID: \"c1de99b5-79b7-45c9-9fb1-11fc8fa0c49b\") " pod="openshift-marketplace/redhat-marketplace-dsw7n"
Nov 21 21:05:02 crc kubenswrapper[4727]: I1121 21:05:02.782732 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bsd8\" (UniqueName: \"kubernetes.io/projected/c1de99b5-79b7-45c9-9fb1-11fc8fa0c49b-kube-api-access-5bsd8\") pod \"redhat-marketplace-dsw7n\" (UID: \"c1de99b5-79b7-45c9-9fb1-11fc8fa0c49b\") " pod="openshift-marketplace/redhat-marketplace-dsw7n"
Nov 21 21:05:02 crc kubenswrapper[4727]: I1121 21:05:02.870301 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dsw7n"
Nov 21 21:05:03 crc kubenswrapper[4727]: I1121 21:05:03.437560 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dsw7n"]
Nov 21 21:05:03 crc kubenswrapper[4727]: W1121 21:05:03.440802 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1de99b5_79b7_45c9_9fb1_11fc8fa0c49b.slice/crio-6d24f7d5392dae7b05fc00f5d54578b6e57172d721ad6926594ca9b078623fa1 WatchSource:0}: Error finding container 6d24f7d5392dae7b05fc00f5d54578b6e57172d721ad6926594ca9b078623fa1: Status 404 returned error can't find the container with id 6d24f7d5392dae7b05fc00f5d54578b6e57172d721ad6926594ca9b078623fa1
Nov 21 21:05:04 crc kubenswrapper[4727]: I1121 21:05:04.467014 4727 generic.go:334] "Generic (PLEG): container finished" podID="c1de99b5-79b7-45c9-9fb1-11fc8fa0c49b" containerID="222dc2144dad9a3a0943954deec24dc4220e043c26bf0f55d0258345b95c0415" exitCode=0
Nov 21 21:05:04 crc kubenswrapper[4727]: I1121 21:05:04.467142 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dsw7n" event={"ID":"c1de99b5-79b7-45c9-9fb1-11fc8fa0c49b","Type":"ContainerDied","Data":"222dc2144dad9a3a0943954deec24dc4220e043c26bf0f55d0258345b95c0415"}
Nov 21 21:05:04 crc kubenswrapper[4727]: I1121 21:05:04.468111 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dsw7n" event={"ID":"c1de99b5-79b7-45c9-9fb1-11fc8fa0c49b","Type":"ContainerStarted","Data":"6d24f7d5392dae7b05fc00f5d54578b6e57172d721ad6926594ca9b078623fa1"}
Nov 21 21:05:04 crc kubenswrapper[4727]: I1121 21:05:04.473114 4727 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 21 21:05:06 crc kubenswrapper[4727]: I1121 21:05:06.512026 4727 generic.go:334] "Generic (PLEG): container finished" podID="c1de99b5-79b7-45c9-9fb1-11fc8fa0c49b" containerID="2cb09f3375f4ece8060a28b62378261c3cac25fe25c849f380a790df154036a6" exitCode=0
Nov 21 21:05:06 crc kubenswrapper[4727]: I1121 21:05:06.512108 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dsw7n" event={"ID":"c1de99b5-79b7-45c9-9fb1-11fc8fa0c49b","Type":"ContainerDied","Data":"2cb09f3375f4ece8060a28b62378261c3cac25fe25c849f380a790df154036a6"}
Nov 21 21:05:07 crc kubenswrapper[4727]: I1121 21:05:07.530337 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dsw7n" event={"ID":"c1de99b5-79b7-45c9-9fb1-11fc8fa0c49b","Type":"ContainerStarted","Data":"2de247fe4d826e63b2b8932ab095bd255e49a29b48f309c9b04134fe2944f70f"}
Nov 21 21:05:07 crc kubenswrapper[4727]: I1121 21:05:07.558383 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dsw7n" podStartSLOduration=3.070980078 podStartE2EDuration="5.558361086s" podCreationTimestamp="2025-11-21 21:05:02 +0000 UTC" firstStartedPulling="2025-11-21 21:05:04.472757716 +0000 UTC m=+3509.658942750" lastFinishedPulling="2025-11-21 21:05:06.960138704 +0000 UTC m=+3512.146323758" observedRunningTime="2025-11-21 21:05:07.554901914 +0000 UTC m=+3512.741086958" watchObservedRunningTime="2025-11-21 21:05:07.558361086 +0000 UTC m=+3512.744546130"
Nov 21 21:05:12 crc kubenswrapper[4727]: I1121 21:05:12.870994 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dsw7n"
Nov 21 21:05:12 crc kubenswrapper[4727]: I1121 21:05:12.871757 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dsw7n"
Nov 21 21:05:12 crc kubenswrapper[4727]: I1121 21:05:12.951142 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dsw7n"
Nov 21 21:05:13 crc kubenswrapper[4727]: I1121 21:05:13.743891 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dsw7n"
Nov 21 21:05:13 crc kubenswrapper[4727]: I1121 21:05:13.832561 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dsw7n"]
Nov 21 21:05:15 crc kubenswrapper[4727]: I1121 21:05:15.678347 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dsw7n" podUID="c1de99b5-79b7-45c9-9fb1-11fc8fa0c49b" containerName="registry-server" containerID="cri-o://2de247fe4d826e63b2b8932ab095bd255e49a29b48f309c9b04134fe2944f70f" gracePeriod=2
Nov 21 21:05:16 crc kubenswrapper[4727]: I1121 21:05:16.290822 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dsw7n"
Nov 21 21:05:16 crc kubenswrapper[4727]: I1121 21:05:16.390178 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1de99b5-79b7-45c9-9fb1-11fc8fa0c49b-utilities\") pod \"c1de99b5-79b7-45c9-9fb1-11fc8fa0c49b\" (UID: \"c1de99b5-79b7-45c9-9fb1-11fc8fa0c49b\") "
Nov 21 21:05:16 crc kubenswrapper[4727]: I1121 21:05:16.390388 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1de99b5-79b7-45c9-9fb1-11fc8fa0c49b-catalog-content\") pod \"c1de99b5-79b7-45c9-9fb1-11fc8fa0c49b\" (UID: \"c1de99b5-79b7-45c9-9fb1-11fc8fa0c49b\") "
Nov 21 21:05:16 crc kubenswrapper[4727]: I1121 21:05:16.390514 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bsd8\" (UniqueName: \"kubernetes.io/projected/c1de99b5-79b7-45c9-9fb1-11fc8fa0c49b-kube-api-access-5bsd8\") pod \"c1de99b5-79b7-45c9-9fb1-11fc8fa0c49b\" (UID: \"c1de99b5-79b7-45c9-9fb1-11fc8fa0c49b\") "
Nov 21 21:05:16 crc kubenswrapper[4727]: I1121 21:05:16.392000 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1de99b5-79b7-45c9-9fb1-11fc8fa0c49b-utilities" (OuterVolumeSpecName: "utilities") pod "c1de99b5-79b7-45c9-9fb1-11fc8fa0c49b" (UID: "c1de99b5-79b7-45c9-9fb1-11fc8fa0c49b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 21 21:05:16 crc kubenswrapper[4727]: I1121 21:05:16.399812 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1de99b5-79b7-45c9-9fb1-11fc8fa0c49b-kube-api-access-5bsd8" (OuterVolumeSpecName: "kube-api-access-5bsd8") pod "c1de99b5-79b7-45c9-9fb1-11fc8fa0c49b" (UID: "c1de99b5-79b7-45c9-9fb1-11fc8fa0c49b"). InnerVolumeSpecName "kube-api-access-5bsd8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 21:05:16 crc kubenswrapper[4727]: I1121 21:05:16.411224 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1de99b5-79b7-45c9-9fb1-11fc8fa0c49b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c1de99b5-79b7-45c9-9fb1-11fc8fa0c49b" (UID: "c1de99b5-79b7-45c9-9fb1-11fc8fa0c49b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 21 21:05:16 crc kubenswrapper[4727]: I1121 21:05:16.495733 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1de99b5-79b7-45c9-9fb1-11fc8fa0c49b-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 21 21:05:16 crc kubenswrapper[4727]: I1121 21:05:16.495794 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bsd8\" (UniqueName: \"kubernetes.io/projected/c1de99b5-79b7-45c9-9fb1-11fc8fa0c49b-kube-api-access-5bsd8\") on node \"crc\" DevicePath \"\""
Nov 21 21:05:16 crc kubenswrapper[4727]: I1121 21:05:16.495812 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1de99b5-79b7-45c9-9fb1-11fc8fa0c49b-utilities\") on node \"crc\" DevicePath \"\""
Nov 21 21:05:16 crc kubenswrapper[4727]: I1121 21:05:16.696207 4727 generic.go:334] "Generic (PLEG): container finished" podID="c1de99b5-79b7-45c9-9fb1-11fc8fa0c49b" containerID="2de247fe4d826e63b2b8932ab095bd255e49a29b48f309c9b04134fe2944f70f" exitCode=0
Nov 21 21:05:16 crc kubenswrapper[4727]: I1121 21:05:16.696300 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dsw7n" Nov 21 21:05:16 crc kubenswrapper[4727]: I1121 21:05:16.696300 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dsw7n" event={"ID":"c1de99b5-79b7-45c9-9fb1-11fc8fa0c49b","Type":"ContainerDied","Data":"2de247fe4d826e63b2b8932ab095bd255e49a29b48f309c9b04134fe2944f70f"} Nov 21 21:05:16 crc kubenswrapper[4727]: I1121 21:05:16.696466 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dsw7n" event={"ID":"c1de99b5-79b7-45c9-9fb1-11fc8fa0c49b","Type":"ContainerDied","Data":"6d24f7d5392dae7b05fc00f5d54578b6e57172d721ad6926594ca9b078623fa1"} Nov 21 21:05:16 crc kubenswrapper[4727]: I1121 21:05:16.696495 4727 scope.go:117] "RemoveContainer" containerID="2de247fe4d826e63b2b8932ab095bd255e49a29b48f309c9b04134fe2944f70f" Nov 21 21:05:16 crc kubenswrapper[4727]: I1121 21:05:16.739661 4727 scope.go:117] "RemoveContainer" containerID="2cb09f3375f4ece8060a28b62378261c3cac25fe25c849f380a790df154036a6" Nov 21 21:05:16 crc kubenswrapper[4727]: I1121 21:05:16.749255 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dsw7n"] Nov 21 21:05:16 crc kubenswrapper[4727]: I1121 21:05:16.762826 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dsw7n"] Nov 21 21:05:16 crc kubenswrapper[4727]: I1121 21:05:16.775059 4727 scope.go:117] "RemoveContainer" containerID="222dc2144dad9a3a0943954deec24dc4220e043c26bf0f55d0258345b95c0415" Nov 21 21:05:16 crc kubenswrapper[4727]: I1121 21:05:16.849261 4727 scope.go:117] "RemoveContainer" containerID="2de247fe4d826e63b2b8932ab095bd255e49a29b48f309c9b04134fe2944f70f" Nov 21 21:05:16 crc kubenswrapper[4727]: E1121 21:05:16.850230 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"2de247fe4d826e63b2b8932ab095bd255e49a29b48f309c9b04134fe2944f70f\": container with ID starting with 2de247fe4d826e63b2b8932ab095bd255e49a29b48f309c9b04134fe2944f70f not found: ID does not exist" containerID="2de247fe4d826e63b2b8932ab095bd255e49a29b48f309c9b04134fe2944f70f" Nov 21 21:05:16 crc kubenswrapper[4727]: I1121 21:05:16.850291 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2de247fe4d826e63b2b8932ab095bd255e49a29b48f309c9b04134fe2944f70f"} err="failed to get container status \"2de247fe4d826e63b2b8932ab095bd255e49a29b48f309c9b04134fe2944f70f\": rpc error: code = NotFound desc = could not find container \"2de247fe4d826e63b2b8932ab095bd255e49a29b48f309c9b04134fe2944f70f\": container with ID starting with 2de247fe4d826e63b2b8932ab095bd255e49a29b48f309c9b04134fe2944f70f not found: ID does not exist" Nov 21 21:05:16 crc kubenswrapper[4727]: I1121 21:05:16.850334 4727 scope.go:117] "RemoveContainer" containerID="2cb09f3375f4ece8060a28b62378261c3cac25fe25c849f380a790df154036a6" Nov 21 21:05:16 crc kubenswrapper[4727]: E1121 21:05:16.850745 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cb09f3375f4ece8060a28b62378261c3cac25fe25c849f380a790df154036a6\": container with ID starting with 2cb09f3375f4ece8060a28b62378261c3cac25fe25c849f380a790df154036a6 not found: ID does not exist" containerID="2cb09f3375f4ece8060a28b62378261c3cac25fe25c849f380a790df154036a6" Nov 21 21:05:16 crc kubenswrapper[4727]: I1121 21:05:16.850789 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cb09f3375f4ece8060a28b62378261c3cac25fe25c849f380a790df154036a6"} err="failed to get container status \"2cb09f3375f4ece8060a28b62378261c3cac25fe25c849f380a790df154036a6\": rpc error: code = NotFound desc = could not find container \"2cb09f3375f4ece8060a28b62378261c3cac25fe25c849f380a790df154036a6\": container with ID 
starting with 2cb09f3375f4ece8060a28b62378261c3cac25fe25c849f380a790df154036a6 not found: ID does not exist" Nov 21 21:05:16 crc kubenswrapper[4727]: I1121 21:05:16.850824 4727 scope.go:117] "RemoveContainer" containerID="222dc2144dad9a3a0943954deec24dc4220e043c26bf0f55d0258345b95c0415" Nov 21 21:05:16 crc kubenswrapper[4727]: E1121 21:05:16.851348 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"222dc2144dad9a3a0943954deec24dc4220e043c26bf0f55d0258345b95c0415\": container with ID starting with 222dc2144dad9a3a0943954deec24dc4220e043c26bf0f55d0258345b95c0415 not found: ID does not exist" containerID="222dc2144dad9a3a0943954deec24dc4220e043c26bf0f55d0258345b95c0415" Nov 21 21:05:16 crc kubenswrapper[4727]: I1121 21:05:16.851401 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"222dc2144dad9a3a0943954deec24dc4220e043c26bf0f55d0258345b95c0415"} err="failed to get container status \"222dc2144dad9a3a0943954deec24dc4220e043c26bf0f55d0258345b95c0415\": rpc error: code = NotFound desc = could not find container \"222dc2144dad9a3a0943954deec24dc4220e043c26bf0f55d0258345b95c0415\": container with ID starting with 222dc2144dad9a3a0943954deec24dc4220e043c26bf0f55d0258345b95c0415 not found: ID does not exist" Nov 21 21:05:17 crc kubenswrapper[4727]: I1121 21:05:17.528366 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1de99b5-79b7-45c9-9fb1-11fc8fa0c49b" path="/var/lib/kubelet/pods/c1de99b5-79b7-45c9-9fb1-11fc8fa0c49b/volumes" Nov 21 21:06:21 crc kubenswrapper[4727]: E1121 21:06:21.896532 4727 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.179:55918->38.102.83.179:43311: write tcp 38.102.83.179:55918->38.102.83.179:43311: write: connection reset by peer Nov 21 21:06:43 crc kubenswrapper[4727]: I1121 21:06:43.336331 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 21:06:43 crc kubenswrapper[4727]: I1121 21:06:43.337004 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 21:07:13 crc kubenswrapper[4727]: I1121 21:07:13.336163 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 21:07:13 crc kubenswrapper[4727]: I1121 21:07:13.337032 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 21:07:43 crc kubenswrapper[4727]: I1121 21:07:43.335145 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 21:07:43 crc kubenswrapper[4727]: I1121 21:07:43.335765 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 21:07:43 crc kubenswrapper[4727]: I1121 21:07:43.335896 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" Nov 21 21:07:43 crc kubenswrapper[4727]: I1121 21:07:43.337096 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"988190cce55029fd82d61a3728f9a88539103123b868f6bdaa1276c71f10d2ea"} pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 21:07:43 crc kubenswrapper[4727]: I1121 21:07:43.337191 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" containerID="cri-o://988190cce55029fd82d61a3728f9a88539103123b868f6bdaa1276c71f10d2ea" gracePeriod=600 Nov 21 21:07:43 crc kubenswrapper[4727]: E1121 21:07:43.468195 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:07:44 crc kubenswrapper[4727]: I1121 21:07:44.061513 4727 generic.go:334] "Generic (PLEG): container finished" podID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerID="988190cce55029fd82d61a3728f9a88539103123b868f6bdaa1276c71f10d2ea" exitCode=0 Nov 21 21:07:44 crc kubenswrapper[4727]: I1121 21:07:44.061601 4727 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" event={"ID":"b58aef8f-f223-47d8-a2e6-4a80aeeeec42","Type":"ContainerDied","Data":"988190cce55029fd82d61a3728f9a88539103123b868f6bdaa1276c71f10d2ea"} Nov 21 21:07:44 crc kubenswrapper[4727]: I1121 21:07:44.061706 4727 scope.go:117] "RemoveContainer" containerID="76b34fbdc61da3eb45229b74b5fafab538e0c716c0c7ecd663bd2b1ee4f087d2" Nov 21 21:07:44 crc kubenswrapper[4727]: I1121 21:07:44.063580 4727 scope.go:117] "RemoveContainer" containerID="988190cce55029fd82d61a3728f9a88539103123b868f6bdaa1276c71f10d2ea" Nov 21 21:07:44 crc kubenswrapper[4727]: E1121 21:07:44.064640 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:07:59 crc kubenswrapper[4727]: I1121 21:07:59.499862 4727 scope.go:117] "RemoveContainer" containerID="988190cce55029fd82d61a3728f9a88539103123b868f6bdaa1276c71f10d2ea" Nov 21 21:07:59 crc kubenswrapper[4727]: E1121 21:07:59.501072 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:08:11 crc kubenswrapper[4727]: I1121 21:08:11.499692 4727 scope.go:117] "RemoveContainer" containerID="988190cce55029fd82d61a3728f9a88539103123b868f6bdaa1276c71f10d2ea" Nov 21 21:08:11 crc kubenswrapper[4727]: E1121 
21:08:11.500926 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:08:22 crc kubenswrapper[4727]: I1121 21:08:22.781164 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7trcb"] Nov 21 21:08:22 crc kubenswrapper[4727]: E1121 21:08:22.783515 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1de99b5-79b7-45c9-9fb1-11fc8fa0c49b" containerName="extract-utilities" Nov 21 21:08:22 crc kubenswrapper[4727]: I1121 21:08:22.783658 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1de99b5-79b7-45c9-9fb1-11fc8fa0c49b" containerName="extract-utilities" Nov 21 21:08:22 crc kubenswrapper[4727]: E1121 21:08:22.783741 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1de99b5-79b7-45c9-9fb1-11fc8fa0c49b" containerName="extract-content" Nov 21 21:08:22 crc kubenswrapper[4727]: I1121 21:08:22.783796 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1de99b5-79b7-45c9-9fb1-11fc8fa0c49b" containerName="extract-content" Nov 21 21:08:22 crc kubenswrapper[4727]: E1121 21:08:22.783877 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1de99b5-79b7-45c9-9fb1-11fc8fa0c49b" containerName="registry-server" Nov 21 21:08:22 crc kubenswrapper[4727]: I1121 21:08:22.783936 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1de99b5-79b7-45c9-9fb1-11fc8fa0c49b" containerName="registry-server" Nov 21 21:08:22 crc kubenswrapper[4727]: I1121 21:08:22.784250 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1de99b5-79b7-45c9-9fb1-11fc8fa0c49b" 
containerName="registry-server" Nov 21 21:08:22 crc kubenswrapper[4727]: I1121 21:08:22.785976 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7trcb" Nov 21 21:08:22 crc kubenswrapper[4727]: I1121 21:08:22.812120 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7trcb"] Nov 21 21:08:22 crc kubenswrapper[4727]: I1121 21:08:22.869997 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htdvm\" (UniqueName: \"kubernetes.io/projected/b037850f-aef2-41ee-92b5-800537baefe2-kube-api-access-htdvm\") pod \"redhat-operators-7trcb\" (UID: \"b037850f-aef2-41ee-92b5-800537baefe2\") " pod="openshift-marketplace/redhat-operators-7trcb" Nov 21 21:08:22 crc kubenswrapper[4727]: I1121 21:08:22.870710 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b037850f-aef2-41ee-92b5-800537baefe2-catalog-content\") pod \"redhat-operators-7trcb\" (UID: \"b037850f-aef2-41ee-92b5-800537baefe2\") " pod="openshift-marketplace/redhat-operators-7trcb" Nov 21 21:08:22 crc kubenswrapper[4727]: I1121 21:08:22.870852 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b037850f-aef2-41ee-92b5-800537baefe2-utilities\") pod \"redhat-operators-7trcb\" (UID: \"b037850f-aef2-41ee-92b5-800537baefe2\") " pod="openshift-marketplace/redhat-operators-7trcb" Nov 21 21:08:22 crc kubenswrapper[4727]: I1121 21:08:22.974625 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htdvm\" (UniqueName: \"kubernetes.io/projected/b037850f-aef2-41ee-92b5-800537baefe2-kube-api-access-htdvm\") pod \"redhat-operators-7trcb\" (UID: \"b037850f-aef2-41ee-92b5-800537baefe2\") " 
pod="openshift-marketplace/redhat-operators-7trcb" Nov 21 21:08:22 crc kubenswrapper[4727]: I1121 21:08:22.975214 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b037850f-aef2-41ee-92b5-800537baefe2-catalog-content\") pod \"redhat-operators-7trcb\" (UID: \"b037850f-aef2-41ee-92b5-800537baefe2\") " pod="openshift-marketplace/redhat-operators-7trcb" Nov 21 21:08:22 crc kubenswrapper[4727]: I1121 21:08:22.975362 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b037850f-aef2-41ee-92b5-800537baefe2-utilities\") pod \"redhat-operators-7trcb\" (UID: \"b037850f-aef2-41ee-92b5-800537baefe2\") " pod="openshift-marketplace/redhat-operators-7trcb" Nov 21 21:08:22 crc kubenswrapper[4727]: I1121 21:08:22.975786 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b037850f-aef2-41ee-92b5-800537baefe2-catalog-content\") pod \"redhat-operators-7trcb\" (UID: \"b037850f-aef2-41ee-92b5-800537baefe2\") " pod="openshift-marketplace/redhat-operators-7trcb" Nov 21 21:08:22 crc kubenswrapper[4727]: I1121 21:08:22.975831 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b037850f-aef2-41ee-92b5-800537baefe2-utilities\") pod \"redhat-operators-7trcb\" (UID: \"b037850f-aef2-41ee-92b5-800537baefe2\") " pod="openshift-marketplace/redhat-operators-7trcb" Nov 21 21:08:22 crc kubenswrapper[4727]: I1121 21:08:22.995494 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htdvm\" (UniqueName: \"kubernetes.io/projected/b037850f-aef2-41ee-92b5-800537baefe2-kube-api-access-htdvm\") pod \"redhat-operators-7trcb\" (UID: \"b037850f-aef2-41ee-92b5-800537baefe2\") " pod="openshift-marketplace/redhat-operators-7trcb" Nov 21 21:08:23 crc 
kubenswrapper[4727]: I1121 21:08:23.127840 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7trcb" Nov 21 21:08:23 crc kubenswrapper[4727]: I1121 21:08:23.646383 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7trcb"] Nov 21 21:08:23 crc kubenswrapper[4727]: I1121 21:08:23.701018 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7trcb" event={"ID":"b037850f-aef2-41ee-92b5-800537baefe2","Type":"ContainerStarted","Data":"0d1eb8056b84056493ec3a084a1f2712bbbe1e4398d36613b73c94cfc38edb6e"} Nov 21 21:08:24 crc kubenswrapper[4727]: I1121 21:08:24.500443 4727 scope.go:117] "RemoveContainer" containerID="988190cce55029fd82d61a3728f9a88539103123b868f6bdaa1276c71f10d2ea" Nov 21 21:08:24 crc kubenswrapper[4727]: E1121 21:08:24.501102 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:08:24 crc kubenswrapper[4727]: I1121 21:08:24.717281 4727 generic.go:334] "Generic (PLEG): container finished" podID="b037850f-aef2-41ee-92b5-800537baefe2" containerID="ea2efd0319963b9f8aa343feb7deedba60d5713034cef5b83e6540357296616e" exitCode=0 Nov 21 21:08:24 crc kubenswrapper[4727]: I1121 21:08:24.717335 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7trcb" event={"ID":"b037850f-aef2-41ee-92b5-800537baefe2","Type":"ContainerDied","Data":"ea2efd0319963b9f8aa343feb7deedba60d5713034cef5b83e6540357296616e"} Nov 21 21:08:25 crc kubenswrapper[4727]: I1121 21:08:25.731926 4727 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7trcb" event={"ID":"b037850f-aef2-41ee-92b5-800537baefe2","Type":"ContainerStarted","Data":"cebadb1b119a2fb2847c1bc5805568d1d95f24c3285ccb1436fc4411e28bee91"} Nov 21 21:08:30 crc kubenswrapper[4727]: I1121 21:08:30.806879 4727 generic.go:334] "Generic (PLEG): container finished" podID="b037850f-aef2-41ee-92b5-800537baefe2" containerID="cebadb1b119a2fb2847c1bc5805568d1d95f24c3285ccb1436fc4411e28bee91" exitCode=0 Nov 21 21:08:30 crc kubenswrapper[4727]: I1121 21:08:30.808720 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7trcb" event={"ID":"b037850f-aef2-41ee-92b5-800537baefe2","Type":"ContainerDied","Data":"cebadb1b119a2fb2847c1bc5805568d1d95f24c3285ccb1436fc4411e28bee91"} Nov 21 21:08:31 crc kubenswrapper[4727]: I1121 21:08:31.825418 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7trcb" event={"ID":"b037850f-aef2-41ee-92b5-800537baefe2","Type":"ContainerStarted","Data":"527f20391d02c5ac2bbc78cb3c2e2662627f0450aa6ab806f721b7b2a8874b77"} Nov 21 21:08:31 crc kubenswrapper[4727]: I1121 21:08:31.858291 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7trcb" podStartSLOduration=3.353837242 podStartE2EDuration="9.858270854s" podCreationTimestamp="2025-11-21 21:08:22 +0000 UTC" firstStartedPulling="2025-11-21 21:08:24.723280278 +0000 UTC m=+3709.909465322" lastFinishedPulling="2025-11-21 21:08:31.22771388 +0000 UTC m=+3716.413898934" observedRunningTime="2025-11-21 21:08:31.847491805 +0000 UTC m=+3717.033676849" watchObservedRunningTime="2025-11-21 21:08:31.858270854 +0000 UTC m=+3717.044455898" Nov 21 21:08:33 crc kubenswrapper[4727]: I1121 21:08:33.127911 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7trcb" Nov 21 21:08:33 crc kubenswrapper[4727]: I1121 
21:08:33.128248 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7trcb" Nov 21 21:08:34 crc kubenswrapper[4727]: I1121 21:08:34.182850 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7trcb" podUID="b037850f-aef2-41ee-92b5-800537baefe2" containerName="registry-server" probeResult="failure" output=< Nov 21 21:08:34 crc kubenswrapper[4727]: timeout: failed to connect service ":50051" within 1s Nov 21 21:08:34 crc kubenswrapper[4727]: > Nov 21 21:08:36 crc kubenswrapper[4727]: I1121 21:08:36.499262 4727 scope.go:117] "RemoveContainer" containerID="988190cce55029fd82d61a3728f9a88539103123b868f6bdaa1276c71f10d2ea" Nov 21 21:08:36 crc kubenswrapper[4727]: E1121 21:08:36.500309 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:08:43 crc kubenswrapper[4727]: I1121 21:08:43.225143 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7trcb" Nov 21 21:08:43 crc kubenswrapper[4727]: I1121 21:08:43.299605 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7trcb" Nov 21 21:08:43 crc kubenswrapper[4727]: I1121 21:08:43.525246 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7trcb"] Nov 21 21:08:44 crc kubenswrapper[4727]: I1121 21:08:44.984997 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7trcb" 
podUID="b037850f-aef2-41ee-92b5-800537baefe2" containerName="registry-server" containerID="cri-o://527f20391d02c5ac2bbc78cb3c2e2662627f0450aa6ab806f721b7b2a8874b77" gracePeriod=2 Nov 21 21:08:45 crc kubenswrapper[4727]: I1121 21:08:45.611836 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7trcb" Nov 21 21:08:45 crc kubenswrapper[4727]: I1121 21:08:45.716791 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b037850f-aef2-41ee-92b5-800537baefe2-utilities\") pod \"b037850f-aef2-41ee-92b5-800537baefe2\" (UID: \"b037850f-aef2-41ee-92b5-800537baefe2\") " Nov 21 21:08:45 crc kubenswrapper[4727]: I1121 21:08:45.717159 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htdvm\" (UniqueName: \"kubernetes.io/projected/b037850f-aef2-41ee-92b5-800537baefe2-kube-api-access-htdvm\") pod \"b037850f-aef2-41ee-92b5-800537baefe2\" (UID: \"b037850f-aef2-41ee-92b5-800537baefe2\") " Nov 21 21:08:45 crc kubenswrapper[4727]: I1121 21:08:45.718999 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b037850f-aef2-41ee-92b5-800537baefe2-utilities" (OuterVolumeSpecName: "utilities") pod "b037850f-aef2-41ee-92b5-800537baefe2" (UID: "b037850f-aef2-41ee-92b5-800537baefe2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 21:08:45 crc kubenswrapper[4727]: I1121 21:08:45.719030 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b037850f-aef2-41ee-92b5-800537baefe2-catalog-content\") pod \"b037850f-aef2-41ee-92b5-800537baefe2\" (UID: \"b037850f-aef2-41ee-92b5-800537baefe2\") " Nov 21 21:08:45 crc kubenswrapper[4727]: I1121 21:08:45.720693 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b037850f-aef2-41ee-92b5-800537baefe2-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 21:08:45 crc kubenswrapper[4727]: I1121 21:08:45.732612 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b037850f-aef2-41ee-92b5-800537baefe2-kube-api-access-htdvm" (OuterVolumeSpecName: "kube-api-access-htdvm") pod "b037850f-aef2-41ee-92b5-800537baefe2" (UID: "b037850f-aef2-41ee-92b5-800537baefe2"). InnerVolumeSpecName "kube-api-access-htdvm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 21:08:45 crc kubenswrapper[4727]: I1121 21:08:45.817202 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b037850f-aef2-41ee-92b5-800537baefe2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b037850f-aef2-41ee-92b5-800537baefe2" (UID: "b037850f-aef2-41ee-92b5-800537baefe2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 21:08:45 crc kubenswrapper[4727]: I1121 21:08:45.823041 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htdvm\" (UniqueName: \"kubernetes.io/projected/b037850f-aef2-41ee-92b5-800537baefe2-kube-api-access-htdvm\") on node \"crc\" DevicePath \"\"" Nov 21 21:08:45 crc kubenswrapper[4727]: I1121 21:08:45.823082 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b037850f-aef2-41ee-92b5-800537baefe2-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 21:08:46 crc kubenswrapper[4727]: I1121 21:08:46.000219 4727 generic.go:334] "Generic (PLEG): container finished" podID="b037850f-aef2-41ee-92b5-800537baefe2" containerID="527f20391d02c5ac2bbc78cb3c2e2662627f0450aa6ab806f721b7b2a8874b77" exitCode=0 Nov 21 21:08:46 crc kubenswrapper[4727]: I1121 21:08:46.000271 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7trcb" event={"ID":"b037850f-aef2-41ee-92b5-800537baefe2","Type":"ContainerDied","Data":"527f20391d02c5ac2bbc78cb3c2e2662627f0450aa6ab806f721b7b2a8874b77"} Nov 21 21:08:46 crc kubenswrapper[4727]: I1121 21:08:46.000304 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7trcb" event={"ID":"b037850f-aef2-41ee-92b5-800537baefe2","Type":"ContainerDied","Data":"0d1eb8056b84056493ec3a084a1f2712bbbe1e4398d36613b73c94cfc38edb6e"} Nov 21 21:08:46 crc kubenswrapper[4727]: I1121 21:08:46.000324 4727 scope.go:117] "RemoveContainer" containerID="527f20391d02c5ac2bbc78cb3c2e2662627f0450aa6ab806f721b7b2a8874b77" Nov 21 21:08:46 crc kubenswrapper[4727]: I1121 21:08:46.000488 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7trcb" Nov 21 21:08:46 crc kubenswrapper[4727]: I1121 21:08:46.041238 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7trcb"] Nov 21 21:08:46 crc kubenswrapper[4727]: I1121 21:08:46.050863 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7trcb"] Nov 21 21:08:46 crc kubenswrapper[4727]: I1121 21:08:46.061712 4727 scope.go:117] "RemoveContainer" containerID="cebadb1b119a2fb2847c1bc5805568d1d95f24c3285ccb1436fc4411e28bee91" Nov 21 21:08:46 crc kubenswrapper[4727]: I1121 21:08:46.083759 4727 scope.go:117] "RemoveContainer" containerID="ea2efd0319963b9f8aa343feb7deedba60d5713034cef5b83e6540357296616e" Nov 21 21:08:46 crc kubenswrapper[4727]: I1121 21:08:46.150728 4727 scope.go:117] "RemoveContainer" containerID="527f20391d02c5ac2bbc78cb3c2e2662627f0450aa6ab806f721b7b2a8874b77" Nov 21 21:08:46 crc kubenswrapper[4727]: E1121 21:08:46.151291 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"527f20391d02c5ac2bbc78cb3c2e2662627f0450aa6ab806f721b7b2a8874b77\": container with ID starting with 527f20391d02c5ac2bbc78cb3c2e2662627f0450aa6ab806f721b7b2a8874b77 not found: ID does not exist" containerID="527f20391d02c5ac2bbc78cb3c2e2662627f0450aa6ab806f721b7b2a8874b77" Nov 21 21:08:46 crc kubenswrapper[4727]: I1121 21:08:46.151333 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"527f20391d02c5ac2bbc78cb3c2e2662627f0450aa6ab806f721b7b2a8874b77"} err="failed to get container status \"527f20391d02c5ac2bbc78cb3c2e2662627f0450aa6ab806f721b7b2a8874b77\": rpc error: code = NotFound desc = could not find container \"527f20391d02c5ac2bbc78cb3c2e2662627f0450aa6ab806f721b7b2a8874b77\": container with ID starting with 527f20391d02c5ac2bbc78cb3c2e2662627f0450aa6ab806f721b7b2a8874b77 not found: ID does 
not exist" Nov 21 21:08:46 crc kubenswrapper[4727]: I1121 21:08:46.151360 4727 scope.go:117] "RemoveContainer" containerID="cebadb1b119a2fb2847c1bc5805568d1d95f24c3285ccb1436fc4411e28bee91" Nov 21 21:08:46 crc kubenswrapper[4727]: E1121 21:08:46.151655 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cebadb1b119a2fb2847c1bc5805568d1d95f24c3285ccb1436fc4411e28bee91\": container with ID starting with cebadb1b119a2fb2847c1bc5805568d1d95f24c3285ccb1436fc4411e28bee91 not found: ID does not exist" containerID="cebadb1b119a2fb2847c1bc5805568d1d95f24c3285ccb1436fc4411e28bee91" Nov 21 21:08:46 crc kubenswrapper[4727]: I1121 21:08:46.151684 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cebadb1b119a2fb2847c1bc5805568d1d95f24c3285ccb1436fc4411e28bee91"} err="failed to get container status \"cebadb1b119a2fb2847c1bc5805568d1d95f24c3285ccb1436fc4411e28bee91\": rpc error: code = NotFound desc = could not find container \"cebadb1b119a2fb2847c1bc5805568d1d95f24c3285ccb1436fc4411e28bee91\": container with ID starting with cebadb1b119a2fb2847c1bc5805568d1d95f24c3285ccb1436fc4411e28bee91 not found: ID does not exist" Nov 21 21:08:46 crc kubenswrapper[4727]: I1121 21:08:46.151705 4727 scope.go:117] "RemoveContainer" containerID="ea2efd0319963b9f8aa343feb7deedba60d5713034cef5b83e6540357296616e" Nov 21 21:08:46 crc kubenswrapper[4727]: E1121 21:08:46.152006 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea2efd0319963b9f8aa343feb7deedba60d5713034cef5b83e6540357296616e\": container with ID starting with ea2efd0319963b9f8aa343feb7deedba60d5713034cef5b83e6540357296616e not found: ID does not exist" containerID="ea2efd0319963b9f8aa343feb7deedba60d5713034cef5b83e6540357296616e" Nov 21 21:08:46 crc kubenswrapper[4727]: I1121 21:08:46.152041 4727 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea2efd0319963b9f8aa343feb7deedba60d5713034cef5b83e6540357296616e"} err="failed to get container status \"ea2efd0319963b9f8aa343feb7deedba60d5713034cef5b83e6540357296616e\": rpc error: code = NotFound desc = could not find container \"ea2efd0319963b9f8aa343feb7deedba60d5713034cef5b83e6540357296616e\": container with ID starting with ea2efd0319963b9f8aa343feb7deedba60d5713034cef5b83e6540357296616e not found: ID does not exist" Nov 21 21:08:47 crc kubenswrapper[4727]: I1121 21:08:47.499245 4727 scope.go:117] "RemoveContainer" containerID="988190cce55029fd82d61a3728f9a88539103123b868f6bdaa1276c71f10d2ea" Nov 21 21:08:47 crc kubenswrapper[4727]: E1121 21:08:47.500132 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:08:47 crc kubenswrapper[4727]: I1121 21:08:47.521295 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b037850f-aef2-41ee-92b5-800537baefe2" path="/var/lib/kubelet/pods/b037850f-aef2-41ee-92b5-800537baefe2/volumes" Nov 21 21:09:02 crc kubenswrapper[4727]: I1121 21:09:02.500036 4727 scope.go:117] "RemoveContainer" containerID="988190cce55029fd82d61a3728f9a88539103123b868f6bdaa1276c71f10d2ea" Nov 21 21:09:02 crc kubenswrapper[4727]: E1121 21:09:02.500838 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:09:13 crc kubenswrapper[4727]: I1121 21:09:13.500479 4727 scope.go:117] "RemoveContainer" containerID="988190cce55029fd82d61a3728f9a88539103123b868f6bdaa1276c71f10d2ea" Nov 21 21:09:13 crc kubenswrapper[4727]: E1121 21:09:13.501577 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:09:27 crc kubenswrapper[4727]: I1121 21:09:27.499510 4727 scope.go:117] "RemoveContainer" containerID="988190cce55029fd82d61a3728f9a88539103123b868f6bdaa1276c71f10d2ea" Nov 21 21:09:27 crc kubenswrapper[4727]: E1121 21:09:27.500316 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:09:39 crc kubenswrapper[4727]: I1121 21:09:39.500806 4727 scope.go:117] "RemoveContainer" containerID="988190cce55029fd82d61a3728f9a88539103123b868f6bdaa1276c71f10d2ea" Nov 21 21:09:39 crc kubenswrapper[4727]: E1121 21:09:39.502348 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:09:43 crc kubenswrapper[4727]: I1121 21:09:43.608434 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hd255"] Nov 21 21:09:43 crc kubenswrapper[4727]: E1121 21:09:43.609769 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b037850f-aef2-41ee-92b5-800537baefe2" containerName="extract-utilities" Nov 21 21:09:43 crc kubenswrapper[4727]: I1121 21:09:43.609790 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b037850f-aef2-41ee-92b5-800537baefe2" containerName="extract-utilities" Nov 21 21:09:43 crc kubenswrapper[4727]: E1121 21:09:43.609811 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b037850f-aef2-41ee-92b5-800537baefe2" containerName="registry-server" Nov 21 21:09:43 crc kubenswrapper[4727]: I1121 21:09:43.609819 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b037850f-aef2-41ee-92b5-800537baefe2" containerName="registry-server" Nov 21 21:09:43 crc kubenswrapper[4727]: E1121 21:09:43.609835 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b037850f-aef2-41ee-92b5-800537baefe2" containerName="extract-content" Nov 21 21:09:43 crc kubenswrapper[4727]: I1121 21:09:43.609844 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b037850f-aef2-41ee-92b5-800537baefe2" containerName="extract-content" Nov 21 21:09:43 crc kubenswrapper[4727]: I1121 21:09:43.610197 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="b037850f-aef2-41ee-92b5-800537baefe2" containerName="registry-server" Nov 21 21:09:43 crc kubenswrapper[4727]: I1121 21:09:43.612416 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hd255" Nov 21 21:09:43 crc kubenswrapper[4727]: I1121 21:09:43.629035 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hd255"] Nov 21 21:09:43 crc kubenswrapper[4727]: I1121 21:09:43.749740 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs9z6\" (UniqueName: \"kubernetes.io/projected/b2854b20-61b3-4a96-b677-22c2179bcef3-kube-api-access-cs9z6\") pod \"certified-operators-hd255\" (UID: \"b2854b20-61b3-4a96-b677-22c2179bcef3\") " pod="openshift-marketplace/certified-operators-hd255" Nov 21 21:09:43 crc kubenswrapper[4727]: I1121 21:09:43.750135 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2854b20-61b3-4a96-b677-22c2179bcef3-utilities\") pod \"certified-operators-hd255\" (UID: \"b2854b20-61b3-4a96-b677-22c2179bcef3\") " pod="openshift-marketplace/certified-operators-hd255" Nov 21 21:09:43 crc kubenswrapper[4727]: I1121 21:09:43.751042 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2854b20-61b3-4a96-b677-22c2179bcef3-catalog-content\") pod \"certified-operators-hd255\" (UID: \"b2854b20-61b3-4a96-b677-22c2179bcef3\") " pod="openshift-marketplace/certified-operators-hd255" Nov 21 21:09:43 crc kubenswrapper[4727]: I1121 21:09:43.854066 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cs9z6\" (UniqueName: \"kubernetes.io/projected/b2854b20-61b3-4a96-b677-22c2179bcef3-kube-api-access-cs9z6\") pod \"certified-operators-hd255\" (UID: \"b2854b20-61b3-4a96-b677-22c2179bcef3\") " pod="openshift-marketplace/certified-operators-hd255" Nov 21 21:09:43 crc kubenswrapper[4727]: I1121 21:09:43.854158 4727 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2854b20-61b3-4a96-b677-22c2179bcef3-utilities\") pod \"certified-operators-hd255\" (UID: \"b2854b20-61b3-4a96-b677-22c2179bcef3\") " pod="openshift-marketplace/certified-operators-hd255" Nov 21 21:09:43 crc kubenswrapper[4727]: I1121 21:09:43.854351 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2854b20-61b3-4a96-b677-22c2179bcef3-catalog-content\") pod \"certified-operators-hd255\" (UID: \"b2854b20-61b3-4a96-b677-22c2179bcef3\") " pod="openshift-marketplace/certified-operators-hd255" Nov 21 21:09:43 crc kubenswrapper[4727]: I1121 21:09:43.854705 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2854b20-61b3-4a96-b677-22c2179bcef3-utilities\") pod \"certified-operators-hd255\" (UID: \"b2854b20-61b3-4a96-b677-22c2179bcef3\") " pod="openshift-marketplace/certified-operators-hd255" Nov 21 21:09:43 crc kubenswrapper[4727]: I1121 21:09:43.854828 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2854b20-61b3-4a96-b677-22c2179bcef3-catalog-content\") pod \"certified-operators-hd255\" (UID: \"b2854b20-61b3-4a96-b677-22c2179bcef3\") " pod="openshift-marketplace/certified-operators-hd255" Nov 21 21:09:43 crc kubenswrapper[4727]: I1121 21:09:43.884267 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cs9z6\" (UniqueName: \"kubernetes.io/projected/b2854b20-61b3-4a96-b677-22c2179bcef3-kube-api-access-cs9z6\") pod \"certified-operators-hd255\" (UID: \"b2854b20-61b3-4a96-b677-22c2179bcef3\") " pod="openshift-marketplace/certified-operators-hd255" Nov 21 21:09:43 crc kubenswrapper[4727]: I1121 21:09:43.945268 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hd255" Nov 21 21:09:44 crc kubenswrapper[4727]: I1121 21:09:44.534176 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hd255"] Nov 21 21:09:44 crc kubenswrapper[4727]: I1121 21:09:44.807735 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hd255" event={"ID":"b2854b20-61b3-4a96-b677-22c2179bcef3","Type":"ContainerStarted","Data":"e08f75b0f1e15de71f6a3900a6c506bd8bedb4359c31e529afce326d11fc03dd"} Nov 21 21:09:44 crc kubenswrapper[4727]: I1121 21:09:44.807951 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hd255" event={"ID":"b2854b20-61b3-4a96-b677-22c2179bcef3","Type":"ContainerStarted","Data":"c9c964ff2e4715513e1bdef782e3db36d9d9ec32dbab019899875d7366d4d3fc"} Nov 21 21:09:45 crc kubenswrapper[4727]: E1121 21:09:45.084241 4727 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2854b20_61b3_4a96_b677_22c2179bcef3.slice/crio-e08f75b0f1e15de71f6a3900a6c506bd8bedb4359c31e529afce326d11fc03dd.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2854b20_61b3_4a96_b677_22c2179bcef3.slice/crio-conmon-e08f75b0f1e15de71f6a3900a6c506bd8bedb4359c31e529afce326d11fc03dd.scope\": RecentStats: unable to find data in memory cache]" Nov 21 21:09:45 crc kubenswrapper[4727]: I1121 21:09:45.825781 4727 generic.go:334] "Generic (PLEG): container finished" podID="b2854b20-61b3-4a96-b677-22c2179bcef3" containerID="e08f75b0f1e15de71f6a3900a6c506bd8bedb4359c31e529afce326d11fc03dd" exitCode=0 Nov 21 21:09:45 crc kubenswrapper[4727]: I1121 21:09:45.826217 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hd255" 
event={"ID":"b2854b20-61b3-4a96-b677-22c2179bcef3","Type":"ContainerDied","Data":"e08f75b0f1e15de71f6a3900a6c506bd8bedb4359c31e529afce326d11fc03dd"} Nov 21 21:09:45 crc kubenswrapper[4727]: I1121 21:09:45.994526 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pttmn"] Nov 21 21:09:45 crc kubenswrapper[4727]: I1121 21:09:45.998051 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pttmn" Nov 21 21:09:46 crc kubenswrapper[4727]: I1121 21:09:46.005253 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pttmn"] Nov 21 21:09:46 crc kubenswrapper[4727]: I1121 21:09:46.137979 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5g84q\" (UniqueName: \"kubernetes.io/projected/89325b75-1c44-4ca6-aa69-3174e4d6b978-kube-api-access-5g84q\") pod \"community-operators-pttmn\" (UID: \"89325b75-1c44-4ca6-aa69-3174e4d6b978\") " pod="openshift-marketplace/community-operators-pttmn" Nov 21 21:09:46 crc kubenswrapper[4727]: I1121 21:09:46.138216 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89325b75-1c44-4ca6-aa69-3174e4d6b978-utilities\") pod \"community-operators-pttmn\" (UID: \"89325b75-1c44-4ca6-aa69-3174e4d6b978\") " pod="openshift-marketplace/community-operators-pttmn" Nov 21 21:09:46 crc kubenswrapper[4727]: I1121 21:09:46.138289 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89325b75-1c44-4ca6-aa69-3174e4d6b978-catalog-content\") pod \"community-operators-pttmn\" (UID: \"89325b75-1c44-4ca6-aa69-3174e4d6b978\") " pod="openshift-marketplace/community-operators-pttmn" Nov 21 21:09:46 crc kubenswrapper[4727]: I1121 21:09:46.240840 
4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5g84q\" (UniqueName: \"kubernetes.io/projected/89325b75-1c44-4ca6-aa69-3174e4d6b978-kube-api-access-5g84q\") pod \"community-operators-pttmn\" (UID: \"89325b75-1c44-4ca6-aa69-3174e4d6b978\") " pod="openshift-marketplace/community-operators-pttmn" Nov 21 21:09:46 crc kubenswrapper[4727]: I1121 21:09:46.241076 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89325b75-1c44-4ca6-aa69-3174e4d6b978-utilities\") pod \"community-operators-pttmn\" (UID: \"89325b75-1c44-4ca6-aa69-3174e4d6b978\") " pod="openshift-marketplace/community-operators-pttmn" Nov 21 21:09:46 crc kubenswrapper[4727]: I1121 21:09:46.241179 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89325b75-1c44-4ca6-aa69-3174e4d6b978-catalog-content\") pod \"community-operators-pttmn\" (UID: \"89325b75-1c44-4ca6-aa69-3174e4d6b978\") " pod="openshift-marketplace/community-operators-pttmn" Nov 21 21:09:46 crc kubenswrapper[4727]: I1121 21:09:46.242236 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89325b75-1c44-4ca6-aa69-3174e4d6b978-catalog-content\") pod \"community-operators-pttmn\" (UID: \"89325b75-1c44-4ca6-aa69-3174e4d6b978\") " pod="openshift-marketplace/community-operators-pttmn" Nov 21 21:09:46 crc kubenswrapper[4727]: I1121 21:09:46.242513 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89325b75-1c44-4ca6-aa69-3174e4d6b978-utilities\") pod \"community-operators-pttmn\" (UID: \"89325b75-1c44-4ca6-aa69-3174e4d6b978\") " pod="openshift-marketplace/community-operators-pttmn" Nov 21 21:09:46 crc kubenswrapper[4727]: I1121 21:09:46.262490 4727 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-5g84q\" (UniqueName: \"kubernetes.io/projected/89325b75-1c44-4ca6-aa69-3174e4d6b978-kube-api-access-5g84q\") pod \"community-operators-pttmn\" (UID: \"89325b75-1c44-4ca6-aa69-3174e4d6b978\") " pod="openshift-marketplace/community-operators-pttmn" Nov 21 21:09:46 crc kubenswrapper[4727]: I1121 21:09:46.336039 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pttmn" Nov 21 21:09:46 crc kubenswrapper[4727]: I1121 21:09:46.921425 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pttmn"] Nov 21 21:09:47 crc kubenswrapper[4727]: I1121 21:09:47.872625 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hd255" event={"ID":"b2854b20-61b3-4a96-b677-22c2179bcef3","Type":"ContainerStarted","Data":"ee9d01cf37d5b1c2d88722ad85d7ab2aa276454474ebff61896d791ddbf77bc0"} Nov 21 21:09:47 crc kubenswrapper[4727]: I1121 21:09:47.879825 4727 generic.go:334] "Generic (PLEG): container finished" podID="89325b75-1c44-4ca6-aa69-3174e4d6b978" containerID="151a409e78c0117c753696103f35df9969c8af0c36fb05bd04877ddce9355952" exitCode=0 Nov 21 21:09:47 crc kubenswrapper[4727]: I1121 21:09:47.879868 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pttmn" event={"ID":"89325b75-1c44-4ca6-aa69-3174e4d6b978","Type":"ContainerDied","Data":"151a409e78c0117c753696103f35df9969c8af0c36fb05bd04877ddce9355952"} Nov 21 21:09:47 crc kubenswrapper[4727]: I1121 21:09:47.879896 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pttmn" event={"ID":"89325b75-1c44-4ca6-aa69-3174e4d6b978","Type":"ContainerStarted","Data":"23af42791bcd5bcfc813e74a8ff3fe317e9859d9896bb8d7f806c7f36487faf7"} Nov 21 21:09:48 crc kubenswrapper[4727]: I1121 21:09:48.894998 4727 generic.go:334] "Generic (PLEG): 
container finished" podID="b2854b20-61b3-4a96-b677-22c2179bcef3" containerID="ee9d01cf37d5b1c2d88722ad85d7ab2aa276454474ebff61896d791ddbf77bc0" exitCode=0 Nov 21 21:09:48 crc kubenswrapper[4727]: I1121 21:09:48.895066 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hd255" event={"ID":"b2854b20-61b3-4a96-b677-22c2179bcef3","Type":"ContainerDied","Data":"ee9d01cf37d5b1c2d88722ad85d7ab2aa276454474ebff61896d791ddbf77bc0"} Nov 21 21:09:48 crc kubenswrapper[4727]: I1121 21:09:48.898762 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pttmn" event={"ID":"89325b75-1c44-4ca6-aa69-3174e4d6b978","Type":"ContainerStarted","Data":"5aa86c09b38c3b8da7ff62d71fad8ad5b271450cd8ee37459ddd1d74a662d88c"} Nov 21 21:09:49 crc kubenswrapper[4727]: I1121 21:09:49.912524 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hd255" event={"ID":"b2854b20-61b3-4a96-b677-22c2179bcef3","Type":"ContainerStarted","Data":"2a376541ce57114c42e4cb5f2f53d791894b7c50dd3305620cbf28ee9d95493d"} Nov 21 21:09:49 crc kubenswrapper[4727]: I1121 21:09:49.941824 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hd255" podStartSLOduration=3.461392277 podStartE2EDuration="6.941800025s" podCreationTimestamp="2025-11-21 21:09:43 +0000 UTC" firstStartedPulling="2025-11-21 21:09:45.82983604 +0000 UTC m=+3791.016021084" lastFinishedPulling="2025-11-21 21:09:49.310243788 +0000 UTC m=+3794.496428832" observedRunningTime="2025-11-21 21:09:49.935067983 +0000 UTC m=+3795.121253017" watchObservedRunningTime="2025-11-21 21:09:49.941800025 +0000 UTC m=+3795.127985069" Nov 21 21:09:50 crc kubenswrapper[4727]: I1121 21:09:50.929193 4727 generic.go:334] "Generic (PLEG): container finished" podID="89325b75-1c44-4ca6-aa69-3174e4d6b978" 
containerID="5aa86c09b38c3b8da7ff62d71fad8ad5b271450cd8ee37459ddd1d74a662d88c" exitCode=0 Nov 21 21:09:50 crc kubenswrapper[4727]: I1121 21:09:50.929253 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pttmn" event={"ID":"89325b75-1c44-4ca6-aa69-3174e4d6b978","Type":"ContainerDied","Data":"5aa86c09b38c3b8da7ff62d71fad8ad5b271450cd8ee37459ddd1d74a662d88c"} Nov 21 21:09:51 crc kubenswrapper[4727]: I1121 21:09:51.501903 4727 scope.go:117] "RemoveContainer" containerID="988190cce55029fd82d61a3728f9a88539103123b868f6bdaa1276c71f10d2ea" Nov 21 21:09:51 crc kubenswrapper[4727]: E1121 21:09:51.502576 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:09:51 crc kubenswrapper[4727]: I1121 21:09:51.954489 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pttmn" event={"ID":"89325b75-1c44-4ca6-aa69-3174e4d6b978","Type":"ContainerStarted","Data":"95beb09fec0c204973ed596a687f94e7c99246d66614c85782ab78f860a27201"} Nov 21 21:09:52 crc kubenswrapper[4727]: I1121 21:09:52.004547 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pttmn" podStartSLOduration=3.569428499 podStartE2EDuration="7.004520949s" podCreationTimestamp="2025-11-21 21:09:45 +0000 UTC" firstStartedPulling="2025-11-21 21:09:47.882235447 +0000 UTC m=+3793.068420511" lastFinishedPulling="2025-11-21 21:09:51.317327867 +0000 UTC m=+3796.503512961" observedRunningTime="2025-11-21 21:09:51.986469976 +0000 UTC m=+3797.172655020" watchObservedRunningTime="2025-11-21 
21:09:52.004520949 +0000 UTC m=+3797.190705993" Nov 21 21:09:53 crc kubenswrapper[4727]: I1121 21:09:53.946440 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hd255" Nov 21 21:09:53 crc kubenswrapper[4727]: I1121 21:09:53.946802 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hd255" Nov 21 21:09:54 crc kubenswrapper[4727]: I1121 21:09:54.043525 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hd255" Nov 21 21:09:54 crc kubenswrapper[4727]: I1121 21:09:54.104536 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hd255" Nov 21 21:09:55 crc kubenswrapper[4727]: I1121 21:09:55.595901 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hd255"] Nov 21 21:09:56 crc kubenswrapper[4727]: I1121 21:09:56.011506 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hd255" podUID="b2854b20-61b3-4a96-b677-22c2179bcef3" containerName="registry-server" containerID="cri-o://2a376541ce57114c42e4cb5f2f53d791894b7c50dd3305620cbf28ee9d95493d" gracePeriod=2 Nov 21 21:09:56 crc kubenswrapper[4727]: I1121 21:09:56.336637 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pttmn" Nov 21 21:09:56 crc kubenswrapper[4727]: I1121 21:09:56.336969 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pttmn" Nov 21 21:09:56 crc kubenswrapper[4727]: I1121 21:09:56.416932 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pttmn" Nov 21 21:09:56 crc kubenswrapper[4727]: I1121 21:09:56.532305 4727 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hd255" Nov 21 21:09:56 crc kubenswrapper[4727]: I1121 21:09:56.639161 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2854b20-61b3-4a96-b677-22c2179bcef3-catalog-content\") pod \"b2854b20-61b3-4a96-b677-22c2179bcef3\" (UID: \"b2854b20-61b3-4a96-b677-22c2179bcef3\") " Nov 21 21:09:56 crc kubenswrapper[4727]: I1121 21:09:56.639259 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cs9z6\" (UniqueName: \"kubernetes.io/projected/b2854b20-61b3-4a96-b677-22c2179bcef3-kube-api-access-cs9z6\") pod \"b2854b20-61b3-4a96-b677-22c2179bcef3\" (UID: \"b2854b20-61b3-4a96-b677-22c2179bcef3\") " Nov 21 21:09:56 crc kubenswrapper[4727]: I1121 21:09:56.639302 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2854b20-61b3-4a96-b677-22c2179bcef3-utilities\") pod \"b2854b20-61b3-4a96-b677-22c2179bcef3\" (UID: \"b2854b20-61b3-4a96-b677-22c2179bcef3\") " Nov 21 21:09:56 crc kubenswrapper[4727]: I1121 21:09:56.640071 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2854b20-61b3-4a96-b677-22c2179bcef3-utilities" (OuterVolumeSpecName: "utilities") pod "b2854b20-61b3-4a96-b677-22c2179bcef3" (UID: "b2854b20-61b3-4a96-b677-22c2179bcef3"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 21:09:56 crc kubenswrapper[4727]: I1121 21:09:56.640620 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2854b20-61b3-4a96-b677-22c2179bcef3-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 21:09:56 crc kubenswrapper[4727]: I1121 21:09:56.646289 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2854b20-61b3-4a96-b677-22c2179bcef3-kube-api-access-cs9z6" (OuterVolumeSpecName: "kube-api-access-cs9z6") pod "b2854b20-61b3-4a96-b677-22c2179bcef3" (UID: "b2854b20-61b3-4a96-b677-22c2179bcef3"). InnerVolumeSpecName "kube-api-access-cs9z6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 21:09:56 crc kubenswrapper[4727]: I1121 21:09:56.687664 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2854b20-61b3-4a96-b677-22c2179bcef3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b2854b20-61b3-4a96-b677-22c2179bcef3" (UID: "b2854b20-61b3-4a96-b677-22c2179bcef3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 21:09:56 crc kubenswrapper[4727]: I1121 21:09:56.743436 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2854b20-61b3-4a96-b677-22c2179bcef3-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 21:09:56 crc kubenswrapper[4727]: I1121 21:09:56.743487 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cs9z6\" (UniqueName: \"kubernetes.io/projected/b2854b20-61b3-4a96-b677-22c2179bcef3-kube-api-access-cs9z6\") on node \"crc\" DevicePath \"\"" Nov 21 21:09:57 crc kubenswrapper[4727]: I1121 21:09:57.018056 4727 generic.go:334] "Generic (PLEG): container finished" podID="b2854b20-61b3-4a96-b677-22c2179bcef3" containerID="2a376541ce57114c42e4cb5f2f53d791894b7c50dd3305620cbf28ee9d95493d" exitCode=0 Nov 21 21:09:57 crc kubenswrapper[4727]: I1121 21:09:57.018192 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hd255" Nov 21 21:09:57 crc kubenswrapper[4727]: I1121 21:09:57.018260 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hd255" event={"ID":"b2854b20-61b3-4a96-b677-22c2179bcef3","Type":"ContainerDied","Data":"2a376541ce57114c42e4cb5f2f53d791894b7c50dd3305620cbf28ee9d95493d"} Nov 21 21:09:57 crc kubenswrapper[4727]: I1121 21:09:57.018317 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hd255" event={"ID":"b2854b20-61b3-4a96-b677-22c2179bcef3","Type":"ContainerDied","Data":"c9c964ff2e4715513e1bdef782e3db36d9d9ec32dbab019899875d7366d4d3fc"} Nov 21 21:09:57 crc kubenswrapper[4727]: I1121 21:09:57.018344 4727 scope.go:117] "RemoveContainer" containerID="2a376541ce57114c42e4cb5f2f53d791894b7c50dd3305620cbf28ee9d95493d" Nov 21 21:09:57 crc kubenswrapper[4727]: I1121 21:09:57.047262 4727 scope.go:117] "RemoveContainer" 
containerID="ee9d01cf37d5b1c2d88722ad85d7ab2aa276454474ebff61896d791ddbf77bc0" Nov 21 21:09:57 crc kubenswrapper[4727]: I1121 21:09:57.056935 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hd255"] Nov 21 21:09:57 crc kubenswrapper[4727]: I1121 21:09:57.063687 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hd255"] Nov 21 21:09:57 crc kubenswrapper[4727]: I1121 21:09:57.082974 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pttmn" Nov 21 21:09:57 crc kubenswrapper[4727]: I1121 21:09:57.089177 4727 scope.go:117] "RemoveContainer" containerID="e08f75b0f1e15de71f6a3900a6c506bd8bedb4359c31e529afce326d11fc03dd" Nov 21 21:09:57 crc kubenswrapper[4727]: I1121 21:09:57.141380 4727 scope.go:117] "RemoveContainer" containerID="2a376541ce57114c42e4cb5f2f53d791894b7c50dd3305620cbf28ee9d95493d" Nov 21 21:09:57 crc kubenswrapper[4727]: E1121 21:09:57.142117 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a376541ce57114c42e4cb5f2f53d791894b7c50dd3305620cbf28ee9d95493d\": container with ID starting with 2a376541ce57114c42e4cb5f2f53d791894b7c50dd3305620cbf28ee9d95493d not found: ID does not exist" containerID="2a376541ce57114c42e4cb5f2f53d791894b7c50dd3305620cbf28ee9d95493d" Nov 21 21:09:57 crc kubenswrapper[4727]: I1121 21:09:57.142174 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a376541ce57114c42e4cb5f2f53d791894b7c50dd3305620cbf28ee9d95493d"} err="failed to get container status \"2a376541ce57114c42e4cb5f2f53d791894b7c50dd3305620cbf28ee9d95493d\": rpc error: code = NotFound desc = could not find container \"2a376541ce57114c42e4cb5f2f53d791894b7c50dd3305620cbf28ee9d95493d\": container with ID starting with 
2a376541ce57114c42e4cb5f2f53d791894b7c50dd3305620cbf28ee9d95493d not found: ID does not exist" Nov 21 21:09:57 crc kubenswrapper[4727]: I1121 21:09:57.142211 4727 scope.go:117] "RemoveContainer" containerID="ee9d01cf37d5b1c2d88722ad85d7ab2aa276454474ebff61896d791ddbf77bc0" Nov 21 21:09:57 crc kubenswrapper[4727]: E1121 21:09:57.142638 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee9d01cf37d5b1c2d88722ad85d7ab2aa276454474ebff61896d791ddbf77bc0\": container with ID starting with ee9d01cf37d5b1c2d88722ad85d7ab2aa276454474ebff61896d791ddbf77bc0 not found: ID does not exist" containerID="ee9d01cf37d5b1c2d88722ad85d7ab2aa276454474ebff61896d791ddbf77bc0" Nov 21 21:09:57 crc kubenswrapper[4727]: I1121 21:09:57.142679 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee9d01cf37d5b1c2d88722ad85d7ab2aa276454474ebff61896d791ddbf77bc0"} err="failed to get container status \"ee9d01cf37d5b1c2d88722ad85d7ab2aa276454474ebff61896d791ddbf77bc0\": rpc error: code = NotFound desc = could not find container \"ee9d01cf37d5b1c2d88722ad85d7ab2aa276454474ebff61896d791ddbf77bc0\": container with ID starting with ee9d01cf37d5b1c2d88722ad85d7ab2aa276454474ebff61896d791ddbf77bc0 not found: ID does not exist" Nov 21 21:09:57 crc kubenswrapper[4727]: I1121 21:09:57.142705 4727 scope.go:117] "RemoveContainer" containerID="e08f75b0f1e15de71f6a3900a6c506bd8bedb4359c31e529afce326d11fc03dd" Nov 21 21:09:57 crc kubenswrapper[4727]: E1121 21:09:57.143085 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e08f75b0f1e15de71f6a3900a6c506bd8bedb4359c31e529afce326d11fc03dd\": container with ID starting with e08f75b0f1e15de71f6a3900a6c506bd8bedb4359c31e529afce326d11fc03dd not found: ID does not exist" containerID="e08f75b0f1e15de71f6a3900a6c506bd8bedb4359c31e529afce326d11fc03dd" Nov 21 21:09:57 crc 
kubenswrapper[4727]: I1121 21:09:57.143136 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e08f75b0f1e15de71f6a3900a6c506bd8bedb4359c31e529afce326d11fc03dd"} err="failed to get container status \"e08f75b0f1e15de71f6a3900a6c506bd8bedb4359c31e529afce326d11fc03dd\": rpc error: code = NotFound desc = could not find container \"e08f75b0f1e15de71f6a3900a6c506bd8bedb4359c31e529afce326d11fc03dd\": container with ID starting with e08f75b0f1e15de71f6a3900a6c506bd8bedb4359c31e529afce326d11fc03dd not found: ID does not exist" Nov 21 21:09:57 crc kubenswrapper[4727]: I1121 21:09:57.512676 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2854b20-61b3-4a96-b677-22c2179bcef3" path="/var/lib/kubelet/pods/b2854b20-61b3-4a96-b677-22c2179bcef3/volumes" Nov 21 21:09:58 crc kubenswrapper[4727]: I1121 21:09:58.788810 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pttmn"] Nov 21 21:09:59 crc kubenswrapper[4727]: I1121 21:09:59.043553 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pttmn" podUID="89325b75-1c44-4ca6-aa69-3174e4d6b978" containerName="registry-server" containerID="cri-o://95beb09fec0c204973ed596a687f94e7c99246d66614c85782ab78f860a27201" gracePeriod=2 Nov 21 21:09:59 crc kubenswrapper[4727]: I1121 21:09:59.595951 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pttmn" Nov 21 21:09:59 crc kubenswrapper[4727]: I1121 21:09:59.716716 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89325b75-1c44-4ca6-aa69-3174e4d6b978-catalog-content\") pod \"89325b75-1c44-4ca6-aa69-3174e4d6b978\" (UID: \"89325b75-1c44-4ca6-aa69-3174e4d6b978\") " Nov 21 21:09:59 crc kubenswrapper[4727]: I1121 21:09:59.716832 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89325b75-1c44-4ca6-aa69-3174e4d6b978-utilities\") pod \"89325b75-1c44-4ca6-aa69-3174e4d6b978\" (UID: \"89325b75-1c44-4ca6-aa69-3174e4d6b978\") " Nov 21 21:09:59 crc kubenswrapper[4727]: I1121 21:09:59.716889 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5g84q\" (UniqueName: \"kubernetes.io/projected/89325b75-1c44-4ca6-aa69-3174e4d6b978-kube-api-access-5g84q\") pod \"89325b75-1c44-4ca6-aa69-3174e4d6b978\" (UID: \"89325b75-1c44-4ca6-aa69-3174e4d6b978\") " Nov 21 21:09:59 crc kubenswrapper[4727]: I1121 21:09:59.717814 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89325b75-1c44-4ca6-aa69-3174e4d6b978-utilities" (OuterVolumeSpecName: "utilities") pod "89325b75-1c44-4ca6-aa69-3174e4d6b978" (UID: "89325b75-1c44-4ca6-aa69-3174e4d6b978"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 21:09:59 crc kubenswrapper[4727]: I1121 21:09:59.718008 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89325b75-1c44-4ca6-aa69-3174e4d6b978-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 21:09:59 crc kubenswrapper[4727]: I1121 21:09:59.722888 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89325b75-1c44-4ca6-aa69-3174e4d6b978-kube-api-access-5g84q" (OuterVolumeSpecName: "kube-api-access-5g84q") pod "89325b75-1c44-4ca6-aa69-3174e4d6b978" (UID: "89325b75-1c44-4ca6-aa69-3174e4d6b978"). InnerVolumeSpecName "kube-api-access-5g84q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 21:09:59 crc kubenswrapper[4727]: I1121 21:09:59.775221 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89325b75-1c44-4ca6-aa69-3174e4d6b978-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "89325b75-1c44-4ca6-aa69-3174e4d6b978" (UID: "89325b75-1c44-4ca6-aa69-3174e4d6b978"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 21:09:59 crc kubenswrapper[4727]: I1121 21:09:59.820137 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89325b75-1c44-4ca6-aa69-3174e4d6b978-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 21:09:59 crc kubenswrapper[4727]: I1121 21:09:59.820173 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5g84q\" (UniqueName: \"kubernetes.io/projected/89325b75-1c44-4ca6-aa69-3174e4d6b978-kube-api-access-5g84q\") on node \"crc\" DevicePath \"\"" Nov 21 21:10:00 crc kubenswrapper[4727]: I1121 21:10:00.057093 4727 generic.go:334] "Generic (PLEG): container finished" podID="89325b75-1c44-4ca6-aa69-3174e4d6b978" containerID="95beb09fec0c204973ed596a687f94e7c99246d66614c85782ab78f860a27201" exitCode=0 Nov 21 21:10:00 crc kubenswrapper[4727]: I1121 21:10:00.057161 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pttmn" Nov 21 21:10:00 crc kubenswrapper[4727]: I1121 21:10:00.057182 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pttmn" event={"ID":"89325b75-1c44-4ca6-aa69-3174e4d6b978","Type":"ContainerDied","Data":"95beb09fec0c204973ed596a687f94e7c99246d66614c85782ab78f860a27201"} Nov 21 21:10:00 crc kubenswrapper[4727]: I1121 21:10:00.057613 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pttmn" event={"ID":"89325b75-1c44-4ca6-aa69-3174e4d6b978","Type":"ContainerDied","Data":"23af42791bcd5bcfc813e74a8ff3fe317e9859d9896bb8d7f806c7f36487faf7"} Nov 21 21:10:00 crc kubenswrapper[4727]: I1121 21:10:00.057637 4727 scope.go:117] "RemoveContainer" containerID="95beb09fec0c204973ed596a687f94e7c99246d66614c85782ab78f860a27201" Nov 21 21:10:00 crc kubenswrapper[4727]: I1121 21:10:00.086170 4727 scope.go:117] "RemoveContainer" 
containerID="5aa86c09b38c3b8da7ff62d71fad8ad5b271450cd8ee37459ddd1d74a662d88c" Nov 21 21:10:00 crc kubenswrapper[4727]: I1121 21:10:00.121110 4727 scope.go:117] "RemoveContainer" containerID="151a409e78c0117c753696103f35df9969c8af0c36fb05bd04877ddce9355952" Nov 21 21:10:00 crc kubenswrapper[4727]: I1121 21:10:00.133122 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pttmn"] Nov 21 21:10:00 crc kubenswrapper[4727]: I1121 21:10:00.141346 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pttmn"] Nov 21 21:10:00 crc kubenswrapper[4727]: I1121 21:10:00.169811 4727 scope.go:117] "RemoveContainer" containerID="95beb09fec0c204973ed596a687f94e7c99246d66614c85782ab78f860a27201" Nov 21 21:10:00 crc kubenswrapper[4727]: E1121 21:10:00.170360 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95beb09fec0c204973ed596a687f94e7c99246d66614c85782ab78f860a27201\": container with ID starting with 95beb09fec0c204973ed596a687f94e7c99246d66614c85782ab78f860a27201 not found: ID does not exist" containerID="95beb09fec0c204973ed596a687f94e7c99246d66614c85782ab78f860a27201" Nov 21 21:10:00 crc kubenswrapper[4727]: I1121 21:10:00.170411 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95beb09fec0c204973ed596a687f94e7c99246d66614c85782ab78f860a27201"} err="failed to get container status \"95beb09fec0c204973ed596a687f94e7c99246d66614c85782ab78f860a27201\": rpc error: code = NotFound desc = could not find container \"95beb09fec0c204973ed596a687f94e7c99246d66614c85782ab78f860a27201\": container with ID starting with 95beb09fec0c204973ed596a687f94e7c99246d66614c85782ab78f860a27201 not found: ID does not exist" Nov 21 21:10:00 crc kubenswrapper[4727]: I1121 21:10:00.170444 4727 scope.go:117] "RemoveContainer" 
containerID="5aa86c09b38c3b8da7ff62d71fad8ad5b271450cd8ee37459ddd1d74a662d88c" Nov 21 21:10:00 crc kubenswrapper[4727]: E1121 21:10:00.170837 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5aa86c09b38c3b8da7ff62d71fad8ad5b271450cd8ee37459ddd1d74a662d88c\": container with ID starting with 5aa86c09b38c3b8da7ff62d71fad8ad5b271450cd8ee37459ddd1d74a662d88c not found: ID does not exist" containerID="5aa86c09b38c3b8da7ff62d71fad8ad5b271450cd8ee37459ddd1d74a662d88c" Nov 21 21:10:00 crc kubenswrapper[4727]: I1121 21:10:00.170888 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5aa86c09b38c3b8da7ff62d71fad8ad5b271450cd8ee37459ddd1d74a662d88c"} err="failed to get container status \"5aa86c09b38c3b8da7ff62d71fad8ad5b271450cd8ee37459ddd1d74a662d88c\": rpc error: code = NotFound desc = could not find container \"5aa86c09b38c3b8da7ff62d71fad8ad5b271450cd8ee37459ddd1d74a662d88c\": container with ID starting with 5aa86c09b38c3b8da7ff62d71fad8ad5b271450cd8ee37459ddd1d74a662d88c not found: ID does not exist" Nov 21 21:10:00 crc kubenswrapper[4727]: I1121 21:10:00.170922 4727 scope.go:117] "RemoveContainer" containerID="151a409e78c0117c753696103f35df9969c8af0c36fb05bd04877ddce9355952" Nov 21 21:10:00 crc kubenswrapper[4727]: E1121 21:10:00.171261 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"151a409e78c0117c753696103f35df9969c8af0c36fb05bd04877ddce9355952\": container with ID starting with 151a409e78c0117c753696103f35df9969c8af0c36fb05bd04877ddce9355952 not found: ID does not exist" containerID="151a409e78c0117c753696103f35df9969c8af0c36fb05bd04877ddce9355952" Nov 21 21:10:00 crc kubenswrapper[4727]: I1121 21:10:00.171290 4727 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"151a409e78c0117c753696103f35df9969c8af0c36fb05bd04877ddce9355952"} err="failed to get container status \"151a409e78c0117c753696103f35df9969c8af0c36fb05bd04877ddce9355952\": rpc error: code = NotFound desc = could not find container \"151a409e78c0117c753696103f35df9969c8af0c36fb05bd04877ddce9355952\": container with ID starting with 151a409e78c0117c753696103f35df9969c8af0c36fb05bd04877ddce9355952 not found: ID does not exist" Nov 21 21:10:01 crc kubenswrapper[4727]: I1121 21:10:01.518421 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89325b75-1c44-4ca6-aa69-3174e4d6b978" path="/var/lib/kubelet/pods/89325b75-1c44-4ca6-aa69-3174e4d6b978/volumes" Nov 21 21:10:06 crc kubenswrapper[4727]: I1121 21:10:06.499080 4727 scope.go:117] "RemoveContainer" containerID="988190cce55029fd82d61a3728f9a88539103123b868f6bdaa1276c71f10d2ea" Nov 21 21:10:06 crc kubenswrapper[4727]: E1121 21:10:06.500183 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:10:18 crc kubenswrapper[4727]: I1121 21:10:18.500157 4727 scope.go:117] "RemoveContainer" containerID="988190cce55029fd82d61a3728f9a88539103123b868f6bdaa1276c71f10d2ea" Nov 21 21:10:18 crc kubenswrapper[4727]: E1121 21:10:18.502811 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:10:32 crc kubenswrapper[4727]: I1121 21:10:32.500225 4727 scope.go:117] "RemoveContainer" containerID="988190cce55029fd82d61a3728f9a88539103123b868f6bdaa1276c71f10d2ea" Nov 21 21:10:32 crc kubenswrapper[4727]: E1121 21:10:32.501916 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:10:47 crc kubenswrapper[4727]: I1121 21:10:47.515623 4727 scope.go:117] "RemoveContainer" containerID="988190cce55029fd82d61a3728f9a88539103123b868f6bdaa1276c71f10d2ea" Nov 21 21:10:47 crc kubenswrapper[4727]: E1121 21:10:47.522514 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:10:59 crc kubenswrapper[4727]: I1121 21:10:59.499150 4727 scope.go:117] "RemoveContainer" containerID="988190cce55029fd82d61a3728f9a88539103123b868f6bdaa1276c71f10d2ea" Nov 21 21:10:59 crc kubenswrapper[4727]: E1121 21:10:59.500002 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:11:12 crc kubenswrapper[4727]: I1121 21:11:12.500018 4727 scope.go:117] "RemoveContainer" containerID="988190cce55029fd82d61a3728f9a88539103123b868f6bdaa1276c71f10d2ea" Nov 21 21:11:12 crc kubenswrapper[4727]: E1121 21:11:12.501055 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:11:24 crc kubenswrapper[4727]: I1121 21:11:24.499802 4727 scope.go:117] "RemoveContainer" containerID="988190cce55029fd82d61a3728f9a88539103123b868f6bdaa1276c71f10d2ea" Nov 21 21:11:24 crc kubenswrapper[4727]: E1121 21:11:24.500955 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:11:35 crc kubenswrapper[4727]: I1121 21:11:35.515544 4727 scope.go:117] "RemoveContainer" containerID="988190cce55029fd82d61a3728f9a88539103123b868f6bdaa1276c71f10d2ea" Nov 21 21:11:35 crc kubenswrapper[4727]: E1121 21:11:35.518386 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:11:46 crc kubenswrapper[4727]: I1121 21:11:46.500476 4727 scope.go:117] "RemoveContainer" containerID="988190cce55029fd82d61a3728f9a88539103123b868f6bdaa1276c71f10d2ea" Nov 21 21:11:46 crc kubenswrapper[4727]: E1121 21:11:46.501416 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:11:58 crc kubenswrapper[4727]: I1121 21:11:58.501437 4727 scope.go:117] "RemoveContainer" containerID="988190cce55029fd82d61a3728f9a88539103123b868f6bdaa1276c71f10d2ea" Nov 21 21:11:58 crc kubenswrapper[4727]: E1121 21:11:58.502561 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:12:11 crc kubenswrapper[4727]: I1121 21:12:11.501091 4727 scope.go:117] "RemoveContainer" containerID="988190cce55029fd82d61a3728f9a88539103123b868f6bdaa1276c71f10d2ea" Nov 21 21:12:11 crc kubenswrapper[4727]: E1121 21:12:11.502860 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:12:25 crc kubenswrapper[4727]: I1121 21:12:25.519221 4727 scope.go:117] "RemoveContainer" containerID="988190cce55029fd82d61a3728f9a88539103123b868f6bdaa1276c71f10d2ea" Nov 21 21:12:25 crc kubenswrapper[4727]: E1121 21:12:25.520492 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:12:39 crc kubenswrapper[4727]: I1121 21:12:39.501046 4727 scope.go:117] "RemoveContainer" containerID="988190cce55029fd82d61a3728f9a88539103123b868f6bdaa1276c71f10d2ea" Nov 21 21:12:39 crc kubenswrapper[4727]: E1121 21:12:39.503510 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:12:53 crc kubenswrapper[4727]: I1121 21:12:53.500371 4727 scope.go:117] "RemoveContainer" containerID="988190cce55029fd82d61a3728f9a88539103123b868f6bdaa1276c71f10d2ea" Nov 21 21:12:54 crc kubenswrapper[4727]: I1121 21:12:54.477234 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" 
event={"ID":"b58aef8f-f223-47d8-a2e6-4a80aeeeec42","Type":"ContainerStarted","Data":"7ae40f9e4c7fcee7dfaba722bc9ffd6452efe5dd7742306102d90362ec7cdef2"} Nov 21 21:15:00 crc kubenswrapper[4727]: I1121 21:15:00.195276 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395995-wp6hg"] Nov 21 21:15:00 crc kubenswrapper[4727]: E1121 21:15:00.196664 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89325b75-1c44-4ca6-aa69-3174e4d6b978" containerName="registry-server" Nov 21 21:15:00 crc kubenswrapper[4727]: I1121 21:15:00.196677 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="89325b75-1c44-4ca6-aa69-3174e4d6b978" containerName="registry-server" Nov 21 21:15:00 crc kubenswrapper[4727]: E1121 21:15:00.196718 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2854b20-61b3-4a96-b677-22c2179bcef3" containerName="registry-server" Nov 21 21:15:00 crc kubenswrapper[4727]: I1121 21:15:00.196724 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2854b20-61b3-4a96-b677-22c2179bcef3" containerName="registry-server" Nov 21 21:15:00 crc kubenswrapper[4727]: E1121 21:15:00.196740 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2854b20-61b3-4a96-b677-22c2179bcef3" containerName="extract-content" Nov 21 21:15:00 crc kubenswrapper[4727]: I1121 21:15:00.196746 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2854b20-61b3-4a96-b677-22c2179bcef3" containerName="extract-content" Nov 21 21:15:00 crc kubenswrapper[4727]: E1121 21:15:00.196762 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89325b75-1c44-4ca6-aa69-3174e4d6b978" containerName="extract-utilities" Nov 21 21:15:00 crc kubenswrapper[4727]: I1121 21:15:00.196767 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="89325b75-1c44-4ca6-aa69-3174e4d6b978" containerName="extract-utilities" Nov 21 21:15:00 crc kubenswrapper[4727]: E1121 21:15:00.196788 
4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2854b20-61b3-4a96-b677-22c2179bcef3" containerName="extract-utilities" Nov 21 21:15:00 crc kubenswrapper[4727]: I1121 21:15:00.196794 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2854b20-61b3-4a96-b677-22c2179bcef3" containerName="extract-utilities" Nov 21 21:15:00 crc kubenswrapper[4727]: E1121 21:15:00.196807 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89325b75-1c44-4ca6-aa69-3174e4d6b978" containerName="extract-content" Nov 21 21:15:00 crc kubenswrapper[4727]: I1121 21:15:00.196813 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="89325b75-1c44-4ca6-aa69-3174e4d6b978" containerName="extract-content" Nov 21 21:15:00 crc kubenswrapper[4727]: I1121 21:15:00.197061 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="89325b75-1c44-4ca6-aa69-3174e4d6b978" containerName="registry-server" Nov 21 21:15:00 crc kubenswrapper[4727]: I1121 21:15:00.197087 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2854b20-61b3-4a96-b677-22c2179bcef3" containerName="registry-server" Nov 21 21:15:00 crc kubenswrapper[4727]: I1121 21:15:00.197914 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395995-wp6hg" Nov 21 21:15:00 crc kubenswrapper[4727]: I1121 21:15:00.200299 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 21 21:15:00 crc kubenswrapper[4727]: I1121 21:15:00.200629 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 21 21:15:00 crc kubenswrapper[4727]: I1121 21:15:00.211540 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395995-wp6hg"] Nov 21 21:15:00 crc kubenswrapper[4727]: I1121 21:15:00.315075 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb2nr\" (UniqueName: \"kubernetes.io/projected/d8353550-dcdf-48fb-87fb-9b708e03b58e-kube-api-access-vb2nr\") pod \"collect-profiles-29395995-wp6hg\" (UID: \"d8353550-dcdf-48fb-87fb-9b708e03b58e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395995-wp6hg" Nov 21 21:15:00 crc kubenswrapper[4727]: I1121 21:15:00.315255 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d8353550-dcdf-48fb-87fb-9b708e03b58e-config-volume\") pod \"collect-profiles-29395995-wp6hg\" (UID: \"d8353550-dcdf-48fb-87fb-9b708e03b58e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395995-wp6hg" Nov 21 21:15:00 crc kubenswrapper[4727]: I1121 21:15:00.315351 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d8353550-dcdf-48fb-87fb-9b708e03b58e-secret-volume\") pod \"collect-profiles-29395995-wp6hg\" (UID: \"d8353550-dcdf-48fb-87fb-9b708e03b58e\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29395995-wp6hg" Nov 21 21:15:00 crc kubenswrapper[4727]: I1121 21:15:00.418043 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vb2nr\" (UniqueName: \"kubernetes.io/projected/d8353550-dcdf-48fb-87fb-9b708e03b58e-kube-api-access-vb2nr\") pod \"collect-profiles-29395995-wp6hg\" (UID: \"d8353550-dcdf-48fb-87fb-9b708e03b58e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395995-wp6hg" Nov 21 21:15:00 crc kubenswrapper[4727]: I1121 21:15:00.418446 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d8353550-dcdf-48fb-87fb-9b708e03b58e-config-volume\") pod \"collect-profiles-29395995-wp6hg\" (UID: \"d8353550-dcdf-48fb-87fb-9b708e03b58e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395995-wp6hg" Nov 21 21:15:00 crc kubenswrapper[4727]: I1121 21:15:00.418520 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d8353550-dcdf-48fb-87fb-9b708e03b58e-secret-volume\") pod \"collect-profiles-29395995-wp6hg\" (UID: \"d8353550-dcdf-48fb-87fb-9b708e03b58e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395995-wp6hg" Nov 21 21:15:00 crc kubenswrapper[4727]: I1121 21:15:00.419590 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d8353550-dcdf-48fb-87fb-9b708e03b58e-config-volume\") pod \"collect-profiles-29395995-wp6hg\" (UID: \"d8353550-dcdf-48fb-87fb-9b708e03b58e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395995-wp6hg" Nov 21 21:15:00 crc kubenswrapper[4727]: I1121 21:15:00.427627 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/d8353550-dcdf-48fb-87fb-9b708e03b58e-secret-volume\") pod \"collect-profiles-29395995-wp6hg\" (UID: \"d8353550-dcdf-48fb-87fb-9b708e03b58e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395995-wp6hg" Nov 21 21:15:00 crc kubenswrapper[4727]: I1121 21:15:00.438168 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vb2nr\" (UniqueName: \"kubernetes.io/projected/d8353550-dcdf-48fb-87fb-9b708e03b58e-kube-api-access-vb2nr\") pod \"collect-profiles-29395995-wp6hg\" (UID: \"d8353550-dcdf-48fb-87fb-9b708e03b58e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395995-wp6hg" Nov 21 21:15:00 crc kubenswrapper[4727]: I1121 21:15:00.538303 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395995-wp6hg" Nov 21 21:15:01 crc kubenswrapper[4727]: I1121 21:15:01.133013 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395995-wp6hg"] Nov 21 21:15:01 crc kubenswrapper[4727]: I1121 21:15:01.254400 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395995-wp6hg" event={"ID":"d8353550-dcdf-48fb-87fb-9b708e03b58e","Type":"ContainerStarted","Data":"159e27eec192037b180dfde96008bd0a1e6968134f918dc851edde9000e9fcc2"} Nov 21 21:15:02 crc kubenswrapper[4727]: I1121 21:15:02.272848 4727 generic.go:334] "Generic (PLEG): container finished" podID="d8353550-dcdf-48fb-87fb-9b708e03b58e" containerID="c8cb5300e8470047b2f65a78e066b53c6791b1160363909f75c0fc1e575ebc0a" exitCode=0 Nov 21 21:15:02 crc kubenswrapper[4727]: I1121 21:15:02.272922 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395995-wp6hg" 
event={"ID":"d8353550-dcdf-48fb-87fb-9b708e03b58e","Type":"ContainerDied","Data":"c8cb5300e8470047b2f65a78e066b53c6791b1160363909f75c0fc1e575ebc0a"} Nov 21 21:15:03 crc kubenswrapper[4727]: I1121 21:15:03.729588 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395995-wp6hg" Nov 21 21:15:03 crc kubenswrapper[4727]: I1121 21:15:03.822228 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d8353550-dcdf-48fb-87fb-9b708e03b58e-secret-volume\") pod \"d8353550-dcdf-48fb-87fb-9b708e03b58e\" (UID: \"d8353550-dcdf-48fb-87fb-9b708e03b58e\") " Nov 21 21:15:03 crc kubenswrapper[4727]: I1121 21:15:03.822387 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d8353550-dcdf-48fb-87fb-9b708e03b58e-config-volume\") pod \"d8353550-dcdf-48fb-87fb-9b708e03b58e\" (UID: \"d8353550-dcdf-48fb-87fb-9b708e03b58e\") " Nov 21 21:15:03 crc kubenswrapper[4727]: I1121 21:15:03.822444 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vb2nr\" (UniqueName: \"kubernetes.io/projected/d8353550-dcdf-48fb-87fb-9b708e03b58e-kube-api-access-vb2nr\") pod \"d8353550-dcdf-48fb-87fb-9b708e03b58e\" (UID: \"d8353550-dcdf-48fb-87fb-9b708e03b58e\") " Nov 21 21:15:03 crc kubenswrapper[4727]: I1121 21:15:03.823112 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8353550-dcdf-48fb-87fb-9b708e03b58e-config-volume" (OuterVolumeSpecName: "config-volume") pod "d8353550-dcdf-48fb-87fb-9b708e03b58e" (UID: "d8353550-dcdf-48fb-87fb-9b708e03b58e"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 21:15:03 crc kubenswrapper[4727]: I1121 21:15:03.823334 4727 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d8353550-dcdf-48fb-87fb-9b708e03b58e-config-volume\") on node \"crc\" DevicePath \"\"" Nov 21 21:15:03 crc kubenswrapper[4727]: I1121 21:15:03.828594 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8353550-dcdf-48fb-87fb-9b708e03b58e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d8353550-dcdf-48fb-87fb-9b708e03b58e" (UID: "d8353550-dcdf-48fb-87fb-9b708e03b58e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 21:15:03 crc kubenswrapper[4727]: I1121 21:15:03.828827 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8353550-dcdf-48fb-87fb-9b708e03b58e-kube-api-access-vb2nr" (OuterVolumeSpecName: "kube-api-access-vb2nr") pod "d8353550-dcdf-48fb-87fb-9b708e03b58e" (UID: "d8353550-dcdf-48fb-87fb-9b708e03b58e"). InnerVolumeSpecName "kube-api-access-vb2nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 21:15:03 crc kubenswrapper[4727]: I1121 21:15:03.926317 4727 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d8353550-dcdf-48fb-87fb-9b708e03b58e-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 21 21:15:03 crc kubenswrapper[4727]: I1121 21:15:03.926364 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vb2nr\" (UniqueName: \"kubernetes.io/projected/d8353550-dcdf-48fb-87fb-9b708e03b58e-kube-api-access-vb2nr\") on node \"crc\" DevicePath \"\"" Nov 21 21:15:04 crc kubenswrapper[4727]: I1121 21:15:04.299571 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395995-wp6hg" event={"ID":"d8353550-dcdf-48fb-87fb-9b708e03b58e","Type":"ContainerDied","Data":"159e27eec192037b180dfde96008bd0a1e6968134f918dc851edde9000e9fcc2"} Nov 21 21:15:04 crc kubenswrapper[4727]: I1121 21:15:04.299879 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="159e27eec192037b180dfde96008bd0a1e6968134f918dc851edde9000e9fcc2" Nov 21 21:15:04 crc kubenswrapper[4727]: I1121 21:15:04.299660 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395995-wp6hg" Nov 21 21:15:04 crc kubenswrapper[4727]: I1121 21:15:04.819628 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395950-ng655"] Nov 21 21:15:04 crc kubenswrapper[4727]: I1121 21:15:04.831434 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395950-ng655"] Nov 21 21:15:05 crc kubenswrapper[4727]: I1121 21:15:05.523366 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc194da3-c45c-4bc5-a77f-3517cd806a6a" path="/var/lib/kubelet/pods/cc194da3-c45c-4bc5-a77f-3517cd806a6a/volumes" Nov 21 21:15:13 crc kubenswrapper[4727]: I1121 21:15:13.336064 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 21:15:13 crc kubenswrapper[4727]: I1121 21:15:13.337261 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 21:15:43 crc kubenswrapper[4727]: I1121 21:15:43.335964 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 21:15:43 crc kubenswrapper[4727]: I1121 21:15:43.337030 4727 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 21:15:48 crc kubenswrapper[4727]: I1121 21:15:48.097425 4727 scope.go:117] "RemoveContainer" containerID="59679e728480fb7d7c08c5500500bb4a47e780e3a222b856cd1e2b615361ef92" Nov 21 21:16:13 crc kubenswrapper[4727]: I1121 21:16:13.335515 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 21:16:13 crc kubenswrapper[4727]: I1121 21:16:13.336288 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 21:16:13 crc kubenswrapper[4727]: I1121 21:16:13.336365 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" Nov 21 21:16:13 crc kubenswrapper[4727]: I1121 21:16:13.337708 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7ae40f9e4c7fcee7dfaba722bc9ffd6452efe5dd7742306102d90362ec7cdef2"} pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 21:16:13 crc kubenswrapper[4727]: I1121 21:16:13.337828 4727 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" containerID="cri-o://7ae40f9e4c7fcee7dfaba722bc9ffd6452efe5dd7742306102d90362ec7cdef2" gracePeriod=600 Nov 21 21:16:14 crc kubenswrapper[4727]: I1121 21:16:14.335555 4727 generic.go:334] "Generic (PLEG): container finished" podID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerID="7ae40f9e4c7fcee7dfaba722bc9ffd6452efe5dd7742306102d90362ec7cdef2" exitCode=0 Nov 21 21:16:14 crc kubenswrapper[4727]: I1121 21:16:14.335630 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" event={"ID":"b58aef8f-f223-47d8-a2e6-4a80aeeeec42","Type":"ContainerDied","Data":"7ae40f9e4c7fcee7dfaba722bc9ffd6452efe5dd7742306102d90362ec7cdef2"} Nov 21 21:16:14 crc kubenswrapper[4727]: I1121 21:16:14.336208 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" event={"ID":"b58aef8f-f223-47d8-a2e6-4a80aeeeec42","Type":"ContainerStarted","Data":"0fa05a8295e880d3feee61c5c714acfb239ca392d85b898e7a19a77e1ec790cb"} Nov 21 21:16:14 crc kubenswrapper[4727]: I1121 21:16:14.336238 4727 scope.go:117] "RemoveContainer" containerID="988190cce55029fd82d61a3728f9a88539103123b868f6bdaa1276c71f10d2ea" Nov 21 21:16:16 crc kubenswrapper[4727]: I1121 21:16:16.535916 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5chwt"] Nov 21 21:16:16 crc kubenswrapper[4727]: E1121 21:16:16.538194 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8353550-dcdf-48fb-87fb-9b708e03b58e" containerName="collect-profiles" Nov 21 21:16:16 crc kubenswrapper[4727]: I1121 21:16:16.538214 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8353550-dcdf-48fb-87fb-9b708e03b58e" containerName="collect-profiles" Nov 21 21:16:16 crc kubenswrapper[4727]: I1121 21:16:16.538515 4727 
memory_manager.go:354] "RemoveStaleState removing state" podUID="d8353550-dcdf-48fb-87fb-9b708e03b58e" containerName="collect-profiles" Nov 21 21:16:16 crc kubenswrapper[4727]: I1121 21:16:16.541479 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5chwt" Nov 21 21:16:16 crc kubenswrapper[4727]: I1121 21:16:16.554276 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5chwt"] Nov 21 21:16:16 crc kubenswrapper[4727]: I1121 21:16:16.630119 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm92d\" (UniqueName: \"kubernetes.io/projected/366021b4-4859-49f4-863f-688c502f98b0-kube-api-access-nm92d\") pod \"redhat-marketplace-5chwt\" (UID: \"366021b4-4859-49f4-863f-688c502f98b0\") " pod="openshift-marketplace/redhat-marketplace-5chwt" Nov 21 21:16:16 crc kubenswrapper[4727]: I1121 21:16:16.630365 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/366021b4-4859-49f4-863f-688c502f98b0-utilities\") pod \"redhat-marketplace-5chwt\" (UID: \"366021b4-4859-49f4-863f-688c502f98b0\") " pod="openshift-marketplace/redhat-marketplace-5chwt" Nov 21 21:16:16 crc kubenswrapper[4727]: I1121 21:16:16.631021 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/366021b4-4859-49f4-863f-688c502f98b0-catalog-content\") pod \"redhat-marketplace-5chwt\" (UID: \"366021b4-4859-49f4-863f-688c502f98b0\") " pod="openshift-marketplace/redhat-marketplace-5chwt" Nov 21 21:16:16 crc kubenswrapper[4727]: I1121 21:16:16.734404 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/366021b4-4859-49f4-863f-688c502f98b0-catalog-content\") pod \"redhat-marketplace-5chwt\" (UID: \"366021b4-4859-49f4-863f-688c502f98b0\") " pod="openshift-marketplace/redhat-marketplace-5chwt" Nov 21 21:16:16 crc kubenswrapper[4727]: I1121 21:16:16.734580 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nm92d\" (UniqueName: \"kubernetes.io/projected/366021b4-4859-49f4-863f-688c502f98b0-kube-api-access-nm92d\") pod \"redhat-marketplace-5chwt\" (UID: \"366021b4-4859-49f4-863f-688c502f98b0\") " pod="openshift-marketplace/redhat-marketplace-5chwt" Nov 21 21:16:16 crc kubenswrapper[4727]: I1121 21:16:16.734749 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/366021b4-4859-49f4-863f-688c502f98b0-utilities\") pod \"redhat-marketplace-5chwt\" (UID: \"366021b4-4859-49f4-863f-688c502f98b0\") " pod="openshift-marketplace/redhat-marketplace-5chwt" Nov 21 21:16:16 crc kubenswrapper[4727]: I1121 21:16:16.735238 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/366021b4-4859-49f4-863f-688c502f98b0-catalog-content\") pod \"redhat-marketplace-5chwt\" (UID: \"366021b4-4859-49f4-863f-688c502f98b0\") " pod="openshift-marketplace/redhat-marketplace-5chwt" Nov 21 21:16:16 crc kubenswrapper[4727]: I1121 21:16:16.735348 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/366021b4-4859-49f4-863f-688c502f98b0-utilities\") pod \"redhat-marketplace-5chwt\" (UID: \"366021b4-4859-49f4-863f-688c502f98b0\") " pod="openshift-marketplace/redhat-marketplace-5chwt" Nov 21 21:16:16 crc kubenswrapper[4727]: I1121 21:16:16.762595 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nm92d\" (UniqueName: 
\"kubernetes.io/projected/366021b4-4859-49f4-863f-688c502f98b0-kube-api-access-nm92d\") pod \"redhat-marketplace-5chwt\" (UID: \"366021b4-4859-49f4-863f-688c502f98b0\") " pod="openshift-marketplace/redhat-marketplace-5chwt" Nov 21 21:16:16 crc kubenswrapper[4727]: I1121 21:16:16.872345 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5chwt" Nov 21 21:16:17 crc kubenswrapper[4727]: W1121 21:16:17.334674 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod366021b4_4859_49f4_863f_688c502f98b0.slice/crio-beb27be0800fb45d14ee6bea8effbfa3dcb8849efe879ff5f198a879771d51dd WatchSource:0}: Error finding container beb27be0800fb45d14ee6bea8effbfa3dcb8849efe879ff5f198a879771d51dd: Status 404 returned error can't find the container with id beb27be0800fb45d14ee6bea8effbfa3dcb8849efe879ff5f198a879771d51dd Nov 21 21:16:17 crc kubenswrapper[4727]: I1121 21:16:17.336804 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5chwt"] Nov 21 21:16:17 crc kubenswrapper[4727]: I1121 21:16:17.395265 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5chwt" event={"ID":"366021b4-4859-49f4-863f-688c502f98b0","Type":"ContainerStarted","Data":"beb27be0800fb45d14ee6bea8effbfa3dcb8849efe879ff5f198a879771d51dd"} Nov 21 21:16:18 crc kubenswrapper[4727]: I1121 21:16:18.419244 4727 generic.go:334] "Generic (PLEG): container finished" podID="366021b4-4859-49f4-863f-688c502f98b0" containerID="88c7b11f270892d3853c5401831d03663de98cea02f39220490dbd30069b2853" exitCode=0 Nov 21 21:16:18 crc kubenswrapper[4727]: I1121 21:16:18.419546 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5chwt" 
event={"ID":"366021b4-4859-49f4-863f-688c502f98b0","Type":"ContainerDied","Data":"88c7b11f270892d3853c5401831d03663de98cea02f39220490dbd30069b2853"} Nov 21 21:16:18 crc kubenswrapper[4727]: I1121 21:16:18.421996 4727 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 21 21:16:19 crc kubenswrapper[4727]: I1121 21:16:19.438253 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5chwt" event={"ID":"366021b4-4859-49f4-863f-688c502f98b0","Type":"ContainerStarted","Data":"ecc7292174260b86d85a5f8c8658f8dba23e1767ee9ef77243418a4a0068404d"} Nov 21 21:16:20 crc kubenswrapper[4727]: I1121 21:16:20.459911 4727 generic.go:334] "Generic (PLEG): container finished" podID="366021b4-4859-49f4-863f-688c502f98b0" containerID="ecc7292174260b86d85a5f8c8658f8dba23e1767ee9ef77243418a4a0068404d" exitCode=0 Nov 21 21:16:20 crc kubenswrapper[4727]: I1121 21:16:20.460401 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5chwt" event={"ID":"366021b4-4859-49f4-863f-688c502f98b0","Type":"ContainerDied","Data":"ecc7292174260b86d85a5f8c8658f8dba23e1767ee9ef77243418a4a0068404d"} Nov 21 21:16:22 crc kubenswrapper[4727]: I1121 21:16:22.485857 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5chwt" event={"ID":"366021b4-4859-49f4-863f-688c502f98b0","Type":"ContainerStarted","Data":"1538d10f150eeba94a52c23fba03523e5a60e2effa642449be7f48d6b2fa2ccd"} Nov 21 21:16:22 crc kubenswrapper[4727]: I1121 21:16:22.507101 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5chwt" podStartSLOduration=4.051224228 podStartE2EDuration="6.507082231s" podCreationTimestamp="2025-11-21 21:16:16 +0000 UTC" firstStartedPulling="2025-11-21 21:16:18.421774569 +0000 UTC m=+4183.607959613" lastFinishedPulling="2025-11-21 21:16:20.877632532 +0000 UTC 
m=+4186.063817616" observedRunningTime="2025-11-21 21:16:22.504321015 +0000 UTC m=+4187.690506099" watchObservedRunningTime="2025-11-21 21:16:22.507082231 +0000 UTC m=+4187.693267275" Nov 21 21:16:26 crc kubenswrapper[4727]: I1121 21:16:26.873274 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5chwt" Nov 21 21:16:26 crc kubenswrapper[4727]: I1121 21:16:26.874081 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5chwt" Nov 21 21:16:26 crc kubenswrapper[4727]: I1121 21:16:26.966686 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5chwt" Nov 21 21:16:27 crc kubenswrapper[4727]: I1121 21:16:27.640221 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5chwt" Nov 21 21:16:27 crc kubenswrapper[4727]: I1121 21:16:27.708995 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5chwt"] Nov 21 21:16:29 crc kubenswrapper[4727]: I1121 21:16:29.594572 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5chwt" podUID="366021b4-4859-49f4-863f-688c502f98b0" containerName="registry-server" containerID="cri-o://1538d10f150eeba94a52c23fba03523e5a60e2effa642449be7f48d6b2fa2ccd" gracePeriod=2 Nov 21 21:16:30 crc kubenswrapper[4727]: I1121 21:16:30.211857 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5chwt" Nov 21 21:16:30 crc kubenswrapper[4727]: I1121 21:16:30.330827 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nm92d\" (UniqueName: \"kubernetes.io/projected/366021b4-4859-49f4-863f-688c502f98b0-kube-api-access-nm92d\") pod \"366021b4-4859-49f4-863f-688c502f98b0\" (UID: \"366021b4-4859-49f4-863f-688c502f98b0\") " Nov 21 21:16:30 crc kubenswrapper[4727]: I1121 21:16:30.331150 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/366021b4-4859-49f4-863f-688c502f98b0-catalog-content\") pod \"366021b4-4859-49f4-863f-688c502f98b0\" (UID: \"366021b4-4859-49f4-863f-688c502f98b0\") " Nov 21 21:16:30 crc kubenswrapper[4727]: I1121 21:16:30.331266 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/366021b4-4859-49f4-863f-688c502f98b0-utilities\") pod \"366021b4-4859-49f4-863f-688c502f98b0\" (UID: \"366021b4-4859-49f4-863f-688c502f98b0\") " Nov 21 21:16:30 crc kubenswrapper[4727]: I1121 21:16:30.332771 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/366021b4-4859-49f4-863f-688c502f98b0-utilities" (OuterVolumeSpecName: "utilities") pod "366021b4-4859-49f4-863f-688c502f98b0" (UID: "366021b4-4859-49f4-863f-688c502f98b0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 21:16:30 crc kubenswrapper[4727]: I1121 21:16:30.340024 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/366021b4-4859-49f4-863f-688c502f98b0-kube-api-access-nm92d" (OuterVolumeSpecName: "kube-api-access-nm92d") pod "366021b4-4859-49f4-863f-688c502f98b0" (UID: "366021b4-4859-49f4-863f-688c502f98b0"). InnerVolumeSpecName "kube-api-access-nm92d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 21:16:30 crc kubenswrapper[4727]: I1121 21:16:30.359310 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/366021b4-4859-49f4-863f-688c502f98b0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "366021b4-4859-49f4-863f-688c502f98b0" (UID: "366021b4-4859-49f4-863f-688c502f98b0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 21:16:30 crc kubenswrapper[4727]: I1121 21:16:30.434889 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nm92d\" (UniqueName: \"kubernetes.io/projected/366021b4-4859-49f4-863f-688c502f98b0-kube-api-access-nm92d\") on node \"crc\" DevicePath \"\"" Nov 21 21:16:30 crc kubenswrapper[4727]: I1121 21:16:30.434930 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/366021b4-4859-49f4-863f-688c502f98b0-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 21:16:30 crc kubenswrapper[4727]: I1121 21:16:30.434971 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/366021b4-4859-49f4-863f-688c502f98b0-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 21:16:30 crc kubenswrapper[4727]: I1121 21:16:30.613183 4727 generic.go:334] "Generic (PLEG): container finished" podID="366021b4-4859-49f4-863f-688c502f98b0" containerID="1538d10f150eeba94a52c23fba03523e5a60e2effa642449be7f48d6b2fa2ccd" exitCode=0 Nov 21 21:16:30 crc kubenswrapper[4727]: I1121 21:16:30.613235 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5chwt" event={"ID":"366021b4-4859-49f4-863f-688c502f98b0","Type":"ContainerDied","Data":"1538d10f150eeba94a52c23fba03523e5a60e2effa642449be7f48d6b2fa2ccd"} Nov 21 21:16:30 crc kubenswrapper[4727]: I1121 21:16:30.613268 4727 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-5chwt" event={"ID":"366021b4-4859-49f4-863f-688c502f98b0","Type":"ContainerDied","Data":"beb27be0800fb45d14ee6bea8effbfa3dcb8849efe879ff5f198a879771d51dd"} Nov 21 21:16:30 crc kubenswrapper[4727]: I1121 21:16:30.613320 4727 scope.go:117] "RemoveContainer" containerID="1538d10f150eeba94a52c23fba03523e5a60e2effa642449be7f48d6b2fa2ccd" Nov 21 21:16:30 crc kubenswrapper[4727]: I1121 21:16:30.613348 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5chwt" Nov 21 21:16:30 crc kubenswrapper[4727]: I1121 21:16:30.667905 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5chwt"] Nov 21 21:16:30 crc kubenswrapper[4727]: I1121 21:16:30.670590 4727 scope.go:117] "RemoveContainer" containerID="ecc7292174260b86d85a5f8c8658f8dba23e1767ee9ef77243418a4a0068404d" Nov 21 21:16:30 crc kubenswrapper[4727]: I1121 21:16:30.682312 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5chwt"] Nov 21 21:16:30 crc kubenswrapper[4727]: I1121 21:16:30.711347 4727 scope.go:117] "RemoveContainer" containerID="88c7b11f270892d3853c5401831d03663de98cea02f39220490dbd30069b2853" Nov 21 21:16:30 crc kubenswrapper[4727]: I1121 21:16:30.760817 4727 scope.go:117] "RemoveContainer" containerID="1538d10f150eeba94a52c23fba03523e5a60e2effa642449be7f48d6b2fa2ccd" Nov 21 21:16:30 crc kubenswrapper[4727]: E1121 21:16:30.761485 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1538d10f150eeba94a52c23fba03523e5a60e2effa642449be7f48d6b2fa2ccd\": container with ID starting with 1538d10f150eeba94a52c23fba03523e5a60e2effa642449be7f48d6b2fa2ccd not found: ID does not exist" containerID="1538d10f150eeba94a52c23fba03523e5a60e2effa642449be7f48d6b2fa2ccd" Nov 21 21:16:30 crc kubenswrapper[4727]: I1121 21:16:30.761545 4727 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1538d10f150eeba94a52c23fba03523e5a60e2effa642449be7f48d6b2fa2ccd"} err="failed to get container status \"1538d10f150eeba94a52c23fba03523e5a60e2effa642449be7f48d6b2fa2ccd\": rpc error: code = NotFound desc = could not find container \"1538d10f150eeba94a52c23fba03523e5a60e2effa642449be7f48d6b2fa2ccd\": container with ID starting with 1538d10f150eeba94a52c23fba03523e5a60e2effa642449be7f48d6b2fa2ccd not found: ID does not exist" Nov 21 21:16:30 crc kubenswrapper[4727]: I1121 21:16:30.761590 4727 scope.go:117] "RemoveContainer" containerID="ecc7292174260b86d85a5f8c8658f8dba23e1767ee9ef77243418a4a0068404d" Nov 21 21:16:30 crc kubenswrapper[4727]: E1121 21:16:30.766442 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ecc7292174260b86d85a5f8c8658f8dba23e1767ee9ef77243418a4a0068404d\": container with ID starting with ecc7292174260b86d85a5f8c8658f8dba23e1767ee9ef77243418a4a0068404d not found: ID does not exist" containerID="ecc7292174260b86d85a5f8c8658f8dba23e1767ee9ef77243418a4a0068404d" Nov 21 21:16:30 crc kubenswrapper[4727]: I1121 21:16:30.766518 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecc7292174260b86d85a5f8c8658f8dba23e1767ee9ef77243418a4a0068404d"} err="failed to get container status \"ecc7292174260b86d85a5f8c8658f8dba23e1767ee9ef77243418a4a0068404d\": rpc error: code = NotFound desc = could not find container \"ecc7292174260b86d85a5f8c8658f8dba23e1767ee9ef77243418a4a0068404d\": container with ID starting with ecc7292174260b86d85a5f8c8658f8dba23e1767ee9ef77243418a4a0068404d not found: ID does not exist" Nov 21 21:16:30 crc kubenswrapper[4727]: I1121 21:16:30.766559 4727 scope.go:117] "RemoveContainer" containerID="88c7b11f270892d3853c5401831d03663de98cea02f39220490dbd30069b2853" Nov 21 21:16:30 crc kubenswrapper[4727]: E1121 
21:16:30.767214 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88c7b11f270892d3853c5401831d03663de98cea02f39220490dbd30069b2853\": container with ID starting with 88c7b11f270892d3853c5401831d03663de98cea02f39220490dbd30069b2853 not found: ID does not exist" containerID="88c7b11f270892d3853c5401831d03663de98cea02f39220490dbd30069b2853" Nov 21 21:16:30 crc kubenswrapper[4727]: I1121 21:16:30.767245 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88c7b11f270892d3853c5401831d03663de98cea02f39220490dbd30069b2853"} err="failed to get container status \"88c7b11f270892d3853c5401831d03663de98cea02f39220490dbd30069b2853\": rpc error: code = NotFound desc = could not find container \"88c7b11f270892d3853c5401831d03663de98cea02f39220490dbd30069b2853\": container with ID starting with 88c7b11f270892d3853c5401831d03663de98cea02f39220490dbd30069b2853 not found: ID does not exist" Nov 21 21:16:31 crc kubenswrapper[4727]: I1121 21:16:31.520032 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="366021b4-4859-49f4-863f-688c502f98b0" path="/var/lib/kubelet/pods/366021b4-4859-49f4-863f-688c502f98b0/volumes" Nov 21 21:18:13 crc kubenswrapper[4727]: I1121 21:18:13.335273 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 21:18:13 crc kubenswrapper[4727]: I1121 21:18:13.336016 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Nov 21 21:18:36 crc kubenswrapper[4727]: I1121 21:18:36.281757 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4zdbn"] Nov 21 21:18:36 crc kubenswrapper[4727]: E1121 21:18:36.282865 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="366021b4-4859-49f4-863f-688c502f98b0" containerName="extract-utilities" Nov 21 21:18:36 crc kubenswrapper[4727]: I1121 21:18:36.282882 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="366021b4-4859-49f4-863f-688c502f98b0" containerName="extract-utilities" Nov 21 21:18:36 crc kubenswrapper[4727]: E1121 21:18:36.282907 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="366021b4-4859-49f4-863f-688c502f98b0" containerName="extract-content" Nov 21 21:18:36 crc kubenswrapper[4727]: I1121 21:18:36.282915 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="366021b4-4859-49f4-863f-688c502f98b0" containerName="extract-content" Nov 21 21:18:36 crc kubenswrapper[4727]: E1121 21:18:36.282925 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="366021b4-4859-49f4-863f-688c502f98b0" containerName="registry-server" Nov 21 21:18:36 crc kubenswrapper[4727]: I1121 21:18:36.282933 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="366021b4-4859-49f4-863f-688c502f98b0" containerName="registry-server" Nov 21 21:18:36 crc kubenswrapper[4727]: I1121 21:18:36.286207 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="366021b4-4859-49f4-863f-688c502f98b0" containerName="registry-server" Nov 21 21:18:36 crc kubenswrapper[4727]: I1121 21:18:36.289404 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4zdbn" Nov 21 21:18:36 crc kubenswrapper[4727]: I1121 21:18:36.324199 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4zdbn"] Nov 21 21:18:36 crc kubenswrapper[4727]: I1121 21:18:36.475428 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e9991fb-bb23-44f7-962d-998cf6e827d9-utilities\") pod \"redhat-operators-4zdbn\" (UID: \"8e9991fb-bb23-44f7-962d-998cf6e827d9\") " pod="openshift-marketplace/redhat-operators-4zdbn" Nov 21 21:18:36 crc kubenswrapper[4727]: I1121 21:18:36.476033 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e9991fb-bb23-44f7-962d-998cf6e827d9-catalog-content\") pod \"redhat-operators-4zdbn\" (UID: \"8e9991fb-bb23-44f7-962d-998cf6e827d9\") " pod="openshift-marketplace/redhat-operators-4zdbn" Nov 21 21:18:36 crc kubenswrapper[4727]: I1121 21:18:36.476492 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjv7w\" (UniqueName: \"kubernetes.io/projected/8e9991fb-bb23-44f7-962d-998cf6e827d9-kube-api-access-mjv7w\") pod \"redhat-operators-4zdbn\" (UID: \"8e9991fb-bb23-44f7-962d-998cf6e827d9\") " pod="openshift-marketplace/redhat-operators-4zdbn" Nov 21 21:18:36 crc kubenswrapper[4727]: I1121 21:18:36.579133 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e9991fb-bb23-44f7-962d-998cf6e827d9-catalog-content\") pod \"redhat-operators-4zdbn\" (UID: \"8e9991fb-bb23-44f7-962d-998cf6e827d9\") " pod="openshift-marketplace/redhat-operators-4zdbn" Nov 21 21:18:36 crc kubenswrapper[4727]: I1121 21:18:36.579233 4727 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-mjv7w\" (UniqueName: \"kubernetes.io/projected/8e9991fb-bb23-44f7-962d-998cf6e827d9-kube-api-access-mjv7w\") pod \"redhat-operators-4zdbn\" (UID: \"8e9991fb-bb23-44f7-962d-998cf6e827d9\") " pod="openshift-marketplace/redhat-operators-4zdbn" Nov 21 21:18:36 crc kubenswrapper[4727]: I1121 21:18:36.579310 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e9991fb-bb23-44f7-962d-998cf6e827d9-utilities\") pod \"redhat-operators-4zdbn\" (UID: \"8e9991fb-bb23-44f7-962d-998cf6e827d9\") " pod="openshift-marketplace/redhat-operators-4zdbn" Nov 21 21:18:36 crc kubenswrapper[4727]: I1121 21:18:36.580205 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e9991fb-bb23-44f7-962d-998cf6e827d9-catalog-content\") pod \"redhat-operators-4zdbn\" (UID: \"8e9991fb-bb23-44f7-962d-998cf6e827d9\") " pod="openshift-marketplace/redhat-operators-4zdbn" Nov 21 21:18:36 crc kubenswrapper[4727]: I1121 21:18:36.580276 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e9991fb-bb23-44f7-962d-998cf6e827d9-utilities\") pod \"redhat-operators-4zdbn\" (UID: \"8e9991fb-bb23-44f7-962d-998cf6e827d9\") " pod="openshift-marketplace/redhat-operators-4zdbn" Nov 21 21:18:36 crc kubenswrapper[4727]: I1121 21:18:36.607811 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjv7w\" (UniqueName: \"kubernetes.io/projected/8e9991fb-bb23-44f7-962d-998cf6e827d9-kube-api-access-mjv7w\") pod \"redhat-operators-4zdbn\" (UID: \"8e9991fb-bb23-44f7-962d-998cf6e827d9\") " pod="openshift-marketplace/redhat-operators-4zdbn" Nov 21 21:18:36 crc kubenswrapper[4727]: I1121 21:18:36.614037 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4zdbn" Nov 21 21:18:37 crc kubenswrapper[4727]: I1121 21:18:37.161697 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4zdbn"] Nov 21 21:18:38 crc kubenswrapper[4727]: I1121 21:18:38.326916 4727 generic.go:334] "Generic (PLEG): container finished" podID="8e9991fb-bb23-44f7-962d-998cf6e827d9" containerID="257fd0151797c5622f65f1be5225c4dc7aad226022a1925d86a539cd34497cd0" exitCode=0 Nov 21 21:18:38 crc kubenswrapper[4727]: I1121 21:18:38.327175 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4zdbn" event={"ID":"8e9991fb-bb23-44f7-962d-998cf6e827d9","Type":"ContainerDied","Data":"257fd0151797c5622f65f1be5225c4dc7aad226022a1925d86a539cd34497cd0"} Nov 21 21:18:38 crc kubenswrapper[4727]: I1121 21:18:38.327685 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4zdbn" event={"ID":"8e9991fb-bb23-44f7-962d-998cf6e827d9","Type":"ContainerStarted","Data":"4fe07d9df6fe34b8615dacd30a846a448f105c8aee33cd0191c9db9c898f5740"} Nov 21 21:18:39 crc kubenswrapper[4727]: I1121 21:18:39.340466 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4zdbn" event={"ID":"8e9991fb-bb23-44f7-962d-998cf6e827d9","Type":"ContainerStarted","Data":"838791b34f8d90c2f6255a5eab2500aed599d915c77693a799379d19df55d2c6"} Nov 21 21:18:43 crc kubenswrapper[4727]: I1121 21:18:43.336155 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 21:18:43 crc kubenswrapper[4727]: I1121 21:18:43.337182 4727 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 21:18:43 crc kubenswrapper[4727]: I1121 21:18:43.395683 4727 generic.go:334] "Generic (PLEG): container finished" podID="8e9991fb-bb23-44f7-962d-998cf6e827d9" containerID="838791b34f8d90c2f6255a5eab2500aed599d915c77693a799379d19df55d2c6" exitCode=0 Nov 21 21:18:43 crc kubenswrapper[4727]: I1121 21:18:43.395736 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4zdbn" event={"ID":"8e9991fb-bb23-44f7-962d-998cf6e827d9","Type":"ContainerDied","Data":"838791b34f8d90c2f6255a5eab2500aed599d915c77693a799379d19df55d2c6"} Nov 21 21:18:44 crc kubenswrapper[4727]: I1121 21:18:44.412691 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4zdbn" event={"ID":"8e9991fb-bb23-44f7-962d-998cf6e827d9","Type":"ContainerStarted","Data":"067ea28b8f009fd6c91ad1a9cc300d777c77e0d95a56a6b66d4ba9c8753f111f"} Nov 21 21:18:44 crc kubenswrapper[4727]: I1121 21:18:44.445164 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4zdbn" podStartSLOduration=2.953035102 podStartE2EDuration="8.445138633s" podCreationTimestamp="2025-11-21 21:18:36 +0000 UTC" firstStartedPulling="2025-11-21 21:18:38.330790072 +0000 UTC m=+4323.516975116" lastFinishedPulling="2025-11-21 21:18:43.822893593 +0000 UTC m=+4329.009078647" observedRunningTime="2025-11-21 21:18:44.438654937 +0000 UTC m=+4329.624840011" watchObservedRunningTime="2025-11-21 21:18:44.445138633 +0000 UTC m=+4329.631323677" Nov 21 21:18:46 crc kubenswrapper[4727]: I1121 21:18:46.614175 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4zdbn" Nov 21 
21:18:46 crc kubenswrapper[4727]: I1121 21:18:46.614528 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4zdbn" Nov 21 21:18:47 crc kubenswrapper[4727]: I1121 21:18:47.672307 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4zdbn" podUID="8e9991fb-bb23-44f7-962d-998cf6e827d9" containerName="registry-server" probeResult="failure" output=< Nov 21 21:18:47 crc kubenswrapper[4727]: timeout: failed to connect service ":50051" within 1s Nov 21 21:18:47 crc kubenswrapper[4727]: > Nov 21 21:18:56 crc kubenswrapper[4727]: I1121 21:18:56.706700 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4zdbn" Nov 21 21:18:56 crc kubenswrapper[4727]: I1121 21:18:56.789852 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4zdbn" Nov 21 21:18:56 crc kubenswrapper[4727]: I1121 21:18:56.969463 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4zdbn"] Nov 21 21:18:58 crc kubenswrapper[4727]: I1121 21:18:58.608781 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4zdbn" podUID="8e9991fb-bb23-44f7-962d-998cf6e827d9" containerName="registry-server" containerID="cri-o://067ea28b8f009fd6c91ad1a9cc300d777c77e0d95a56a6b66d4ba9c8753f111f" gracePeriod=2 Nov 21 21:18:59 crc kubenswrapper[4727]: I1121 21:18:59.193643 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4zdbn" Nov 21 21:18:59 crc kubenswrapper[4727]: I1121 21:18:59.333548 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjv7w\" (UniqueName: \"kubernetes.io/projected/8e9991fb-bb23-44f7-962d-998cf6e827d9-kube-api-access-mjv7w\") pod \"8e9991fb-bb23-44f7-962d-998cf6e827d9\" (UID: \"8e9991fb-bb23-44f7-962d-998cf6e827d9\") " Nov 21 21:18:59 crc kubenswrapper[4727]: I1121 21:18:59.334094 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e9991fb-bb23-44f7-962d-998cf6e827d9-catalog-content\") pod \"8e9991fb-bb23-44f7-962d-998cf6e827d9\" (UID: \"8e9991fb-bb23-44f7-962d-998cf6e827d9\") " Nov 21 21:18:59 crc kubenswrapper[4727]: I1121 21:18:59.334335 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e9991fb-bb23-44f7-962d-998cf6e827d9-utilities\") pod \"8e9991fb-bb23-44f7-962d-998cf6e827d9\" (UID: \"8e9991fb-bb23-44f7-962d-998cf6e827d9\") " Nov 21 21:18:59 crc kubenswrapper[4727]: I1121 21:18:59.335587 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e9991fb-bb23-44f7-962d-998cf6e827d9-utilities" (OuterVolumeSpecName: "utilities") pod "8e9991fb-bb23-44f7-962d-998cf6e827d9" (UID: "8e9991fb-bb23-44f7-962d-998cf6e827d9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 21:18:59 crc kubenswrapper[4727]: I1121 21:18:59.340385 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e9991fb-bb23-44f7-962d-998cf6e827d9-kube-api-access-mjv7w" (OuterVolumeSpecName: "kube-api-access-mjv7w") pod "8e9991fb-bb23-44f7-962d-998cf6e827d9" (UID: "8e9991fb-bb23-44f7-962d-998cf6e827d9"). InnerVolumeSpecName "kube-api-access-mjv7w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 21:18:59 crc kubenswrapper[4727]: I1121 21:18:59.436937 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e9991fb-bb23-44f7-962d-998cf6e827d9-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 21:18:59 crc kubenswrapper[4727]: I1121 21:18:59.437003 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjv7w\" (UniqueName: \"kubernetes.io/projected/8e9991fb-bb23-44f7-962d-998cf6e827d9-kube-api-access-mjv7w\") on node \"crc\" DevicePath \"\"" Nov 21 21:18:59 crc kubenswrapper[4727]: I1121 21:18:59.441755 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e9991fb-bb23-44f7-962d-998cf6e827d9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8e9991fb-bb23-44f7-962d-998cf6e827d9" (UID: "8e9991fb-bb23-44f7-962d-998cf6e827d9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 21:18:59 crc kubenswrapper[4727]: I1121 21:18:59.540132 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e9991fb-bb23-44f7-962d-998cf6e827d9-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 21:18:59 crc kubenswrapper[4727]: I1121 21:18:59.620708 4727 generic.go:334] "Generic (PLEG): container finished" podID="8e9991fb-bb23-44f7-962d-998cf6e827d9" containerID="067ea28b8f009fd6c91ad1a9cc300d777c77e0d95a56a6b66d4ba9c8753f111f" exitCode=0 Nov 21 21:18:59 crc kubenswrapper[4727]: I1121 21:18:59.620765 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4zdbn" event={"ID":"8e9991fb-bb23-44f7-962d-998cf6e827d9","Type":"ContainerDied","Data":"067ea28b8f009fd6c91ad1a9cc300d777c77e0d95a56a6b66d4ba9c8753f111f"} Nov 21 21:18:59 crc kubenswrapper[4727]: I1121 21:18:59.620775 4727 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4zdbn" Nov 21 21:18:59 crc kubenswrapper[4727]: I1121 21:18:59.620808 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4zdbn" event={"ID":"8e9991fb-bb23-44f7-962d-998cf6e827d9","Type":"ContainerDied","Data":"4fe07d9df6fe34b8615dacd30a846a448f105c8aee33cd0191c9db9c898f5740"} Nov 21 21:18:59 crc kubenswrapper[4727]: I1121 21:18:59.620835 4727 scope.go:117] "RemoveContainer" containerID="067ea28b8f009fd6c91ad1a9cc300d777c77e0d95a56a6b66d4ba9c8753f111f" Nov 21 21:18:59 crc kubenswrapper[4727]: I1121 21:18:59.646236 4727 scope.go:117] "RemoveContainer" containerID="838791b34f8d90c2f6255a5eab2500aed599d915c77693a799379d19df55d2c6" Nov 21 21:18:59 crc kubenswrapper[4727]: I1121 21:18:59.651026 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4zdbn"] Nov 21 21:18:59 crc kubenswrapper[4727]: I1121 21:18:59.664350 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4zdbn"] Nov 21 21:18:59 crc kubenswrapper[4727]: I1121 21:18:59.671596 4727 scope.go:117] "RemoveContainer" containerID="257fd0151797c5622f65f1be5225c4dc7aad226022a1925d86a539cd34497cd0" Nov 21 21:18:59 crc kubenswrapper[4727]: I1121 21:18:59.739660 4727 scope.go:117] "RemoveContainer" containerID="067ea28b8f009fd6c91ad1a9cc300d777c77e0d95a56a6b66d4ba9c8753f111f" Nov 21 21:18:59 crc kubenswrapper[4727]: E1121 21:18:59.740266 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"067ea28b8f009fd6c91ad1a9cc300d777c77e0d95a56a6b66d4ba9c8753f111f\": container with ID starting with 067ea28b8f009fd6c91ad1a9cc300d777c77e0d95a56a6b66d4ba9c8753f111f not found: ID does not exist" containerID="067ea28b8f009fd6c91ad1a9cc300d777c77e0d95a56a6b66d4ba9c8753f111f" Nov 21 21:18:59 crc kubenswrapper[4727]: I1121 21:18:59.740327 4727 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"067ea28b8f009fd6c91ad1a9cc300d777c77e0d95a56a6b66d4ba9c8753f111f"} err="failed to get container status \"067ea28b8f009fd6c91ad1a9cc300d777c77e0d95a56a6b66d4ba9c8753f111f\": rpc error: code = NotFound desc = could not find container \"067ea28b8f009fd6c91ad1a9cc300d777c77e0d95a56a6b66d4ba9c8753f111f\": container with ID starting with 067ea28b8f009fd6c91ad1a9cc300d777c77e0d95a56a6b66d4ba9c8753f111f not found: ID does not exist" Nov 21 21:18:59 crc kubenswrapper[4727]: I1121 21:18:59.740369 4727 scope.go:117] "RemoveContainer" containerID="838791b34f8d90c2f6255a5eab2500aed599d915c77693a799379d19df55d2c6" Nov 21 21:18:59 crc kubenswrapper[4727]: E1121 21:18:59.740948 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"838791b34f8d90c2f6255a5eab2500aed599d915c77693a799379d19df55d2c6\": container with ID starting with 838791b34f8d90c2f6255a5eab2500aed599d915c77693a799379d19df55d2c6 not found: ID does not exist" containerID="838791b34f8d90c2f6255a5eab2500aed599d915c77693a799379d19df55d2c6" Nov 21 21:18:59 crc kubenswrapper[4727]: I1121 21:18:59.741004 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"838791b34f8d90c2f6255a5eab2500aed599d915c77693a799379d19df55d2c6"} err="failed to get container status \"838791b34f8d90c2f6255a5eab2500aed599d915c77693a799379d19df55d2c6\": rpc error: code = NotFound desc = could not find container \"838791b34f8d90c2f6255a5eab2500aed599d915c77693a799379d19df55d2c6\": container with ID starting with 838791b34f8d90c2f6255a5eab2500aed599d915c77693a799379d19df55d2c6 not found: ID does not exist" Nov 21 21:18:59 crc kubenswrapper[4727]: I1121 21:18:59.741033 4727 scope.go:117] "RemoveContainer" containerID="257fd0151797c5622f65f1be5225c4dc7aad226022a1925d86a539cd34497cd0" Nov 21 21:18:59 crc kubenswrapper[4727]: E1121 
21:18:59.741419 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"257fd0151797c5622f65f1be5225c4dc7aad226022a1925d86a539cd34497cd0\": container with ID starting with 257fd0151797c5622f65f1be5225c4dc7aad226022a1925d86a539cd34497cd0 not found: ID does not exist" containerID="257fd0151797c5622f65f1be5225c4dc7aad226022a1925d86a539cd34497cd0" Nov 21 21:18:59 crc kubenswrapper[4727]: I1121 21:18:59.741446 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"257fd0151797c5622f65f1be5225c4dc7aad226022a1925d86a539cd34497cd0"} err="failed to get container status \"257fd0151797c5622f65f1be5225c4dc7aad226022a1925d86a539cd34497cd0\": rpc error: code = NotFound desc = could not find container \"257fd0151797c5622f65f1be5225c4dc7aad226022a1925d86a539cd34497cd0\": container with ID starting with 257fd0151797c5622f65f1be5225c4dc7aad226022a1925d86a539cd34497cd0 not found: ID does not exist" Nov 21 21:19:01 crc kubenswrapper[4727]: I1121 21:19:01.515486 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e9991fb-bb23-44f7-962d-998cf6e827d9" path="/var/lib/kubelet/pods/8e9991fb-bb23-44f7-962d-998cf6e827d9/volumes" Nov 21 21:19:13 crc kubenswrapper[4727]: I1121 21:19:13.335533 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 21:19:13 crc kubenswrapper[4727]: I1121 21:19:13.336557 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Nov 21 21:19:13 crc kubenswrapper[4727]: I1121 21:19:13.336654 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" Nov 21 21:19:13 crc kubenswrapper[4727]: I1121 21:19:13.338120 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0fa05a8295e880d3feee61c5c714acfb239ca392d85b898e7a19a77e1ec790cb"} pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 21:19:13 crc kubenswrapper[4727]: I1121 21:19:13.338229 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" containerID="cri-o://0fa05a8295e880d3feee61c5c714acfb239ca392d85b898e7a19a77e1ec790cb" gracePeriod=600 Nov 21 21:19:13 crc kubenswrapper[4727]: E1121 21:19:13.475410 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:19:13 crc kubenswrapper[4727]: I1121 21:19:13.778585 4727 generic.go:334] "Generic (PLEG): container finished" podID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerID="0fa05a8295e880d3feee61c5c714acfb239ca392d85b898e7a19a77e1ec790cb" exitCode=0 Nov 21 21:19:13 crc kubenswrapper[4727]: I1121 21:19:13.778640 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" 
event={"ID":"b58aef8f-f223-47d8-a2e6-4a80aeeeec42","Type":"ContainerDied","Data":"0fa05a8295e880d3feee61c5c714acfb239ca392d85b898e7a19a77e1ec790cb"} Nov 21 21:19:13 crc kubenswrapper[4727]: I1121 21:19:13.778697 4727 scope.go:117] "RemoveContainer" containerID="7ae40f9e4c7fcee7dfaba722bc9ffd6452efe5dd7742306102d90362ec7cdef2" Nov 21 21:19:13 crc kubenswrapper[4727]: I1121 21:19:13.780433 4727 scope.go:117] "RemoveContainer" containerID="0fa05a8295e880d3feee61c5c714acfb239ca392d85b898e7a19a77e1ec790cb" Nov 21 21:19:13 crc kubenswrapper[4727]: E1121 21:19:13.781293 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:19:29 crc kubenswrapper[4727]: I1121 21:19:29.500132 4727 scope.go:117] "RemoveContainer" containerID="0fa05a8295e880d3feee61c5c714acfb239ca392d85b898e7a19a77e1ec790cb" Nov 21 21:19:29 crc kubenswrapper[4727]: E1121 21:19:29.502119 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:19:40 crc kubenswrapper[4727]: I1121 21:19:40.501480 4727 scope.go:117] "RemoveContainer" containerID="0fa05a8295e880d3feee61c5c714acfb239ca392d85b898e7a19a77e1ec790cb" Nov 21 21:19:40 crc kubenswrapper[4727]: E1121 21:19:40.502412 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:19:55 crc kubenswrapper[4727]: I1121 21:19:55.510295 4727 scope.go:117] "RemoveContainer" containerID="0fa05a8295e880d3feee61c5c714acfb239ca392d85b898e7a19a77e1ec790cb" Nov 21 21:19:55 crc kubenswrapper[4727]: E1121 21:19:55.511672 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:20:02 crc kubenswrapper[4727]: I1121 21:20:02.543197 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tvdsv"] Nov 21 21:20:02 crc kubenswrapper[4727]: E1121 21:20:02.544623 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e9991fb-bb23-44f7-962d-998cf6e827d9" containerName="registry-server" Nov 21 21:20:02 crc kubenswrapper[4727]: I1121 21:20:02.544637 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e9991fb-bb23-44f7-962d-998cf6e827d9" containerName="registry-server" Nov 21 21:20:02 crc kubenswrapper[4727]: E1121 21:20:02.544647 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e9991fb-bb23-44f7-962d-998cf6e827d9" containerName="extract-utilities" Nov 21 21:20:02 crc kubenswrapper[4727]: I1121 21:20:02.544656 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e9991fb-bb23-44f7-962d-998cf6e827d9" containerName="extract-utilities" 
Nov 21 21:20:02 crc kubenswrapper[4727]: E1121 21:20:02.544700 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e9991fb-bb23-44f7-962d-998cf6e827d9" containerName="extract-content" Nov 21 21:20:02 crc kubenswrapper[4727]: I1121 21:20:02.544708 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e9991fb-bb23-44f7-962d-998cf6e827d9" containerName="extract-content" Nov 21 21:20:02 crc kubenswrapper[4727]: I1121 21:20:02.544975 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e9991fb-bb23-44f7-962d-998cf6e827d9" containerName="registry-server" Nov 21 21:20:02 crc kubenswrapper[4727]: I1121 21:20:02.546944 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tvdsv" Nov 21 21:20:02 crc kubenswrapper[4727]: I1121 21:20:02.564197 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tvdsv"] Nov 21 21:20:02 crc kubenswrapper[4727]: I1121 21:20:02.627829 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9399872d-e9a7-4b41-b25b-58155c649711-catalog-content\") pod \"certified-operators-tvdsv\" (UID: \"9399872d-e9a7-4b41-b25b-58155c649711\") " pod="openshift-marketplace/certified-operators-tvdsv" Nov 21 21:20:02 crc kubenswrapper[4727]: I1121 21:20:02.628494 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sppc\" (UniqueName: \"kubernetes.io/projected/9399872d-e9a7-4b41-b25b-58155c649711-kube-api-access-6sppc\") pod \"certified-operators-tvdsv\" (UID: \"9399872d-e9a7-4b41-b25b-58155c649711\") " pod="openshift-marketplace/certified-operators-tvdsv" Nov 21 21:20:02 crc kubenswrapper[4727]: I1121 21:20:02.628529 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/9399872d-e9a7-4b41-b25b-58155c649711-utilities\") pod \"certified-operators-tvdsv\" (UID: \"9399872d-e9a7-4b41-b25b-58155c649711\") " pod="openshift-marketplace/certified-operators-tvdsv" Nov 21 21:20:02 crc kubenswrapper[4727]: I1121 21:20:02.731917 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6sppc\" (UniqueName: \"kubernetes.io/projected/9399872d-e9a7-4b41-b25b-58155c649711-kube-api-access-6sppc\") pod \"certified-operators-tvdsv\" (UID: \"9399872d-e9a7-4b41-b25b-58155c649711\") " pod="openshift-marketplace/certified-operators-tvdsv" Nov 21 21:20:02 crc kubenswrapper[4727]: I1121 21:20:02.732028 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9399872d-e9a7-4b41-b25b-58155c649711-utilities\") pod \"certified-operators-tvdsv\" (UID: \"9399872d-e9a7-4b41-b25b-58155c649711\") " pod="openshift-marketplace/certified-operators-tvdsv" Nov 21 21:20:02 crc kubenswrapper[4727]: I1121 21:20:02.732201 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9399872d-e9a7-4b41-b25b-58155c649711-catalog-content\") pod \"certified-operators-tvdsv\" (UID: \"9399872d-e9a7-4b41-b25b-58155c649711\") " pod="openshift-marketplace/certified-operators-tvdsv" Nov 21 21:20:02 crc kubenswrapper[4727]: I1121 21:20:02.732699 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9399872d-e9a7-4b41-b25b-58155c649711-utilities\") pod \"certified-operators-tvdsv\" (UID: \"9399872d-e9a7-4b41-b25b-58155c649711\") " pod="openshift-marketplace/certified-operators-tvdsv" Nov 21 21:20:02 crc kubenswrapper[4727]: I1121 21:20:02.733163 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/9399872d-e9a7-4b41-b25b-58155c649711-catalog-content\") pod \"certified-operators-tvdsv\" (UID: \"9399872d-e9a7-4b41-b25b-58155c649711\") " pod="openshift-marketplace/certified-operators-tvdsv" Nov 21 21:20:02 crc kubenswrapper[4727]: I1121 21:20:02.758087 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sppc\" (UniqueName: \"kubernetes.io/projected/9399872d-e9a7-4b41-b25b-58155c649711-kube-api-access-6sppc\") pod \"certified-operators-tvdsv\" (UID: \"9399872d-e9a7-4b41-b25b-58155c649711\") " pod="openshift-marketplace/certified-operators-tvdsv" Nov 21 21:20:02 crc kubenswrapper[4727]: I1121 21:20:02.882290 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tvdsv" Nov 21 21:20:03 crc kubenswrapper[4727]: I1121 21:20:03.455893 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tvdsv"] Nov 21 21:20:03 crc kubenswrapper[4727]: I1121 21:20:03.576709 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tvdsv" event={"ID":"9399872d-e9a7-4b41-b25b-58155c649711","Type":"ContainerStarted","Data":"fc21b6e01bd7f63d1834a11f684842e8678057ae9ac47c74bea341049bb01e41"} Nov 21 21:20:05 crc kubenswrapper[4727]: I1121 21:20:05.634658 4727 generic.go:334] "Generic (PLEG): container finished" podID="9399872d-e9a7-4b41-b25b-58155c649711" containerID="6a70d3dc74010ffadfdb98d8be217c7161dbfbee40dd2a167dff285e17f43c40" exitCode=0 Nov 21 21:20:05 crc kubenswrapper[4727]: I1121 21:20:05.634851 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tvdsv" event={"ID":"9399872d-e9a7-4b41-b25b-58155c649711","Type":"ContainerDied","Data":"6a70d3dc74010ffadfdb98d8be217c7161dbfbee40dd2a167dff285e17f43c40"} Nov 21 21:20:07 crc kubenswrapper[4727]: I1121 21:20:07.684851 4727 generic.go:334] "Generic (PLEG): container 
finished" podID="9399872d-e9a7-4b41-b25b-58155c649711" containerID="b8bb4ff6878c7f11ab8076f3964c4b3f8d1735932fd4ccff9bd3b34ce5e5b886" exitCode=0 Nov 21 21:20:07 crc kubenswrapper[4727]: I1121 21:20:07.684947 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tvdsv" event={"ID":"9399872d-e9a7-4b41-b25b-58155c649711","Type":"ContainerDied","Data":"b8bb4ff6878c7f11ab8076f3964c4b3f8d1735932fd4ccff9bd3b34ce5e5b886"} Nov 21 21:20:08 crc kubenswrapper[4727]: I1121 21:20:08.704547 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tvdsv" event={"ID":"9399872d-e9a7-4b41-b25b-58155c649711","Type":"ContainerStarted","Data":"7048696cc1bd6bc1bc4e78fabeedd33bb0597fad65f66ec9cec6f6c212d9d54f"} Nov 21 21:20:08 crc kubenswrapper[4727]: I1121 21:20:08.750750 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tvdsv" podStartSLOduration=4.29770906 podStartE2EDuration="6.750675504s" podCreationTimestamp="2025-11-21 21:20:02 +0000 UTC" firstStartedPulling="2025-11-21 21:20:05.64191986 +0000 UTC m=+4410.828104914" lastFinishedPulling="2025-11-21 21:20:08.094886274 +0000 UTC m=+4413.281071358" observedRunningTime="2025-11-21 21:20:08.732560841 +0000 UTC m=+4413.918745935" watchObservedRunningTime="2025-11-21 21:20:08.750675504 +0000 UTC m=+4413.936860548" Nov 21 21:20:09 crc kubenswrapper[4727]: I1121 21:20:09.499499 4727 scope.go:117] "RemoveContainer" containerID="0fa05a8295e880d3feee61c5c714acfb239ca392d85b898e7a19a77e1ec790cb" Nov 21 21:20:09 crc kubenswrapper[4727]: E1121 21:20:09.499769 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:20:12 crc kubenswrapper[4727]: I1121 21:20:12.883619 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tvdsv" Nov 21 21:20:12 crc kubenswrapper[4727]: I1121 21:20:12.884657 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tvdsv" Nov 21 21:20:12 crc kubenswrapper[4727]: I1121 21:20:12.973345 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tvdsv" Nov 21 21:20:13 crc kubenswrapper[4727]: I1121 21:20:13.837485 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tvdsv" Nov 21 21:20:13 crc kubenswrapper[4727]: I1121 21:20:13.903185 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tvdsv"] Nov 21 21:20:15 crc kubenswrapper[4727]: I1121 21:20:15.799022 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tvdsv" podUID="9399872d-e9a7-4b41-b25b-58155c649711" containerName="registry-server" containerID="cri-o://7048696cc1bd6bc1bc4e78fabeedd33bb0597fad65f66ec9cec6f6c212d9d54f" gracePeriod=2 Nov 21 21:20:16 crc kubenswrapper[4727]: I1121 21:20:16.814620 4727 generic.go:334] "Generic (PLEG): container finished" podID="9399872d-e9a7-4b41-b25b-58155c649711" containerID="7048696cc1bd6bc1bc4e78fabeedd33bb0597fad65f66ec9cec6f6c212d9d54f" exitCode=0 Nov 21 21:20:16 crc kubenswrapper[4727]: I1121 21:20:16.814703 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tvdsv" event={"ID":"9399872d-e9a7-4b41-b25b-58155c649711","Type":"ContainerDied","Data":"7048696cc1bd6bc1bc4e78fabeedd33bb0597fad65f66ec9cec6f6c212d9d54f"} Nov 
21 21:20:17 crc kubenswrapper[4727]: I1121 21:20:17.324620 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tvdsv" Nov 21 21:20:17 crc kubenswrapper[4727]: I1121 21:20:17.412828 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9399872d-e9a7-4b41-b25b-58155c649711-utilities\") pod \"9399872d-e9a7-4b41-b25b-58155c649711\" (UID: \"9399872d-e9a7-4b41-b25b-58155c649711\") " Nov 21 21:20:17 crc kubenswrapper[4727]: I1121 21:20:17.414525 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9399872d-e9a7-4b41-b25b-58155c649711-catalog-content\") pod \"9399872d-e9a7-4b41-b25b-58155c649711\" (UID: \"9399872d-e9a7-4b41-b25b-58155c649711\") " Nov 21 21:20:17 crc kubenswrapper[4727]: I1121 21:20:17.414592 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6sppc\" (UniqueName: \"kubernetes.io/projected/9399872d-e9a7-4b41-b25b-58155c649711-kube-api-access-6sppc\") pod \"9399872d-e9a7-4b41-b25b-58155c649711\" (UID: \"9399872d-e9a7-4b41-b25b-58155c649711\") " Nov 21 21:20:17 crc kubenswrapper[4727]: I1121 21:20:17.413863 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9399872d-e9a7-4b41-b25b-58155c649711-utilities" (OuterVolumeSpecName: "utilities") pod "9399872d-e9a7-4b41-b25b-58155c649711" (UID: "9399872d-e9a7-4b41-b25b-58155c649711"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 21:20:17 crc kubenswrapper[4727]: I1121 21:20:17.415399 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9399872d-e9a7-4b41-b25b-58155c649711-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 21:20:17 crc kubenswrapper[4727]: I1121 21:20:17.427870 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9399872d-e9a7-4b41-b25b-58155c649711-kube-api-access-6sppc" (OuterVolumeSpecName: "kube-api-access-6sppc") pod "9399872d-e9a7-4b41-b25b-58155c649711" (UID: "9399872d-e9a7-4b41-b25b-58155c649711"). InnerVolumeSpecName "kube-api-access-6sppc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 21:20:17 crc kubenswrapper[4727]: I1121 21:20:17.522769 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6sppc\" (UniqueName: \"kubernetes.io/projected/9399872d-e9a7-4b41-b25b-58155c649711-kube-api-access-6sppc\") on node \"crc\" DevicePath \"\"" Nov 21 21:20:17 crc kubenswrapper[4727]: I1121 21:20:17.598617 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9399872d-e9a7-4b41-b25b-58155c649711-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9399872d-e9a7-4b41-b25b-58155c649711" (UID: "9399872d-e9a7-4b41-b25b-58155c649711"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 21:20:17 crc kubenswrapper[4727]: I1121 21:20:17.624688 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9399872d-e9a7-4b41-b25b-58155c649711-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 21:20:17 crc kubenswrapper[4727]: I1121 21:20:17.827704 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tvdsv" event={"ID":"9399872d-e9a7-4b41-b25b-58155c649711","Type":"ContainerDied","Data":"fc21b6e01bd7f63d1834a11f684842e8678057ae9ac47c74bea341049bb01e41"} Nov 21 21:20:17 crc kubenswrapper[4727]: I1121 21:20:17.827777 4727 scope.go:117] "RemoveContainer" containerID="7048696cc1bd6bc1bc4e78fabeedd33bb0597fad65f66ec9cec6f6c212d9d54f" Nov 21 21:20:17 crc kubenswrapper[4727]: I1121 21:20:17.827952 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tvdsv" Nov 21 21:20:17 crc kubenswrapper[4727]: I1121 21:20:17.871927 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tvdsv"] Nov 21 21:20:17 crc kubenswrapper[4727]: I1121 21:20:17.880877 4727 scope.go:117] "RemoveContainer" containerID="b8bb4ff6878c7f11ab8076f3964c4b3f8d1735932fd4ccff9bd3b34ce5e5b886" Nov 21 21:20:17 crc kubenswrapper[4727]: I1121 21:20:17.884662 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tvdsv"] Nov 21 21:20:17 crc kubenswrapper[4727]: I1121 21:20:17.913770 4727 scope.go:117] "RemoveContainer" containerID="6a70d3dc74010ffadfdb98d8be217c7161dbfbee40dd2a167dff285e17f43c40" Nov 21 21:20:19 crc kubenswrapper[4727]: I1121 21:20:19.517499 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9399872d-e9a7-4b41-b25b-58155c649711" path="/var/lib/kubelet/pods/9399872d-e9a7-4b41-b25b-58155c649711/volumes" Nov 21 21:20:23 crc 
kubenswrapper[4727]: I1121 21:20:23.501791 4727 scope.go:117] "RemoveContainer" containerID="0fa05a8295e880d3feee61c5c714acfb239ca392d85b898e7a19a77e1ec790cb" Nov 21 21:20:23 crc kubenswrapper[4727]: E1121 21:20:23.503756 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:20:37 crc kubenswrapper[4727]: I1121 21:20:37.499725 4727 scope.go:117] "RemoveContainer" containerID="0fa05a8295e880d3feee61c5c714acfb239ca392d85b898e7a19a77e1ec790cb" Nov 21 21:20:37 crc kubenswrapper[4727]: E1121 21:20:37.500924 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:20:41 crc kubenswrapper[4727]: I1121 21:20:41.756484 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9bz5h"] Nov 21 21:20:41 crc kubenswrapper[4727]: E1121 21:20:41.758263 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9399872d-e9a7-4b41-b25b-58155c649711" containerName="extract-content" Nov 21 21:20:41 crc kubenswrapper[4727]: I1121 21:20:41.758287 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="9399872d-e9a7-4b41-b25b-58155c649711" containerName="extract-content" Nov 21 21:20:41 crc kubenswrapper[4727]: E1121 21:20:41.758345 4727 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="9399872d-e9a7-4b41-b25b-58155c649711" containerName="extract-utilities" Nov 21 21:20:41 crc kubenswrapper[4727]: I1121 21:20:41.758359 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="9399872d-e9a7-4b41-b25b-58155c649711" containerName="extract-utilities" Nov 21 21:20:41 crc kubenswrapper[4727]: E1121 21:20:41.758414 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9399872d-e9a7-4b41-b25b-58155c649711" containerName="registry-server" Nov 21 21:20:41 crc kubenswrapper[4727]: I1121 21:20:41.758432 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="9399872d-e9a7-4b41-b25b-58155c649711" containerName="registry-server" Nov 21 21:20:41 crc kubenswrapper[4727]: I1121 21:20:41.758871 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="9399872d-e9a7-4b41-b25b-58155c649711" containerName="registry-server" Nov 21 21:20:41 crc kubenswrapper[4727]: I1121 21:20:41.762239 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9bz5h" Nov 21 21:20:41 crc kubenswrapper[4727]: I1121 21:20:41.789142 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9bz5h"] Nov 21 21:20:41 crc kubenswrapper[4727]: I1121 21:20:41.957663 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24eafe66-b9c8-4f82-8f93-4a5e1fe42d77-utilities\") pod \"community-operators-9bz5h\" (UID: \"24eafe66-b9c8-4f82-8f93-4a5e1fe42d77\") " pod="openshift-marketplace/community-operators-9bz5h" Nov 21 21:20:41 crc kubenswrapper[4727]: I1121 21:20:41.957794 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24eafe66-b9c8-4f82-8f93-4a5e1fe42d77-catalog-content\") pod \"community-operators-9bz5h\" (UID: \"24eafe66-b9c8-4f82-8f93-4a5e1fe42d77\") " pod="openshift-marketplace/community-operators-9bz5h" Nov 21 21:20:41 crc kubenswrapper[4727]: I1121 21:20:41.957986 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7xmh\" (UniqueName: \"kubernetes.io/projected/24eafe66-b9c8-4f82-8f93-4a5e1fe42d77-kube-api-access-m7xmh\") pod \"community-operators-9bz5h\" (UID: \"24eafe66-b9c8-4f82-8f93-4a5e1fe42d77\") " pod="openshift-marketplace/community-operators-9bz5h" Nov 21 21:20:42 crc kubenswrapper[4727]: I1121 21:20:42.061643 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7xmh\" (UniqueName: \"kubernetes.io/projected/24eafe66-b9c8-4f82-8f93-4a5e1fe42d77-kube-api-access-m7xmh\") pod \"community-operators-9bz5h\" (UID: \"24eafe66-b9c8-4f82-8f93-4a5e1fe42d77\") " pod="openshift-marketplace/community-operators-9bz5h" Nov 21 21:20:42 crc kubenswrapper[4727]: I1121 21:20:42.062003 4727 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24eafe66-b9c8-4f82-8f93-4a5e1fe42d77-utilities\") pod \"community-operators-9bz5h\" (UID: \"24eafe66-b9c8-4f82-8f93-4a5e1fe42d77\") " pod="openshift-marketplace/community-operators-9bz5h" Nov 21 21:20:42 crc kubenswrapper[4727]: I1121 21:20:42.062084 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24eafe66-b9c8-4f82-8f93-4a5e1fe42d77-catalog-content\") pod \"community-operators-9bz5h\" (UID: \"24eafe66-b9c8-4f82-8f93-4a5e1fe42d77\") " pod="openshift-marketplace/community-operators-9bz5h" Nov 21 21:20:42 crc kubenswrapper[4727]: I1121 21:20:42.062763 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24eafe66-b9c8-4f82-8f93-4a5e1fe42d77-catalog-content\") pod \"community-operators-9bz5h\" (UID: \"24eafe66-b9c8-4f82-8f93-4a5e1fe42d77\") " pod="openshift-marketplace/community-operators-9bz5h" Nov 21 21:20:42 crc kubenswrapper[4727]: I1121 21:20:42.062775 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24eafe66-b9c8-4f82-8f93-4a5e1fe42d77-utilities\") pod \"community-operators-9bz5h\" (UID: \"24eafe66-b9c8-4f82-8f93-4a5e1fe42d77\") " pod="openshift-marketplace/community-operators-9bz5h" Nov 21 21:20:42 crc kubenswrapper[4727]: I1121 21:20:42.085945 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7xmh\" (UniqueName: \"kubernetes.io/projected/24eafe66-b9c8-4f82-8f93-4a5e1fe42d77-kube-api-access-m7xmh\") pod \"community-operators-9bz5h\" (UID: \"24eafe66-b9c8-4f82-8f93-4a5e1fe42d77\") " pod="openshift-marketplace/community-operators-9bz5h" Nov 21 21:20:42 crc kubenswrapper[4727]: I1121 21:20:42.111783 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9bz5h" Nov 21 21:20:42 crc kubenswrapper[4727]: I1121 21:20:42.695131 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9bz5h"] Nov 21 21:20:43 crc kubenswrapper[4727]: I1121 21:20:43.472503 4727 generic.go:334] "Generic (PLEG): container finished" podID="24eafe66-b9c8-4f82-8f93-4a5e1fe42d77" containerID="03d32af7e7c93d50605a95c80f5577ceea638602427b8c216e89f2a3fdf18272" exitCode=0 Nov 21 21:20:43 crc kubenswrapper[4727]: I1121 21:20:43.473181 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9bz5h" event={"ID":"24eafe66-b9c8-4f82-8f93-4a5e1fe42d77","Type":"ContainerDied","Data":"03d32af7e7c93d50605a95c80f5577ceea638602427b8c216e89f2a3fdf18272"} Nov 21 21:20:43 crc kubenswrapper[4727]: I1121 21:20:43.473373 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9bz5h" event={"ID":"24eafe66-b9c8-4f82-8f93-4a5e1fe42d77","Type":"ContainerStarted","Data":"abc443d72565a5a824fd4cee6738dc6961c10eb9f9de792eb68472fd7cfaf21d"} Nov 21 21:20:44 crc kubenswrapper[4727]: I1121 21:20:44.493429 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9bz5h" event={"ID":"24eafe66-b9c8-4f82-8f93-4a5e1fe42d77","Type":"ContainerStarted","Data":"49e01fe53844e8179699e2fbd0d8f70cc573e1b8dc6b10a4866967943f514fe3"} Nov 21 21:20:45 crc kubenswrapper[4727]: I1121 21:20:45.510010 4727 generic.go:334] "Generic (PLEG): container finished" podID="24eafe66-b9c8-4f82-8f93-4a5e1fe42d77" containerID="49e01fe53844e8179699e2fbd0d8f70cc573e1b8dc6b10a4866967943f514fe3" exitCode=0 Nov 21 21:20:45 crc kubenswrapper[4727]: I1121 21:20:45.516393 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9bz5h" 
event={"ID":"24eafe66-b9c8-4f82-8f93-4a5e1fe42d77","Type":"ContainerDied","Data":"49e01fe53844e8179699e2fbd0d8f70cc573e1b8dc6b10a4866967943f514fe3"} Nov 21 21:20:47 crc kubenswrapper[4727]: I1121 21:20:47.546103 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9bz5h" event={"ID":"24eafe66-b9c8-4f82-8f93-4a5e1fe42d77","Type":"ContainerStarted","Data":"5d507956c3312f76debdcf30c043f95853f8e00428f31b005cad0e1d81470951"} Nov 21 21:20:47 crc kubenswrapper[4727]: I1121 21:20:47.572101 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9bz5h" podStartSLOduration=4.088781834 podStartE2EDuration="6.572079594s" podCreationTimestamp="2025-11-21 21:20:41 +0000 UTC" firstStartedPulling="2025-11-21 21:20:43.478382713 +0000 UTC m=+4448.664567787" lastFinishedPulling="2025-11-21 21:20:45.961680503 +0000 UTC m=+4451.147865547" observedRunningTime="2025-11-21 21:20:47.566338107 +0000 UTC m=+4452.752523151" watchObservedRunningTime="2025-11-21 21:20:47.572079594 +0000 UTC m=+4452.758264638" Nov 21 21:20:48 crc kubenswrapper[4727]: I1121 21:20:48.499458 4727 scope.go:117] "RemoveContainer" containerID="0fa05a8295e880d3feee61c5c714acfb239ca392d85b898e7a19a77e1ec790cb" Nov 21 21:20:48 crc kubenswrapper[4727]: E1121 21:20:48.500218 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:20:52 crc kubenswrapper[4727]: I1121 21:20:52.112349 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9bz5h" Nov 21 21:20:52 crc 
kubenswrapper[4727]: I1121 21:20:52.113328 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9bz5h" Nov 21 21:20:52 crc kubenswrapper[4727]: I1121 21:20:52.185103 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9bz5h" Nov 21 21:20:52 crc kubenswrapper[4727]: I1121 21:20:52.712149 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9bz5h" Nov 21 21:20:52 crc kubenswrapper[4727]: I1121 21:20:52.797061 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9bz5h"] Nov 21 21:20:54 crc kubenswrapper[4727]: I1121 21:20:54.645391 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9bz5h" podUID="24eafe66-b9c8-4f82-8f93-4a5e1fe42d77" containerName="registry-server" containerID="cri-o://5d507956c3312f76debdcf30c043f95853f8e00428f31b005cad0e1d81470951" gracePeriod=2 Nov 21 21:20:55 crc kubenswrapper[4727]: I1121 21:20:55.342872 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9bz5h" Nov 21 21:20:55 crc kubenswrapper[4727]: I1121 21:20:55.479853 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24eafe66-b9c8-4f82-8f93-4a5e1fe42d77-catalog-content\") pod \"24eafe66-b9c8-4f82-8f93-4a5e1fe42d77\" (UID: \"24eafe66-b9c8-4f82-8f93-4a5e1fe42d77\") " Nov 21 21:20:55 crc kubenswrapper[4727]: I1121 21:20:55.480744 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7xmh\" (UniqueName: \"kubernetes.io/projected/24eafe66-b9c8-4f82-8f93-4a5e1fe42d77-kube-api-access-m7xmh\") pod \"24eafe66-b9c8-4f82-8f93-4a5e1fe42d77\" (UID: \"24eafe66-b9c8-4f82-8f93-4a5e1fe42d77\") " Nov 21 21:20:55 crc kubenswrapper[4727]: I1121 21:20:55.481025 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24eafe66-b9c8-4f82-8f93-4a5e1fe42d77-utilities\") pod \"24eafe66-b9c8-4f82-8f93-4a5e1fe42d77\" (UID: \"24eafe66-b9c8-4f82-8f93-4a5e1fe42d77\") " Nov 21 21:20:55 crc kubenswrapper[4727]: I1121 21:20:55.481985 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24eafe66-b9c8-4f82-8f93-4a5e1fe42d77-utilities" (OuterVolumeSpecName: "utilities") pod "24eafe66-b9c8-4f82-8f93-4a5e1fe42d77" (UID: "24eafe66-b9c8-4f82-8f93-4a5e1fe42d77"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 21:20:55 crc kubenswrapper[4727]: I1121 21:20:55.482442 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24eafe66-b9c8-4f82-8f93-4a5e1fe42d77-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 21:20:55 crc kubenswrapper[4727]: I1121 21:20:55.489499 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24eafe66-b9c8-4f82-8f93-4a5e1fe42d77-kube-api-access-m7xmh" (OuterVolumeSpecName: "kube-api-access-m7xmh") pod "24eafe66-b9c8-4f82-8f93-4a5e1fe42d77" (UID: "24eafe66-b9c8-4f82-8f93-4a5e1fe42d77"). InnerVolumeSpecName "kube-api-access-m7xmh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 21:20:55 crc kubenswrapper[4727]: I1121 21:20:55.532744 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24eafe66-b9c8-4f82-8f93-4a5e1fe42d77-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "24eafe66-b9c8-4f82-8f93-4a5e1fe42d77" (UID: "24eafe66-b9c8-4f82-8f93-4a5e1fe42d77"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 21:20:55 crc kubenswrapper[4727]: I1121 21:20:55.587611 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7xmh\" (UniqueName: \"kubernetes.io/projected/24eafe66-b9c8-4f82-8f93-4a5e1fe42d77-kube-api-access-m7xmh\") on node \"crc\" DevicePath \"\"" Nov 21 21:20:55 crc kubenswrapper[4727]: I1121 21:20:55.587648 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24eafe66-b9c8-4f82-8f93-4a5e1fe42d77-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 21:20:55 crc kubenswrapper[4727]: I1121 21:20:55.660920 4727 generic.go:334] "Generic (PLEG): container finished" podID="24eafe66-b9c8-4f82-8f93-4a5e1fe42d77" containerID="5d507956c3312f76debdcf30c043f95853f8e00428f31b005cad0e1d81470951" exitCode=0 Nov 21 21:20:55 crc kubenswrapper[4727]: I1121 21:20:55.660995 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9bz5h" Nov 21 21:20:55 crc kubenswrapper[4727]: I1121 21:20:55.661028 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9bz5h" event={"ID":"24eafe66-b9c8-4f82-8f93-4a5e1fe42d77","Type":"ContainerDied","Data":"5d507956c3312f76debdcf30c043f95853f8e00428f31b005cad0e1d81470951"} Nov 21 21:20:55 crc kubenswrapper[4727]: I1121 21:20:55.661110 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9bz5h" event={"ID":"24eafe66-b9c8-4f82-8f93-4a5e1fe42d77","Type":"ContainerDied","Data":"abc443d72565a5a824fd4cee6738dc6961c10eb9f9de792eb68472fd7cfaf21d"} Nov 21 21:20:55 crc kubenswrapper[4727]: I1121 21:20:55.661141 4727 scope.go:117] "RemoveContainer" containerID="5d507956c3312f76debdcf30c043f95853f8e00428f31b005cad0e1d81470951" Nov 21 21:20:55 crc kubenswrapper[4727]: I1121 21:20:55.694265 4727 scope.go:117] "RemoveContainer" 
containerID="49e01fe53844e8179699e2fbd0d8f70cc573e1b8dc6b10a4866967943f514fe3" Nov 21 21:20:55 crc kubenswrapper[4727]: I1121 21:20:55.697382 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9bz5h"] Nov 21 21:20:55 crc kubenswrapper[4727]: I1121 21:20:55.708848 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9bz5h"] Nov 21 21:20:55 crc kubenswrapper[4727]: I1121 21:20:55.720768 4727 scope.go:117] "RemoveContainer" containerID="03d32af7e7c93d50605a95c80f5577ceea638602427b8c216e89f2a3fdf18272" Nov 21 21:20:55 crc kubenswrapper[4727]: I1121 21:20:55.768667 4727 scope.go:117] "RemoveContainer" containerID="5d507956c3312f76debdcf30c043f95853f8e00428f31b005cad0e1d81470951" Nov 21 21:20:55 crc kubenswrapper[4727]: E1121 21:20:55.769346 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d507956c3312f76debdcf30c043f95853f8e00428f31b005cad0e1d81470951\": container with ID starting with 5d507956c3312f76debdcf30c043f95853f8e00428f31b005cad0e1d81470951 not found: ID does not exist" containerID="5d507956c3312f76debdcf30c043f95853f8e00428f31b005cad0e1d81470951" Nov 21 21:20:55 crc kubenswrapper[4727]: I1121 21:20:55.769393 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d507956c3312f76debdcf30c043f95853f8e00428f31b005cad0e1d81470951"} err="failed to get container status \"5d507956c3312f76debdcf30c043f95853f8e00428f31b005cad0e1d81470951\": rpc error: code = NotFound desc = could not find container \"5d507956c3312f76debdcf30c043f95853f8e00428f31b005cad0e1d81470951\": container with ID starting with 5d507956c3312f76debdcf30c043f95853f8e00428f31b005cad0e1d81470951 not found: ID does not exist" Nov 21 21:20:55 crc kubenswrapper[4727]: I1121 21:20:55.769427 4727 scope.go:117] "RemoveContainer" 
containerID="49e01fe53844e8179699e2fbd0d8f70cc573e1b8dc6b10a4866967943f514fe3" Nov 21 21:20:55 crc kubenswrapper[4727]: E1121 21:20:55.769828 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49e01fe53844e8179699e2fbd0d8f70cc573e1b8dc6b10a4866967943f514fe3\": container with ID starting with 49e01fe53844e8179699e2fbd0d8f70cc573e1b8dc6b10a4866967943f514fe3 not found: ID does not exist" containerID="49e01fe53844e8179699e2fbd0d8f70cc573e1b8dc6b10a4866967943f514fe3" Nov 21 21:20:55 crc kubenswrapper[4727]: I1121 21:20:55.769867 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49e01fe53844e8179699e2fbd0d8f70cc573e1b8dc6b10a4866967943f514fe3"} err="failed to get container status \"49e01fe53844e8179699e2fbd0d8f70cc573e1b8dc6b10a4866967943f514fe3\": rpc error: code = NotFound desc = could not find container \"49e01fe53844e8179699e2fbd0d8f70cc573e1b8dc6b10a4866967943f514fe3\": container with ID starting with 49e01fe53844e8179699e2fbd0d8f70cc573e1b8dc6b10a4866967943f514fe3 not found: ID does not exist" Nov 21 21:20:55 crc kubenswrapper[4727]: I1121 21:20:55.769901 4727 scope.go:117] "RemoveContainer" containerID="03d32af7e7c93d50605a95c80f5577ceea638602427b8c216e89f2a3fdf18272" Nov 21 21:20:55 crc kubenswrapper[4727]: E1121 21:20:55.770236 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03d32af7e7c93d50605a95c80f5577ceea638602427b8c216e89f2a3fdf18272\": container with ID starting with 03d32af7e7c93d50605a95c80f5577ceea638602427b8c216e89f2a3fdf18272 not found: ID does not exist" containerID="03d32af7e7c93d50605a95c80f5577ceea638602427b8c216e89f2a3fdf18272" Nov 21 21:20:55 crc kubenswrapper[4727]: I1121 21:20:55.770263 4727 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"03d32af7e7c93d50605a95c80f5577ceea638602427b8c216e89f2a3fdf18272"} err="failed to get container status \"03d32af7e7c93d50605a95c80f5577ceea638602427b8c216e89f2a3fdf18272\": rpc error: code = NotFound desc = could not find container \"03d32af7e7c93d50605a95c80f5577ceea638602427b8c216e89f2a3fdf18272\": container with ID starting with 03d32af7e7c93d50605a95c80f5577ceea638602427b8c216e89f2a3fdf18272 not found: ID does not exist" Nov 21 21:20:57 crc kubenswrapper[4727]: I1121 21:20:57.528155 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24eafe66-b9c8-4f82-8f93-4a5e1fe42d77" path="/var/lib/kubelet/pods/24eafe66-b9c8-4f82-8f93-4a5e1fe42d77/volumes" Nov 21 21:21:03 crc kubenswrapper[4727]: I1121 21:21:03.500294 4727 scope.go:117] "RemoveContainer" containerID="0fa05a8295e880d3feee61c5c714acfb239ca392d85b898e7a19a77e1ec790cb" Nov 21 21:21:03 crc kubenswrapper[4727]: E1121 21:21:03.501604 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:21:15 crc kubenswrapper[4727]: I1121 21:21:15.511512 4727 scope.go:117] "RemoveContainer" containerID="0fa05a8295e880d3feee61c5c714acfb239ca392d85b898e7a19a77e1ec790cb" Nov 21 21:21:15 crc kubenswrapper[4727]: E1121 21:21:15.514458 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:21:29 crc kubenswrapper[4727]: I1121 21:21:29.783925 4727 scope.go:117] "RemoveContainer" containerID="0fa05a8295e880d3feee61c5c714acfb239ca392d85b898e7a19a77e1ec790cb" Nov 21 21:21:29 crc kubenswrapper[4727]: E1121 21:21:29.802930 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:21:45 crc kubenswrapper[4727]: I1121 21:21:45.500577 4727 scope.go:117] "RemoveContainer" containerID="0fa05a8295e880d3feee61c5c714acfb239ca392d85b898e7a19a77e1ec790cb" Nov 21 21:21:45 crc kubenswrapper[4727]: E1121 21:21:45.501793 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:22:00 crc kubenswrapper[4727]: I1121 21:22:00.500437 4727 scope.go:117] "RemoveContainer" containerID="0fa05a8295e880d3feee61c5c714acfb239ca392d85b898e7a19a77e1ec790cb" Nov 21 21:22:00 crc kubenswrapper[4727]: E1121 21:22:00.501808 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:22:11 crc kubenswrapper[4727]: I1121 21:22:11.501236 4727 scope.go:117] "RemoveContainer" containerID="0fa05a8295e880d3feee61c5c714acfb239ca392d85b898e7a19a77e1ec790cb" Nov 21 21:22:11 crc kubenswrapper[4727]: E1121 21:22:11.502727 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:22:25 crc kubenswrapper[4727]: I1121 21:22:25.523610 4727 scope.go:117] "RemoveContainer" containerID="0fa05a8295e880d3feee61c5c714acfb239ca392d85b898e7a19a77e1ec790cb" Nov 21 21:22:25 crc kubenswrapper[4727]: E1121 21:22:25.525421 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:22:39 crc kubenswrapper[4727]: I1121 21:22:39.500818 4727 scope.go:117] "RemoveContainer" containerID="0fa05a8295e880d3feee61c5c714acfb239ca392d85b898e7a19a77e1ec790cb" Nov 21 21:22:39 crc kubenswrapper[4727]: E1121 21:22:39.502121 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:22:51 crc kubenswrapper[4727]: I1121 21:22:51.500470 4727 scope.go:117] "RemoveContainer" containerID="0fa05a8295e880d3feee61c5c714acfb239ca392d85b898e7a19a77e1ec790cb" Nov 21 21:22:51 crc kubenswrapper[4727]: E1121 21:22:51.502096 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:23:02 crc kubenswrapper[4727]: I1121 21:23:02.499524 4727 scope.go:117] "RemoveContainer" containerID="0fa05a8295e880d3feee61c5c714acfb239ca392d85b898e7a19a77e1ec790cb" Nov 21 21:23:02 crc kubenswrapper[4727]: E1121 21:23:02.500754 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:23:14 crc kubenswrapper[4727]: I1121 21:23:14.500146 4727 scope.go:117] "RemoveContainer" containerID="0fa05a8295e880d3feee61c5c714acfb239ca392d85b898e7a19a77e1ec790cb" Nov 21 21:23:14 crc kubenswrapper[4727]: E1121 21:23:14.501492 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:23:27 crc kubenswrapper[4727]: I1121 21:23:27.502150 4727 scope.go:117] "RemoveContainer" containerID="0fa05a8295e880d3feee61c5c714acfb239ca392d85b898e7a19a77e1ec790cb" Nov 21 21:23:27 crc kubenswrapper[4727]: E1121 21:23:27.503982 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:23:40 crc kubenswrapper[4727]: I1121 21:23:40.499162 4727 scope.go:117] "RemoveContainer" containerID="0fa05a8295e880d3feee61c5c714acfb239ca392d85b898e7a19a77e1ec790cb" Nov 21 21:23:40 crc kubenswrapper[4727]: E1121 21:23:40.500308 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:23:52 crc kubenswrapper[4727]: I1121 21:23:52.499735 4727 scope.go:117] "RemoveContainer" containerID="0fa05a8295e880d3feee61c5c714acfb239ca392d85b898e7a19a77e1ec790cb" Nov 21 21:23:52 crc kubenswrapper[4727]: E1121 21:23:52.501058 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:24:05 crc kubenswrapper[4727]: I1121 21:24:05.517706 4727 scope.go:117] "RemoveContainer" containerID="0fa05a8295e880d3feee61c5c714acfb239ca392d85b898e7a19a77e1ec790cb" Nov 21 21:24:05 crc kubenswrapper[4727]: E1121 21:24:05.519249 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:24:19 crc kubenswrapper[4727]: I1121 21:24:19.500500 4727 scope.go:117] "RemoveContainer" containerID="0fa05a8295e880d3feee61c5c714acfb239ca392d85b898e7a19a77e1ec790cb" Nov 21 21:24:20 crc kubenswrapper[4727]: I1121 21:24:20.699149 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" event={"ID":"b58aef8f-f223-47d8-a2e6-4a80aeeeec42","Type":"ContainerStarted","Data":"30f73b0822b19c755441e3b7b81ef5e9d5a5fafc212006c896eefd71beb0496d"} Nov 21 21:26:43 crc kubenswrapper[4727]: I1121 21:26:43.335636 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 21:26:43 crc kubenswrapper[4727]: I1121 21:26:43.336253 4727 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 21:27:07 crc kubenswrapper[4727]: I1121 21:27:07.949144 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gxxkn"] Nov 21 21:27:07 crc kubenswrapper[4727]: E1121 21:27:07.950815 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24eafe66-b9c8-4f82-8f93-4a5e1fe42d77" containerName="extract-content" Nov 21 21:27:07 crc kubenswrapper[4727]: I1121 21:27:07.950844 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="24eafe66-b9c8-4f82-8f93-4a5e1fe42d77" containerName="extract-content" Nov 21 21:27:07 crc kubenswrapper[4727]: E1121 21:27:07.950870 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24eafe66-b9c8-4f82-8f93-4a5e1fe42d77" containerName="registry-server" Nov 21 21:27:07 crc kubenswrapper[4727]: I1121 21:27:07.950883 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="24eafe66-b9c8-4f82-8f93-4a5e1fe42d77" containerName="registry-server" Nov 21 21:27:07 crc kubenswrapper[4727]: E1121 21:27:07.950953 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24eafe66-b9c8-4f82-8f93-4a5e1fe42d77" containerName="extract-utilities" Nov 21 21:27:07 crc kubenswrapper[4727]: I1121 21:27:07.950997 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="24eafe66-b9c8-4f82-8f93-4a5e1fe42d77" containerName="extract-utilities" Nov 21 21:27:07 crc kubenswrapper[4727]: I1121 21:27:07.951504 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="24eafe66-b9c8-4f82-8f93-4a5e1fe42d77" containerName="registry-server" Nov 21 21:27:07 crc kubenswrapper[4727]: I1121 21:27:07.955166 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gxxkn" Nov 21 21:27:07 crc kubenswrapper[4727]: I1121 21:27:07.968022 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gxxkn"] Nov 21 21:27:08 crc kubenswrapper[4727]: I1121 21:27:08.081113 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49aaa8f4-e683-458e-8ce0-1773bbecf541-utilities\") pod \"redhat-marketplace-gxxkn\" (UID: \"49aaa8f4-e683-458e-8ce0-1773bbecf541\") " pod="openshift-marketplace/redhat-marketplace-gxxkn" Nov 21 21:27:08 crc kubenswrapper[4727]: I1121 21:27:08.081188 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49aaa8f4-e683-458e-8ce0-1773bbecf541-catalog-content\") pod \"redhat-marketplace-gxxkn\" (UID: \"49aaa8f4-e683-458e-8ce0-1773bbecf541\") " pod="openshift-marketplace/redhat-marketplace-gxxkn" Nov 21 21:27:08 crc kubenswrapper[4727]: I1121 21:27:08.081262 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qblhg\" (UniqueName: \"kubernetes.io/projected/49aaa8f4-e683-458e-8ce0-1773bbecf541-kube-api-access-qblhg\") pod \"redhat-marketplace-gxxkn\" (UID: \"49aaa8f4-e683-458e-8ce0-1773bbecf541\") " pod="openshift-marketplace/redhat-marketplace-gxxkn" Nov 21 21:27:08 crc kubenswrapper[4727]: I1121 21:27:08.183837 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49aaa8f4-e683-458e-8ce0-1773bbecf541-utilities\") pod \"redhat-marketplace-gxxkn\" (UID: \"49aaa8f4-e683-458e-8ce0-1773bbecf541\") " pod="openshift-marketplace/redhat-marketplace-gxxkn" Nov 21 21:27:08 crc kubenswrapper[4727]: I1121 21:27:08.183946 4727 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49aaa8f4-e683-458e-8ce0-1773bbecf541-catalog-content\") pod \"redhat-marketplace-gxxkn\" (UID: \"49aaa8f4-e683-458e-8ce0-1773bbecf541\") " pod="openshift-marketplace/redhat-marketplace-gxxkn" Nov 21 21:27:08 crc kubenswrapper[4727]: I1121 21:27:08.184150 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qblhg\" (UniqueName: \"kubernetes.io/projected/49aaa8f4-e683-458e-8ce0-1773bbecf541-kube-api-access-qblhg\") pod \"redhat-marketplace-gxxkn\" (UID: \"49aaa8f4-e683-458e-8ce0-1773bbecf541\") " pod="openshift-marketplace/redhat-marketplace-gxxkn" Nov 21 21:27:08 crc kubenswrapper[4727]: I1121 21:27:08.184378 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49aaa8f4-e683-458e-8ce0-1773bbecf541-utilities\") pod \"redhat-marketplace-gxxkn\" (UID: \"49aaa8f4-e683-458e-8ce0-1773bbecf541\") " pod="openshift-marketplace/redhat-marketplace-gxxkn" Nov 21 21:27:08 crc kubenswrapper[4727]: I1121 21:27:08.184502 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49aaa8f4-e683-458e-8ce0-1773bbecf541-catalog-content\") pod \"redhat-marketplace-gxxkn\" (UID: \"49aaa8f4-e683-458e-8ce0-1773bbecf541\") " pod="openshift-marketplace/redhat-marketplace-gxxkn" Nov 21 21:27:08 crc kubenswrapper[4727]: I1121 21:27:08.206006 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qblhg\" (UniqueName: \"kubernetes.io/projected/49aaa8f4-e683-458e-8ce0-1773bbecf541-kube-api-access-qblhg\") pod \"redhat-marketplace-gxxkn\" (UID: \"49aaa8f4-e683-458e-8ce0-1773bbecf541\") " pod="openshift-marketplace/redhat-marketplace-gxxkn" Nov 21 21:27:08 crc kubenswrapper[4727]: I1121 21:27:08.294241 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gxxkn" Nov 21 21:27:08 crc kubenswrapper[4727]: W1121 21:27:08.803331 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49aaa8f4_e683_458e_8ce0_1773bbecf541.slice/crio-6c8ff3c290d16c41a562f075a8b94567e7e0e9821c5dca0fc5a2aeecf99d6814 WatchSource:0}: Error finding container 6c8ff3c290d16c41a562f075a8b94567e7e0e9821c5dca0fc5a2aeecf99d6814: Status 404 returned error can't find the container with id 6c8ff3c290d16c41a562f075a8b94567e7e0e9821c5dca0fc5a2aeecf99d6814 Nov 21 21:27:08 crc kubenswrapper[4727]: I1121 21:27:08.805291 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gxxkn"] Nov 21 21:27:09 crc kubenswrapper[4727]: I1121 21:27:09.116180 4727 generic.go:334] "Generic (PLEG): container finished" podID="49aaa8f4-e683-458e-8ce0-1773bbecf541" containerID="0d92179c07dad3919828594bbee73cbf1c1c86f4c40f228312d7c6d688edeaa5" exitCode=0 Nov 21 21:27:09 crc kubenswrapper[4727]: I1121 21:27:09.116257 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gxxkn" event={"ID":"49aaa8f4-e683-458e-8ce0-1773bbecf541","Type":"ContainerDied","Data":"0d92179c07dad3919828594bbee73cbf1c1c86f4c40f228312d7c6d688edeaa5"} Nov 21 21:27:09 crc kubenswrapper[4727]: I1121 21:27:09.116587 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gxxkn" event={"ID":"49aaa8f4-e683-458e-8ce0-1773bbecf541","Type":"ContainerStarted","Data":"6c8ff3c290d16c41a562f075a8b94567e7e0e9821c5dca0fc5a2aeecf99d6814"} Nov 21 21:27:09 crc kubenswrapper[4727]: I1121 21:27:09.119759 4727 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 21 21:27:10 crc kubenswrapper[4727]: I1121 21:27:10.130153 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-gxxkn" event={"ID":"49aaa8f4-e683-458e-8ce0-1773bbecf541","Type":"ContainerStarted","Data":"57bf806354b3d58177a1eb26fee686ddb014035c7f3a735fc2c1993d34ccd9e4"} Nov 21 21:27:11 crc kubenswrapper[4727]: I1121 21:27:11.144326 4727 generic.go:334] "Generic (PLEG): container finished" podID="49aaa8f4-e683-458e-8ce0-1773bbecf541" containerID="57bf806354b3d58177a1eb26fee686ddb014035c7f3a735fc2c1993d34ccd9e4" exitCode=0 Nov 21 21:27:11 crc kubenswrapper[4727]: I1121 21:27:11.144375 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gxxkn" event={"ID":"49aaa8f4-e683-458e-8ce0-1773bbecf541","Type":"ContainerDied","Data":"57bf806354b3d58177a1eb26fee686ddb014035c7f3a735fc2c1993d34ccd9e4"} Nov 21 21:27:12 crc kubenswrapper[4727]: I1121 21:27:12.171791 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gxxkn" event={"ID":"49aaa8f4-e683-458e-8ce0-1773bbecf541","Type":"ContainerStarted","Data":"eb4b6b5db58eb0359d3703e8d01b661d4356e037ea7510b3498c17b1275960e9"} Nov 21 21:27:12 crc kubenswrapper[4727]: I1121 21:27:12.191869 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gxxkn" podStartSLOduration=2.572228114 podStartE2EDuration="5.191850605s" podCreationTimestamp="2025-11-21 21:27:07 +0000 UTC" firstStartedPulling="2025-11-21 21:27:09.119532754 +0000 UTC m=+4834.305717798" lastFinishedPulling="2025-11-21 21:27:11.739155245 +0000 UTC m=+4836.925340289" observedRunningTime="2025-11-21 21:27:12.188190997 +0000 UTC m=+4837.374376041" watchObservedRunningTime="2025-11-21 21:27:12.191850605 +0000 UTC m=+4837.378035649" Nov 21 21:27:13 crc kubenswrapper[4727]: I1121 21:27:13.336074 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 21:27:13 crc kubenswrapper[4727]: I1121 21:27:13.336426 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 21:27:18 crc kubenswrapper[4727]: I1121 21:27:18.294584 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gxxkn" Nov 21 21:27:18 crc kubenswrapper[4727]: I1121 21:27:18.295155 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gxxkn" Nov 21 21:27:18 crc kubenswrapper[4727]: I1121 21:27:18.350367 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gxxkn" Nov 21 21:27:19 crc kubenswrapper[4727]: I1121 21:27:19.356562 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gxxkn" Nov 21 21:27:19 crc kubenswrapper[4727]: I1121 21:27:19.521803 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gxxkn"] Nov 21 21:27:21 crc kubenswrapper[4727]: I1121 21:27:21.285679 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gxxkn" podUID="49aaa8f4-e683-458e-8ce0-1773bbecf541" containerName="registry-server" containerID="cri-o://eb4b6b5db58eb0359d3703e8d01b661d4356e037ea7510b3498c17b1275960e9" gracePeriod=2 Nov 21 21:27:21 crc kubenswrapper[4727]: I1121 21:27:21.916273 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gxxkn" Nov 21 21:27:21 crc kubenswrapper[4727]: I1121 21:27:21.977139 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qblhg\" (UniqueName: \"kubernetes.io/projected/49aaa8f4-e683-458e-8ce0-1773bbecf541-kube-api-access-qblhg\") pod \"49aaa8f4-e683-458e-8ce0-1773bbecf541\" (UID: \"49aaa8f4-e683-458e-8ce0-1773bbecf541\") " Nov 21 21:27:21 crc kubenswrapper[4727]: I1121 21:27:21.977318 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49aaa8f4-e683-458e-8ce0-1773bbecf541-utilities\") pod \"49aaa8f4-e683-458e-8ce0-1773bbecf541\" (UID: \"49aaa8f4-e683-458e-8ce0-1773bbecf541\") " Nov 21 21:27:21 crc kubenswrapper[4727]: I1121 21:27:21.977456 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49aaa8f4-e683-458e-8ce0-1773bbecf541-catalog-content\") pod \"49aaa8f4-e683-458e-8ce0-1773bbecf541\" (UID: \"49aaa8f4-e683-458e-8ce0-1773bbecf541\") " Nov 21 21:27:21 crc kubenswrapper[4727]: I1121 21:27:21.978388 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49aaa8f4-e683-458e-8ce0-1773bbecf541-utilities" (OuterVolumeSpecName: "utilities") pod "49aaa8f4-e683-458e-8ce0-1773bbecf541" (UID: "49aaa8f4-e683-458e-8ce0-1773bbecf541"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 21:27:21 crc kubenswrapper[4727]: I1121 21:27:21.985924 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49aaa8f4-e683-458e-8ce0-1773bbecf541-kube-api-access-qblhg" (OuterVolumeSpecName: "kube-api-access-qblhg") pod "49aaa8f4-e683-458e-8ce0-1773bbecf541" (UID: "49aaa8f4-e683-458e-8ce0-1773bbecf541"). InnerVolumeSpecName "kube-api-access-qblhg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 21:27:21 crc kubenswrapper[4727]: I1121 21:27:21.999396 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49aaa8f4-e683-458e-8ce0-1773bbecf541-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "49aaa8f4-e683-458e-8ce0-1773bbecf541" (UID: "49aaa8f4-e683-458e-8ce0-1773bbecf541"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 21:27:22 crc kubenswrapper[4727]: I1121 21:27:22.081148 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qblhg\" (UniqueName: \"kubernetes.io/projected/49aaa8f4-e683-458e-8ce0-1773bbecf541-kube-api-access-qblhg\") on node \"crc\" DevicePath \"\"" Nov 21 21:27:22 crc kubenswrapper[4727]: I1121 21:27:22.081191 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49aaa8f4-e683-458e-8ce0-1773bbecf541-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 21:27:22 crc kubenswrapper[4727]: I1121 21:27:22.081205 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49aaa8f4-e683-458e-8ce0-1773bbecf541-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 21:27:22 crc kubenswrapper[4727]: I1121 21:27:22.309918 4727 generic.go:334] "Generic (PLEG): container finished" podID="49aaa8f4-e683-458e-8ce0-1773bbecf541" containerID="eb4b6b5db58eb0359d3703e8d01b661d4356e037ea7510b3498c17b1275960e9" exitCode=0 Nov 21 21:27:22 crc kubenswrapper[4727]: I1121 21:27:22.310011 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gxxkn" event={"ID":"49aaa8f4-e683-458e-8ce0-1773bbecf541","Type":"ContainerDied","Data":"eb4b6b5db58eb0359d3703e8d01b661d4356e037ea7510b3498c17b1275960e9"} Nov 21 21:27:22 crc kubenswrapper[4727]: I1121 21:27:22.310056 4727 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-gxxkn" event={"ID":"49aaa8f4-e683-458e-8ce0-1773bbecf541","Type":"ContainerDied","Data":"6c8ff3c290d16c41a562f075a8b94567e7e0e9821c5dca0fc5a2aeecf99d6814"} Nov 21 21:27:22 crc kubenswrapper[4727]: I1121 21:27:22.310089 4727 scope.go:117] "RemoveContainer" containerID="eb4b6b5db58eb0359d3703e8d01b661d4356e037ea7510b3498c17b1275960e9" Nov 21 21:27:22 crc kubenswrapper[4727]: I1121 21:27:22.310592 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gxxkn" Nov 21 21:27:22 crc kubenswrapper[4727]: I1121 21:27:22.347813 4727 scope.go:117] "RemoveContainer" containerID="57bf806354b3d58177a1eb26fee686ddb014035c7f3a735fc2c1993d34ccd9e4" Nov 21 21:27:22 crc kubenswrapper[4727]: I1121 21:27:22.360945 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gxxkn"] Nov 21 21:27:22 crc kubenswrapper[4727]: I1121 21:27:22.371795 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gxxkn"] Nov 21 21:27:22 crc kubenswrapper[4727]: I1121 21:27:22.385293 4727 scope.go:117] "RemoveContainer" containerID="0d92179c07dad3919828594bbee73cbf1c1c86f4c40f228312d7c6d688edeaa5" Nov 21 21:27:22 crc kubenswrapper[4727]: I1121 21:27:22.436373 4727 scope.go:117] "RemoveContainer" containerID="eb4b6b5db58eb0359d3703e8d01b661d4356e037ea7510b3498c17b1275960e9" Nov 21 21:27:22 crc kubenswrapper[4727]: E1121 21:27:22.436888 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb4b6b5db58eb0359d3703e8d01b661d4356e037ea7510b3498c17b1275960e9\": container with ID starting with eb4b6b5db58eb0359d3703e8d01b661d4356e037ea7510b3498c17b1275960e9 not found: ID does not exist" containerID="eb4b6b5db58eb0359d3703e8d01b661d4356e037ea7510b3498c17b1275960e9" Nov 21 21:27:22 crc kubenswrapper[4727]: I1121 21:27:22.436943 4727 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb4b6b5db58eb0359d3703e8d01b661d4356e037ea7510b3498c17b1275960e9"} err="failed to get container status \"eb4b6b5db58eb0359d3703e8d01b661d4356e037ea7510b3498c17b1275960e9\": rpc error: code = NotFound desc = could not find container \"eb4b6b5db58eb0359d3703e8d01b661d4356e037ea7510b3498c17b1275960e9\": container with ID starting with eb4b6b5db58eb0359d3703e8d01b661d4356e037ea7510b3498c17b1275960e9 not found: ID does not exist" Nov 21 21:27:22 crc kubenswrapper[4727]: I1121 21:27:22.436995 4727 scope.go:117] "RemoveContainer" containerID="57bf806354b3d58177a1eb26fee686ddb014035c7f3a735fc2c1993d34ccd9e4" Nov 21 21:27:22 crc kubenswrapper[4727]: E1121 21:27:22.437575 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57bf806354b3d58177a1eb26fee686ddb014035c7f3a735fc2c1993d34ccd9e4\": container with ID starting with 57bf806354b3d58177a1eb26fee686ddb014035c7f3a735fc2c1993d34ccd9e4 not found: ID does not exist" containerID="57bf806354b3d58177a1eb26fee686ddb014035c7f3a735fc2c1993d34ccd9e4" Nov 21 21:27:22 crc kubenswrapper[4727]: I1121 21:27:22.437624 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57bf806354b3d58177a1eb26fee686ddb014035c7f3a735fc2c1993d34ccd9e4"} err="failed to get container status \"57bf806354b3d58177a1eb26fee686ddb014035c7f3a735fc2c1993d34ccd9e4\": rpc error: code = NotFound desc = could not find container \"57bf806354b3d58177a1eb26fee686ddb014035c7f3a735fc2c1993d34ccd9e4\": container with ID starting with 57bf806354b3d58177a1eb26fee686ddb014035c7f3a735fc2c1993d34ccd9e4 not found: ID does not exist" Nov 21 21:27:22 crc kubenswrapper[4727]: I1121 21:27:22.437658 4727 scope.go:117] "RemoveContainer" containerID="0d92179c07dad3919828594bbee73cbf1c1c86f4c40f228312d7c6d688edeaa5" Nov 21 21:27:22 crc kubenswrapper[4727]: E1121 
21:27:22.438200 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d92179c07dad3919828594bbee73cbf1c1c86f4c40f228312d7c6d688edeaa5\": container with ID starting with 0d92179c07dad3919828594bbee73cbf1c1c86f4c40f228312d7c6d688edeaa5 not found: ID does not exist" containerID="0d92179c07dad3919828594bbee73cbf1c1c86f4c40f228312d7c6d688edeaa5" Nov 21 21:27:22 crc kubenswrapper[4727]: I1121 21:27:22.438228 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d92179c07dad3919828594bbee73cbf1c1c86f4c40f228312d7c6d688edeaa5"} err="failed to get container status \"0d92179c07dad3919828594bbee73cbf1c1c86f4c40f228312d7c6d688edeaa5\": rpc error: code = NotFound desc = could not find container \"0d92179c07dad3919828594bbee73cbf1c1c86f4c40f228312d7c6d688edeaa5\": container with ID starting with 0d92179c07dad3919828594bbee73cbf1c1c86f4c40f228312d7c6d688edeaa5 not found: ID does not exist" Nov 21 21:27:23 crc kubenswrapper[4727]: I1121 21:27:23.521380 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49aaa8f4-e683-458e-8ce0-1773bbecf541" path="/var/lib/kubelet/pods/49aaa8f4-e683-458e-8ce0-1773bbecf541/volumes" Nov 21 21:27:43 crc kubenswrapper[4727]: I1121 21:27:43.335369 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 21:27:43 crc kubenswrapper[4727]: I1121 21:27:43.336025 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Nov 21 21:27:43 crc kubenswrapper[4727]: I1121 21:27:43.336095 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" Nov 21 21:27:43 crc kubenswrapper[4727]: I1121 21:27:43.337410 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"30f73b0822b19c755441e3b7b81ef5e9d5a5fafc212006c896eefd71beb0496d"} pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 21:27:43 crc kubenswrapper[4727]: I1121 21:27:43.337513 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" containerID="cri-o://30f73b0822b19c755441e3b7b81ef5e9d5a5fafc212006c896eefd71beb0496d" gracePeriod=600 Nov 21 21:27:43 crc kubenswrapper[4727]: I1121 21:27:43.607812 4727 generic.go:334] "Generic (PLEG): container finished" podID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerID="30f73b0822b19c755441e3b7b81ef5e9d5a5fafc212006c896eefd71beb0496d" exitCode=0 Nov 21 21:27:43 crc kubenswrapper[4727]: I1121 21:27:43.607907 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" event={"ID":"b58aef8f-f223-47d8-a2e6-4a80aeeeec42","Type":"ContainerDied","Data":"30f73b0822b19c755441e3b7b81ef5e9d5a5fafc212006c896eefd71beb0496d"} Nov 21 21:27:43 crc kubenswrapper[4727]: I1121 21:27:43.608592 4727 scope.go:117] "RemoveContainer" containerID="0fa05a8295e880d3feee61c5c714acfb239ca392d85b898e7a19a77e1ec790cb" Nov 21 21:27:44 crc kubenswrapper[4727]: I1121 21:27:44.633578 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" 
event={"ID":"b58aef8f-f223-47d8-a2e6-4a80aeeeec42","Type":"ContainerStarted","Data":"75b59943a9026952eacc506ea41581cd574e08a8954bdaa02591aa1e2e0e7872"} Nov 21 21:29:35 crc kubenswrapper[4727]: E1121 21:29:35.926895 4727 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.179:43774->38.102.83.179:43311: read tcp 38.102.83.179:43774->38.102.83.179:43311: read: connection reset by peer Nov 21 21:29:43 crc kubenswrapper[4727]: I1121 21:29:43.335515 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 21:29:43 crc kubenswrapper[4727]: I1121 21:29:43.336005 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 21:30:00 crc kubenswrapper[4727]: I1121 21:30:00.159386 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396010-gn5dt"] Nov 21 21:30:00 crc kubenswrapper[4727]: E1121 21:30:00.160364 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49aaa8f4-e683-458e-8ce0-1773bbecf541" containerName="extract-content" Nov 21 21:30:00 crc kubenswrapper[4727]: I1121 21:30:00.160377 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="49aaa8f4-e683-458e-8ce0-1773bbecf541" containerName="extract-content" Nov 21 21:30:00 crc kubenswrapper[4727]: E1121 21:30:00.160427 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49aaa8f4-e683-458e-8ce0-1773bbecf541" containerName="registry-server" Nov 21 21:30:00 crc 
kubenswrapper[4727]: I1121 21:30:00.160435 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="49aaa8f4-e683-458e-8ce0-1773bbecf541" containerName="registry-server" Nov 21 21:30:00 crc kubenswrapper[4727]: E1121 21:30:00.160452 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49aaa8f4-e683-458e-8ce0-1773bbecf541" containerName="extract-utilities" Nov 21 21:30:00 crc kubenswrapper[4727]: I1121 21:30:00.160460 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="49aaa8f4-e683-458e-8ce0-1773bbecf541" containerName="extract-utilities" Nov 21 21:30:00 crc kubenswrapper[4727]: I1121 21:30:00.160668 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="49aaa8f4-e683-458e-8ce0-1773bbecf541" containerName="registry-server" Nov 21 21:30:00 crc kubenswrapper[4727]: I1121 21:30:00.161511 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396010-gn5dt" Nov 21 21:30:00 crc kubenswrapper[4727]: I1121 21:30:00.167508 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 21 21:30:00 crc kubenswrapper[4727]: I1121 21:30:00.167513 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 21 21:30:00 crc kubenswrapper[4727]: I1121 21:30:00.182365 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396010-gn5dt"] Nov 21 21:30:00 crc kubenswrapper[4727]: I1121 21:30:00.227535 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/33167c55-388a-4b7c-91a4-5284c3bf991d-config-volume\") pod \"collect-profiles-29396010-gn5dt\" (UID: \"33167c55-388a-4b7c-91a4-5284c3bf991d\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29396010-gn5dt" Nov 21 21:30:00 crc kubenswrapper[4727]: I1121 21:30:00.227650 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/33167c55-388a-4b7c-91a4-5284c3bf991d-secret-volume\") pod \"collect-profiles-29396010-gn5dt\" (UID: \"33167c55-388a-4b7c-91a4-5284c3bf991d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396010-gn5dt" Nov 21 21:30:00 crc kubenswrapper[4727]: I1121 21:30:00.227808 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2cpv\" (UniqueName: \"kubernetes.io/projected/33167c55-388a-4b7c-91a4-5284c3bf991d-kube-api-access-q2cpv\") pod \"collect-profiles-29396010-gn5dt\" (UID: \"33167c55-388a-4b7c-91a4-5284c3bf991d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396010-gn5dt" Nov 21 21:30:00 crc kubenswrapper[4727]: I1121 21:30:00.330215 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/33167c55-388a-4b7c-91a4-5284c3bf991d-config-volume\") pod \"collect-profiles-29396010-gn5dt\" (UID: \"33167c55-388a-4b7c-91a4-5284c3bf991d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396010-gn5dt" Nov 21 21:30:00 crc kubenswrapper[4727]: I1121 21:30:00.330316 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/33167c55-388a-4b7c-91a4-5284c3bf991d-secret-volume\") pod \"collect-profiles-29396010-gn5dt\" (UID: \"33167c55-388a-4b7c-91a4-5284c3bf991d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396010-gn5dt" Nov 21 21:30:00 crc kubenswrapper[4727]: I1121 21:30:00.330459 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2cpv\" (UniqueName: 
\"kubernetes.io/projected/33167c55-388a-4b7c-91a4-5284c3bf991d-kube-api-access-q2cpv\") pod \"collect-profiles-29396010-gn5dt\" (UID: \"33167c55-388a-4b7c-91a4-5284c3bf991d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396010-gn5dt" Nov 21 21:30:00 crc kubenswrapper[4727]: I1121 21:30:00.331751 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/33167c55-388a-4b7c-91a4-5284c3bf991d-config-volume\") pod \"collect-profiles-29396010-gn5dt\" (UID: \"33167c55-388a-4b7c-91a4-5284c3bf991d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396010-gn5dt" Nov 21 21:30:00 crc kubenswrapper[4727]: I1121 21:30:00.337511 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/33167c55-388a-4b7c-91a4-5284c3bf991d-secret-volume\") pod \"collect-profiles-29396010-gn5dt\" (UID: \"33167c55-388a-4b7c-91a4-5284c3bf991d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396010-gn5dt" Nov 21 21:30:00 crc kubenswrapper[4727]: I1121 21:30:00.351851 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2cpv\" (UniqueName: \"kubernetes.io/projected/33167c55-388a-4b7c-91a4-5284c3bf991d-kube-api-access-q2cpv\") pod \"collect-profiles-29396010-gn5dt\" (UID: \"33167c55-388a-4b7c-91a4-5284c3bf991d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396010-gn5dt" Nov 21 21:30:00 crc kubenswrapper[4727]: I1121 21:30:00.494942 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396010-gn5dt" Nov 21 21:30:01 crc kubenswrapper[4727]: I1121 21:30:01.003476 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396010-gn5dt"] Nov 21 21:30:01 crc kubenswrapper[4727]: I1121 21:30:01.464930 4727 generic.go:334] "Generic (PLEG): container finished" podID="33167c55-388a-4b7c-91a4-5284c3bf991d" containerID="4a9a6b07f21fb06b7fc896d1a70bf9b25fdb0e2651e71c109256f6327df13d54" exitCode=0 Nov 21 21:30:01 crc kubenswrapper[4727]: I1121 21:30:01.465017 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396010-gn5dt" event={"ID":"33167c55-388a-4b7c-91a4-5284c3bf991d","Type":"ContainerDied","Data":"4a9a6b07f21fb06b7fc896d1a70bf9b25fdb0e2651e71c109256f6327df13d54"} Nov 21 21:30:01 crc kubenswrapper[4727]: I1121 21:30:01.465457 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396010-gn5dt" event={"ID":"33167c55-388a-4b7c-91a4-5284c3bf991d","Type":"ContainerStarted","Data":"ed4f612b3ab879c085aebb4a4679d8e2599626cdabc505d891747e026ecde25e"} Nov 21 21:30:02 crc kubenswrapper[4727]: I1121 21:30:02.897808 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396010-gn5dt" Nov 21 21:30:03 crc kubenswrapper[4727]: I1121 21:30:03.009072 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2cpv\" (UniqueName: \"kubernetes.io/projected/33167c55-388a-4b7c-91a4-5284c3bf991d-kube-api-access-q2cpv\") pod \"33167c55-388a-4b7c-91a4-5284c3bf991d\" (UID: \"33167c55-388a-4b7c-91a4-5284c3bf991d\") " Nov 21 21:30:03 crc kubenswrapper[4727]: I1121 21:30:03.009494 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/33167c55-388a-4b7c-91a4-5284c3bf991d-config-volume\") pod \"33167c55-388a-4b7c-91a4-5284c3bf991d\" (UID: \"33167c55-388a-4b7c-91a4-5284c3bf991d\") " Nov 21 21:30:03 crc kubenswrapper[4727]: I1121 21:30:03.009738 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/33167c55-388a-4b7c-91a4-5284c3bf991d-secret-volume\") pod \"33167c55-388a-4b7c-91a4-5284c3bf991d\" (UID: \"33167c55-388a-4b7c-91a4-5284c3bf991d\") " Nov 21 21:30:03 crc kubenswrapper[4727]: I1121 21:30:03.010206 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33167c55-388a-4b7c-91a4-5284c3bf991d-config-volume" (OuterVolumeSpecName: "config-volume") pod "33167c55-388a-4b7c-91a4-5284c3bf991d" (UID: "33167c55-388a-4b7c-91a4-5284c3bf991d"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 21:30:03 crc kubenswrapper[4727]: I1121 21:30:03.010528 4727 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/33167c55-388a-4b7c-91a4-5284c3bf991d-config-volume\") on node \"crc\" DevicePath \"\"" Nov 21 21:30:03 crc kubenswrapper[4727]: I1121 21:30:03.015809 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33167c55-388a-4b7c-91a4-5284c3bf991d-kube-api-access-q2cpv" (OuterVolumeSpecName: "kube-api-access-q2cpv") pod "33167c55-388a-4b7c-91a4-5284c3bf991d" (UID: "33167c55-388a-4b7c-91a4-5284c3bf991d"). InnerVolumeSpecName "kube-api-access-q2cpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 21:30:03 crc kubenswrapper[4727]: I1121 21:30:03.027771 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33167c55-388a-4b7c-91a4-5284c3bf991d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "33167c55-388a-4b7c-91a4-5284c3bf991d" (UID: "33167c55-388a-4b7c-91a4-5284c3bf991d"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 21:30:03 crc kubenswrapper[4727]: I1121 21:30:03.112289 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2cpv\" (UniqueName: \"kubernetes.io/projected/33167c55-388a-4b7c-91a4-5284c3bf991d-kube-api-access-q2cpv\") on node \"crc\" DevicePath \"\"" Nov 21 21:30:03 crc kubenswrapper[4727]: I1121 21:30:03.112645 4727 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/33167c55-388a-4b7c-91a4-5284c3bf991d-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 21 21:30:03 crc kubenswrapper[4727]: I1121 21:30:03.494161 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396010-gn5dt" event={"ID":"33167c55-388a-4b7c-91a4-5284c3bf991d","Type":"ContainerDied","Data":"ed4f612b3ab879c085aebb4a4679d8e2599626cdabc505d891747e026ecde25e"} Nov 21 21:30:03 crc kubenswrapper[4727]: I1121 21:30:03.494724 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed4f612b3ab879c085aebb4a4679d8e2599626cdabc505d891747e026ecde25e" Nov 21 21:30:03 crc kubenswrapper[4727]: I1121 21:30:03.494228 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396010-gn5dt" Nov 21 21:30:03 crc kubenswrapper[4727]: I1121 21:30:03.996021 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395965-v8xnn"] Nov 21 21:30:04 crc kubenswrapper[4727]: I1121 21:30:04.024562 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395965-v8xnn"] Nov 21 21:30:05 crc kubenswrapper[4727]: I1121 21:30:05.517516 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81419a0f-aa23-4559-b050-c4f71ab63409" path="/var/lib/kubelet/pods/81419a0f-aa23-4559-b050-c4f71ab63409/volumes" Nov 21 21:30:13 crc kubenswrapper[4727]: I1121 21:30:13.335152 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 21:30:13 crc kubenswrapper[4727]: I1121 21:30:13.335808 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 21:30:24 crc kubenswrapper[4727]: I1121 21:30:24.313586 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-p246x"] Nov 21 21:30:24 crc kubenswrapper[4727]: E1121 21:30:24.314905 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33167c55-388a-4b7c-91a4-5284c3bf991d" containerName="collect-profiles" Nov 21 21:30:24 crc kubenswrapper[4727]: I1121 21:30:24.314922 4727 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="33167c55-388a-4b7c-91a4-5284c3bf991d" containerName="collect-profiles" Nov 21 21:30:24 crc kubenswrapper[4727]: I1121 21:30:24.315318 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="33167c55-388a-4b7c-91a4-5284c3bf991d" containerName="collect-profiles" Nov 21 21:30:24 crc kubenswrapper[4727]: I1121 21:30:24.317750 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p246x" Nov 21 21:30:24 crc kubenswrapper[4727]: I1121 21:30:24.345349 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p246x"] Nov 21 21:30:24 crc kubenswrapper[4727]: I1121 21:30:24.406865 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7cddb78-6ecc-4296-8bb7-9613563b51f0-catalog-content\") pod \"certified-operators-p246x\" (UID: \"b7cddb78-6ecc-4296-8bb7-9613563b51f0\") " pod="openshift-marketplace/certified-operators-p246x" Nov 21 21:30:24 crc kubenswrapper[4727]: I1121 21:30:24.407078 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7cddb78-6ecc-4296-8bb7-9613563b51f0-utilities\") pod \"certified-operators-p246x\" (UID: \"b7cddb78-6ecc-4296-8bb7-9613563b51f0\") " pod="openshift-marketplace/certified-operators-p246x" Nov 21 21:30:24 crc kubenswrapper[4727]: I1121 21:30:24.407584 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrq54\" (UniqueName: \"kubernetes.io/projected/b7cddb78-6ecc-4296-8bb7-9613563b51f0-kube-api-access-hrq54\") pod \"certified-operators-p246x\" (UID: \"b7cddb78-6ecc-4296-8bb7-9613563b51f0\") " pod="openshift-marketplace/certified-operators-p246x" Nov 21 21:30:24 crc kubenswrapper[4727]: I1121 21:30:24.510280 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7cddb78-6ecc-4296-8bb7-9613563b51f0-catalog-content\") pod \"certified-operators-p246x\" (UID: \"b7cddb78-6ecc-4296-8bb7-9613563b51f0\") " pod="openshift-marketplace/certified-operators-p246x" Nov 21 21:30:24 crc kubenswrapper[4727]: I1121 21:30:24.510393 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7cddb78-6ecc-4296-8bb7-9613563b51f0-utilities\") pod \"certified-operators-p246x\" (UID: \"b7cddb78-6ecc-4296-8bb7-9613563b51f0\") " pod="openshift-marketplace/certified-operators-p246x" Nov 21 21:30:24 crc kubenswrapper[4727]: I1121 21:30:24.510725 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrq54\" (UniqueName: \"kubernetes.io/projected/b7cddb78-6ecc-4296-8bb7-9613563b51f0-kube-api-access-hrq54\") pod \"certified-operators-p246x\" (UID: \"b7cddb78-6ecc-4296-8bb7-9613563b51f0\") " pod="openshift-marketplace/certified-operators-p246x" Nov 21 21:30:24 crc kubenswrapper[4727]: I1121 21:30:24.510923 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7cddb78-6ecc-4296-8bb7-9613563b51f0-catalog-content\") pod \"certified-operators-p246x\" (UID: \"b7cddb78-6ecc-4296-8bb7-9613563b51f0\") " pod="openshift-marketplace/certified-operators-p246x" Nov 21 21:30:24 crc kubenswrapper[4727]: I1121 21:30:24.511088 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7cddb78-6ecc-4296-8bb7-9613563b51f0-utilities\") pod \"certified-operators-p246x\" (UID: \"b7cddb78-6ecc-4296-8bb7-9613563b51f0\") " pod="openshift-marketplace/certified-operators-p246x" Nov 21 21:30:24 crc kubenswrapper[4727]: I1121 21:30:24.534045 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-hrq54\" (UniqueName: \"kubernetes.io/projected/b7cddb78-6ecc-4296-8bb7-9613563b51f0-kube-api-access-hrq54\") pod \"certified-operators-p246x\" (UID: \"b7cddb78-6ecc-4296-8bb7-9613563b51f0\") " pod="openshift-marketplace/certified-operators-p246x" Nov 21 21:30:24 crc kubenswrapper[4727]: I1121 21:30:24.653014 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p246x" Nov 21 21:30:25 crc kubenswrapper[4727]: I1121 21:30:25.197800 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p246x"] Nov 21 21:30:25 crc kubenswrapper[4727]: I1121 21:30:25.831257 4727 generic.go:334] "Generic (PLEG): container finished" podID="b7cddb78-6ecc-4296-8bb7-9613563b51f0" containerID="fab1490cecdb907d47a767535f3508d97649148d90a408b047b4737b06e0e6eb" exitCode=0 Nov 21 21:30:25 crc kubenswrapper[4727]: I1121 21:30:25.831321 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p246x" event={"ID":"b7cddb78-6ecc-4296-8bb7-9613563b51f0","Type":"ContainerDied","Data":"fab1490cecdb907d47a767535f3508d97649148d90a408b047b4737b06e0e6eb"} Nov 21 21:30:25 crc kubenswrapper[4727]: I1121 21:30:25.831668 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p246x" event={"ID":"b7cddb78-6ecc-4296-8bb7-9613563b51f0","Type":"ContainerStarted","Data":"1d9bc1c082e9b6a5d7fa4ac202f0453f615497bad9b237610a793a19480f39ad"} Nov 21 21:30:27 crc kubenswrapper[4727]: I1121 21:30:27.878304 4727 generic.go:334] "Generic (PLEG): container finished" podID="b7cddb78-6ecc-4296-8bb7-9613563b51f0" containerID="d5c3a65947af6f2f8db238c3226cf98b98b2a3be15ffe1d5f7f7cb333811d9c5" exitCode=0 Nov 21 21:30:27 crc kubenswrapper[4727]: I1121 21:30:27.878580 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p246x" 
event={"ID":"b7cddb78-6ecc-4296-8bb7-9613563b51f0","Type":"ContainerDied","Data":"d5c3a65947af6f2f8db238c3226cf98b98b2a3be15ffe1d5f7f7cb333811d9c5"} Nov 21 21:30:28 crc kubenswrapper[4727]: I1121 21:30:28.898906 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p246x" event={"ID":"b7cddb78-6ecc-4296-8bb7-9613563b51f0","Type":"ContainerStarted","Data":"79e30f8d604a274896213357e873d36e2f95021e384bdbdc7f770d62416778d2"} Nov 21 21:30:28 crc kubenswrapper[4727]: I1121 21:30:28.932292 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-p246x" podStartSLOduration=2.474656983 podStartE2EDuration="4.932266053s" podCreationTimestamp="2025-11-21 21:30:24 +0000 UTC" firstStartedPulling="2025-11-21 21:30:25.833323753 +0000 UTC m=+5031.019508797" lastFinishedPulling="2025-11-21 21:30:28.290932823 +0000 UTC m=+5033.477117867" observedRunningTime="2025-11-21 21:30:28.92257327 +0000 UTC m=+5034.108758324" watchObservedRunningTime="2025-11-21 21:30:28.932266053 +0000 UTC m=+5034.118451107" Nov 21 21:30:34 crc kubenswrapper[4727]: I1121 21:30:34.653872 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-p246x" Nov 21 21:30:34 crc kubenswrapper[4727]: I1121 21:30:34.656045 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-p246x" Nov 21 21:30:34 crc kubenswrapper[4727]: I1121 21:30:34.744402 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-p246x" Nov 21 21:30:35 crc kubenswrapper[4727]: I1121 21:30:35.089120 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-p246x" Nov 21 21:30:35 crc kubenswrapper[4727]: I1121 21:30:35.163821 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-p246x"] Nov 21 21:30:37 crc kubenswrapper[4727]: I1121 21:30:37.038283 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-p246x" podUID="b7cddb78-6ecc-4296-8bb7-9613563b51f0" containerName="registry-server" containerID="cri-o://79e30f8d604a274896213357e873d36e2f95021e384bdbdc7f770d62416778d2" gracePeriod=2 Nov 21 21:30:37 crc kubenswrapper[4727]: I1121 21:30:37.589683 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p246x" Nov 21 21:30:37 crc kubenswrapper[4727]: I1121 21:30:37.706869 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrq54\" (UniqueName: \"kubernetes.io/projected/b7cddb78-6ecc-4296-8bb7-9613563b51f0-kube-api-access-hrq54\") pod \"b7cddb78-6ecc-4296-8bb7-9613563b51f0\" (UID: \"b7cddb78-6ecc-4296-8bb7-9613563b51f0\") " Nov 21 21:30:37 crc kubenswrapper[4727]: I1121 21:30:37.706952 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7cddb78-6ecc-4296-8bb7-9613563b51f0-catalog-content\") pod \"b7cddb78-6ecc-4296-8bb7-9613563b51f0\" (UID: \"b7cddb78-6ecc-4296-8bb7-9613563b51f0\") " Nov 21 21:30:37 crc kubenswrapper[4727]: I1121 21:30:37.707247 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7cddb78-6ecc-4296-8bb7-9613563b51f0-utilities\") pod \"b7cddb78-6ecc-4296-8bb7-9613563b51f0\" (UID: \"b7cddb78-6ecc-4296-8bb7-9613563b51f0\") " Nov 21 21:30:37 crc kubenswrapper[4727]: I1121 21:30:37.708343 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7cddb78-6ecc-4296-8bb7-9613563b51f0-utilities" (OuterVolumeSpecName: "utilities") pod "b7cddb78-6ecc-4296-8bb7-9613563b51f0" (UID: 
"b7cddb78-6ecc-4296-8bb7-9613563b51f0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 21:30:37 crc kubenswrapper[4727]: I1121 21:30:37.715474 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7cddb78-6ecc-4296-8bb7-9613563b51f0-kube-api-access-hrq54" (OuterVolumeSpecName: "kube-api-access-hrq54") pod "b7cddb78-6ecc-4296-8bb7-9613563b51f0" (UID: "b7cddb78-6ecc-4296-8bb7-9613563b51f0"). InnerVolumeSpecName "kube-api-access-hrq54". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 21:30:37 crc kubenswrapper[4727]: I1121 21:30:37.759616 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7cddb78-6ecc-4296-8bb7-9613563b51f0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b7cddb78-6ecc-4296-8bb7-9613563b51f0" (UID: "b7cddb78-6ecc-4296-8bb7-9613563b51f0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 21:30:37 crc kubenswrapper[4727]: I1121 21:30:37.809801 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrq54\" (UniqueName: \"kubernetes.io/projected/b7cddb78-6ecc-4296-8bb7-9613563b51f0-kube-api-access-hrq54\") on node \"crc\" DevicePath \"\"" Nov 21 21:30:37 crc kubenswrapper[4727]: I1121 21:30:37.809837 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7cddb78-6ecc-4296-8bb7-9613563b51f0-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 21:30:37 crc kubenswrapper[4727]: I1121 21:30:37.809849 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7cddb78-6ecc-4296-8bb7-9613563b51f0-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 21:30:38 crc kubenswrapper[4727]: I1121 21:30:38.053737 4727 generic.go:334] "Generic (PLEG): container finished" 
podID="b7cddb78-6ecc-4296-8bb7-9613563b51f0" containerID="79e30f8d604a274896213357e873d36e2f95021e384bdbdc7f770d62416778d2" exitCode=0 Nov 21 21:30:38 crc kubenswrapper[4727]: I1121 21:30:38.053824 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p246x" Nov 21 21:30:38 crc kubenswrapper[4727]: I1121 21:30:38.053824 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p246x" event={"ID":"b7cddb78-6ecc-4296-8bb7-9613563b51f0","Type":"ContainerDied","Data":"79e30f8d604a274896213357e873d36e2f95021e384bdbdc7f770d62416778d2"} Nov 21 21:30:38 crc kubenswrapper[4727]: I1121 21:30:38.053925 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p246x" event={"ID":"b7cddb78-6ecc-4296-8bb7-9613563b51f0","Type":"ContainerDied","Data":"1d9bc1c082e9b6a5d7fa4ac202f0453f615497bad9b237610a793a19480f39ad"} Nov 21 21:30:38 crc kubenswrapper[4727]: I1121 21:30:38.054051 4727 scope.go:117] "RemoveContainer" containerID="79e30f8d604a274896213357e873d36e2f95021e384bdbdc7f770d62416778d2" Nov 21 21:30:38 crc kubenswrapper[4727]: I1121 21:30:38.088102 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p246x"] Nov 21 21:30:38 crc kubenswrapper[4727]: I1121 21:30:38.096610 4727 scope.go:117] "RemoveContainer" containerID="d5c3a65947af6f2f8db238c3226cf98b98b2a3be15ffe1d5f7f7cb333811d9c5" Nov 21 21:30:38 crc kubenswrapper[4727]: I1121 21:30:38.098389 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-p246x"] Nov 21 21:30:38 crc kubenswrapper[4727]: I1121 21:30:38.133536 4727 scope.go:117] "RemoveContainer" containerID="fab1490cecdb907d47a767535f3508d97649148d90a408b047b4737b06e0e6eb" Nov 21 21:30:38 crc kubenswrapper[4727]: I1121 21:30:38.194526 4727 scope.go:117] "RemoveContainer" 
containerID="79e30f8d604a274896213357e873d36e2f95021e384bdbdc7f770d62416778d2" Nov 21 21:30:38 crc kubenswrapper[4727]: E1121 21:30:38.195390 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79e30f8d604a274896213357e873d36e2f95021e384bdbdc7f770d62416778d2\": container with ID starting with 79e30f8d604a274896213357e873d36e2f95021e384bdbdc7f770d62416778d2 not found: ID does not exist" containerID="79e30f8d604a274896213357e873d36e2f95021e384bdbdc7f770d62416778d2" Nov 21 21:30:38 crc kubenswrapper[4727]: I1121 21:30:38.195444 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79e30f8d604a274896213357e873d36e2f95021e384bdbdc7f770d62416778d2"} err="failed to get container status \"79e30f8d604a274896213357e873d36e2f95021e384bdbdc7f770d62416778d2\": rpc error: code = NotFound desc = could not find container \"79e30f8d604a274896213357e873d36e2f95021e384bdbdc7f770d62416778d2\": container with ID starting with 79e30f8d604a274896213357e873d36e2f95021e384bdbdc7f770d62416778d2 not found: ID does not exist" Nov 21 21:30:38 crc kubenswrapper[4727]: I1121 21:30:38.195481 4727 scope.go:117] "RemoveContainer" containerID="d5c3a65947af6f2f8db238c3226cf98b98b2a3be15ffe1d5f7f7cb333811d9c5" Nov 21 21:30:38 crc kubenswrapper[4727]: E1121 21:30:38.197187 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5c3a65947af6f2f8db238c3226cf98b98b2a3be15ffe1d5f7f7cb333811d9c5\": container with ID starting with d5c3a65947af6f2f8db238c3226cf98b98b2a3be15ffe1d5f7f7cb333811d9c5 not found: ID does not exist" containerID="d5c3a65947af6f2f8db238c3226cf98b98b2a3be15ffe1d5f7f7cb333811d9c5" Nov 21 21:30:38 crc kubenswrapper[4727]: I1121 21:30:38.197229 4727 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d5c3a65947af6f2f8db238c3226cf98b98b2a3be15ffe1d5f7f7cb333811d9c5"} err="failed to get container status \"d5c3a65947af6f2f8db238c3226cf98b98b2a3be15ffe1d5f7f7cb333811d9c5\": rpc error: code = NotFound desc = could not find container \"d5c3a65947af6f2f8db238c3226cf98b98b2a3be15ffe1d5f7f7cb333811d9c5\": container with ID starting with d5c3a65947af6f2f8db238c3226cf98b98b2a3be15ffe1d5f7f7cb333811d9c5 not found: ID does not exist" Nov 21 21:30:38 crc kubenswrapper[4727]: I1121 21:30:38.197265 4727 scope.go:117] "RemoveContainer" containerID="fab1490cecdb907d47a767535f3508d97649148d90a408b047b4737b06e0e6eb" Nov 21 21:30:38 crc kubenswrapper[4727]: E1121 21:30:38.197606 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fab1490cecdb907d47a767535f3508d97649148d90a408b047b4737b06e0e6eb\": container with ID starting with fab1490cecdb907d47a767535f3508d97649148d90a408b047b4737b06e0e6eb not found: ID does not exist" containerID="fab1490cecdb907d47a767535f3508d97649148d90a408b047b4737b06e0e6eb" Nov 21 21:30:38 crc kubenswrapper[4727]: I1121 21:30:38.197660 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fab1490cecdb907d47a767535f3508d97649148d90a408b047b4737b06e0e6eb"} err="failed to get container status \"fab1490cecdb907d47a767535f3508d97649148d90a408b047b4737b06e0e6eb\": rpc error: code = NotFound desc = could not find container \"fab1490cecdb907d47a767535f3508d97649148d90a408b047b4737b06e0e6eb\": container with ID starting with fab1490cecdb907d47a767535f3508d97649148d90a408b047b4737b06e0e6eb not found: ID does not exist" Nov 21 21:30:39 crc kubenswrapper[4727]: I1121 21:30:39.515344 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7cddb78-6ecc-4296-8bb7-9613563b51f0" path="/var/lib/kubelet/pods/b7cddb78-6ecc-4296-8bb7-9613563b51f0/volumes" Nov 21 21:30:43 crc kubenswrapper[4727]: I1121 
21:30:43.335727 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 21:30:43 crc kubenswrapper[4727]: I1121 21:30:43.336359 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 21:30:43 crc kubenswrapper[4727]: I1121 21:30:43.336418 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" Nov 21 21:30:43 crc kubenswrapper[4727]: I1121 21:30:43.337363 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"75b59943a9026952eacc506ea41581cd574e08a8954bdaa02591aa1e2e0e7872"} pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 21:30:43 crc kubenswrapper[4727]: I1121 21:30:43.337461 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" containerID="cri-o://75b59943a9026952eacc506ea41581cd574e08a8954bdaa02591aa1e2e0e7872" gracePeriod=600 Nov 21 21:30:43 crc kubenswrapper[4727]: E1121 21:30:43.459527 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:30:44 crc kubenswrapper[4727]: I1121 21:30:44.130743 4727 generic.go:334] "Generic (PLEG): container finished" podID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerID="75b59943a9026952eacc506ea41581cd574e08a8954bdaa02591aa1e2e0e7872" exitCode=0 Nov 21 21:30:44 crc kubenswrapper[4727]: I1121 21:30:44.131177 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" event={"ID":"b58aef8f-f223-47d8-a2e6-4a80aeeeec42","Type":"ContainerDied","Data":"75b59943a9026952eacc506ea41581cd574e08a8954bdaa02591aa1e2e0e7872"} Nov 21 21:30:44 crc kubenswrapper[4727]: I1121 21:30:44.131211 4727 scope.go:117] "RemoveContainer" containerID="30f73b0822b19c755441e3b7b81ef5e9d5a5fafc212006c896eefd71beb0496d" Nov 21 21:30:44 crc kubenswrapper[4727]: I1121 21:30:44.131974 4727 scope.go:117] "RemoveContainer" containerID="75b59943a9026952eacc506ea41581cd574e08a8954bdaa02591aa1e2e0e7872" Nov 21 21:30:44 crc kubenswrapper[4727]: E1121 21:30:44.132232 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:30:48 crc kubenswrapper[4727]: I1121 21:30:48.638168 4727 scope.go:117] "RemoveContainer" containerID="323331ab81ebc0887d64bf4ab610ddf4241a9396b2472e3ad021f91a16698847" Nov 21 21:30:52 crc kubenswrapper[4727]: E1121 21:30:52.057161 4727 cadvisor_stats_provider.go:516] "Partial failure issuing 
cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]" Nov 21 21:30:58 crc kubenswrapper[4727]: I1121 21:30:58.499490 4727 scope.go:117] "RemoveContainer" containerID="75b59943a9026952eacc506ea41581cd574e08a8954bdaa02591aa1e2e0e7872" Nov 21 21:30:58 crc kubenswrapper[4727]: E1121 21:30:58.500368 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:31:09 crc kubenswrapper[4727]: I1121 21:31:09.504106 4727 scope.go:117] "RemoveContainer" containerID="75b59943a9026952eacc506ea41581cd574e08a8954bdaa02591aa1e2e0e7872" Nov 21 21:31:09 crc kubenswrapper[4727]: E1121 21:31:09.506504 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:31:21 crc kubenswrapper[4727]: I1121 21:31:21.499519 4727 scope.go:117] "RemoveContainer" containerID="75b59943a9026952eacc506ea41581cd574e08a8954bdaa02591aa1e2e0e7872" Nov 21 21:31:21 crc kubenswrapper[4727]: E1121 21:31:21.500610 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:31:29 crc kubenswrapper[4727]: I1121 21:31:29.606878 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-stq2s"] Nov 21 21:31:29 crc kubenswrapper[4727]: E1121 21:31:29.607971 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7cddb78-6ecc-4296-8bb7-9613563b51f0" containerName="extract-utilities" Nov 21 21:31:29 crc kubenswrapper[4727]: I1121 21:31:29.607988 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7cddb78-6ecc-4296-8bb7-9613563b51f0" containerName="extract-utilities" Nov 21 21:31:29 crc kubenswrapper[4727]: E1121 21:31:29.608007 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7cddb78-6ecc-4296-8bb7-9613563b51f0" containerName="extract-content" Nov 21 21:31:29 crc kubenswrapper[4727]: I1121 21:31:29.608018 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7cddb78-6ecc-4296-8bb7-9613563b51f0" containerName="extract-content" Nov 21 21:31:29 crc kubenswrapper[4727]: E1121 21:31:29.608068 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7cddb78-6ecc-4296-8bb7-9613563b51f0" containerName="registry-server" Nov 21 21:31:29 crc kubenswrapper[4727]: I1121 21:31:29.608077 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7cddb78-6ecc-4296-8bb7-9613563b51f0" containerName="registry-server" Nov 21 21:31:29 crc kubenswrapper[4727]: I1121 21:31:29.608346 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7cddb78-6ecc-4296-8bb7-9613563b51f0" containerName="registry-server" Nov 21 21:31:29 crc kubenswrapper[4727]: I1121 21:31:29.610239 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-stq2s" Nov 21 21:31:29 crc kubenswrapper[4727]: I1121 21:31:29.633454 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-stq2s"] Nov 21 21:31:29 crc kubenswrapper[4727]: I1121 21:31:29.704983 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b983132-0cbc-4ee1-aabe-0e426616ec0a-utilities\") pod \"community-operators-stq2s\" (UID: \"5b983132-0cbc-4ee1-aabe-0e426616ec0a\") " pod="openshift-marketplace/community-operators-stq2s" Nov 21 21:31:29 crc kubenswrapper[4727]: I1121 21:31:29.705224 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m86q6\" (UniqueName: \"kubernetes.io/projected/5b983132-0cbc-4ee1-aabe-0e426616ec0a-kube-api-access-m86q6\") pod \"community-operators-stq2s\" (UID: \"5b983132-0cbc-4ee1-aabe-0e426616ec0a\") " pod="openshift-marketplace/community-operators-stq2s" Nov 21 21:31:29 crc kubenswrapper[4727]: I1121 21:31:29.705425 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b983132-0cbc-4ee1-aabe-0e426616ec0a-catalog-content\") pod \"community-operators-stq2s\" (UID: \"5b983132-0cbc-4ee1-aabe-0e426616ec0a\") " pod="openshift-marketplace/community-operators-stq2s" Nov 21 21:31:29 crc kubenswrapper[4727]: I1121 21:31:29.807839 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b983132-0cbc-4ee1-aabe-0e426616ec0a-catalog-content\") pod \"community-operators-stq2s\" (UID: \"5b983132-0cbc-4ee1-aabe-0e426616ec0a\") " pod="openshift-marketplace/community-operators-stq2s" Nov 21 21:31:29 crc kubenswrapper[4727]: I1121 21:31:29.807931 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b983132-0cbc-4ee1-aabe-0e426616ec0a-utilities\") pod \"community-operators-stq2s\" (UID: \"5b983132-0cbc-4ee1-aabe-0e426616ec0a\") " pod="openshift-marketplace/community-operators-stq2s" Nov 21 21:31:29 crc kubenswrapper[4727]: I1121 21:31:29.808132 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m86q6\" (UniqueName: \"kubernetes.io/projected/5b983132-0cbc-4ee1-aabe-0e426616ec0a-kube-api-access-m86q6\") pod \"community-operators-stq2s\" (UID: \"5b983132-0cbc-4ee1-aabe-0e426616ec0a\") " pod="openshift-marketplace/community-operators-stq2s" Nov 21 21:31:29 crc kubenswrapper[4727]: I1121 21:31:29.808495 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b983132-0cbc-4ee1-aabe-0e426616ec0a-catalog-content\") pod \"community-operators-stq2s\" (UID: \"5b983132-0cbc-4ee1-aabe-0e426616ec0a\") " pod="openshift-marketplace/community-operators-stq2s" Nov 21 21:31:29 crc kubenswrapper[4727]: I1121 21:31:29.808585 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b983132-0cbc-4ee1-aabe-0e426616ec0a-utilities\") pod \"community-operators-stq2s\" (UID: \"5b983132-0cbc-4ee1-aabe-0e426616ec0a\") " pod="openshift-marketplace/community-operators-stq2s" Nov 21 21:31:29 crc kubenswrapper[4727]: I1121 21:31:29.831457 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m86q6\" (UniqueName: \"kubernetes.io/projected/5b983132-0cbc-4ee1-aabe-0e426616ec0a-kube-api-access-m86q6\") pod \"community-operators-stq2s\" (UID: \"5b983132-0cbc-4ee1-aabe-0e426616ec0a\") " pod="openshift-marketplace/community-operators-stq2s" Nov 21 21:31:29 crc kubenswrapper[4727]: I1121 21:31:29.938831 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-stq2s" Nov 21 21:31:30 crc kubenswrapper[4727]: I1121 21:31:30.470784 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-stq2s"] Nov 21 21:31:30 crc kubenswrapper[4727]: I1121 21:31:30.681303 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-stq2s" event={"ID":"5b983132-0cbc-4ee1-aabe-0e426616ec0a","Type":"ContainerStarted","Data":"89e9dc41def17e4231a7962b2376895da95c6f72cac812c6b028f1cb1827e0b7"} Nov 21 21:31:31 crc kubenswrapper[4727]: I1121 21:31:31.696815 4727 generic.go:334] "Generic (PLEG): container finished" podID="5b983132-0cbc-4ee1-aabe-0e426616ec0a" containerID="c7efea6bddc0ba2372a6ae7d2fa96af7e1eaf655177a1103689c3a44676508a6" exitCode=0 Nov 21 21:31:31 crc kubenswrapper[4727]: I1121 21:31:31.696868 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-stq2s" event={"ID":"5b983132-0cbc-4ee1-aabe-0e426616ec0a","Type":"ContainerDied","Data":"c7efea6bddc0ba2372a6ae7d2fa96af7e1eaf655177a1103689c3a44676508a6"} Nov 21 21:31:32 crc kubenswrapper[4727]: I1121 21:31:32.724633 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-stq2s" event={"ID":"5b983132-0cbc-4ee1-aabe-0e426616ec0a","Type":"ContainerStarted","Data":"47d752dcdc84c8a39becdb51f6f48c8ef8193357943f30a82a8fc554e560a63f"} Nov 21 21:31:33 crc kubenswrapper[4727]: I1121 21:31:33.737199 4727 generic.go:334] "Generic (PLEG): container finished" podID="5b983132-0cbc-4ee1-aabe-0e426616ec0a" containerID="47d752dcdc84c8a39becdb51f6f48c8ef8193357943f30a82a8fc554e560a63f" exitCode=0 Nov 21 21:31:33 crc kubenswrapper[4727]: I1121 21:31:33.737276 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-stq2s" 
event={"ID":"5b983132-0cbc-4ee1-aabe-0e426616ec0a","Type":"ContainerDied","Data":"47d752dcdc84c8a39becdb51f6f48c8ef8193357943f30a82a8fc554e560a63f"} Nov 21 21:31:34 crc kubenswrapper[4727]: I1121 21:31:34.752404 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-stq2s" event={"ID":"5b983132-0cbc-4ee1-aabe-0e426616ec0a","Type":"ContainerStarted","Data":"753922f109a9385d6c4423259ae9512913559c52d9367bc22fad45c685d5bd48"} Nov 21 21:31:34 crc kubenswrapper[4727]: I1121 21:31:34.782564 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-stq2s" podStartSLOduration=3.355474724 podStartE2EDuration="5.782540701s" podCreationTimestamp="2025-11-21 21:31:29 +0000 UTC" firstStartedPulling="2025-11-21 21:31:31.699577865 +0000 UTC m=+5096.885762929" lastFinishedPulling="2025-11-21 21:31:34.126643852 +0000 UTC m=+5099.312828906" observedRunningTime="2025-11-21 21:31:34.771615809 +0000 UTC m=+5099.957800853" watchObservedRunningTime="2025-11-21 21:31:34.782540701 +0000 UTC m=+5099.968725765" Nov 21 21:31:35 crc kubenswrapper[4727]: I1121 21:31:35.508649 4727 scope.go:117] "RemoveContainer" containerID="75b59943a9026952eacc506ea41581cd574e08a8954bdaa02591aa1e2e0e7872" Nov 21 21:31:35 crc kubenswrapper[4727]: E1121 21:31:35.509456 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:31:39 crc kubenswrapper[4727]: I1121 21:31:39.939144 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-stq2s" Nov 21 21:31:39 crc 
kubenswrapper[4727]: I1121 21:31:39.939784 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-stq2s" Nov 21 21:31:40 crc kubenswrapper[4727]: I1121 21:31:40.001062 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-stq2s" Nov 21 21:31:40 crc kubenswrapper[4727]: I1121 21:31:40.921308 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-stq2s" Nov 21 21:31:40 crc kubenswrapper[4727]: I1121 21:31:40.978303 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-stq2s"] Nov 21 21:31:42 crc kubenswrapper[4727]: I1121 21:31:42.849730 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-stq2s" podUID="5b983132-0cbc-4ee1-aabe-0e426616ec0a" containerName="registry-server" containerID="cri-o://753922f109a9385d6c4423259ae9512913559c52d9367bc22fad45c685d5bd48" gracePeriod=2 Nov 21 21:31:43 crc kubenswrapper[4727]: I1121 21:31:43.342786 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-stq2s" Nov 21 21:31:43 crc kubenswrapper[4727]: I1121 21:31:43.357796 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b983132-0cbc-4ee1-aabe-0e426616ec0a-utilities\") pod \"5b983132-0cbc-4ee1-aabe-0e426616ec0a\" (UID: \"5b983132-0cbc-4ee1-aabe-0e426616ec0a\") " Nov 21 21:31:43 crc kubenswrapper[4727]: I1121 21:31:43.357879 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m86q6\" (UniqueName: \"kubernetes.io/projected/5b983132-0cbc-4ee1-aabe-0e426616ec0a-kube-api-access-m86q6\") pod \"5b983132-0cbc-4ee1-aabe-0e426616ec0a\" (UID: \"5b983132-0cbc-4ee1-aabe-0e426616ec0a\") " Nov 21 21:31:43 crc kubenswrapper[4727]: I1121 21:31:43.358033 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b983132-0cbc-4ee1-aabe-0e426616ec0a-catalog-content\") pod \"5b983132-0cbc-4ee1-aabe-0e426616ec0a\" (UID: \"5b983132-0cbc-4ee1-aabe-0e426616ec0a\") " Nov 21 21:31:43 crc kubenswrapper[4727]: I1121 21:31:43.360506 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b983132-0cbc-4ee1-aabe-0e426616ec0a-utilities" (OuterVolumeSpecName: "utilities") pod "5b983132-0cbc-4ee1-aabe-0e426616ec0a" (UID: "5b983132-0cbc-4ee1-aabe-0e426616ec0a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 21:31:43 crc kubenswrapper[4727]: I1121 21:31:43.364263 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b983132-0cbc-4ee1-aabe-0e426616ec0a-kube-api-access-m86q6" (OuterVolumeSpecName: "kube-api-access-m86q6") pod "5b983132-0cbc-4ee1-aabe-0e426616ec0a" (UID: "5b983132-0cbc-4ee1-aabe-0e426616ec0a"). InnerVolumeSpecName "kube-api-access-m86q6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 21:31:43 crc kubenswrapper[4727]: I1121 21:31:43.417749 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b983132-0cbc-4ee1-aabe-0e426616ec0a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5b983132-0cbc-4ee1-aabe-0e426616ec0a" (UID: "5b983132-0cbc-4ee1-aabe-0e426616ec0a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 21:31:43 crc kubenswrapper[4727]: I1121 21:31:43.460671 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b983132-0cbc-4ee1-aabe-0e426616ec0a-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 21:31:43 crc kubenswrapper[4727]: I1121 21:31:43.460946 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b983132-0cbc-4ee1-aabe-0e426616ec0a-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 21:31:43 crc kubenswrapper[4727]: I1121 21:31:43.461047 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m86q6\" (UniqueName: \"kubernetes.io/projected/5b983132-0cbc-4ee1-aabe-0e426616ec0a-kube-api-access-m86q6\") on node \"crc\" DevicePath \"\"" Nov 21 21:31:43 crc kubenswrapper[4727]: I1121 21:31:43.861620 4727 generic.go:334] "Generic (PLEG): container finished" podID="5b983132-0cbc-4ee1-aabe-0e426616ec0a" containerID="753922f109a9385d6c4423259ae9512913559c52d9367bc22fad45c685d5bd48" exitCode=0 Nov 21 21:31:43 crc kubenswrapper[4727]: I1121 21:31:43.861667 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-stq2s" event={"ID":"5b983132-0cbc-4ee1-aabe-0e426616ec0a","Type":"ContainerDied","Data":"753922f109a9385d6c4423259ae9512913559c52d9367bc22fad45c685d5bd48"} Nov 21 21:31:43 crc kubenswrapper[4727]: I1121 21:31:43.861698 4727 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-stq2s" event={"ID":"5b983132-0cbc-4ee1-aabe-0e426616ec0a","Type":"ContainerDied","Data":"89e9dc41def17e4231a7962b2376895da95c6f72cac812c6b028f1cb1827e0b7"} Nov 21 21:31:43 crc kubenswrapper[4727]: I1121 21:31:43.861696 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-stq2s" Nov 21 21:31:43 crc kubenswrapper[4727]: I1121 21:31:43.861716 4727 scope.go:117] "RemoveContainer" containerID="753922f109a9385d6c4423259ae9512913559c52d9367bc22fad45c685d5bd48" Nov 21 21:31:43 crc kubenswrapper[4727]: I1121 21:31:43.886994 4727 scope.go:117] "RemoveContainer" containerID="47d752dcdc84c8a39becdb51f6f48c8ef8193357943f30a82a8fc554e560a63f" Nov 21 21:31:43 crc kubenswrapper[4727]: I1121 21:31:43.889194 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-stq2s"] Nov 21 21:31:43 crc kubenswrapper[4727]: I1121 21:31:43.901336 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-stq2s"] Nov 21 21:31:43 crc kubenswrapper[4727]: I1121 21:31:43.915468 4727 scope.go:117] "RemoveContainer" containerID="c7efea6bddc0ba2372a6ae7d2fa96af7e1eaf655177a1103689c3a44676508a6" Nov 21 21:31:43 crc kubenswrapper[4727]: I1121 21:31:43.965695 4727 scope.go:117] "RemoveContainer" containerID="753922f109a9385d6c4423259ae9512913559c52d9367bc22fad45c685d5bd48" Nov 21 21:31:43 crc kubenswrapper[4727]: E1121 21:31:43.966201 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"753922f109a9385d6c4423259ae9512913559c52d9367bc22fad45c685d5bd48\": container with ID starting with 753922f109a9385d6c4423259ae9512913559c52d9367bc22fad45c685d5bd48 not found: ID does not exist" containerID="753922f109a9385d6c4423259ae9512913559c52d9367bc22fad45c685d5bd48" Nov 21 21:31:43 crc kubenswrapper[4727]: I1121 
21:31:43.966233 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"753922f109a9385d6c4423259ae9512913559c52d9367bc22fad45c685d5bd48"} err="failed to get container status \"753922f109a9385d6c4423259ae9512913559c52d9367bc22fad45c685d5bd48\": rpc error: code = NotFound desc = could not find container \"753922f109a9385d6c4423259ae9512913559c52d9367bc22fad45c685d5bd48\": container with ID starting with 753922f109a9385d6c4423259ae9512913559c52d9367bc22fad45c685d5bd48 not found: ID does not exist" Nov 21 21:31:43 crc kubenswrapper[4727]: I1121 21:31:43.966255 4727 scope.go:117] "RemoveContainer" containerID="47d752dcdc84c8a39becdb51f6f48c8ef8193357943f30a82a8fc554e560a63f" Nov 21 21:31:43 crc kubenswrapper[4727]: E1121 21:31:43.966623 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47d752dcdc84c8a39becdb51f6f48c8ef8193357943f30a82a8fc554e560a63f\": container with ID starting with 47d752dcdc84c8a39becdb51f6f48c8ef8193357943f30a82a8fc554e560a63f not found: ID does not exist" containerID="47d752dcdc84c8a39becdb51f6f48c8ef8193357943f30a82a8fc554e560a63f" Nov 21 21:31:43 crc kubenswrapper[4727]: I1121 21:31:43.966667 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47d752dcdc84c8a39becdb51f6f48c8ef8193357943f30a82a8fc554e560a63f"} err="failed to get container status \"47d752dcdc84c8a39becdb51f6f48c8ef8193357943f30a82a8fc554e560a63f\": rpc error: code = NotFound desc = could not find container \"47d752dcdc84c8a39becdb51f6f48c8ef8193357943f30a82a8fc554e560a63f\": container with ID starting with 47d752dcdc84c8a39becdb51f6f48c8ef8193357943f30a82a8fc554e560a63f not found: ID does not exist" Nov 21 21:31:43 crc kubenswrapper[4727]: I1121 21:31:43.966700 4727 scope.go:117] "RemoveContainer" containerID="c7efea6bddc0ba2372a6ae7d2fa96af7e1eaf655177a1103689c3a44676508a6" Nov 21 21:31:43 crc 
kubenswrapper[4727]: E1121 21:31:43.967026 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7efea6bddc0ba2372a6ae7d2fa96af7e1eaf655177a1103689c3a44676508a6\": container with ID starting with c7efea6bddc0ba2372a6ae7d2fa96af7e1eaf655177a1103689c3a44676508a6 not found: ID does not exist" containerID="c7efea6bddc0ba2372a6ae7d2fa96af7e1eaf655177a1103689c3a44676508a6" Nov 21 21:31:43 crc kubenswrapper[4727]: I1121 21:31:43.967051 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7efea6bddc0ba2372a6ae7d2fa96af7e1eaf655177a1103689c3a44676508a6"} err="failed to get container status \"c7efea6bddc0ba2372a6ae7d2fa96af7e1eaf655177a1103689c3a44676508a6\": rpc error: code = NotFound desc = could not find container \"c7efea6bddc0ba2372a6ae7d2fa96af7e1eaf655177a1103689c3a44676508a6\": container with ID starting with c7efea6bddc0ba2372a6ae7d2fa96af7e1eaf655177a1103689c3a44676508a6 not found: ID does not exist" Nov 21 21:31:45 crc kubenswrapper[4727]: I1121 21:31:45.532588 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b983132-0cbc-4ee1-aabe-0e426616ec0a" path="/var/lib/kubelet/pods/5b983132-0cbc-4ee1-aabe-0e426616ec0a/volumes" Nov 21 21:31:47 crc kubenswrapper[4727]: I1121 21:31:47.500026 4727 scope.go:117] "RemoveContainer" containerID="75b59943a9026952eacc506ea41581cd574e08a8954bdaa02591aa1e2e0e7872" Nov 21 21:31:47 crc kubenswrapper[4727]: E1121 21:31:47.501077 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:32:02 crc 
kubenswrapper[4727]: I1121 21:32:02.500003 4727 scope.go:117] "RemoveContainer" containerID="75b59943a9026952eacc506ea41581cd574e08a8954bdaa02591aa1e2e0e7872" Nov 21 21:32:02 crc kubenswrapper[4727]: E1121 21:32:02.501456 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:32:13 crc kubenswrapper[4727]: I1121 21:32:13.499248 4727 scope.go:117] "RemoveContainer" containerID="75b59943a9026952eacc506ea41581cd574e08a8954bdaa02591aa1e2e0e7872" Nov 21 21:32:13 crc kubenswrapper[4727]: E1121 21:32:13.500474 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:32:26 crc kubenswrapper[4727]: I1121 21:32:26.499664 4727 scope.go:117] "RemoveContainer" containerID="75b59943a9026952eacc506ea41581cd574e08a8954bdaa02591aa1e2e0e7872" Nov 21 21:32:26 crc kubenswrapper[4727]: E1121 21:32:26.500503 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 
21 21:32:41 crc kubenswrapper[4727]: I1121 21:32:41.500580 4727 scope.go:117] "RemoveContainer" containerID="75b59943a9026952eacc506ea41581cd574e08a8954bdaa02591aa1e2e0e7872" Nov 21 21:32:41 crc kubenswrapper[4727]: E1121 21:32:41.501893 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:32:52 crc kubenswrapper[4727]: I1121 21:32:52.499501 4727 scope.go:117] "RemoveContainer" containerID="75b59943a9026952eacc506ea41581cd574e08a8954bdaa02591aa1e2e0e7872" Nov 21 21:32:52 crc kubenswrapper[4727]: E1121 21:32:52.500696 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:33:05 crc kubenswrapper[4727]: I1121 21:33:05.508148 4727 scope.go:117] "RemoveContainer" containerID="75b59943a9026952eacc506ea41581cd574e08a8954bdaa02591aa1e2e0e7872" Nov 21 21:33:05 crc kubenswrapper[4727]: E1121 21:33:05.509650 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" 
podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:33:17 crc kubenswrapper[4727]: I1121 21:33:17.499599 4727 scope.go:117] "RemoveContainer" containerID="75b59943a9026952eacc506ea41581cd574e08a8954bdaa02591aa1e2e0e7872" Nov 21 21:33:17 crc kubenswrapper[4727]: E1121 21:33:17.500358 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:33:31 crc kubenswrapper[4727]: I1121 21:33:31.499901 4727 scope.go:117] "RemoveContainer" containerID="75b59943a9026952eacc506ea41581cd574e08a8954bdaa02591aa1e2e0e7872" Nov 21 21:33:31 crc kubenswrapper[4727]: E1121 21:33:31.501521 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:33:44 crc kubenswrapper[4727]: I1121 21:33:44.500404 4727 scope.go:117] "RemoveContainer" containerID="75b59943a9026952eacc506ea41581cd574e08a8954bdaa02591aa1e2e0e7872" Nov 21 21:33:44 crc kubenswrapper[4727]: E1121 21:33:44.503014 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:33:58 crc kubenswrapper[4727]: I1121 21:33:58.499780 4727 scope.go:117] "RemoveContainer" containerID="75b59943a9026952eacc506ea41581cd574e08a8954bdaa02591aa1e2e0e7872" Nov 21 21:33:58 crc kubenswrapper[4727]: E1121 21:33:58.500420 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:34:13 crc kubenswrapper[4727]: I1121 21:34:13.499230 4727 scope.go:117] "RemoveContainer" containerID="75b59943a9026952eacc506ea41581cd574e08a8954bdaa02591aa1e2e0e7872" Nov 21 21:34:13 crc kubenswrapper[4727]: E1121 21:34:13.500349 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:34:27 crc kubenswrapper[4727]: I1121 21:34:27.500567 4727 scope.go:117] "RemoveContainer" containerID="75b59943a9026952eacc506ea41581cd574e08a8954bdaa02591aa1e2e0e7872" Nov 21 21:34:27 crc kubenswrapper[4727]: E1121 21:34:27.501864 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:34:41 crc kubenswrapper[4727]: I1121 21:34:41.499937 4727 scope.go:117] "RemoveContainer" containerID="75b59943a9026952eacc506ea41581cd574e08a8954bdaa02591aa1e2e0e7872" Nov 21 21:34:41 crc kubenswrapper[4727]: E1121 21:34:41.500712 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:34:54 crc kubenswrapper[4727]: I1121 21:34:54.499861 4727 scope.go:117] "RemoveContainer" containerID="75b59943a9026952eacc506ea41581cd574e08a8954bdaa02591aa1e2e0e7872" Nov 21 21:34:54 crc kubenswrapper[4727]: E1121 21:34:54.500938 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:35:06 crc kubenswrapper[4727]: I1121 21:35:06.500093 4727 scope.go:117] "RemoveContainer" containerID="75b59943a9026952eacc506ea41581cd574e08a8954bdaa02591aa1e2e0e7872" Nov 21 21:35:06 crc kubenswrapper[4727]: E1121 21:35:06.501215 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:35:21 crc kubenswrapper[4727]: I1121 21:35:21.500808 4727 scope.go:117] "RemoveContainer" containerID="75b59943a9026952eacc506ea41581cd574e08a8954bdaa02591aa1e2e0e7872" Nov 21 21:35:21 crc kubenswrapper[4727]: E1121 21:35:21.504599 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:35:36 crc kubenswrapper[4727]: I1121 21:35:36.499328 4727 scope.go:117] "RemoveContainer" containerID="75b59943a9026952eacc506ea41581cd574e08a8954bdaa02591aa1e2e0e7872" Nov 21 21:35:36 crc kubenswrapper[4727]: E1121 21:35:36.500313 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:35:49 crc kubenswrapper[4727]: I1121 21:35:49.503870 4727 scope.go:117] "RemoveContainer" containerID="75b59943a9026952eacc506ea41581cd574e08a8954bdaa02591aa1e2e0e7872" Nov 21 21:35:50 crc kubenswrapper[4727]: I1121 21:35:50.033798 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" 
event={"ID":"b58aef8f-f223-47d8-a2e6-4a80aeeeec42","Type":"ContainerStarted","Data":"3308ff8796b40d14ba9e0ed4c76ea5ae53d9c16a872866c1961ef970cbb58a80"} Nov 21 21:37:06 crc kubenswrapper[4727]: I1121 21:37:06.634948 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mfzgx"] Nov 21 21:37:06 crc kubenswrapper[4727]: E1121 21:37:06.638854 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b983132-0cbc-4ee1-aabe-0e426616ec0a" containerName="extract-utilities" Nov 21 21:37:06 crc kubenswrapper[4727]: I1121 21:37:06.638892 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b983132-0cbc-4ee1-aabe-0e426616ec0a" containerName="extract-utilities" Nov 21 21:37:06 crc kubenswrapper[4727]: E1121 21:37:06.638917 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b983132-0cbc-4ee1-aabe-0e426616ec0a" containerName="extract-content" Nov 21 21:37:06 crc kubenswrapper[4727]: I1121 21:37:06.638928 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b983132-0cbc-4ee1-aabe-0e426616ec0a" containerName="extract-content" Nov 21 21:37:06 crc kubenswrapper[4727]: E1121 21:37:06.638989 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b983132-0cbc-4ee1-aabe-0e426616ec0a" containerName="registry-server" Nov 21 21:37:06 crc kubenswrapper[4727]: I1121 21:37:06.638997 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b983132-0cbc-4ee1-aabe-0e426616ec0a" containerName="registry-server" Nov 21 21:37:06 crc kubenswrapper[4727]: I1121 21:37:06.639371 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b983132-0cbc-4ee1-aabe-0e426616ec0a" containerName="registry-server" Nov 21 21:37:06 crc kubenswrapper[4727]: I1121 21:37:06.641788 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mfzgx" Nov 21 21:37:06 crc kubenswrapper[4727]: I1121 21:37:06.668682 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mfzgx"] Nov 21 21:37:06 crc kubenswrapper[4727]: I1121 21:37:06.758562 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48a44156-4628-4c13-b7dd-5a897c3172e0-catalog-content\") pod \"redhat-operators-mfzgx\" (UID: \"48a44156-4628-4c13-b7dd-5a897c3172e0\") " pod="openshift-marketplace/redhat-operators-mfzgx" Nov 21 21:37:06 crc kubenswrapper[4727]: I1121 21:37:06.758783 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97bcf\" (UniqueName: \"kubernetes.io/projected/48a44156-4628-4c13-b7dd-5a897c3172e0-kube-api-access-97bcf\") pod \"redhat-operators-mfzgx\" (UID: \"48a44156-4628-4c13-b7dd-5a897c3172e0\") " pod="openshift-marketplace/redhat-operators-mfzgx" Nov 21 21:37:06 crc kubenswrapper[4727]: I1121 21:37:06.758865 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48a44156-4628-4c13-b7dd-5a897c3172e0-utilities\") pod \"redhat-operators-mfzgx\" (UID: \"48a44156-4628-4c13-b7dd-5a897c3172e0\") " pod="openshift-marketplace/redhat-operators-mfzgx" Nov 21 21:37:06 crc kubenswrapper[4727]: I1121 21:37:06.862742 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48a44156-4628-4c13-b7dd-5a897c3172e0-catalog-content\") pod \"redhat-operators-mfzgx\" (UID: \"48a44156-4628-4c13-b7dd-5a897c3172e0\") " pod="openshift-marketplace/redhat-operators-mfzgx" Nov 21 21:37:06 crc kubenswrapper[4727]: I1121 21:37:06.863169 4727 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-97bcf\" (UniqueName: \"kubernetes.io/projected/48a44156-4628-4c13-b7dd-5a897c3172e0-kube-api-access-97bcf\") pod \"redhat-operators-mfzgx\" (UID: \"48a44156-4628-4c13-b7dd-5a897c3172e0\") " pod="openshift-marketplace/redhat-operators-mfzgx" Nov 21 21:37:06 crc kubenswrapper[4727]: I1121 21:37:06.863281 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48a44156-4628-4c13-b7dd-5a897c3172e0-utilities\") pod \"redhat-operators-mfzgx\" (UID: \"48a44156-4628-4c13-b7dd-5a897c3172e0\") " pod="openshift-marketplace/redhat-operators-mfzgx" Nov 21 21:37:06 crc kubenswrapper[4727]: I1121 21:37:06.863280 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48a44156-4628-4c13-b7dd-5a897c3172e0-catalog-content\") pod \"redhat-operators-mfzgx\" (UID: \"48a44156-4628-4c13-b7dd-5a897c3172e0\") " pod="openshift-marketplace/redhat-operators-mfzgx" Nov 21 21:37:06 crc kubenswrapper[4727]: I1121 21:37:06.863898 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48a44156-4628-4c13-b7dd-5a897c3172e0-utilities\") pod \"redhat-operators-mfzgx\" (UID: \"48a44156-4628-4c13-b7dd-5a897c3172e0\") " pod="openshift-marketplace/redhat-operators-mfzgx" Nov 21 21:37:06 crc kubenswrapper[4727]: I1121 21:37:06.890844 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97bcf\" (UniqueName: \"kubernetes.io/projected/48a44156-4628-4c13-b7dd-5a897c3172e0-kube-api-access-97bcf\") pod \"redhat-operators-mfzgx\" (UID: \"48a44156-4628-4c13-b7dd-5a897c3172e0\") " pod="openshift-marketplace/redhat-operators-mfzgx" Nov 21 21:37:06 crc kubenswrapper[4727]: I1121 21:37:06.978714 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mfzgx" Nov 21 21:37:07 crc kubenswrapper[4727]: I1121 21:37:07.444209 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mfzgx"] Nov 21 21:37:07 crc kubenswrapper[4727]: I1121 21:37:07.934113 4727 generic.go:334] "Generic (PLEG): container finished" podID="48a44156-4628-4c13-b7dd-5a897c3172e0" containerID="9758944a831ca8fb723e44211fca97e3b790e79c36b44cdaa009b76983671214" exitCode=0 Nov 21 21:37:07 crc kubenswrapper[4727]: I1121 21:37:07.934223 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mfzgx" event={"ID":"48a44156-4628-4c13-b7dd-5a897c3172e0","Type":"ContainerDied","Data":"9758944a831ca8fb723e44211fca97e3b790e79c36b44cdaa009b76983671214"} Nov 21 21:37:07 crc kubenswrapper[4727]: I1121 21:37:07.934715 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mfzgx" event={"ID":"48a44156-4628-4c13-b7dd-5a897c3172e0","Type":"ContainerStarted","Data":"5883326b78da80f6a38f5a2d4b84d91167e48e2ca64674081f8f8c2ab14d5bb8"} Nov 21 21:37:07 crc kubenswrapper[4727]: I1121 21:37:07.936678 4727 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 21 21:37:08 crc kubenswrapper[4727]: I1121 21:37:08.962519 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mfzgx" event={"ID":"48a44156-4628-4c13-b7dd-5a897c3172e0","Type":"ContainerStarted","Data":"33f2ec889255dc0317b20dc9dde4d4de662a1edfba8be440bcd64bb531ac2e3a"} Nov 21 21:37:13 crc kubenswrapper[4727]: I1121 21:37:13.014871 4727 generic.go:334] "Generic (PLEG): container finished" podID="48a44156-4628-4c13-b7dd-5a897c3172e0" containerID="33f2ec889255dc0317b20dc9dde4d4de662a1edfba8be440bcd64bb531ac2e3a" exitCode=0 Nov 21 21:37:13 crc kubenswrapper[4727]: I1121 21:37:13.015033 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-mfzgx" event={"ID":"48a44156-4628-4c13-b7dd-5a897c3172e0","Type":"ContainerDied","Data":"33f2ec889255dc0317b20dc9dde4d4de662a1edfba8be440bcd64bb531ac2e3a"} Nov 21 21:37:14 crc kubenswrapper[4727]: I1121 21:37:14.029657 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mfzgx" event={"ID":"48a44156-4628-4c13-b7dd-5a897c3172e0","Type":"ContainerStarted","Data":"0d25612c77412ef81b519171898c413e2adb84cbc3e2534b3f3f59667cf7190e"} Nov 21 21:37:14 crc kubenswrapper[4727]: I1121 21:37:14.056588 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mfzgx" podStartSLOduration=2.59418285 podStartE2EDuration="8.056566122s" podCreationTimestamp="2025-11-21 21:37:06 +0000 UTC" firstStartedPulling="2025-11-21 21:37:07.936409989 +0000 UTC m=+5433.122595034" lastFinishedPulling="2025-11-21 21:37:13.398793262 +0000 UTC m=+5438.584978306" observedRunningTime="2025-11-21 21:37:14.046293936 +0000 UTC m=+5439.232478980" watchObservedRunningTime="2025-11-21 21:37:14.056566122 +0000 UTC m=+5439.242751176" Nov 21 21:37:16 crc kubenswrapper[4727]: I1121 21:37:16.979627 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mfzgx" Nov 21 21:37:16 crc kubenswrapper[4727]: I1121 21:37:16.980260 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mfzgx" Nov 21 21:37:18 crc kubenswrapper[4727]: I1121 21:37:18.038482 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mfzgx" podUID="48a44156-4628-4c13-b7dd-5a897c3172e0" containerName="registry-server" probeResult="failure" output=< Nov 21 21:37:18 crc kubenswrapper[4727]: timeout: failed to connect service ":50051" within 1s Nov 21 21:37:18 crc kubenswrapper[4727]: > Nov 21 21:37:18 crc kubenswrapper[4727]: I1121 
21:37:18.047844 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jzxjt"] Nov 21 21:37:18 crc kubenswrapper[4727]: I1121 21:37:18.051338 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jzxjt" Nov 21 21:37:18 crc kubenswrapper[4727]: I1121 21:37:18.066154 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jzxjt"] Nov 21 21:37:18 crc kubenswrapper[4727]: I1121 21:37:18.148242 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7220c41-908b-4b5c-9471-df6131575f22-utilities\") pod \"redhat-marketplace-jzxjt\" (UID: \"b7220c41-908b-4b5c-9471-df6131575f22\") " pod="openshift-marketplace/redhat-marketplace-jzxjt" Nov 21 21:37:18 crc kubenswrapper[4727]: I1121 21:37:18.148764 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2nzn\" (UniqueName: \"kubernetes.io/projected/b7220c41-908b-4b5c-9471-df6131575f22-kube-api-access-d2nzn\") pod \"redhat-marketplace-jzxjt\" (UID: \"b7220c41-908b-4b5c-9471-df6131575f22\") " pod="openshift-marketplace/redhat-marketplace-jzxjt" Nov 21 21:37:18 crc kubenswrapper[4727]: I1121 21:37:18.149031 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7220c41-908b-4b5c-9471-df6131575f22-catalog-content\") pod \"redhat-marketplace-jzxjt\" (UID: \"b7220c41-908b-4b5c-9471-df6131575f22\") " pod="openshift-marketplace/redhat-marketplace-jzxjt" Nov 21 21:37:18 crc kubenswrapper[4727]: I1121 21:37:18.252667 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7220c41-908b-4b5c-9471-df6131575f22-utilities\") pod 
\"redhat-marketplace-jzxjt\" (UID: \"b7220c41-908b-4b5c-9471-df6131575f22\") " pod="openshift-marketplace/redhat-marketplace-jzxjt" Nov 21 21:37:18 crc kubenswrapper[4727]: I1121 21:37:18.252906 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2nzn\" (UniqueName: \"kubernetes.io/projected/b7220c41-908b-4b5c-9471-df6131575f22-kube-api-access-d2nzn\") pod \"redhat-marketplace-jzxjt\" (UID: \"b7220c41-908b-4b5c-9471-df6131575f22\") " pod="openshift-marketplace/redhat-marketplace-jzxjt" Nov 21 21:37:18 crc kubenswrapper[4727]: I1121 21:37:18.253081 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7220c41-908b-4b5c-9471-df6131575f22-catalog-content\") pod \"redhat-marketplace-jzxjt\" (UID: \"b7220c41-908b-4b5c-9471-df6131575f22\") " pod="openshift-marketplace/redhat-marketplace-jzxjt" Nov 21 21:37:18 crc kubenswrapper[4727]: I1121 21:37:18.253446 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7220c41-908b-4b5c-9471-df6131575f22-utilities\") pod \"redhat-marketplace-jzxjt\" (UID: \"b7220c41-908b-4b5c-9471-df6131575f22\") " pod="openshift-marketplace/redhat-marketplace-jzxjt" Nov 21 21:37:18 crc kubenswrapper[4727]: I1121 21:37:18.253736 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7220c41-908b-4b5c-9471-df6131575f22-catalog-content\") pod \"redhat-marketplace-jzxjt\" (UID: \"b7220c41-908b-4b5c-9471-df6131575f22\") " pod="openshift-marketplace/redhat-marketplace-jzxjt" Nov 21 21:37:18 crc kubenswrapper[4727]: I1121 21:37:18.282088 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2nzn\" (UniqueName: \"kubernetes.io/projected/b7220c41-908b-4b5c-9471-df6131575f22-kube-api-access-d2nzn\") pod \"redhat-marketplace-jzxjt\" (UID: 
\"b7220c41-908b-4b5c-9471-df6131575f22\") " pod="openshift-marketplace/redhat-marketplace-jzxjt" Nov 21 21:37:18 crc kubenswrapper[4727]: I1121 21:37:18.378431 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jzxjt" Nov 21 21:37:18 crc kubenswrapper[4727]: I1121 21:37:18.935037 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jzxjt"] Nov 21 21:37:19 crc kubenswrapper[4727]: I1121 21:37:19.103813 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jzxjt" event={"ID":"b7220c41-908b-4b5c-9471-df6131575f22","Type":"ContainerStarted","Data":"07ab38267a49ca085909d1a3457b1f552e996c6397534ffba82b8961c27ef679"} Nov 21 21:37:20 crc kubenswrapper[4727]: I1121 21:37:20.117734 4727 generic.go:334] "Generic (PLEG): container finished" podID="b7220c41-908b-4b5c-9471-df6131575f22" containerID="a27ccb3e3b44f9b37f0f6115fbc54b510edf97dc1cea55fe1edfed313c441429" exitCode=0 Nov 21 21:37:20 crc kubenswrapper[4727]: I1121 21:37:20.117848 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jzxjt" event={"ID":"b7220c41-908b-4b5c-9471-df6131575f22","Type":"ContainerDied","Data":"a27ccb3e3b44f9b37f0f6115fbc54b510edf97dc1cea55fe1edfed313c441429"} Nov 21 21:37:21 crc kubenswrapper[4727]: I1121 21:37:21.137778 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jzxjt" event={"ID":"b7220c41-908b-4b5c-9471-df6131575f22","Type":"ContainerStarted","Data":"fc6891b2ad3ce06929e46fdd23d7713e73d1a05954c68112cd3e202c5861f790"} Nov 21 21:37:22 crc kubenswrapper[4727]: I1121 21:37:22.155902 4727 generic.go:334] "Generic (PLEG): container finished" podID="b7220c41-908b-4b5c-9471-df6131575f22" containerID="fc6891b2ad3ce06929e46fdd23d7713e73d1a05954c68112cd3e202c5861f790" exitCode=0 Nov 21 21:37:22 crc kubenswrapper[4727]: I1121 
21:37:22.156022 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jzxjt" event={"ID":"b7220c41-908b-4b5c-9471-df6131575f22","Type":"ContainerDied","Data":"fc6891b2ad3ce06929e46fdd23d7713e73d1a05954c68112cd3e202c5861f790"} Nov 21 21:37:23 crc kubenswrapper[4727]: I1121 21:37:23.175683 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jzxjt" event={"ID":"b7220c41-908b-4b5c-9471-df6131575f22","Type":"ContainerStarted","Data":"68b49707ad9ee111266c00627d0e48290d76de69af9aa7a0d5df80f69ca39e0e"} Nov 21 21:37:23 crc kubenswrapper[4727]: I1121 21:37:23.199017 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jzxjt" podStartSLOduration=2.760881874 podStartE2EDuration="5.198993401s" podCreationTimestamp="2025-11-21 21:37:18 +0000 UTC" firstStartedPulling="2025-11-21 21:37:20.121333802 +0000 UTC m=+5445.307518846" lastFinishedPulling="2025-11-21 21:37:22.559445329 +0000 UTC m=+5447.745630373" observedRunningTime="2025-11-21 21:37:23.196769577 +0000 UTC m=+5448.382954621" watchObservedRunningTime="2025-11-21 21:37:23.198993401 +0000 UTC m=+5448.385178455" Nov 21 21:37:28 crc kubenswrapper[4727]: I1121 21:37:28.378644 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jzxjt" Nov 21 21:37:28 crc kubenswrapper[4727]: I1121 21:37:28.379560 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jzxjt" Nov 21 21:37:28 crc kubenswrapper[4727]: I1121 21:37:28.435384 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jzxjt" Nov 21 21:37:28 crc kubenswrapper[4727]: I1121 21:37:28.861352 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mfzgx" 
podUID="48a44156-4628-4c13-b7dd-5a897c3172e0" containerName="registry-server" probeResult="failure" output=< Nov 21 21:37:28 crc kubenswrapper[4727]: timeout: failed to connect service ":50051" within 1s Nov 21 21:37:28 crc kubenswrapper[4727]: > Nov 21 21:37:29 crc kubenswrapper[4727]: I1121 21:37:29.320527 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jzxjt" Nov 21 21:37:29 crc kubenswrapper[4727]: I1121 21:37:29.374537 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jzxjt"] Nov 21 21:37:31 crc kubenswrapper[4727]: I1121 21:37:31.277990 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jzxjt" podUID="b7220c41-908b-4b5c-9471-df6131575f22" containerName="registry-server" containerID="cri-o://68b49707ad9ee111266c00627d0e48290d76de69af9aa7a0d5df80f69ca39e0e" gracePeriod=2 Nov 21 21:37:31 crc kubenswrapper[4727]: I1121 21:37:31.874022 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jzxjt" Nov 21 21:37:31 crc kubenswrapper[4727]: I1121 21:37:31.976530 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7220c41-908b-4b5c-9471-df6131575f22-catalog-content\") pod \"b7220c41-908b-4b5c-9471-df6131575f22\" (UID: \"b7220c41-908b-4b5c-9471-df6131575f22\") " Nov 21 21:37:31 crc kubenswrapper[4727]: I1121 21:37:31.976751 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7220c41-908b-4b5c-9471-df6131575f22-utilities\") pod \"b7220c41-908b-4b5c-9471-df6131575f22\" (UID: \"b7220c41-908b-4b5c-9471-df6131575f22\") " Nov 21 21:37:31 crc kubenswrapper[4727]: I1121 21:37:31.976899 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2nzn\" (UniqueName: \"kubernetes.io/projected/b7220c41-908b-4b5c-9471-df6131575f22-kube-api-access-d2nzn\") pod \"b7220c41-908b-4b5c-9471-df6131575f22\" (UID: \"b7220c41-908b-4b5c-9471-df6131575f22\") " Nov 21 21:37:31 crc kubenswrapper[4727]: I1121 21:37:31.977563 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7220c41-908b-4b5c-9471-df6131575f22-utilities" (OuterVolumeSpecName: "utilities") pod "b7220c41-908b-4b5c-9471-df6131575f22" (UID: "b7220c41-908b-4b5c-9471-df6131575f22"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 21:37:31 crc kubenswrapper[4727]: I1121 21:37:31.985804 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7220c41-908b-4b5c-9471-df6131575f22-kube-api-access-d2nzn" (OuterVolumeSpecName: "kube-api-access-d2nzn") pod "b7220c41-908b-4b5c-9471-df6131575f22" (UID: "b7220c41-908b-4b5c-9471-df6131575f22"). InnerVolumeSpecName "kube-api-access-d2nzn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 21:37:32 crc kubenswrapper[4727]: I1121 21:37:31.999999 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7220c41-908b-4b5c-9471-df6131575f22-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b7220c41-908b-4b5c-9471-df6131575f22" (UID: "b7220c41-908b-4b5c-9471-df6131575f22"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 21:37:32 crc kubenswrapper[4727]: I1121 21:37:32.079586 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2nzn\" (UniqueName: \"kubernetes.io/projected/b7220c41-908b-4b5c-9471-df6131575f22-kube-api-access-d2nzn\") on node \"crc\" DevicePath \"\"" Nov 21 21:37:32 crc kubenswrapper[4727]: I1121 21:37:32.079630 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7220c41-908b-4b5c-9471-df6131575f22-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 21:37:32 crc kubenswrapper[4727]: I1121 21:37:32.079645 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7220c41-908b-4b5c-9471-df6131575f22-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 21:37:32 crc kubenswrapper[4727]: I1121 21:37:32.296689 4727 generic.go:334] "Generic (PLEG): container finished" podID="b7220c41-908b-4b5c-9471-df6131575f22" containerID="68b49707ad9ee111266c00627d0e48290d76de69af9aa7a0d5df80f69ca39e0e" exitCode=0 Nov 21 21:37:32 crc kubenswrapper[4727]: I1121 21:37:32.296798 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jzxjt" event={"ID":"b7220c41-908b-4b5c-9471-df6131575f22","Type":"ContainerDied","Data":"68b49707ad9ee111266c00627d0e48290d76de69af9aa7a0d5df80f69ca39e0e"} Nov 21 21:37:32 crc kubenswrapper[4727]: I1121 21:37:32.296859 4727 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-jzxjt" event={"ID":"b7220c41-908b-4b5c-9471-df6131575f22","Type":"ContainerDied","Data":"07ab38267a49ca085909d1a3457b1f552e996c6397534ffba82b8961c27ef679"} Nov 21 21:37:32 crc kubenswrapper[4727]: I1121 21:37:32.296884 4727 scope.go:117] "RemoveContainer" containerID="68b49707ad9ee111266c00627d0e48290d76de69af9aa7a0d5df80f69ca39e0e" Nov 21 21:37:32 crc kubenswrapper[4727]: I1121 21:37:32.296815 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jzxjt" Nov 21 21:37:32 crc kubenswrapper[4727]: I1121 21:37:32.334530 4727 scope.go:117] "RemoveContainer" containerID="fc6891b2ad3ce06929e46fdd23d7713e73d1a05954c68112cd3e202c5861f790" Nov 21 21:37:32 crc kubenswrapper[4727]: I1121 21:37:32.350761 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jzxjt"] Nov 21 21:37:32 crc kubenswrapper[4727]: I1121 21:37:32.365773 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jzxjt"] Nov 21 21:37:32 crc kubenswrapper[4727]: I1121 21:37:32.399216 4727 scope.go:117] "RemoveContainer" containerID="a27ccb3e3b44f9b37f0f6115fbc54b510edf97dc1cea55fe1edfed313c441429" Nov 21 21:37:32 crc kubenswrapper[4727]: I1121 21:37:32.423561 4727 scope.go:117] "RemoveContainer" containerID="68b49707ad9ee111266c00627d0e48290d76de69af9aa7a0d5df80f69ca39e0e" Nov 21 21:37:32 crc kubenswrapper[4727]: E1121 21:37:32.426119 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68b49707ad9ee111266c00627d0e48290d76de69af9aa7a0d5df80f69ca39e0e\": container with ID starting with 68b49707ad9ee111266c00627d0e48290d76de69af9aa7a0d5df80f69ca39e0e not found: ID does not exist" containerID="68b49707ad9ee111266c00627d0e48290d76de69af9aa7a0d5df80f69ca39e0e" Nov 21 21:37:32 crc kubenswrapper[4727]: I1121 21:37:32.426164 4727 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68b49707ad9ee111266c00627d0e48290d76de69af9aa7a0d5df80f69ca39e0e"} err="failed to get container status \"68b49707ad9ee111266c00627d0e48290d76de69af9aa7a0d5df80f69ca39e0e\": rpc error: code = NotFound desc = could not find container \"68b49707ad9ee111266c00627d0e48290d76de69af9aa7a0d5df80f69ca39e0e\": container with ID starting with 68b49707ad9ee111266c00627d0e48290d76de69af9aa7a0d5df80f69ca39e0e not found: ID does not exist" Nov 21 21:37:32 crc kubenswrapper[4727]: I1121 21:37:32.426198 4727 scope.go:117] "RemoveContainer" containerID="fc6891b2ad3ce06929e46fdd23d7713e73d1a05954c68112cd3e202c5861f790" Nov 21 21:37:32 crc kubenswrapper[4727]: E1121 21:37:32.426739 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc6891b2ad3ce06929e46fdd23d7713e73d1a05954c68112cd3e202c5861f790\": container with ID starting with fc6891b2ad3ce06929e46fdd23d7713e73d1a05954c68112cd3e202c5861f790 not found: ID does not exist" containerID="fc6891b2ad3ce06929e46fdd23d7713e73d1a05954c68112cd3e202c5861f790" Nov 21 21:37:32 crc kubenswrapper[4727]: I1121 21:37:32.426794 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc6891b2ad3ce06929e46fdd23d7713e73d1a05954c68112cd3e202c5861f790"} err="failed to get container status \"fc6891b2ad3ce06929e46fdd23d7713e73d1a05954c68112cd3e202c5861f790\": rpc error: code = NotFound desc = could not find container \"fc6891b2ad3ce06929e46fdd23d7713e73d1a05954c68112cd3e202c5861f790\": container with ID starting with fc6891b2ad3ce06929e46fdd23d7713e73d1a05954c68112cd3e202c5861f790 not found: ID does not exist" Nov 21 21:37:32 crc kubenswrapper[4727]: I1121 21:37:32.426833 4727 scope.go:117] "RemoveContainer" containerID="a27ccb3e3b44f9b37f0f6115fbc54b510edf97dc1cea55fe1edfed313c441429" Nov 21 21:37:32 crc kubenswrapper[4727]: E1121 
21:37:32.427271 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a27ccb3e3b44f9b37f0f6115fbc54b510edf97dc1cea55fe1edfed313c441429\": container with ID starting with a27ccb3e3b44f9b37f0f6115fbc54b510edf97dc1cea55fe1edfed313c441429 not found: ID does not exist" containerID="a27ccb3e3b44f9b37f0f6115fbc54b510edf97dc1cea55fe1edfed313c441429" Nov 21 21:37:32 crc kubenswrapper[4727]: I1121 21:37:32.427330 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a27ccb3e3b44f9b37f0f6115fbc54b510edf97dc1cea55fe1edfed313c441429"} err="failed to get container status \"a27ccb3e3b44f9b37f0f6115fbc54b510edf97dc1cea55fe1edfed313c441429\": rpc error: code = NotFound desc = could not find container \"a27ccb3e3b44f9b37f0f6115fbc54b510edf97dc1cea55fe1edfed313c441429\": container with ID starting with a27ccb3e3b44f9b37f0f6115fbc54b510edf97dc1cea55fe1edfed313c441429 not found: ID does not exist" Nov 21 21:37:33 crc kubenswrapper[4727]: I1121 21:37:33.517656 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7220c41-908b-4b5c-9471-df6131575f22" path="/var/lib/kubelet/pods/b7220c41-908b-4b5c-9471-df6131575f22/volumes" Nov 21 21:37:38 crc kubenswrapper[4727]: I1121 21:37:38.028560 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mfzgx" podUID="48a44156-4628-4c13-b7dd-5a897c3172e0" containerName="registry-server" probeResult="failure" output=< Nov 21 21:37:38 crc kubenswrapper[4727]: timeout: failed to connect service ":50051" within 1s Nov 21 21:37:38 crc kubenswrapper[4727]: > Nov 21 21:37:47 crc kubenswrapper[4727]: I1121 21:37:47.027878 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mfzgx" Nov 21 21:37:47 crc kubenswrapper[4727]: I1121 21:37:47.094296 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-marketplace/redhat-operators-mfzgx" Nov 21 21:37:47 crc kubenswrapper[4727]: I1121 21:37:47.276695 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mfzgx"] Nov 21 21:37:48 crc kubenswrapper[4727]: I1121 21:37:48.482795 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mfzgx" podUID="48a44156-4628-4c13-b7dd-5a897c3172e0" containerName="registry-server" containerID="cri-o://0d25612c77412ef81b519171898c413e2adb84cbc3e2534b3f3f59667cf7190e" gracePeriod=2 Nov 21 21:37:49 crc kubenswrapper[4727]: I1121 21:37:49.070058 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mfzgx" Nov 21 21:37:49 crc kubenswrapper[4727]: I1121 21:37:49.228392 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97bcf\" (UniqueName: \"kubernetes.io/projected/48a44156-4628-4c13-b7dd-5a897c3172e0-kube-api-access-97bcf\") pod \"48a44156-4628-4c13-b7dd-5a897c3172e0\" (UID: \"48a44156-4628-4c13-b7dd-5a897c3172e0\") " Nov 21 21:37:49 crc kubenswrapper[4727]: I1121 21:37:49.228494 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48a44156-4628-4c13-b7dd-5a897c3172e0-utilities\") pod \"48a44156-4628-4c13-b7dd-5a897c3172e0\" (UID: \"48a44156-4628-4c13-b7dd-5a897c3172e0\") " Nov 21 21:37:49 crc kubenswrapper[4727]: I1121 21:37:49.228620 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48a44156-4628-4c13-b7dd-5a897c3172e0-catalog-content\") pod \"48a44156-4628-4c13-b7dd-5a897c3172e0\" (UID: \"48a44156-4628-4c13-b7dd-5a897c3172e0\") " Nov 21 21:37:49 crc kubenswrapper[4727]: I1121 21:37:49.229813 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/empty-dir/48a44156-4628-4c13-b7dd-5a897c3172e0-utilities" (OuterVolumeSpecName: "utilities") pod "48a44156-4628-4c13-b7dd-5a897c3172e0" (UID: "48a44156-4628-4c13-b7dd-5a897c3172e0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 21:37:49 crc kubenswrapper[4727]: I1121 21:37:49.236675 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48a44156-4628-4c13-b7dd-5a897c3172e0-kube-api-access-97bcf" (OuterVolumeSpecName: "kube-api-access-97bcf") pod "48a44156-4628-4c13-b7dd-5a897c3172e0" (UID: "48a44156-4628-4c13-b7dd-5a897c3172e0"). InnerVolumeSpecName "kube-api-access-97bcf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 21:37:49 crc kubenswrapper[4727]: I1121 21:37:49.325165 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48a44156-4628-4c13-b7dd-5a897c3172e0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "48a44156-4628-4c13-b7dd-5a897c3172e0" (UID: "48a44156-4628-4c13-b7dd-5a897c3172e0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 21:37:49 crc kubenswrapper[4727]: I1121 21:37:49.331517 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97bcf\" (UniqueName: \"kubernetes.io/projected/48a44156-4628-4c13-b7dd-5a897c3172e0-kube-api-access-97bcf\") on node \"crc\" DevicePath \"\"" Nov 21 21:37:49 crc kubenswrapper[4727]: I1121 21:37:49.331566 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48a44156-4628-4c13-b7dd-5a897c3172e0-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 21:37:49 crc kubenswrapper[4727]: I1121 21:37:49.331578 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48a44156-4628-4c13-b7dd-5a897c3172e0-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 21:37:49 crc kubenswrapper[4727]: I1121 21:37:49.513670 4727 generic.go:334] "Generic (PLEG): container finished" podID="48a44156-4628-4c13-b7dd-5a897c3172e0" containerID="0d25612c77412ef81b519171898c413e2adb84cbc3e2534b3f3f59667cf7190e" exitCode=0 Nov 21 21:37:49 crc kubenswrapper[4727]: I1121 21:37:49.513770 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mfzgx" Nov 21 21:37:49 crc kubenswrapper[4727]: I1121 21:37:49.521532 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mfzgx" event={"ID":"48a44156-4628-4c13-b7dd-5a897c3172e0","Type":"ContainerDied","Data":"0d25612c77412ef81b519171898c413e2adb84cbc3e2534b3f3f59667cf7190e"} Nov 21 21:37:49 crc kubenswrapper[4727]: I1121 21:37:49.521587 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mfzgx" event={"ID":"48a44156-4628-4c13-b7dd-5a897c3172e0","Type":"ContainerDied","Data":"5883326b78da80f6a38f5a2d4b84d91167e48e2ca64674081f8f8c2ab14d5bb8"} Nov 21 21:37:49 crc kubenswrapper[4727]: I1121 21:37:49.521616 4727 scope.go:117] "RemoveContainer" containerID="0d25612c77412ef81b519171898c413e2adb84cbc3e2534b3f3f59667cf7190e" Nov 21 21:37:49 crc kubenswrapper[4727]: I1121 21:37:49.562724 4727 scope.go:117] "RemoveContainer" containerID="33f2ec889255dc0317b20dc9dde4d4de662a1edfba8be440bcd64bb531ac2e3a" Nov 21 21:37:49 crc kubenswrapper[4727]: I1121 21:37:49.572144 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mfzgx"] Nov 21 21:37:49 crc kubenswrapper[4727]: I1121 21:37:49.595218 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mfzgx"] Nov 21 21:37:49 crc kubenswrapper[4727]: I1121 21:37:49.610186 4727 scope.go:117] "RemoveContainer" containerID="9758944a831ca8fb723e44211fca97e3b790e79c36b44cdaa009b76983671214" Nov 21 21:37:49 crc kubenswrapper[4727]: I1121 21:37:49.666624 4727 scope.go:117] "RemoveContainer" containerID="0d25612c77412ef81b519171898c413e2adb84cbc3e2534b3f3f59667cf7190e" Nov 21 21:37:49 crc kubenswrapper[4727]: E1121 21:37:49.669134 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"0d25612c77412ef81b519171898c413e2adb84cbc3e2534b3f3f59667cf7190e\": container with ID starting with 0d25612c77412ef81b519171898c413e2adb84cbc3e2534b3f3f59667cf7190e not found: ID does not exist" containerID="0d25612c77412ef81b519171898c413e2adb84cbc3e2534b3f3f59667cf7190e" Nov 21 21:37:49 crc kubenswrapper[4727]: I1121 21:37:49.669226 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d25612c77412ef81b519171898c413e2adb84cbc3e2534b3f3f59667cf7190e"} err="failed to get container status \"0d25612c77412ef81b519171898c413e2adb84cbc3e2534b3f3f59667cf7190e\": rpc error: code = NotFound desc = could not find container \"0d25612c77412ef81b519171898c413e2adb84cbc3e2534b3f3f59667cf7190e\": container with ID starting with 0d25612c77412ef81b519171898c413e2adb84cbc3e2534b3f3f59667cf7190e not found: ID does not exist" Nov 21 21:37:49 crc kubenswrapper[4727]: I1121 21:37:49.669257 4727 scope.go:117] "RemoveContainer" containerID="33f2ec889255dc0317b20dc9dde4d4de662a1edfba8be440bcd64bb531ac2e3a" Nov 21 21:37:49 crc kubenswrapper[4727]: E1121 21:37:49.669630 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33f2ec889255dc0317b20dc9dde4d4de662a1edfba8be440bcd64bb531ac2e3a\": container with ID starting with 33f2ec889255dc0317b20dc9dde4d4de662a1edfba8be440bcd64bb531ac2e3a not found: ID does not exist" containerID="33f2ec889255dc0317b20dc9dde4d4de662a1edfba8be440bcd64bb531ac2e3a" Nov 21 21:37:49 crc kubenswrapper[4727]: I1121 21:37:49.669669 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33f2ec889255dc0317b20dc9dde4d4de662a1edfba8be440bcd64bb531ac2e3a"} err="failed to get container status \"33f2ec889255dc0317b20dc9dde4d4de662a1edfba8be440bcd64bb531ac2e3a\": rpc error: code = NotFound desc = could not find container \"33f2ec889255dc0317b20dc9dde4d4de662a1edfba8be440bcd64bb531ac2e3a\": container with ID 
starting with 33f2ec889255dc0317b20dc9dde4d4de662a1edfba8be440bcd64bb531ac2e3a not found: ID does not exist" Nov 21 21:37:49 crc kubenswrapper[4727]: I1121 21:37:49.669705 4727 scope.go:117] "RemoveContainer" containerID="9758944a831ca8fb723e44211fca97e3b790e79c36b44cdaa009b76983671214" Nov 21 21:37:49 crc kubenswrapper[4727]: E1121 21:37:49.670000 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9758944a831ca8fb723e44211fca97e3b790e79c36b44cdaa009b76983671214\": container with ID starting with 9758944a831ca8fb723e44211fca97e3b790e79c36b44cdaa009b76983671214 not found: ID does not exist" containerID="9758944a831ca8fb723e44211fca97e3b790e79c36b44cdaa009b76983671214" Nov 21 21:37:49 crc kubenswrapper[4727]: I1121 21:37:49.670067 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9758944a831ca8fb723e44211fca97e3b790e79c36b44cdaa009b76983671214"} err="failed to get container status \"9758944a831ca8fb723e44211fca97e3b790e79c36b44cdaa009b76983671214\": rpc error: code = NotFound desc = could not find container \"9758944a831ca8fb723e44211fca97e3b790e79c36b44cdaa009b76983671214\": container with ID starting with 9758944a831ca8fb723e44211fca97e3b790e79c36b44cdaa009b76983671214 not found: ID does not exist" Nov 21 21:37:51 crc kubenswrapper[4727]: I1121 21:37:51.519604 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48a44156-4628-4c13-b7dd-5a897c3172e0" path="/var/lib/kubelet/pods/48a44156-4628-4c13-b7dd-5a897c3172e0/volumes" Nov 21 21:38:13 crc kubenswrapper[4727]: I1121 21:38:13.335569 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 21:38:13 crc kubenswrapper[4727]: I1121 
21:38:13.336248 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 21:38:43 crc kubenswrapper[4727]: I1121 21:38:43.335481 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 21:38:43 crc kubenswrapper[4727]: I1121 21:38:43.336421 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 21:39:13 crc kubenswrapper[4727]: I1121 21:39:13.336105 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 21:39:13 crc kubenswrapper[4727]: I1121 21:39:13.337244 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 21:39:13 crc kubenswrapper[4727]: I1121 21:39:13.337350 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" Nov 21 21:39:13 crc kubenswrapper[4727]: I1121 21:39:13.339334 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3308ff8796b40d14ba9e0ed4c76ea5ae53d9c16a872866c1961ef970cbb58a80"} pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 21:39:13 crc kubenswrapper[4727]: I1121 21:39:13.339470 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" containerID="cri-o://3308ff8796b40d14ba9e0ed4c76ea5ae53d9c16a872866c1961ef970cbb58a80" gracePeriod=600 Nov 21 21:39:13 crc kubenswrapper[4727]: I1121 21:39:13.800903 4727 generic.go:334] "Generic (PLEG): container finished" podID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerID="3308ff8796b40d14ba9e0ed4c76ea5ae53d9c16a872866c1961ef970cbb58a80" exitCode=0 Nov 21 21:39:13 crc kubenswrapper[4727]: I1121 21:39:13.801176 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" event={"ID":"b58aef8f-f223-47d8-a2e6-4a80aeeeec42","Type":"ContainerDied","Data":"3308ff8796b40d14ba9e0ed4c76ea5ae53d9c16a872866c1961ef970cbb58a80"} Nov 21 21:39:13 crc kubenswrapper[4727]: I1121 21:39:13.802098 4727 scope.go:117] "RemoveContainer" containerID="75b59943a9026952eacc506ea41581cd574e08a8954bdaa02591aa1e2e0e7872" Nov 21 21:39:14 crc kubenswrapper[4727]: I1121 21:39:14.816134 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" 
event={"ID":"b58aef8f-f223-47d8-a2e6-4a80aeeeec42","Type":"ContainerStarted","Data":"b1ced324650334edb40de98d548ca08205e90278ed36c4d6e56bd51e043a410c"} Nov 21 21:40:29 crc kubenswrapper[4727]: I1121 21:40:29.150183 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vx7q8"] Nov 21 21:40:29 crc kubenswrapper[4727]: E1121 21:40:29.151789 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48a44156-4628-4c13-b7dd-5a897c3172e0" containerName="extract-utilities" Nov 21 21:40:29 crc kubenswrapper[4727]: I1121 21:40:29.151804 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="48a44156-4628-4c13-b7dd-5a897c3172e0" containerName="extract-utilities" Nov 21 21:40:29 crc kubenswrapper[4727]: E1121 21:40:29.151858 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7220c41-908b-4b5c-9471-df6131575f22" containerName="extract-content" Nov 21 21:40:29 crc kubenswrapper[4727]: I1121 21:40:29.151866 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7220c41-908b-4b5c-9471-df6131575f22" containerName="extract-content" Nov 21 21:40:29 crc kubenswrapper[4727]: E1121 21:40:29.151908 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7220c41-908b-4b5c-9471-df6131575f22" containerName="registry-server" Nov 21 21:40:29 crc kubenswrapper[4727]: I1121 21:40:29.151915 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7220c41-908b-4b5c-9471-df6131575f22" containerName="registry-server" Nov 21 21:40:29 crc kubenswrapper[4727]: E1121 21:40:29.151925 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7220c41-908b-4b5c-9471-df6131575f22" containerName="extract-utilities" Nov 21 21:40:29 crc kubenswrapper[4727]: I1121 21:40:29.151932 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7220c41-908b-4b5c-9471-df6131575f22" containerName="extract-utilities" Nov 21 21:40:29 crc kubenswrapper[4727]: E1121 21:40:29.151991 4727 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48a44156-4628-4c13-b7dd-5a897c3172e0" containerName="registry-server" Nov 21 21:40:29 crc kubenswrapper[4727]: I1121 21:40:29.152004 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="48a44156-4628-4c13-b7dd-5a897c3172e0" containerName="registry-server" Nov 21 21:40:29 crc kubenswrapper[4727]: E1121 21:40:29.152025 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48a44156-4628-4c13-b7dd-5a897c3172e0" containerName="extract-content" Nov 21 21:40:29 crc kubenswrapper[4727]: I1121 21:40:29.152035 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="48a44156-4628-4c13-b7dd-5a897c3172e0" containerName="extract-content" Nov 21 21:40:29 crc kubenswrapper[4727]: I1121 21:40:29.152433 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7220c41-908b-4b5c-9471-df6131575f22" containerName="registry-server" Nov 21 21:40:29 crc kubenswrapper[4727]: I1121 21:40:29.152478 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="48a44156-4628-4c13-b7dd-5a897c3172e0" containerName="registry-server" Nov 21 21:40:29 crc kubenswrapper[4727]: I1121 21:40:29.155218 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vx7q8" Nov 21 21:40:29 crc kubenswrapper[4727]: I1121 21:40:29.160432 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vx7q8"] Nov 21 21:40:29 crc kubenswrapper[4727]: I1121 21:40:29.234001 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/768f5ed8-902c-45a7-9909-bb30df856214-catalog-content\") pod \"certified-operators-vx7q8\" (UID: \"768f5ed8-902c-45a7-9909-bb30df856214\") " pod="openshift-marketplace/certified-operators-vx7q8" Nov 21 21:40:29 crc kubenswrapper[4727]: I1121 21:40:29.234283 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/768f5ed8-902c-45a7-9909-bb30df856214-utilities\") pod \"certified-operators-vx7q8\" (UID: \"768f5ed8-902c-45a7-9909-bb30df856214\") " pod="openshift-marketplace/certified-operators-vx7q8" Nov 21 21:40:29 crc kubenswrapper[4727]: I1121 21:40:29.234649 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfl2b\" (UniqueName: \"kubernetes.io/projected/768f5ed8-902c-45a7-9909-bb30df856214-kube-api-access-tfl2b\") pod \"certified-operators-vx7q8\" (UID: \"768f5ed8-902c-45a7-9909-bb30df856214\") " pod="openshift-marketplace/certified-operators-vx7q8" Nov 21 21:40:29 crc kubenswrapper[4727]: I1121 21:40:29.337641 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfl2b\" (UniqueName: \"kubernetes.io/projected/768f5ed8-902c-45a7-9909-bb30df856214-kube-api-access-tfl2b\") pod \"certified-operators-vx7q8\" (UID: \"768f5ed8-902c-45a7-9909-bb30df856214\") " pod="openshift-marketplace/certified-operators-vx7q8" Nov 21 21:40:29 crc kubenswrapper[4727]: I1121 21:40:29.337749 4727 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/768f5ed8-902c-45a7-9909-bb30df856214-catalog-content\") pod \"certified-operators-vx7q8\" (UID: \"768f5ed8-902c-45a7-9909-bb30df856214\") " pod="openshift-marketplace/certified-operators-vx7q8" Nov 21 21:40:29 crc kubenswrapper[4727]: I1121 21:40:29.337931 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/768f5ed8-902c-45a7-9909-bb30df856214-utilities\") pod \"certified-operators-vx7q8\" (UID: \"768f5ed8-902c-45a7-9909-bb30df856214\") " pod="openshift-marketplace/certified-operators-vx7q8" Nov 21 21:40:29 crc kubenswrapper[4727]: I1121 21:40:29.338439 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/768f5ed8-902c-45a7-9909-bb30df856214-catalog-content\") pod \"certified-operators-vx7q8\" (UID: \"768f5ed8-902c-45a7-9909-bb30df856214\") " pod="openshift-marketplace/certified-operators-vx7q8" Nov 21 21:40:29 crc kubenswrapper[4727]: I1121 21:40:29.338602 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/768f5ed8-902c-45a7-9909-bb30df856214-utilities\") pod \"certified-operators-vx7q8\" (UID: \"768f5ed8-902c-45a7-9909-bb30df856214\") " pod="openshift-marketplace/certified-operators-vx7q8" Nov 21 21:40:29 crc kubenswrapper[4727]: I1121 21:40:29.582764 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfl2b\" (UniqueName: \"kubernetes.io/projected/768f5ed8-902c-45a7-9909-bb30df856214-kube-api-access-tfl2b\") pod \"certified-operators-vx7q8\" (UID: \"768f5ed8-902c-45a7-9909-bb30df856214\") " pod="openshift-marketplace/certified-operators-vx7q8" Nov 21 21:40:29 crc kubenswrapper[4727]: I1121 21:40:29.791134 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vx7q8" Nov 21 21:40:30 crc kubenswrapper[4727]: I1121 21:40:30.413138 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vx7q8"] Nov 21 21:40:30 crc kubenswrapper[4727]: I1121 21:40:30.868778 4727 generic.go:334] "Generic (PLEG): container finished" podID="768f5ed8-902c-45a7-9909-bb30df856214" containerID="86445cdab9bc3b0fb0c195d46d845710c710770714091b959342d17d2a50d39f" exitCode=0 Nov 21 21:40:30 crc kubenswrapper[4727]: I1121 21:40:30.869093 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vx7q8" event={"ID":"768f5ed8-902c-45a7-9909-bb30df856214","Type":"ContainerDied","Data":"86445cdab9bc3b0fb0c195d46d845710c710770714091b959342d17d2a50d39f"} Nov 21 21:40:30 crc kubenswrapper[4727]: I1121 21:40:30.869307 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vx7q8" event={"ID":"768f5ed8-902c-45a7-9909-bb30df856214","Type":"ContainerStarted","Data":"047414efad12326f880b292ac8a044324f44d332d3617a2ae3b5b25958bddae2"} Nov 21 21:40:32 crc kubenswrapper[4727]: I1121 21:40:32.910006 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vx7q8" event={"ID":"768f5ed8-902c-45a7-9909-bb30df856214","Type":"ContainerStarted","Data":"3425877beab3742fe40af80ad9e2a83013d483e596f45b9ee53abc5ef2cf868a"} Nov 21 21:40:33 crc kubenswrapper[4727]: I1121 21:40:33.924948 4727 generic.go:334] "Generic (PLEG): container finished" podID="768f5ed8-902c-45a7-9909-bb30df856214" containerID="3425877beab3742fe40af80ad9e2a83013d483e596f45b9ee53abc5ef2cf868a" exitCode=0 Nov 21 21:40:33 crc kubenswrapper[4727]: I1121 21:40:33.925285 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vx7q8" 
event={"ID":"768f5ed8-902c-45a7-9909-bb30df856214","Type":"ContainerDied","Data":"3425877beab3742fe40af80ad9e2a83013d483e596f45b9ee53abc5ef2cf868a"} Nov 21 21:40:34 crc kubenswrapper[4727]: I1121 21:40:34.941403 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vx7q8" event={"ID":"768f5ed8-902c-45a7-9909-bb30df856214","Type":"ContainerStarted","Data":"546cf00e269232c310751db993f48c475f4dddf3ab5f9c26379ab572f7c139c3"} Nov 21 21:40:34 crc kubenswrapper[4727]: I1121 21:40:34.984569 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vx7q8" podStartSLOduration=2.541983212 podStartE2EDuration="5.984537297s" podCreationTimestamp="2025-11-21 21:40:29 +0000 UTC" firstStartedPulling="2025-11-21 21:40:30.871775173 +0000 UTC m=+5636.057960217" lastFinishedPulling="2025-11-21 21:40:34.314329248 +0000 UTC m=+5639.500514302" observedRunningTime="2025-11-21 21:40:34.972913347 +0000 UTC m=+5640.159098411" watchObservedRunningTime="2025-11-21 21:40:34.984537297 +0000 UTC m=+5640.170722341" Nov 21 21:40:39 crc kubenswrapper[4727]: I1121 21:40:39.792360 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vx7q8" Nov 21 21:40:39 crc kubenswrapper[4727]: I1121 21:40:39.793104 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vx7q8" Nov 21 21:40:39 crc kubenswrapper[4727]: I1121 21:40:39.868711 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vx7q8" Nov 21 21:40:40 crc kubenswrapper[4727]: I1121 21:40:40.084600 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vx7q8" Nov 21 21:40:40 crc kubenswrapper[4727]: I1121 21:40:40.155552 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-vx7q8"] Nov 21 21:40:42 crc kubenswrapper[4727]: I1121 21:40:42.034706 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vx7q8" podUID="768f5ed8-902c-45a7-9909-bb30df856214" containerName="registry-server" containerID="cri-o://546cf00e269232c310751db993f48c475f4dddf3ab5f9c26379ab572f7c139c3" gracePeriod=2 Nov 21 21:40:43 crc kubenswrapper[4727]: I1121 21:40:43.056211 4727 generic.go:334] "Generic (PLEG): container finished" podID="768f5ed8-902c-45a7-9909-bb30df856214" containerID="546cf00e269232c310751db993f48c475f4dddf3ab5f9c26379ab572f7c139c3" exitCode=0 Nov 21 21:40:43 crc kubenswrapper[4727]: I1121 21:40:43.056280 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vx7q8" event={"ID":"768f5ed8-902c-45a7-9909-bb30df856214","Type":"ContainerDied","Data":"546cf00e269232c310751db993f48c475f4dddf3ab5f9c26379ab572f7c139c3"} Nov 21 21:40:43 crc kubenswrapper[4727]: I1121 21:40:43.057131 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vx7q8" event={"ID":"768f5ed8-902c-45a7-9909-bb30df856214","Type":"ContainerDied","Data":"047414efad12326f880b292ac8a044324f44d332d3617a2ae3b5b25958bddae2"} Nov 21 21:40:43 crc kubenswrapper[4727]: I1121 21:40:43.057156 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="047414efad12326f880b292ac8a044324f44d332d3617a2ae3b5b25958bddae2" Nov 21 21:40:43 crc kubenswrapper[4727]: I1121 21:40:43.127620 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vx7q8" Nov 21 21:40:43 crc kubenswrapper[4727]: I1121 21:40:43.264904 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/768f5ed8-902c-45a7-9909-bb30df856214-utilities\") pod \"768f5ed8-902c-45a7-9909-bb30df856214\" (UID: \"768f5ed8-902c-45a7-9909-bb30df856214\") " Nov 21 21:40:43 crc kubenswrapper[4727]: I1121 21:40:43.265019 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/768f5ed8-902c-45a7-9909-bb30df856214-catalog-content\") pod \"768f5ed8-902c-45a7-9909-bb30df856214\" (UID: \"768f5ed8-902c-45a7-9909-bb30df856214\") " Nov 21 21:40:43 crc kubenswrapper[4727]: I1121 21:40:43.265238 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfl2b\" (UniqueName: \"kubernetes.io/projected/768f5ed8-902c-45a7-9909-bb30df856214-kube-api-access-tfl2b\") pod \"768f5ed8-902c-45a7-9909-bb30df856214\" (UID: \"768f5ed8-902c-45a7-9909-bb30df856214\") " Nov 21 21:40:43 crc kubenswrapper[4727]: I1121 21:40:43.266065 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/768f5ed8-902c-45a7-9909-bb30df856214-utilities" (OuterVolumeSpecName: "utilities") pod "768f5ed8-902c-45a7-9909-bb30df856214" (UID: "768f5ed8-902c-45a7-9909-bb30df856214"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 21:40:43 crc kubenswrapper[4727]: I1121 21:40:43.266841 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/768f5ed8-902c-45a7-9909-bb30df856214-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 21:40:43 crc kubenswrapper[4727]: I1121 21:40:43.273795 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/768f5ed8-902c-45a7-9909-bb30df856214-kube-api-access-tfl2b" (OuterVolumeSpecName: "kube-api-access-tfl2b") pod "768f5ed8-902c-45a7-9909-bb30df856214" (UID: "768f5ed8-902c-45a7-9909-bb30df856214"). InnerVolumeSpecName "kube-api-access-tfl2b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 21:40:43 crc kubenswrapper[4727]: I1121 21:40:43.310096 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/768f5ed8-902c-45a7-9909-bb30df856214-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "768f5ed8-902c-45a7-9909-bb30df856214" (UID: "768f5ed8-902c-45a7-9909-bb30df856214"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 21:40:43 crc kubenswrapper[4727]: I1121 21:40:43.370265 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/768f5ed8-902c-45a7-9909-bb30df856214-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 21:40:43 crc kubenswrapper[4727]: I1121 21:40:43.370303 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfl2b\" (UniqueName: \"kubernetes.io/projected/768f5ed8-902c-45a7-9909-bb30df856214-kube-api-access-tfl2b\") on node \"crc\" DevicePath \"\"" Nov 21 21:40:44 crc kubenswrapper[4727]: I1121 21:40:44.069768 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vx7q8" Nov 21 21:40:44 crc kubenswrapper[4727]: I1121 21:40:44.099232 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vx7q8"] Nov 21 21:40:44 crc kubenswrapper[4727]: I1121 21:40:44.110609 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vx7q8"] Nov 21 21:40:45 crc kubenswrapper[4727]: I1121 21:40:45.512906 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="768f5ed8-902c-45a7-9909-bb30df856214" path="/var/lib/kubelet/pods/768f5ed8-902c-45a7-9909-bb30df856214/volumes" Nov 21 21:41:13 crc kubenswrapper[4727]: I1121 21:41:13.336142 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 21:41:13 crc kubenswrapper[4727]: I1121 21:41:13.336704 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 21:41:43 crc kubenswrapper[4727]: I1121 21:41:43.336115 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 21:41:43 crc kubenswrapper[4727]: I1121 21:41:43.336632 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" 
podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 21:42:13 crc kubenswrapper[4727]: I1121 21:42:13.335190 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 21:42:13 crc kubenswrapper[4727]: I1121 21:42:13.336192 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 21:42:13 crc kubenswrapper[4727]: I1121 21:42:13.336274 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" Nov 21 21:42:13 crc kubenswrapper[4727]: I1121 21:42:13.337521 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b1ced324650334edb40de98d548ca08205e90278ed36c4d6e56bd51e043a410c"} pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 21:42:13 crc kubenswrapper[4727]: I1121 21:42:13.337626 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" containerID="cri-o://b1ced324650334edb40de98d548ca08205e90278ed36c4d6e56bd51e043a410c" gracePeriod=600 Nov 21 
21:42:13 crc kubenswrapper[4727]: E1121 21:42:13.460665 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:42:14 crc kubenswrapper[4727]: I1121 21:42:14.258243 4727 generic.go:334] "Generic (PLEG): container finished" podID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerID="b1ced324650334edb40de98d548ca08205e90278ed36c4d6e56bd51e043a410c" exitCode=0 Nov 21 21:42:14 crc kubenswrapper[4727]: I1121 21:42:14.258303 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" event={"ID":"b58aef8f-f223-47d8-a2e6-4a80aeeeec42","Type":"ContainerDied","Data":"b1ced324650334edb40de98d548ca08205e90278ed36c4d6e56bd51e043a410c"} Nov 21 21:42:14 crc kubenswrapper[4727]: I1121 21:42:14.258357 4727 scope.go:117] "RemoveContainer" containerID="3308ff8796b40d14ba9e0ed4c76ea5ae53d9c16a872866c1961ef970cbb58a80" Nov 21 21:42:14 crc kubenswrapper[4727]: I1121 21:42:14.259935 4727 scope.go:117] "RemoveContainer" containerID="b1ced324650334edb40de98d548ca08205e90278ed36c4d6e56bd51e043a410c" Nov 21 21:42:14 crc kubenswrapper[4727]: E1121 21:42:14.260551 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:42:17 crc kubenswrapper[4727]: I1121 21:42:17.888702 4727 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hpptp"] Nov 21 21:42:17 crc kubenswrapper[4727]: E1121 21:42:17.889762 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="768f5ed8-902c-45a7-9909-bb30df856214" containerName="extract-utilities" Nov 21 21:42:17 crc kubenswrapper[4727]: I1121 21:42:17.889778 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="768f5ed8-902c-45a7-9909-bb30df856214" containerName="extract-utilities" Nov 21 21:42:17 crc kubenswrapper[4727]: E1121 21:42:17.889808 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="768f5ed8-902c-45a7-9909-bb30df856214" containerName="registry-server" Nov 21 21:42:17 crc kubenswrapper[4727]: I1121 21:42:17.889815 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="768f5ed8-902c-45a7-9909-bb30df856214" containerName="registry-server" Nov 21 21:42:17 crc kubenswrapper[4727]: E1121 21:42:17.889837 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="768f5ed8-902c-45a7-9909-bb30df856214" containerName="extract-content" Nov 21 21:42:17 crc kubenswrapper[4727]: I1121 21:42:17.889843 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="768f5ed8-902c-45a7-9909-bb30df856214" containerName="extract-content" Nov 21 21:42:17 crc kubenswrapper[4727]: I1121 21:42:17.890096 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="768f5ed8-902c-45a7-9909-bb30df856214" containerName="registry-server" Nov 21 21:42:17 crc kubenswrapper[4727]: I1121 21:42:17.891741 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hpptp" Nov 21 21:42:17 crc kubenswrapper[4727]: I1121 21:42:17.924133 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hpptp"] Nov 21 21:42:18 crc kubenswrapper[4727]: I1121 21:42:18.013931 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d23b1ef-8e9e-409f-a50a-65d64e4293b5-utilities\") pod \"community-operators-hpptp\" (UID: \"1d23b1ef-8e9e-409f-a50a-65d64e4293b5\") " pod="openshift-marketplace/community-operators-hpptp" Nov 21 21:42:18 crc kubenswrapper[4727]: I1121 21:42:18.014018 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rw84j\" (UniqueName: \"kubernetes.io/projected/1d23b1ef-8e9e-409f-a50a-65d64e4293b5-kube-api-access-rw84j\") pod \"community-operators-hpptp\" (UID: \"1d23b1ef-8e9e-409f-a50a-65d64e4293b5\") " pod="openshift-marketplace/community-operators-hpptp" Nov 21 21:42:18 crc kubenswrapper[4727]: I1121 21:42:18.014104 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d23b1ef-8e9e-409f-a50a-65d64e4293b5-catalog-content\") pod \"community-operators-hpptp\" (UID: \"1d23b1ef-8e9e-409f-a50a-65d64e4293b5\") " pod="openshift-marketplace/community-operators-hpptp" Nov 21 21:42:18 crc kubenswrapper[4727]: I1121 21:42:18.116420 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d23b1ef-8e9e-409f-a50a-65d64e4293b5-utilities\") pod \"community-operators-hpptp\" (UID: \"1d23b1ef-8e9e-409f-a50a-65d64e4293b5\") " pod="openshift-marketplace/community-operators-hpptp" Nov 21 21:42:18 crc kubenswrapper[4727]: I1121 21:42:18.116490 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-rw84j\" (UniqueName: \"kubernetes.io/projected/1d23b1ef-8e9e-409f-a50a-65d64e4293b5-kube-api-access-rw84j\") pod \"community-operators-hpptp\" (UID: \"1d23b1ef-8e9e-409f-a50a-65d64e4293b5\") " pod="openshift-marketplace/community-operators-hpptp" Nov 21 21:42:18 crc kubenswrapper[4727]: I1121 21:42:18.116554 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d23b1ef-8e9e-409f-a50a-65d64e4293b5-catalog-content\") pod \"community-operators-hpptp\" (UID: \"1d23b1ef-8e9e-409f-a50a-65d64e4293b5\") " pod="openshift-marketplace/community-operators-hpptp" Nov 21 21:42:18 crc kubenswrapper[4727]: I1121 21:42:18.117270 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d23b1ef-8e9e-409f-a50a-65d64e4293b5-utilities\") pod \"community-operators-hpptp\" (UID: \"1d23b1ef-8e9e-409f-a50a-65d64e4293b5\") " pod="openshift-marketplace/community-operators-hpptp" Nov 21 21:42:18 crc kubenswrapper[4727]: I1121 21:42:18.117341 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d23b1ef-8e9e-409f-a50a-65d64e4293b5-catalog-content\") pod \"community-operators-hpptp\" (UID: \"1d23b1ef-8e9e-409f-a50a-65d64e4293b5\") " pod="openshift-marketplace/community-operators-hpptp" Nov 21 21:42:18 crc kubenswrapper[4727]: I1121 21:42:18.140823 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rw84j\" (UniqueName: \"kubernetes.io/projected/1d23b1ef-8e9e-409f-a50a-65d64e4293b5-kube-api-access-rw84j\") pod \"community-operators-hpptp\" (UID: \"1d23b1ef-8e9e-409f-a50a-65d64e4293b5\") " pod="openshift-marketplace/community-operators-hpptp" Nov 21 21:42:18 crc kubenswrapper[4727]: I1121 21:42:18.230888 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hpptp" Nov 21 21:42:18 crc kubenswrapper[4727]: I1121 21:42:18.816931 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hpptp"] Nov 21 21:42:19 crc kubenswrapper[4727]: I1121 21:42:19.336151 4727 generic.go:334] "Generic (PLEG): container finished" podID="1d23b1ef-8e9e-409f-a50a-65d64e4293b5" containerID="c5dbf522553a77bbe446e948f1ae9f7502f80b58f1be1568427316d8bdd950db" exitCode=0 Nov 21 21:42:19 crc kubenswrapper[4727]: I1121 21:42:19.336589 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hpptp" event={"ID":"1d23b1ef-8e9e-409f-a50a-65d64e4293b5","Type":"ContainerDied","Data":"c5dbf522553a77bbe446e948f1ae9f7502f80b58f1be1568427316d8bdd950db"} Nov 21 21:42:19 crc kubenswrapper[4727]: I1121 21:42:19.336630 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hpptp" event={"ID":"1d23b1ef-8e9e-409f-a50a-65d64e4293b5","Type":"ContainerStarted","Data":"af838380e575abd0ae2cf937b6f35b3a8f5a175089d7558f3825b91542d6116d"} Nov 21 21:42:19 crc kubenswrapper[4727]: I1121 21:42:19.340653 4727 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 21 21:42:19 crc kubenswrapper[4727]: E1121 21:42:19.361102 4727 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d23b1ef_8e9e_409f_a50a_65d64e4293b5.slice/crio-conmon-c5dbf522553a77bbe446e948f1ae9f7502f80b58f1be1568427316d8bdd950db.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d23b1ef_8e9e_409f_a50a_65d64e4293b5.slice/crio-c5dbf522553a77bbe446e948f1ae9f7502f80b58f1be1568427316d8bdd950db.scope\": RecentStats: unable to find data in memory cache]" Nov 21 21:42:20 crc 
kubenswrapper[4727]: I1121 21:42:20.351539 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hpptp" event={"ID":"1d23b1ef-8e9e-409f-a50a-65d64e4293b5","Type":"ContainerStarted","Data":"387994852a85957b95bea3a8a78f9b5a3b9eb4307f5e178c6827c9fcdd114831"} Nov 21 21:42:21 crc kubenswrapper[4727]: I1121 21:42:21.368011 4727 generic.go:334] "Generic (PLEG): container finished" podID="1d23b1ef-8e9e-409f-a50a-65d64e4293b5" containerID="387994852a85957b95bea3a8a78f9b5a3b9eb4307f5e178c6827c9fcdd114831" exitCode=0 Nov 21 21:42:21 crc kubenswrapper[4727]: I1121 21:42:21.368125 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hpptp" event={"ID":"1d23b1ef-8e9e-409f-a50a-65d64e4293b5","Type":"ContainerDied","Data":"387994852a85957b95bea3a8a78f9b5a3b9eb4307f5e178c6827c9fcdd114831"} Nov 21 21:42:22 crc kubenswrapper[4727]: I1121 21:42:22.387596 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hpptp" event={"ID":"1d23b1ef-8e9e-409f-a50a-65d64e4293b5","Type":"ContainerStarted","Data":"12759b46858bc0b74394aee41b25133ea4e208d4802056393c099a7ca5796e7d"} Nov 21 21:42:22 crc kubenswrapper[4727]: I1121 21:42:22.413546 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hpptp" podStartSLOduration=2.934037193 podStartE2EDuration="5.413527128s" podCreationTimestamp="2025-11-21 21:42:17 +0000 UTC" firstStartedPulling="2025-11-21 21:42:19.340079149 +0000 UTC m=+5744.526264203" lastFinishedPulling="2025-11-21 21:42:21.819569054 +0000 UTC m=+5747.005754138" observedRunningTime="2025-11-21 21:42:22.409138043 +0000 UTC m=+5747.595323127" watchObservedRunningTime="2025-11-21 21:42:22.413527128 +0000 UTC m=+5747.599712172" Nov 21 21:42:28 crc kubenswrapper[4727]: I1121 21:42:28.231910 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-hpptp" Nov 21 21:42:28 crc kubenswrapper[4727]: I1121 21:42:28.232813 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hpptp" Nov 21 21:42:28 crc kubenswrapper[4727]: I1121 21:42:28.292809 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hpptp" Nov 21 21:42:28 crc kubenswrapper[4727]: I1121 21:42:28.500206 4727 scope.go:117] "RemoveContainer" containerID="b1ced324650334edb40de98d548ca08205e90278ed36c4d6e56bd51e043a410c" Nov 21 21:42:28 crc kubenswrapper[4727]: E1121 21:42:28.500508 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:42:28 crc kubenswrapper[4727]: I1121 21:42:28.512515 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hpptp" Nov 21 21:42:28 crc kubenswrapper[4727]: I1121 21:42:28.564013 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hpptp"] Nov 21 21:42:30 crc kubenswrapper[4727]: I1121 21:42:30.489625 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hpptp" podUID="1d23b1ef-8e9e-409f-a50a-65d64e4293b5" containerName="registry-server" containerID="cri-o://12759b46858bc0b74394aee41b25133ea4e208d4802056393c099a7ca5796e7d" gracePeriod=2 Nov 21 21:42:31 crc kubenswrapper[4727]: I1121 21:42:31.047422 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hpptp" Nov 21 21:42:31 crc kubenswrapper[4727]: I1121 21:42:31.213706 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d23b1ef-8e9e-409f-a50a-65d64e4293b5-catalog-content\") pod \"1d23b1ef-8e9e-409f-a50a-65d64e4293b5\" (UID: \"1d23b1ef-8e9e-409f-a50a-65d64e4293b5\") " Nov 21 21:42:31 crc kubenswrapper[4727]: I1121 21:42:31.214090 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rw84j\" (UniqueName: \"kubernetes.io/projected/1d23b1ef-8e9e-409f-a50a-65d64e4293b5-kube-api-access-rw84j\") pod \"1d23b1ef-8e9e-409f-a50a-65d64e4293b5\" (UID: \"1d23b1ef-8e9e-409f-a50a-65d64e4293b5\") " Nov 21 21:42:31 crc kubenswrapper[4727]: I1121 21:42:31.214215 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d23b1ef-8e9e-409f-a50a-65d64e4293b5-utilities\") pod \"1d23b1ef-8e9e-409f-a50a-65d64e4293b5\" (UID: \"1d23b1ef-8e9e-409f-a50a-65d64e4293b5\") " Nov 21 21:42:31 crc kubenswrapper[4727]: I1121 21:42:31.214913 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d23b1ef-8e9e-409f-a50a-65d64e4293b5-utilities" (OuterVolumeSpecName: "utilities") pod "1d23b1ef-8e9e-409f-a50a-65d64e4293b5" (UID: "1d23b1ef-8e9e-409f-a50a-65d64e4293b5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 21:42:31 crc kubenswrapper[4727]: I1121 21:42:31.221755 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d23b1ef-8e9e-409f-a50a-65d64e4293b5-kube-api-access-rw84j" (OuterVolumeSpecName: "kube-api-access-rw84j") pod "1d23b1ef-8e9e-409f-a50a-65d64e4293b5" (UID: "1d23b1ef-8e9e-409f-a50a-65d64e4293b5"). InnerVolumeSpecName "kube-api-access-rw84j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 21:42:31 crc kubenswrapper[4727]: I1121 21:42:31.280743 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d23b1ef-8e9e-409f-a50a-65d64e4293b5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d23b1ef-8e9e-409f-a50a-65d64e4293b5" (UID: "1d23b1ef-8e9e-409f-a50a-65d64e4293b5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 21:42:31 crc kubenswrapper[4727]: I1121 21:42:31.317336 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rw84j\" (UniqueName: \"kubernetes.io/projected/1d23b1ef-8e9e-409f-a50a-65d64e4293b5-kube-api-access-rw84j\") on node \"crc\" DevicePath \"\"" Nov 21 21:42:31 crc kubenswrapper[4727]: I1121 21:42:31.317382 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d23b1ef-8e9e-409f-a50a-65d64e4293b5-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 21:42:31 crc kubenswrapper[4727]: I1121 21:42:31.317399 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d23b1ef-8e9e-409f-a50a-65d64e4293b5-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 21:42:31 crc kubenswrapper[4727]: I1121 21:42:31.504388 4727 generic.go:334] "Generic (PLEG): container finished" podID="1d23b1ef-8e9e-409f-a50a-65d64e4293b5" containerID="12759b46858bc0b74394aee41b25133ea4e208d4802056393c099a7ca5796e7d" exitCode=0 Nov 21 21:42:31 crc kubenswrapper[4727]: I1121 21:42:31.504525 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hpptp" Nov 21 21:42:31 crc kubenswrapper[4727]: I1121 21:42:31.523902 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hpptp" event={"ID":"1d23b1ef-8e9e-409f-a50a-65d64e4293b5","Type":"ContainerDied","Data":"12759b46858bc0b74394aee41b25133ea4e208d4802056393c099a7ca5796e7d"} Nov 21 21:42:31 crc kubenswrapper[4727]: I1121 21:42:31.523949 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hpptp" event={"ID":"1d23b1ef-8e9e-409f-a50a-65d64e4293b5","Type":"ContainerDied","Data":"af838380e575abd0ae2cf937b6f35b3a8f5a175089d7558f3825b91542d6116d"} Nov 21 21:42:31 crc kubenswrapper[4727]: I1121 21:42:31.524020 4727 scope.go:117] "RemoveContainer" containerID="12759b46858bc0b74394aee41b25133ea4e208d4802056393c099a7ca5796e7d" Nov 21 21:42:31 crc kubenswrapper[4727]: I1121 21:42:31.551657 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hpptp"] Nov 21 21:42:31 crc kubenswrapper[4727]: I1121 21:42:31.563585 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hpptp"] Nov 21 21:42:31 crc kubenswrapper[4727]: I1121 21:42:31.579772 4727 scope.go:117] "RemoveContainer" containerID="387994852a85957b95bea3a8a78f9b5a3b9eb4307f5e178c6827c9fcdd114831" Nov 21 21:42:31 crc kubenswrapper[4727]: I1121 21:42:31.607929 4727 scope.go:117] "RemoveContainer" containerID="c5dbf522553a77bbe446e948f1ae9f7502f80b58f1be1568427316d8bdd950db" Nov 21 21:42:31 crc kubenswrapper[4727]: I1121 21:42:31.701789 4727 scope.go:117] "RemoveContainer" containerID="12759b46858bc0b74394aee41b25133ea4e208d4802056393c099a7ca5796e7d" Nov 21 21:42:31 crc kubenswrapper[4727]: E1121 21:42:31.702412 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"12759b46858bc0b74394aee41b25133ea4e208d4802056393c099a7ca5796e7d\": container with ID starting with 12759b46858bc0b74394aee41b25133ea4e208d4802056393c099a7ca5796e7d not found: ID does not exist" containerID="12759b46858bc0b74394aee41b25133ea4e208d4802056393c099a7ca5796e7d" Nov 21 21:42:31 crc kubenswrapper[4727]: I1121 21:42:31.702646 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12759b46858bc0b74394aee41b25133ea4e208d4802056393c099a7ca5796e7d"} err="failed to get container status \"12759b46858bc0b74394aee41b25133ea4e208d4802056393c099a7ca5796e7d\": rpc error: code = NotFound desc = could not find container \"12759b46858bc0b74394aee41b25133ea4e208d4802056393c099a7ca5796e7d\": container with ID starting with 12759b46858bc0b74394aee41b25133ea4e208d4802056393c099a7ca5796e7d not found: ID does not exist" Nov 21 21:42:31 crc kubenswrapper[4727]: I1121 21:42:31.702780 4727 scope.go:117] "RemoveContainer" containerID="387994852a85957b95bea3a8a78f9b5a3b9eb4307f5e178c6827c9fcdd114831" Nov 21 21:42:31 crc kubenswrapper[4727]: E1121 21:42:31.703463 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"387994852a85957b95bea3a8a78f9b5a3b9eb4307f5e178c6827c9fcdd114831\": container with ID starting with 387994852a85957b95bea3a8a78f9b5a3b9eb4307f5e178c6827c9fcdd114831 not found: ID does not exist" containerID="387994852a85957b95bea3a8a78f9b5a3b9eb4307f5e178c6827c9fcdd114831" Nov 21 21:42:31 crc kubenswrapper[4727]: I1121 21:42:31.703497 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"387994852a85957b95bea3a8a78f9b5a3b9eb4307f5e178c6827c9fcdd114831"} err="failed to get container status \"387994852a85957b95bea3a8a78f9b5a3b9eb4307f5e178c6827c9fcdd114831\": rpc error: code = NotFound desc = could not find container \"387994852a85957b95bea3a8a78f9b5a3b9eb4307f5e178c6827c9fcdd114831\": container with ID 
starting with 387994852a85957b95bea3a8a78f9b5a3b9eb4307f5e178c6827c9fcdd114831 not found: ID does not exist" Nov 21 21:42:31 crc kubenswrapper[4727]: I1121 21:42:31.703523 4727 scope.go:117] "RemoveContainer" containerID="c5dbf522553a77bbe446e948f1ae9f7502f80b58f1be1568427316d8bdd950db" Nov 21 21:42:31 crc kubenswrapper[4727]: E1121 21:42:31.704081 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5dbf522553a77bbe446e948f1ae9f7502f80b58f1be1568427316d8bdd950db\": container with ID starting with c5dbf522553a77bbe446e948f1ae9f7502f80b58f1be1568427316d8bdd950db not found: ID does not exist" containerID="c5dbf522553a77bbe446e948f1ae9f7502f80b58f1be1568427316d8bdd950db" Nov 21 21:42:31 crc kubenswrapper[4727]: I1121 21:42:31.704102 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5dbf522553a77bbe446e948f1ae9f7502f80b58f1be1568427316d8bdd950db"} err="failed to get container status \"c5dbf522553a77bbe446e948f1ae9f7502f80b58f1be1568427316d8bdd950db\": rpc error: code = NotFound desc = could not find container \"c5dbf522553a77bbe446e948f1ae9f7502f80b58f1be1568427316d8bdd950db\": container with ID starting with c5dbf522553a77bbe446e948f1ae9f7502f80b58f1be1568427316d8bdd950db not found: ID does not exist" Nov 21 21:42:33 crc kubenswrapper[4727]: I1121 21:42:33.520896 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d23b1ef-8e9e-409f-a50a-65d64e4293b5" path="/var/lib/kubelet/pods/1d23b1ef-8e9e-409f-a50a-65d64e4293b5/volumes" Nov 21 21:42:42 crc kubenswrapper[4727]: I1121 21:42:42.499942 4727 scope.go:117] "RemoveContainer" containerID="b1ced324650334edb40de98d548ca08205e90278ed36c4d6e56bd51e043a410c" Nov 21 21:42:42 crc kubenswrapper[4727]: E1121 21:42:42.501236 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:42:52 crc kubenswrapper[4727]: E1121 21:42:52.828164 4727 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.179:54814->38.102.83.179:43311: write tcp 38.102.83.179:54814->38.102.83.179:43311: write: broken pipe Nov 21 21:42:55 crc kubenswrapper[4727]: I1121 21:42:55.516521 4727 scope.go:117] "RemoveContainer" containerID="b1ced324650334edb40de98d548ca08205e90278ed36c4d6e56bd51e043a410c" Nov 21 21:42:55 crc kubenswrapper[4727]: E1121 21:42:55.517652 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:43:09 crc kubenswrapper[4727]: I1121 21:43:09.499503 4727 scope.go:117] "RemoveContainer" containerID="b1ced324650334edb40de98d548ca08205e90278ed36c4d6e56bd51e043a410c" Nov 21 21:43:09 crc kubenswrapper[4727]: E1121 21:43:09.500814 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:43:21 crc kubenswrapper[4727]: I1121 21:43:21.500339 4727 scope.go:117] "RemoveContainer" 
containerID="b1ced324650334edb40de98d548ca08205e90278ed36c4d6e56bd51e043a410c" Nov 21 21:43:21 crc kubenswrapper[4727]: E1121 21:43:21.501860 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:43:36 crc kubenswrapper[4727]: I1121 21:43:36.499879 4727 scope.go:117] "RemoveContainer" containerID="b1ced324650334edb40de98d548ca08205e90278ed36c4d6e56bd51e043a410c" Nov 21 21:43:36 crc kubenswrapper[4727]: E1121 21:43:36.500825 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:43:49 crc kubenswrapper[4727]: I1121 21:43:49.499672 4727 scope.go:117] "RemoveContainer" containerID="b1ced324650334edb40de98d548ca08205e90278ed36c4d6e56bd51e043a410c" Nov 21 21:43:49 crc kubenswrapper[4727]: E1121 21:43:49.502767 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:44:04 crc kubenswrapper[4727]: I1121 21:44:04.499762 4727 scope.go:117] 
"RemoveContainer" containerID="b1ced324650334edb40de98d548ca08205e90278ed36c4d6e56bd51e043a410c" Nov 21 21:44:04 crc kubenswrapper[4727]: E1121 21:44:04.501221 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:44:16 crc kubenswrapper[4727]: I1121 21:44:16.500611 4727 scope.go:117] "RemoveContainer" containerID="b1ced324650334edb40de98d548ca08205e90278ed36c4d6e56bd51e043a410c" Nov 21 21:44:16 crc kubenswrapper[4727]: E1121 21:44:16.501897 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:44:31 crc kubenswrapper[4727]: I1121 21:44:31.500680 4727 scope.go:117] "RemoveContainer" containerID="b1ced324650334edb40de98d548ca08205e90278ed36c4d6e56bd51e043a410c" Nov 21 21:44:31 crc kubenswrapper[4727]: E1121 21:44:31.502179 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:44:44 crc kubenswrapper[4727]: I1121 21:44:44.499317 
4727 scope.go:117] "RemoveContainer" containerID="b1ced324650334edb40de98d548ca08205e90278ed36c4d6e56bd51e043a410c" Nov 21 21:44:44 crc kubenswrapper[4727]: E1121 21:44:44.500161 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:44:56 crc kubenswrapper[4727]: I1121 21:44:56.499545 4727 scope.go:117] "RemoveContainer" containerID="b1ced324650334edb40de98d548ca08205e90278ed36c4d6e56bd51e043a410c" Nov 21 21:44:56 crc kubenswrapper[4727]: E1121 21:44:56.500427 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:45:00 crc kubenswrapper[4727]: I1121 21:45:00.186550 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396025-ndr7w"] Nov 21 21:45:00 crc kubenswrapper[4727]: E1121 21:45:00.187651 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d23b1ef-8e9e-409f-a50a-65d64e4293b5" containerName="extract-content" Nov 21 21:45:00 crc kubenswrapper[4727]: I1121 21:45:00.187665 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d23b1ef-8e9e-409f-a50a-65d64e4293b5" containerName="extract-content" Nov 21 21:45:00 crc kubenswrapper[4727]: E1121 21:45:00.187700 4727 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="1d23b1ef-8e9e-409f-a50a-65d64e4293b5" containerName="extract-utilities" Nov 21 21:45:00 crc kubenswrapper[4727]: I1121 21:45:00.187710 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d23b1ef-8e9e-409f-a50a-65d64e4293b5" containerName="extract-utilities" Nov 21 21:45:00 crc kubenswrapper[4727]: E1121 21:45:00.187737 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d23b1ef-8e9e-409f-a50a-65d64e4293b5" containerName="registry-server" Nov 21 21:45:00 crc kubenswrapper[4727]: I1121 21:45:00.187743 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d23b1ef-8e9e-409f-a50a-65d64e4293b5" containerName="registry-server" Nov 21 21:45:00 crc kubenswrapper[4727]: I1121 21:45:00.187968 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d23b1ef-8e9e-409f-a50a-65d64e4293b5" containerName="registry-server" Nov 21 21:45:00 crc kubenswrapper[4727]: I1121 21:45:00.188978 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396025-ndr7w" Nov 21 21:45:00 crc kubenswrapper[4727]: I1121 21:45:00.191322 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 21 21:45:00 crc kubenswrapper[4727]: I1121 21:45:00.191814 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 21 21:45:00 crc kubenswrapper[4727]: I1121 21:45:00.200359 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6e00909b-88bc-4f3b-8028-55c190b740bd-secret-volume\") pod \"collect-profiles-29396025-ndr7w\" (UID: \"6e00909b-88bc-4f3b-8028-55c190b740bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396025-ndr7w" Nov 21 21:45:00 crc kubenswrapper[4727]: I1121 21:45:00.200610 4727 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rvsn\" (UniqueName: \"kubernetes.io/projected/6e00909b-88bc-4f3b-8028-55c190b740bd-kube-api-access-8rvsn\") pod \"collect-profiles-29396025-ndr7w\" (UID: \"6e00909b-88bc-4f3b-8028-55c190b740bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396025-ndr7w" Nov 21 21:45:00 crc kubenswrapper[4727]: I1121 21:45:00.200844 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6e00909b-88bc-4f3b-8028-55c190b740bd-config-volume\") pod \"collect-profiles-29396025-ndr7w\" (UID: \"6e00909b-88bc-4f3b-8028-55c190b740bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396025-ndr7w" Nov 21 21:45:00 crc kubenswrapper[4727]: I1121 21:45:00.201037 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396025-ndr7w"] Nov 21 21:45:00 crc kubenswrapper[4727]: I1121 21:45:00.303359 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rvsn\" (UniqueName: \"kubernetes.io/projected/6e00909b-88bc-4f3b-8028-55c190b740bd-kube-api-access-8rvsn\") pod \"collect-profiles-29396025-ndr7w\" (UID: \"6e00909b-88bc-4f3b-8028-55c190b740bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396025-ndr7w" Nov 21 21:45:00 crc kubenswrapper[4727]: I1121 21:45:00.303519 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6e00909b-88bc-4f3b-8028-55c190b740bd-config-volume\") pod \"collect-profiles-29396025-ndr7w\" (UID: \"6e00909b-88bc-4f3b-8028-55c190b740bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396025-ndr7w" Nov 21 21:45:00 crc kubenswrapper[4727]: I1121 21:45:00.303565 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6e00909b-88bc-4f3b-8028-55c190b740bd-secret-volume\") pod \"collect-profiles-29396025-ndr7w\" (UID: \"6e00909b-88bc-4f3b-8028-55c190b740bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396025-ndr7w" Nov 21 21:45:00 crc kubenswrapper[4727]: I1121 21:45:00.304472 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6e00909b-88bc-4f3b-8028-55c190b740bd-config-volume\") pod \"collect-profiles-29396025-ndr7w\" (UID: \"6e00909b-88bc-4f3b-8028-55c190b740bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396025-ndr7w" Nov 21 21:45:00 crc kubenswrapper[4727]: I1121 21:45:00.683090 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6e00909b-88bc-4f3b-8028-55c190b740bd-secret-volume\") pod \"collect-profiles-29396025-ndr7w\" (UID: \"6e00909b-88bc-4f3b-8028-55c190b740bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396025-ndr7w" Nov 21 21:45:00 crc kubenswrapper[4727]: I1121 21:45:00.683268 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rvsn\" (UniqueName: \"kubernetes.io/projected/6e00909b-88bc-4f3b-8028-55c190b740bd-kube-api-access-8rvsn\") pod \"collect-profiles-29396025-ndr7w\" (UID: \"6e00909b-88bc-4f3b-8028-55c190b740bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396025-ndr7w" Nov 21 21:45:00 crc kubenswrapper[4727]: I1121 21:45:00.822487 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396025-ndr7w" Nov 21 21:45:01 crc kubenswrapper[4727]: I1121 21:45:01.305136 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396025-ndr7w"] Nov 21 21:45:01 crc kubenswrapper[4727]: I1121 21:45:01.587066 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396025-ndr7w" event={"ID":"6e00909b-88bc-4f3b-8028-55c190b740bd","Type":"ContainerStarted","Data":"9a8909f1097418d9de1086b35de7658a234aad962699abcb221978098370194f"} Nov 21 21:45:01 crc kubenswrapper[4727]: I1121 21:45:01.587155 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396025-ndr7w" event={"ID":"6e00909b-88bc-4f3b-8028-55c190b740bd","Type":"ContainerStarted","Data":"f5676c06ca9dc7204ec750c8ba2f5379a487965c19d592cfe78ff37822e76bbb"} Nov 21 21:45:01 crc kubenswrapper[4727]: I1121 21:45:01.621927 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29396025-ndr7w" podStartSLOduration=1.6219060280000002 podStartE2EDuration="1.621906028s" podCreationTimestamp="2025-11-21 21:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 21:45:01.607565224 +0000 UTC m=+5906.793750258" watchObservedRunningTime="2025-11-21 21:45:01.621906028 +0000 UTC m=+5906.808091072" Nov 21 21:45:02 crc kubenswrapper[4727]: I1121 21:45:02.610068 4727 generic.go:334] "Generic (PLEG): container finished" podID="6e00909b-88bc-4f3b-8028-55c190b740bd" containerID="9a8909f1097418d9de1086b35de7658a234aad962699abcb221978098370194f" exitCode=0 Nov 21 21:45:02 crc kubenswrapper[4727]: I1121 21:45:02.610162 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29396025-ndr7w" event={"ID":"6e00909b-88bc-4f3b-8028-55c190b740bd","Type":"ContainerDied","Data":"9a8909f1097418d9de1086b35de7658a234aad962699abcb221978098370194f"} Nov 21 21:45:04 crc kubenswrapper[4727]: I1121 21:45:04.026858 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396025-ndr7w" Nov 21 21:45:04 crc kubenswrapper[4727]: I1121 21:45:04.095736 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rvsn\" (UniqueName: \"kubernetes.io/projected/6e00909b-88bc-4f3b-8028-55c190b740bd-kube-api-access-8rvsn\") pod \"6e00909b-88bc-4f3b-8028-55c190b740bd\" (UID: \"6e00909b-88bc-4f3b-8028-55c190b740bd\") " Nov 21 21:45:04 crc kubenswrapper[4727]: I1121 21:45:04.096149 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6e00909b-88bc-4f3b-8028-55c190b740bd-secret-volume\") pod \"6e00909b-88bc-4f3b-8028-55c190b740bd\" (UID: \"6e00909b-88bc-4f3b-8028-55c190b740bd\") " Nov 21 21:45:04 crc kubenswrapper[4727]: I1121 21:45:04.096324 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6e00909b-88bc-4f3b-8028-55c190b740bd-config-volume\") pod \"6e00909b-88bc-4f3b-8028-55c190b740bd\" (UID: \"6e00909b-88bc-4f3b-8028-55c190b740bd\") " Nov 21 21:45:04 crc kubenswrapper[4727]: I1121 21:45:04.098949 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e00909b-88bc-4f3b-8028-55c190b740bd-config-volume" (OuterVolumeSpecName: "config-volume") pod "6e00909b-88bc-4f3b-8028-55c190b740bd" (UID: "6e00909b-88bc-4f3b-8028-55c190b740bd"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 21:45:04 crc kubenswrapper[4727]: I1121 21:45:04.102306 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e00909b-88bc-4f3b-8028-55c190b740bd-kube-api-access-8rvsn" (OuterVolumeSpecName: "kube-api-access-8rvsn") pod "6e00909b-88bc-4f3b-8028-55c190b740bd" (UID: "6e00909b-88bc-4f3b-8028-55c190b740bd"). InnerVolumeSpecName "kube-api-access-8rvsn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 21:45:04 crc kubenswrapper[4727]: I1121 21:45:04.107189 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e00909b-88bc-4f3b-8028-55c190b740bd-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6e00909b-88bc-4f3b-8028-55c190b740bd" (UID: "6e00909b-88bc-4f3b-8028-55c190b740bd"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 21:45:04 crc kubenswrapper[4727]: I1121 21:45:04.201059 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rvsn\" (UniqueName: \"kubernetes.io/projected/6e00909b-88bc-4f3b-8028-55c190b740bd-kube-api-access-8rvsn\") on node \"crc\" DevicePath \"\"" Nov 21 21:45:04 crc kubenswrapper[4727]: I1121 21:45:04.201089 4727 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6e00909b-88bc-4f3b-8028-55c190b740bd-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 21 21:45:04 crc kubenswrapper[4727]: I1121 21:45:04.201099 4727 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6e00909b-88bc-4f3b-8028-55c190b740bd-config-volume\") on node \"crc\" DevicePath \"\"" Nov 21 21:45:04 crc kubenswrapper[4727]: I1121 21:45:04.389032 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395980-pbpds"] Nov 21 21:45:04 crc kubenswrapper[4727]: 
I1121 21:45:04.400733 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395980-pbpds"] Nov 21 21:45:04 crc kubenswrapper[4727]: I1121 21:45:04.633572 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396025-ndr7w" event={"ID":"6e00909b-88bc-4f3b-8028-55c190b740bd","Type":"ContainerDied","Data":"f5676c06ca9dc7204ec750c8ba2f5379a487965c19d592cfe78ff37822e76bbb"} Nov 21 21:45:04 crc kubenswrapper[4727]: I1121 21:45:04.633613 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5676c06ca9dc7204ec750c8ba2f5379a487965c19d592cfe78ff37822e76bbb" Nov 21 21:45:04 crc kubenswrapper[4727]: I1121 21:45:04.633617 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396025-ndr7w" Nov 21 21:45:05 crc kubenswrapper[4727]: I1121 21:45:05.520130 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6b39516-c80b-4586-ab1d-e97af786d86a" path="/var/lib/kubelet/pods/a6b39516-c80b-4586-ab1d-e97af786d86a/volumes" Nov 21 21:45:10 crc kubenswrapper[4727]: I1121 21:45:10.501022 4727 scope.go:117] "RemoveContainer" containerID="b1ced324650334edb40de98d548ca08205e90278ed36c4d6e56bd51e043a410c" Nov 21 21:45:10 crc kubenswrapper[4727]: E1121 21:45:10.502030 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:45:25 crc kubenswrapper[4727]: I1121 21:45:25.517390 4727 scope.go:117] "RemoveContainer" 
containerID="b1ced324650334edb40de98d548ca08205e90278ed36c4d6e56bd51e043a410c" Nov 21 21:45:25 crc kubenswrapper[4727]: E1121 21:45:25.518484 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:45:37 crc kubenswrapper[4727]: I1121 21:45:37.499939 4727 scope.go:117] "RemoveContainer" containerID="b1ced324650334edb40de98d548ca08205e90278ed36c4d6e56bd51e043a410c" Nov 21 21:45:37 crc kubenswrapper[4727]: E1121 21:45:37.501130 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:45:48 crc kubenswrapper[4727]: I1121 21:45:48.500225 4727 scope.go:117] "RemoveContainer" containerID="b1ced324650334edb40de98d548ca08205e90278ed36c4d6e56bd51e043a410c" Nov 21 21:45:48 crc kubenswrapper[4727]: E1121 21:45:48.501474 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:45:49 crc kubenswrapper[4727]: I1121 21:45:49.232354 4727 scope.go:117] 
"RemoveContainer" containerID="24173bc3534416e5d6c0f960bbd252cb3955b5209a7add7d6695535a375d79e1" Nov 21 21:46:03 crc kubenswrapper[4727]: I1121 21:46:03.499284 4727 scope.go:117] "RemoveContainer" containerID="b1ced324650334edb40de98d548ca08205e90278ed36c4d6e56bd51e043a410c" Nov 21 21:46:03 crc kubenswrapper[4727]: E1121 21:46:03.500380 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:46:16 crc kubenswrapper[4727]: I1121 21:46:16.499250 4727 scope.go:117] "RemoveContainer" containerID="b1ced324650334edb40de98d548ca08205e90278ed36c4d6e56bd51e043a410c" Nov 21 21:46:16 crc kubenswrapper[4727]: E1121 21:46:16.500192 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:46:28 crc kubenswrapper[4727]: I1121 21:46:28.500760 4727 scope.go:117] "RemoveContainer" containerID="b1ced324650334edb40de98d548ca08205e90278ed36c4d6e56bd51e043a410c" Nov 21 21:46:28 crc kubenswrapper[4727]: E1121 21:46:28.501825 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:46:43 crc kubenswrapper[4727]: I1121 21:46:43.501115 4727 scope.go:117] "RemoveContainer" containerID="b1ced324650334edb40de98d548ca08205e90278ed36c4d6e56bd51e043a410c" Nov 21 21:46:43 crc kubenswrapper[4727]: E1121 21:46:43.502107 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:46:49 crc kubenswrapper[4727]: I1121 21:46:49.333835 4727 scope.go:117] "RemoveContainer" containerID="546cf00e269232c310751db993f48c475f4dddf3ab5f9c26379ab572f7c139c3" Nov 21 21:46:49 crc kubenswrapper[4727]: I1121 21:46:49.367192 4727 scope.go:117] "RemoveContainer" containerID="86445cdab9bc3b0fb0c195d46d845710c710770714091b959342d17d2a50d39f" Nov 21 21:46:49 crc kubenswrapper[4727]: I1121 21:46:49.409829 4727 scope.go:117] "RemoveContainer" containerID="3425877beab3742fe40af80ad9e2a83013d483e596f45b9ee53abc5ef2cf868a" Nov 21 21:46:57 crc kubenswrapper[4727]: I1121 21:46:57.499870 4727 scope.go:117] "RemoveContainer" containerID="b1ced324650334edb40de98d548ca08205e90278ed36c4d6e56bd51e043a410c" Nov 21 21:46:57 crc kubenswrapper[4727]: E1121 21:46:57.500943 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 
21 21:47:09 crc kubenswrapper[4727]: I1121 21:47:09.499663 4727 scope.go:117] "RemoveContainer" containerID="b1ced324650334edb40de98d548ca08205e90278ed36c4d6e56bd51e043a410c" Nov 21 21:47:09 crc kubenswrapper[4727]: E1121 21:47:09.501168 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:47:21 crc kubenswrapper[4727]: I1121 21:47:21.501028 4727 scope.go:117] "RemoveContainer" containerID="b1ced324650334edb40de98d548ca08205e90278ed36c4d6e56bd51e043a410c" Nov 21 21:47:22 crc kubenswrapper[4727]: I1121 21:47:22.077324 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" event={"ID":"b58aef8f-f223-47d8-a2e6-4a80aeeeec42","Type":"ContainerStarted","Data":"fe6d2511e1d89189edbe7d9b94e069ef4261c4388aba56dcc414a3c022fd4190"} Nov 21 21:47:28 crc kubenswrapper[4727]: I1121 21:47:28.798469 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ps686"] Nov 21 21:47:28 crc kubenswrapper[4727]: E1121 21:47:28.800069 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e00909b-88bc-4f3b-8028-55c190b740bd" containerName="collect-profiles" Nov 21 21:47:28 crc kubenswrapper[4727]: I1121 21:47:28.800099 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e00909b-88bc-4f3b-8028-55c190b740bd" containerName="collect-profiles" Nov 21 21:47:28 crc kubenswrapper[4727]: I1121 21:47:28.800456 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e00909b-88bc-4f3b-8028-55c190b740bd" containerName="collect-profiles" Nov 21 21:47:28 crc kubenswrapper[4727]: 
I1121 21:47:28.803369 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ps686" Nov 21 21:47:28 crc kubenswrapper[4727]: I1121 21:47:28.825245 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ps686"] Nov 21 21:47:28 crc kubenswrapper[4727]: I1121 21:47:28.857073 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9543408e-a237-4efd-af18-25f10a721233-catalog-content\") pod \"redhat-marketplace-ps686\" (UID: \"9543408e-a237-4efd-af18-25f10a721233\") " pod="openshift-marketplace/redhat-marketplace-ps686" Nov 21 21:47:28 crc kubenswrapper[4727]: I1121 21:47:28.857337 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89cmr\" (UniqueName: \"kubernetes.io/projected/9543408e-a237-4efd-af18-25f10a721233-kube-api-access-89cmr\") pod \"redhat-marketplace-ps686\" (UID: \"9543408e-a237-4efd-af18-25f10a721233\") " pod="openshift-marketplace/redhat-marketplace-ps686" Nov 21 21:47:28 crc kubenswrapper[4727]: I1121 21:47:28.857504 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9543408e-a237-4efd-af18-25f10a721233-utilities\") pod \"redhat-marketplace-ps686\" (UID: \"9543408e-a237-4efd-af18-25f10a721233\") " pod="openshift-marketplace/redhat-marketplace-ps686" Nov 21 21:47:28 crc kubenswrapper[4727]: I1121 21:47:28.960554 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9543408e-a237-4efd-af18-25f10a721233-catalog-content\") pod \"redhat-marketplace-ps686\" (UID: \"9543408e-a237-4efd-af18-25f10a721233\") " pod="openshift-marketplace/redhat-marketplace-ps686" Nov 21 21:47:28 crc 
kubenswrapper[4727]: I1121 21:47:28.961091 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89cmr\" (UniqueName: \"kubernetes.io/projected/9543408e-a237-4efd-af18-25f10a721233-kube-api-access-89cmr\") pod \"redhat-marketplace-ps686\" (UID: \"9543408e-a237-4efd-af18-25f10a721233\") " pod="openshift-marketplace/redhat-marketplace-ps686" Nov 21 21:47:28 crc kubenswrapper[4727]: I1121 21:47:28.961260 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9543408e-a237-4efd-af18-25f10a721233-utilities\") pod \"redhat-marketplace-ps686\" (UID: \"9543408e-a237-4efd-af18-25f10a721233\") " pod="openshift-marketplace/redhat-marketplace-ps686" Nov 21 21:47:28 crc kubenswrapper[4727]: I1121 21:47:28.961281 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9543408e-a237-4efd-af18-25f10a721233-catalog-content\") pod \"redhat-marketplace-ps686\" (UID: \"9543408e-a237-4efd-af18-25f10a721233\") " pod="openshift-marketplace/redhat-marketplace-ps686" Nov 21 21:47:28 crc kubenswrapper[4727]: I1121 21:47:28.961572 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9543408e-a237-4efd-af18-25f10a721233-utilities\") pod \"redhat-marketplace-ps686\" (UID: \"9543408e-a237-4efd-af18-25f10a721233\") " pod="openshift-marketplace/redhat-marketplace-ps686" Nov 21 21:47:28 crc kubenswrapper[4727]: I1121 21:47:28.988581 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89cmr\" (UniqueName: \"kubernetes.io/projected/9543408e-a237-4efd-af18-25f10a721233-kube-api-access-89cmr\") pod \"redhat-marketplace-ps686\" (UID: \"9543408e-a237-4efd-af18-25f10a721233\") " pod="openshift-marketplace/redhat-marketplace-ps686" Nov 21 21:47:29 crc kubenswrapper[4727]: I1121 21:47:29.151100 4727 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ps686" Nov 21 21:47:29 crc kubenswrapper[4727]: I1121 21:47:29.704128 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ps686"] Nov 21 21:47:30 crc kubenswrapper[4727]: I1121 21:47:30.198707 4727 generic.go:334] "Generic (PLEG): container finished" podID="9543408e-a237-4efd-af18-25f10a721233" containerID="23051ba4a79c71e0ffcb759a8ee2b88cc3ea937f72cd47f5af29278f47e73a10" exitCode=0 Nov 21 21:47:30 crc kubenswrapper[4727]: I1121 21:47:30.198874 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ps686" event={"ID":"9543408e-a237-4efd-af18-25f10a721233","Type":"ContainerDied","Data":"23051ba4a79c71e0ffcb759a8ee2b88cc3ea937f72cd47f5af29278f47e73a10"} Nov 21 21:47:30 crc kubenswrapper[4727]: I1121 21:47:30.199155 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ps686" event={"ID":"9543408e-a237-4efd-af18-25f10a721233","Type":"ContainerStarted","Data":"efaa7001582ee64062b99eb8d01501218375d017b7e205e608eabeabd5642350"} Nov 21 21:47:30 crc kubenswrapper[4727]: I1121 21:47:30.202121 4727 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 21 21:47:32 crc kubenswrapper[4727]: I1121 21:47:32.228278 4727 generic.go:334] "Generic (PLEG): container finished" podID="9543408e-a237-4efd-af18-25f10a721233" containerID="c6ca442f00144605ea9a89d2f5a45ed19ed225c734475284e41731bf3123217e" exitCode=0 Nov 21 21:47:32 crc kubenswrapper[4727]: I1121 21:47:32.228337 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ps686" event={"ID":"9543408e-a237-4efd-af18-25f10a721233","Type":"ContainerDied","Data":"c6ca442f00144605ea9a89d2f5a45ed19ed225c734475284e41731bf3123217e"} Nov 21 21:47:33 crc kubenswrapper[4727]: I1121 21:47:33.248109 
4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ps686" event={"ID":"9543408e-a237-4efd-af18-25f10a721233","Type":"ContainerStarted","Data":"077d3468af6aaed8ae9c557122b143f21b89d1008fb409d440121c621c478a10"} Nov 21 21:47:33 crc kubenswrapper[4727]: I1121 21:47:33.284353 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ps686" podStartSLOduration=2.871752512 podStartE2EDuration="5.284318631s" podCreationTimestamp="2025-11-21 21:47:28 +0000 UTC" firstStartedPulling="2025-11-21 21:47:30.201812245 +0000 UTC m=+6055.387997289" lastFinishedPulling="2025-11-21 21:47:32.614378364 +0000 UTC m=+6057.800563408" observedRunningTime="2025-11-21 21:47:33.273118072 +0000 UTC m=+6058.459303146" watchObservedRunningTime="2025-11-21 21:47:33.284318631 +0000 UTC m=+6058.470503705" Nov 21 21:47:39 crc kubenswrapper[4727]: I1121 21:47:39.152197 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ps686" Nov 21 21:47:39 crc kubenswrapper[4727]: I1121 21:47:39.152874 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ps686" Nov 21 21:47:39 crc kubenswrapper[4727]: I1121 21:47:39.215089 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ps686" Nov 21 21:47:39 crc kubenswrapper[4727]: I1121 21:47:39.403793 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ps686" Nov 21 21:47:39 crc kubenswrapper[4727]: I1121 21:47:39.469777 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ps686"] Nov 21 21:47:41 crc kubenswrapper[4727]: I1121 21:47:41.361323 4727 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-marketplace-ps686" podUID="9543408e-a237-4efd-af18-25f10a721233" containerName="registry-server" containerID="cri-o://077d3468af6aaed8ae9c557122b143f21b89d1008fb409d440121c621c478a10" gracePeriod=2 Nov 21 21:47:41 crc kubenswrapper[4727]: I1121 21:47:41.919177 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ps686" Nov 21 21:47:42 crc kubenswrapper[4727]: I1121 21:47:42.037611 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9543408e-a237-4efd-af18-25f10a721233-catalog-content\") pod \"9543408e-a237-4efd-af18-25f10a721233\" (UID: \"9543408e-a237-4efd-af18-25f10a721233\") " Nov 21 21:47:42 crc kubenswrapper[4727]: I1121 21:47:42.037845 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9543408e-a237-4efd-af18-25f10a721233-utilities\") pod \"9543408e-a237-4efd-af18-25f10a721233\" (UID: \"9543408e-a237-4efd-af18-25f10a721233\") " Nov 21 21:47:42 crc kubenswrapper[4727]: I1121 21:47:42.038187 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89cmr\" (UniqueName: \"kubernetes.io/projected/9543408e-a237-4efd-af18-25f10a721233-kube-api-access-89cmr\") pod \"9543408e-a237-4efd-af18-25f10a721233\" (UID: \"9543408e-a237-4efd-af18-25f10a721233\") " Nov 21 21:47:42 crc kubenswrapper[4727]: I1121 21:47:42.038871 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9543408e-a237-4efd-af18-25f10a721233-utilities" (OuterVolumeSpecName: "utilities") pod "9543408e-a237-4efd-af18-25f10a721233" (UID: "9543408e-a237-4efd-af18-25f10a721233"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 21:47:42 crc kubenswrapper[4727]: I1121 21:47:42.044994 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9543408e-a237-4efd-af18-25f10a721233-kube-api-access-89cmr" (OuterVolumeSpecName: "kube-api-access-89cmr") pod "9543408e-a237-4efd-af18-25f10a721233" (UID: "9543408e-a237-4efd-af18-25f10a721233"). InnerVolumeSpecName "kube-api-access-89cmr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 21:47:42 crc kubenswrapper[4727]: I1121 21:47:42.054190 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9543408e-a237-4efd-af18-25f10a721233-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9543408e-a237-4efd-af18-25f10a721233" (UID: "9543408e-a237-4efd-af18-25f10a721233"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 21:47:42 crc kubenswrapper[4727]: I1121 21:47:42.141731 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89cmr\" (UniqueName: \"kubernetes.io/projected/9543408e-a237-4efd-af18-25f10a721233-kube-api-access-89cmr\") on node \"crc\" DevicePath \"\"" Nov 21 21:47:42 crc kubenswrapper[4727]: I1121 21:47:42.141802 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9543408e-a237-4efd-af18-25f10a721233-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 21:47:42 crc kubenswrapper[4727]: I1121 21:47:42.141832 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9543408e-a237-4efd-af18-25f10a721233-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 21:47:42 crc kubenswrapper[4727]: I1121 21:47:42.379935 4727 generic.go:334] "Generic (PLEG): container finished" podID="9543408e-a237-4efd-af18-25f10a721233" 
containerID="077d3468af6aaed8ae9c557122b143f21b89d1008fb409d440121c621c478a10" exitCode=0 Nov 21 21:47:42 crc kubenswrapper[4727]: I1121 21:47:42.380060 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ps686" Nov 21 21:47:42 crc kubenswrapper[4727]: I1121 21:47:42.380099 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ps686" event={"ID":"9543408e-a237-4efd-af18-25f10a721233","Type":"ContainerDied","Data":"077d3468af6aaed8ae9c557122b143f21b89d1008fb409d440121c621c478a10"} Nov 21 21:47:42 crc kubenswrapper[4727]: I1121 21:47:42.380494 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ps686" event={"ID":"9543408e-a237-4efd-af18-25f10a721233","Type":"ContainerDied","Data":"efaa7001582ee64062b99eb8d01501218375d017b7e205e608eabeabd5642350"} Nov 21 21:47:42 crc kubenswrapper[4727]: I1121 21:47:42.380533 4727 scope.go:117] "RemoveContainer" containerID="077d3468af6aaed8ae9c557122b143f21b89d1008fb409d440121c621c478a10" Nov 21 21:47:42 crc kubenswrapper[4727]: I1121 21:47:42.430516 4727 scope.go:117] "RemoveContainer" containerID="c6ca442f00144605ea9a89d2f5a45ed19ed225c734475284e41731bf3123217e" Nov 21 21:47:42 crc kubenswrapper[4727]: I1121 21:47:42.446022 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ps686"] Nov 21 21:47:42 crc kubenswrapper[4727]: I1121 21:47:42.463347 4727 scope.go:117] "RemoveContainer" containerID="23051ba4a79c71e0ffcb759a8ee2b88cc3ea937f72cd47f5af29278f47e73a10" Nov 21 21:47:42 crc kubenswrapper[4727]: I1121 21:47:42.465946 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ps686"] Nov 21 21:47:42 crc kubenswrapper[4727]: I1121 21:47:42.528542 4727 scope.go:117] "RemoveContainer" containerID="077d3468af6aaed8ae9c557122b143f21b89d1008fb409d440121c621c478a10" Nov 21 
21:47:42 crc kubenswrapper[4727]: E1121 21:47:42.530170 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"077d3468af6aaed8ae9c557122b143f21b89d1008fb409d440121c621c478a10\": container with ID starting with 077d3468af6aaed8ae9c557122b143f21b89d1008fb409d440121c621c478a10 not found: ID does not exist" containerID="077d3468af6aaed8ae9c557122b143f21b89d1008fb409d440121c621c478a10" Nov 21 21:47:42 crc kubenswrapper[4727]: I1121 21:47:42.530478 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"077d3468af6aaed8ae9c557122b143f21b89d1008fb409d440121c621c478a10"} err="failed to get container status \"077d3468af6aaed8ae9c557122b143f21b89d1008fb409d440121c621c478a10\": rpc error: code = NotFound desc = could not find container \"077d3468af6aaed8ae9c557122b143f21b89d1008fb409d440121c621c478a10\": container with ID starting with 077d3468af6aaed8ae9c557122b143f21b89d1008fb409d440121c621c478a10 not found: ID does not exist" Nov 21 21:47:42 crc kubenswrapper[4727]: I1121 21:47:42.530700 4727 scope.go:117] "RemoveContainer" containerID="c6ca442f00144605ea9a89d2f5a45ed19ed225c734475284e41731bf3123217e" Nov 21 21:47:42 crc kubenswrapper[4727]: E1121 21:47:42.531542 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6ca442f00144605ea9a89d2f5a45ed19ed225c734475284e41731bf3123217e\": container with ID starting with c6ca442f00144605ea9a89d2f5a45ed19ed225c734475284e41731bf3123217e not found: ID does not exist" containerID="c6ca442f00144605ea9a89d2f5a45ed19ed225c734475284e41731bf3123217e" Nov 21 21:47:42 crc kubenswrapper[4727]: I1121 21:47:42.531604 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6ca442f00144605ea9a89d2f5a45ed19ed225c734475284e41731bf3123217e"} err="failed to get container status 
\"c6ca442f00144605ea9a89d2f5a45ed19ed225c734475284e41731bf3123217e\": rpc error: code = NotFound desc = could not find container \"c6ca442f00144605ea9a89d2f5a45ed19ed225c734475284e41731bf3123217e\": container with ID starting with c6ca442f00144605ea9a89d2f5a45ed19ed225c734475284e41731bf3123217e not found: ID does not exist" Nov 21 21:47:42 crc kubenswrapper[4727]: I1121 21:47:42.531646 4727 scope.go:117] "RemoveContainer" containerID="23051ba4a79c71e0ffcb759a8ee2b88cc3ea937f72cd47f5af29278f47e73a10" Nov 21 21:47:42 crc kubenswrapper[4727]: E1121 21:47:42.532102 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23051ba4a79c71e0ffcb759a8ee2b88cc3ea937f72cd47f5af29278f47e73a10\": container with ID starting with 23051ba4a79c71e0ffcb759a8ee2b88cc3ea937f72cd47f5af29278f47e73a10 not found: ID does not exist" containerID="23051ba4a79c71e0ffcb759a8ee2b88cc3ea937f72cd47f5af29278f47e73a10" Nov 21 21:47:42 crc kubenswrapper[4727]: I1121 21:47:42.532154 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23051ba4a79c71e0ffcb759a8ee2b88cc3ea937f72cd47f5af29278f47e73a10"} err="failed to get container status \"23051ba4a79c71e0ffcb759a8ee2b88cc3ea937f72cd47f5af29278f47e73a10\": rpc error: code = NotFound desc = could not find container \"23051ba4a79c71e0ffcb759a8ee2b88cc3ea937f72cd47f5af29278f47e73a10\": container with ID starting with 23051ba4a79c71e0ffcb759a8ee2b88cc3ea937f72cd47f5af29278f47e73a10 not found: ID does not exist" Nov 21 21:47:43 crc kubenswrapper[4727]: I1121 21:47:43.526353 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9543408e-a237-4efd-af18-25f10a721233" path="/var/lib/kubelet/pods/9543408e-a237-4efd-af18-25f10a721233/volumes" Nov 21 21:48:59 crc kubenswrapper[4727]: I1121 21:48:59.158328 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-sbw5k"] Nov 21 21:48:59 crc 
kubenswrapper[4727]: E1121 21:48:59.161297 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9543408e-a237-4efd-af18-25f10a721233" containerName="extract-utilities" Nov 21 21:48:59 crc kubenswrapper[4727]: I1121 21:48:59.161452 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="9543408e-a237-4efd-af18-25f10a721233" containerName="extract-utilities" Nov 21 21:48:59 crc kubenswrapper[4727]: E1121 21:48:59.161569 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9543408e-a237-4efd-af18-25f10a721233" containerName="registry-server" Nov 21 21:48:59 crc kubenswrapper[4727]: I1121 21:48:59.162023 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="9543408e-a237-4efd-af18-25f10a721233" containerName="registry-server" Nov 21 21:48:59 crc kubenswrapper[4727]: E1121 21:48:59.162241 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9543408e-a237-4efd-af18-25f10a721233" containerName="extract-content" Nov 21 21:48:59 crc kubenswrapper[4727]: I1121 21:48:59.162368 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="9543408e-a237-4efd-af18-25f10a721233" containerName="extract-content" Nov 21 21:48:59 crc kubenswrapper[4727]: I1121 21:48:59.163002 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="9543408e-a237-4efd-af18-25f10a721233" containerName="registry-server" Nov 21 21:48:59 crc kubenswrapper[4727]: I1121 21:48:59.165582 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sbw5k" Nov 21 21:48:59 crc kubenswrapper[4727]: I1121 21:48:59.173828 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sbw5k"] Nov 21 21:48:59 crc kubenswrapper[4727]: I1121 21:48:59.231367 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4mgh\" (UniqueName: \"kubernetes.io/projected/7b3dddd5-81c1-408d-b10b-fded07c948dd-kube-api-access-r4mgh\") pod \"redhat-operators-sbw5k\" (UID: \"7b3dddd5-81c1-408d-b10b-fded07c948dd\") " pod="openshift-marketplace/redhat-operators-sbw5k" Nov 21 21:48:59 crc kubenswrapper[4727]: I1121 21:48:59.231452 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b3dddd5-81c1-408d-b10b-fded07c948dd-catalog-content\") pod \"redhat-operators-sbw5k\" (UID: \"7b3dddd5-81c1-408d-b10b-fded07c948dd\") " pod="openshift-marketplace/redhat-operators-sbw5k" Nov 21 21:48:59 crc kubenswrapper[4727]: I1121 21:48:59.231929 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b3dddd5-81c1-408d-b10b-fded07c948dd-utilities\") pod \"redhat-operators-sbw5k\" (UID: \"7b3dddd5-81c1-408d-b10b-fded07c948dd\") " pod="openshift-marketplace/redhat-operators-sbw5k" Nov 21 21:48:59 crc kubenswrapper[4727]: I1121 21:48:59.335258 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4mgh\" (UniqueName: \"kubernetes.io/projected/7b3dddd5-81c1-408d-b10b-fded07c948dd-kube-api-access-r4mgh\") pod \"redhat-operators-sbw5k\" (UID: \"7b3dddd5-81c1-408d-b10b-fded07c948dd\") " pod="openshift-marketplace/redhat-operators-sbw5k" Nov 21 21:48:59 crc kubenswrapper[4727]: I1121 21:48:59.335342 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b3dddd5-81c1-408d-b10b-fded07c948dd-catalog-content\") pod \"redhat-operators-sbw5k\" (UID: \"7b3dddd5-81c1-408d-b10b-fded07c948dd\") " pod="openshift-marketplace/redhat-operators-sbw5k" Nov 21 21:48:59 crc kubenswrapper[4727]: I1121 21:48:59.335425 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b3dddd5-81c1-408d-b10b-fded07c948dd-utilities\") pod \"redhat-operators-sbw5k\" (UID: \"7b3dddd5-81c1-408d-b10b-fded07c948dd\") " pod="openshift-marketplace/redhat-operators-sbw5k" Nov 21 21:48:59 crc kubenswrapper[4727]: I1121 21:48:59.336137 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b3dddd5-81c1-408d-b10b-fded07c948dd-catalog-content\") pod \"redhat-operators-sbw5k\" (UID: \"7b3dddd5-81c1-408d-b10b-fded07c948dd\") " pod="openshift-marketplace/redhat-operators-sbw5k" Nov 21 21:48:59 crc kubenswrapper[4727]: I1121 21:48:59.336154 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b3dddd5-81c1-408d-b10b-fded07c948dd-utilities\") pod \"redhat-operators-sbw5k\" (UID: \"7b3dddd5-81c1-408d-b10b-fded07c948dd\") " pod="openshift-marketplace/redhat-operators-sbw5k" Nov 21 21:48:59 crc kubenswrapper[4727]: I1121 21:48:59.361131 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4mgh\" (UniqueName: \"kubernetes.io/projected/7b3dddd5-81c1-408d-b10b-fded07c948dd-kube-api-access-r4mgh\") pod \"redhat-operators-sbw5k\" (UID: \"7b3dddd5-81c1-408d-b10b-fded07c948dd\") " pod="openshift-marketplace/redhat-operators-sbw5k" Nov 21 21:48:59 crc kubenswrapper[4727]: I1121 21:48:59.502458 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sbw5k" Nov 21 21:49:00 crc kubenswrapper[4727]: I1121 21:49:00.059377 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sbw5k"] Nov 21 21:49:00 crc kubenswrapper[4727]: I1121 21:49:00.687291 4727 generic.go:334] "Generic (PLEG): container finished" podID="7b3dddd5-81c1-408d-b10b-fded07c948dd" containerID="d6450d329144f064af4a1aa373d1ef5db15f0a4b14e3048a1b4ce412f987fd51" exitCode=0 Nov 21 21:49:00 crc kubenswrapper[4727]: I1121 21:49:00.687781 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sbw5k" event={"ID":"7b3dddd5-81c1-408d-b10b-fded07c948dd","Type":"ContainerDied","Data":"d6450d329144f064af4a1aa373d1ef5db15f0a4b14e3048a1b4ce412f987fd51"} Nov 21 21:49:00 crc kubenswrapper[4727]: I1121 21:49:00.687847 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sbw5k" event={"ID":"7b3dddd5-81c1-408d-b10b-fded07c948dd","Type":"ContainerStarted","Data":"3049bf3c881f6f5845a345c14aca024426ac082fc5330441e5a7973b891eea53"} Nov 21 21:49:01 crc kubenswrapper[4727]: I1121 21:49:01.712157 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sbw5k" event={"ID":"7b3dddd5-81c1-408d-b10b-fded07c948dd","Type":"ContainerStarted","Data":"1ced26e37142e8e985cdff41a7d2d22f9cbe22d27e79243c4de0b87a1ed9f126"} Nov 21 21:49:05 crc kubenswrapper[4727]: I1121 21:49:05.761593 4727 generic.go:334] "Generic (PLEG): container finished" podID="7b3dddd5-81c1-408d-b10b-fded07c948dd" containerID="1ced26e37142e8e985cdff41a7d2d22f9cbe22d27e79243c4de0b87a1ed9f126" exitCode=0 Nov 21 21:49:05 crc kubenswrapper[4727]: I1121 21:49:05.761707 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sbw5k" 
event={"ID":"7b3dddd5-81c1-408d-b10b-fded07c948dd","Type":"ContainerDied","Data":"1ced26e37142e8e985cdff41a7d2d22f9cbe22d27e79243c4de0b87a1ed9f126"} Nov 21 21:49:06 crc kubenswrapper[4727]: I1121 21:49:06.780239 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sbw5k" event={"ID":"7b3dddd5-81c1-408d-b10b-fded07c948dd","Type":"ContainerStarted","Data":"a7e87cdc827130d6cb8b8a0c8ce1b58e824bd206b0482d68ea2e02c58e372a7f"} Nov 21 21:49:06 crc kubenswrapper[4727]: I1121 21:49:06.811238 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-sbw5k" podStartSLOduration=2.350770249 podStartE2EDuration="7.811213753s" podCreationTimestamp="2025-11-21 21:48:59 +0000 UTC" firstStartedPulling="2025-11-21 21:49:00.696379614 +0000 UTC m=+6145.882564658" lastFinishedPulling="2025-11-21 21:49:06.156823118 +0000 UTC m=+6151.343008162" observedRunningTime="2025-11-21 21:49:06.807527955 +0000 UTC m=+6151.993712999" watchObservedRunningTime="2025-11-21 21:49:06.811213753 +0000 UTC m=+6151.997398797" Nov 21 21:49:09 crc kubenswrapper[4727]: I1121 21:49:09.517589 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-sbw5k" Nov 21 21:49:09 crc kubenswrapper[4727]: I1121 21:49:09.518359 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-sbw5k" Nov 21 21:49:10 crc kubenswrapper[4727]: I1121 21:49:10.580285 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-sbw5k" podUID="7b3dddd5-81c1-408d-b10b-fded07c948dd" containerName="registry-server" probeResult="failure" output=< Nov 21 21:49:10 crc kubenswrapper[4727]: timeout: failed to connect service ":50051" within 1s Nov 21 21:49:10 crc kubenswrapper[4727]: > Nov 21 21:49:20 crc kubenswrapper[4727]: I1121 21:49:20.559777 4727 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-marketplace/redhat-operators-sbw5k" podUID="7b3dddd5-81c1-408d-b10b-fded07c948dd" containerName="registry-server" probeResult="failure" output=< Nov 21 21:49:20 crc kubenswrapper[4727]: timeout: failed to connect service ":50051" within 1s Nov 21 21:49:20 crc kubenswrapper[4727]: > Nov 21 21:49:29 crc kubenswrapper[4727]: I1121 21:49:29.577939 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-sbw5k" Nov 21 21:49:29 crc kubenswrapper[4727]: I1121 21:49:29.632488 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-sbw5k" Nov 21 21:49:30 crc kubenswrapper[4727]: I1121 21:49:30.357063 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sbw5k"] Nov 21 21:49:31 crc kubenswrapper[4727]: I1121 21:49:31.136724 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-sbw5k" podUID="7b3dddd5-81c1-408d-b10b-fded07c948dd" containerName="registry-server" containerID="cri-o://a7e87cdc827130d6cb8b8a0c8ce1b58e824bd206b0482d68ea2e02c58e372a7f" gracePeriod=2 Nov 21 21:49:31 crc kubenswrapper[4727]: I1121 21:49:31.857252 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sbw5k" Nov 21 21:49:31 crc kubenswrapper[4727]: I1121 21:49:31.982347 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b3dddd5-81c1-408d-b10b-fded07c948dd-catalog-content\") pod \"7b3dddd5-81c1-408d-b10b-fded07c948dd\" (UID: \"7b3dddd5-81c1-408d-b10b-fded07c948dd\") " Nov 21 21:49:31 crc kubenswrapper[4727]: I1121 21:49:31.983152 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b3dddd5-81c1-408d-b10b-fded07c948dd-utilities\") pod \"7b3dddd5-81c1-408d-b10b-fded07c948dd\" (UID: \"7b3dddd5-81c1-408d-b10b-fded07c948dd\") " Nov 21 21:49:31 crc kubenswrapper[4727]: I1121 21:49:31.983213 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4mgh\" (UniqueName: \"kubernetes.io/projected/7b3dddd5-81c1-408d-b10b-fded07c948dd-kube-api-access-r4mgh\") pod \"7b3dddd5-81c1-408d-b10b-fded07c948dd\" (UID: \"7b3dddd5-81c1-408d-b10b-fded07c948dd\") " Nov 21 21:49:31 crc kubenswrapper[4727]: I1121 21:49:31.988874 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b3dddd5-81c1-408d-b10b-fded07c948dd-utilities" (OuterVolumeSpecName: "utilities") pod "7b3dddd5-81c1-408d-b10b-fded07c948dd" (UID: "7b3dddd5-81c1-408d-b10b-fded07c948dd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 21:49:32 crc kubenswrapper[4727]: I1121 21:49:32.008402 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b3dddd5-81c1-408d-b10b-fded07c948dd-kube-api-access-r4mgh" (OuterVolumeSpecName: "kube-api-access-r4mgh") pod "7b3dddd5-81c1-408d-b10b-fded07c948dd" (UID: "7b3dddd5-81c1-408d-b10b-fded07c948dd"). InnerVolumeSpecName "kube-api-access-r4mgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 21:49:32 crc kubenswrapper[4727]: I1121 21:49:32.086358 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b3dddd5-81c1-408d-b10b-fded07c948dd-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 21:49:32 crc kubenswrapper[4727]: I1121 21:49:32.086409 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4mgh\" (UniqueName: \"kubernetes.io/projected/7b3dddd5-81c1-408d-b10b-fded07c948dd-kube-api-access-r4mgh\") on node \"crc\" DevicePath \"\"" Nov 21 21:49:32 crc kubenswrapper[4727]: I1121 21:49:32.086596 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b3dddd5-81c1-408d-b10b-fded07c948dd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7b3dddd5-81c1-408d-b10b-fded07c948dd" (UID: "7b3dddd5-81c1-408d-b10b-fded07c948dd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 21:49:32 crc kubenswrapper[4727]: I1121 21:49:32.151436 4727 generic.go:334] "Generic (PLEG): container finished" podID="7b3dddd5-81c1-408d-b10b-fded07c948dd" containerID="a7e87cdc827130d6cb8b8a0c8ce1b58e824bd206b0482d68ea2e02c58e372a7f" exitCode=0 Nov 21 21:49:32 crc kubenswrapper[4727]: I1121 21:49:32.151785 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sbw5k" Nov 21 21:49:32 crc kubenswrapper[4727]: I1121 21:49:32.151791 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sbw5k" event={"ID":"7b3dddd5-81c1-408d-b10b-fded07c948dd","Type":"ContainerDied","Data":"a7e87cdc827130d6cb8b8a0c8ce1b58e824bd206b0482d68ea2e02c58e372a7f"} Nov 21 21:49:32 crc kubenswrapper[4727]: I1121 21:49:32.153012 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sbw5k" event={"ID":"7b3dddd5-81c1-408d-b10b-fded07c948dd","Type":"ContainerDied","Data":"3049bf3c881f6f5845a345c14aca024426ac082fc5330441e5a7973b891eea53"} Nov 21 21:49:32 crc kubenswrapper[4727]: I1121 21:49:32.153039 4727 scope.go:117] "RemoveContainer" containerID="a7e87cdc827130d6cb8b8a0c8ce1b58e824bd206b0482d68ea2e02c58e372a7f" Nov 21 21:49:32 crc kubenswrapper[4727]: I1121 21:49:32.181235 4727 scope.go:117] "RemoveContainer" containerID="1ced26e37142e8e985cdff41a7d2d22f9cbe22d27e79243c4de0b87a1ed9f126" Nov 21 21:49:32 crc kubenswrapper[4727]: I1121 21:49:32.198943 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b3dddd5-81c1-408d-b10b-fded07c948dd-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 21:49:32 crc kubenswrapper[4727]: I1121 21:49:32.206653 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sbw5k"] Nov 21 21:49:32 crc kubenswrapper[4727]: I1121 21:49:32.217750 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-sbw5k"] Nov 21 21:49:32 crc kubenswrapper[4727]: I1121 21:49:32.229092 4727 scope.go:117] "RemoveContainer" containerID="d6450d329144f064af4a1aa373d1ef5db15f0a4b14e3048a1b4ce412f987fd51" Nov 21 21:49:32 crc kubenswrapper[4727]: I1121 21:49:32.266402 4727 scope.go:117] "RemoveContainer" 
containerID="a7e87cdc827130d6cb8b8a0c8ce1b58e824bd206b0482d68ea2e02c58e372a7f" Nov 21 21:49:32 crc kubenswrapper[4727]: E1121 21:49:32.267135 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7e87cdc827130d6cb8b8a0c8ce1b58e824bd206b0482d68ea2e02c58e372a7f\": container with ID starting with a7e87cdc827130d6cb8b8a0c8ce1b58e824bd206b0482d68ea2e02c58e372a7f not found: ID does not exist" containerID="a7e87cdc827130d6cb8b8a0c8ce1b58e824bd206b0482d68ea2e02c58e372a7f" Nov 21 21:49:32 crc kubenswrapper[4727]: I1121 21:49:32.267169 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7e87cdc827130d6cb8b8a0c8ce1b58e824bd206b0482d68ea2e02c58e372a7f"} err="failed to get container status \"a7e87cdc827130d6cb8b8a0c8ce1b58e824bd206b0482d68ea2e02c58e372a7f\": rpc error: code = NotFound desc = could not find container \"a7e87cdc827130d6cb8b8a0c8ce1b58e824bd206b0482d68ea2e02c58e372a7f\": container with ID starting with a7e87cdc827130d6cb8b8a0c8ce1b58e824bd206b0482d68ea2e02c58e372a7f not found: ID does not exist" Nov 21 21:49:32 crc kubenswrapper[4727]: I1121 21:49:32.267193 4727 scope.go:117] "RemoveContainer" containerID="1ced26e37142e8e985cdff41a7d2d22f9cbe22d27e79243c4de0b87a1ed9f126" Nov 21 21:49:32 crc kubenswrapper[4727]: E1121 21:49:32.267568 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ced26e37142e8e985cdff41a7d2d22f9cbe22d27e79243c4de0b87a1ed9f126\": container with ID starting with 1ced26e37142e8e985cdff41a7d2d22f9cbe22d27e79243c4de0b87a1ed9f126 not found: ID does not exist" containerID="1ced26e37142e8e985cdff41a7d2d22f9cbe22d27e79243c4de0b87a1ed9f126" Nov 21 21:49:32 crc kubenswrapper[4727]: I1121 21:49:32.267596 4727 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1ced26e37142e8e985cdff41a7d2d22f9cbe22d27e79243c4de0b87a1ed9f126"} err="failed to get container status \"1ced26e37142e8e985cdff41a7d2d22f9cbe22d27e79243c4de0b87a1ed9f126\": rpc error: code = NotFound desc = could not find container \"1ced26e37142e8e985cdff41a7d2d22f9cbe22d27e79243c4de0b87a1ed9f126\": container with ID starting with 1ced26e37142e8e985cdff41a7d2d22f9cbe22d27e79243c4de0b87a1ed9f126 not found: ID does not exist" Nov 21 21:49:32 crc kubenswrapper[4727]: I1121 21:49:32.267609 4727 scope.go:117] "RemoveContainer" containerID="d6450d329144f064af4a1aa373d1ef5db15f0a4b14e3048a1b4ce412f987fd51" Nov 21 21:49:32 crc kubenswrapper[4727]: E1121 21:49:32.267920 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6450d329144f064af4a1aa373d1ef5db15f0a4b14e3048a1b4ce412f987fd51\": container with ID starting with d6450d329144f064af4a1aa373d1ef5db15f0a4b14e3048a1b4ce412f987fd51 not found: ID does not exist" containerID="d6450d329144f064af4a1aa373d1ef5db15f0a4b14e3048a1b4ce412f987fd51" Nov 21 21:49:32 crc kubenswrapper[4727]: I1121 21:49:32.267953 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6450d329144f064af4a1aa373d1ef5db15f0a4b14e3048a1b4ce412f987fd51"} err="failed to get container status \"d6450d329144f064af4a1aa373d1ef5db15f0a4b14e3048a1b4ce412f987fd51\": rpc error: code = NotFound desc = could not find container \"d6450d329144f064af4a1aa373d1ef5db15f0a4b14e3048a1b4ce412f987fd51\": container with ID starting with d6450d329144f064af4a1aa373d1ef5db15f0a4b14e3048a1b4ce412f987fd51 not found: ID does not exist" Nov 21 21:49:33 crc kubenswrapper[4727]: I1121 21:49:33.513150 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b3dddd5-81c1-408d-b10b-fded07c948dd" path="/var/lib/kubelet/pods/7b3dddd5-81c1-408d-b10b-fded07c948dd/volumes" Nov 21 21:49:43 crc kubenswrapper[4727]: I1121 
21:49:43.335585 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 21:49:43 crc kubenswrapper[4727]: I1121 21:49:43.336331 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 21:50:13 crc kubenswrapper[4727]: I1121 21:50:13.337000 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 21:50:13 crc kubenswrapper[4727]: I1121 21:50:13.338446 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 21:50:43 crc kubenswrapper[4727]: I1121 21:50:43.335702 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 21:50:43 crc kubenswrapper[4727]: I1121 21:50:43.336530 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" 
podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 21:50:43 crc kubenswrapper[4727]: I1121 21:50:43.336587 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" Nov 21 21:50:43 crc kubenswrapper[4727]: I1121 21:50:43.337873 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fe6d2511e1d89189edbe7d9b94e069ef4261c4388aba56dcc414a3c022fd4190"} pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 21:50:43 crc kubenswrapper[4727]: I1121 21:50:43.337976 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" containerID="cri-o://fe6d2511e1d89189edbe7d9b94e069ef4261c4388aba56dcc414a3c022fd4190" gracePeriod=600 Nov 21 21:50:44 crc kubenswrapper[4727]: I1121 21:50:44.277856 4727 generic.go:334] "Generic (PLEG): container finished" podID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerID="fe6d2511e1d89189edbe7d9b94e069ef4261c4388aba56dcc414a3c022fd4190" exitCode=0 Nov 21 21:50:44 crc kubenswrapper[4727]: I1121 21:50:44.277920 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" event={"ID":"b58aef8f-f223-47d8-a2e6-4a80aeeeec42","Type":"ContainerDied","Data":"fe6d2511e1d89189edbe7d9b94e069ef4261c4388aba56dcc414a3c022fd4190"} Nov 21 21:50:44 crc kubenswrapper[4727]: I1121 21:50:44.278610 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" event={"ID":"b58aef8f-f223-47d8-a2e6-4a80aeeeec42","Type":"ContainerStarted","Data":"976f8c3044d73ef3cad2da771f4d7a96e16e3a29c21be656da90340aeebddb83"} Nov 21 21:50:44 crc kubenswrapper[4727]: I1121 21:50:44.278643 4727 scope.go:117] "RemoveContainer" containerID="b1ced324650334edb40de98d548ca08205e90278ed36c4d6e56bd51e043a410c" Nov 21 21:51:03 crc kubenswrapper[4727]: I1121 21:51:03.582698 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8lr7z"] Nov 21 21:51:03 crc kubenswrapper[4727]: E1121 21:51:03.584389 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b3dddd5-81c1-408d-b10b-fded07c948dd" containerName="registry-server" Nov 21 21:51:03 crc kubenswrapper[4727]: I1121 21:51:03.584412 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b3dddd5-81c1-408d-b10b-fded07c948dd" containerName="registry-server" Nov 21 21:51:03 crc kubenswrapper[4727]: E1121 21:51:03.584459 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b3dddd5-81c1-408d-b10b-fded07c948dd" containerName="extract-content" Nov 21 21:51:03 crc kubenswrapper[4727]: I1121 21:51:03.584467 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b3dddd5-81c1-408d-b10b-fded07c948dd" containerName="extract-content" Nov 21 21:51:03 crc kubenswrapper[4727]: E1121 21:51:03.584480 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b3dddd5-81c1-408d-b10b-fded07c948dd" containerName="extract-utilities" Nov 21 21:51:03 crc kubenswrapper[4727]: I1121 21:51:03.584490 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b3dddd5-81c1-408d-b10b-fded07c948dd" containerName="extract-utilities" Nov 21 21:51:03 crc kubenswrapper[4727]: I1121 21:51:03.584808 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b3dddd5-81c1-408d-b10b-fded07c948dd" containerName="registry-server" Nov 21 21:51:03 crc 
kubenswrapper[4727]: I1121 21:51:03.588788 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8lr7z" Nov 21 21:51:03 crc kubenswrapper[4727]: I1121 21:51:03.604129 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8lr7z"] Nov 21 21:51:03 crc kubenswrapper[4727]: I1121 21:51:03.662460 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snzq7\" (UniqueName: \"kubernetes.io/projected/7e876045-0098-4341-9a38-481b301f0340-kube-api-access-snzq7\") pod \"certified-operators-8lr7z\" (UID: \"7e876045-0098-4341-9a38-481b301f0340\") " pod="openshift-marketplace/certified-operators-8lr7z" Nov 21 21:51:03 crc kubenswrapper[4727]: I1121 21:51:03.662513 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e876045-0098-4341-9a38-481b301f0340-utilities\") pod \"certified-operators-8lr7z\" (UID: \"7e876045-0098-4341-9a38-481b301f0340\") " pod="openshift-marketplace/certified-operators-8lr7z" Nov 21 21:51:03 crc kubenswrapper[4727]: I1121 21:51:03.662690 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e876045-0098-4341-9a38-481b301f0340-catalog-content\") pod \"certified-operators-8lr7z\" (UID: \"7e876045-0098-4341-9a38-481b301f0340\") " pod="openshift-marketplace/certified-operators-8lr7z" Nov 21 21:51:03 crc kubenswrapper[4727]: I1121 21:51:03.765322 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snzq7\" (UniqueName: \"kubernetes.io/projected/7e876045-0098-4341-9a38-481b301f0340-kube-api-access-snzq7\") pod \"certified-operators-8lr7z\" (UID: \"7e876045-0098-4341-9a38-481b301f0340\") " 
pod="openshift-marketplace/certified-operators-8lr7z" Nov 21 21:51:03 crc kubenswrapper[4727]: I1121 21:51:03.765407 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e876045-0098-4341-9a38-481b301f0340-utilities\") pod \"certified-operators-8lr7z\" (UID: \"7e876045-0098-4341-9a38-481b301f0340\") " pod="openshift-marketplace/certified-operators-8lr7z" Nov 21 21:51:03 crc kubenswrapper[4727]: I1121 21:51:03.765619 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e876045-0098-4341-9a38-481b301f0340-catalog-content\") pod \"certified-operators-8lr7z\" (UID: \"7e876045-0098-4341-9a38-481b301f0340\") " pod="openshift-marketplace/certified-operators-8lr7z" Nov 21 21:51:03 crc kubenswrapper[4727]: I1121 21:51:03.766483 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e876045-0098-4341-9a38-481b301f0340-catalog-content\") pod \"certified-operators-8lr7z\" (UID: \"7e876045-0098-4341-9a38-481b301f0340\") " pod="openshift-marketplace/certified-operators-8lr7z" Nov 21 21:51:03 crc kubenswrapper[4727]: I1121 21:51:03.766814 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e876045-0098-4341-9a38-481b301f0340-utilities\") pod \"certified-operators-8lr7z\" (UID: \"7e876045-0098-4341-9a38-481b301f0340\") " pod="openshift-marketplace/certified-operators-8lr7z" Nov 21 21:51:03 crc kubenswrapper[4727]: I1121 21:51:03.795799 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snzq7\" (UniqueName: \"kubernetes.io/projected/7e876045-0098-4341-9a38-481b301f0340-kube-api-access-snzq7\") pod \"certified-operators-8lr7z\" (UID: \"7e876045-0098-4341-9a38-481b301f0340\") " 
pod="openshift-marketplace/certified-operators-8lr7z" Nov 21 21:51:03 crc kubenswrapper[4727]: I1121 21:51:03.937853 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8lr7z" Nov 21 21:51:05 crc kubenswrapper[4727]: I1121 21:51:05.381281 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8lr7z"] Nov 21 21:51:05 crc kubenswrapper[4727]: I1121 21:51:05.607586 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8lr7z" event={"ID":"7e876045-0098-4341-9a38-481b301f0340","Type":"ContainerStarted","Data":"1318b3e17fb50f7773e7b91e7de207274a9bd0c908a7d1a5fabb3fbf06414c03"} Nov 21 21:51:06 crc kubenswrapper[4727]: I1121 21:51:06.622901 4727 generic.go:334] "Generic (PLEG): container finished" podID="7e876045-0098-4341-9a38-481b301f0340" containerID="1c38da59eb7ddbe26892dddf3c8513a156c5cfaaacc52938999629ea8ce33e71" exitCode=0 Nov 21 21:51:06 crc kubenswrapper[4727]: I1121 21:51:06.623423 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8lr7z" event={"ID":"7e876045-0098-4341-9a38-481b301f0340","Type":"ContainerDied","Data":"1c38da59eb7ddbe26892dddf3c8513a156c5cfaaacc52938999629ea8ce33e71"} Nov 21 21:51:08 crc kubenswrapper[4727]: I1121 21:51:08.671313 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8lr7z" event={"ID":"7e876045-0098-4341-9a38-481b301f0340","Type":"ContainerStarted","Data":"5ae53aec0670934994d577c175b84ffeba48aca91ab6e8262b4761d6c5b676b2"} Nov 21 21:51:09 crc kubenswrapper[4727]: I1121 21:51:09.687832 4727 generic.go:334] "Generic (PLEG): container finished" podID="7e876045-0098-4341-9a38-481b301f0340" containerID="5ae53aec0670934994d577c175b84ffeba48aca91ab6e8262b4761d6c5b676b2" exitCode=0 Nov 21 21:51:09 crc kubenswrapper[4727]: I1121 21:51:09.687897 4727 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/certified-operators-8lr7z" event={"ID":"7e876045-0098-4341-9a38-481b301f0340","Type":"ContainerDied","Data":"5ae53aec0670934994d577c175b84ffeba48aca91ab6e8262b4761d6c5b676b2"} Nov 21 21:51:10 crc kubenswrapper[4727]: I1121 21:51:10.707314 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8lr7z" event={"ID":"7e876045-0098-4341-9a38-481b301f0340","Type":"ContainerStarted","Data":"b2a84f20c3e412998c02e62efaeae275b6b205c34464ca778ff21e87f98ca6e4"} Nov 21 21:51:10 crc kubenswrapper[4727]: I1121 21:51:10.737034 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8lr7z" podStartSLOduration=4.27060443 podStartE2EDuration="7.73699955s" podCreationTimestamp="2025-11-21 21:51:03 +0000 UTC" firstStartedPulling="2025-11-21 21:51:06.626705897 +0000 UTC m=+6271.812890931" lastFinishedPulling="2025-11-21 21:51:10.093100997 +0000 UTC m=+6275.279286051" observedRunningTime="2025-11-21 21:51:10.72867352 +0000 UTC m=+6275.914858604" watchObservedRunningTime="2025-11-21 21:51:10.73699955 +0000 UTC m=+6275.923184604" Nov 21 21:51:13 crc kubenswrapper[4727]: I1121 21:51:13.938563 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8lr7z" Nov 21 21:51:13 crc kubenswrapper[4727]: I1121 21:51:13.939142 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8lr7z" Nov 21 21:51:14 crc kubenswrapper[4727]: I1121 21:51:14.006696 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8lr7z" Nov 21 21:51:24 crc kubenswrapper[4727]: I1121 21:51:24.001722 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8lr7z" Nov 21 21:51:24 crc kubenswrapper[4727]: I1121 
21:51:24.065005 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8lr7z"] Nov 21 21:51:24 crc kubenswrapper[4727]: I1121 21:51:24.972234 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8lr7z" podUID="7e876045-0098-4341-9a38-481b301f0340" containerName="registry-server" containerID="cri-o://b2a84f20c3e412998c02e62efaeae275b6b205c34464ca778ff21e87f98ca6e4" gracePeriod=2 Nov 21 21:51:25 crc kubenswrapper[4727]: I1121 21:51:25.578225 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8lr7z" Nov 21 21:51:25 crc kubenswrapper[4727]: I1121 21:51:25.724625 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e876045-0098-4341-9a38-481b301f0340-catalog-content\") pod \"7e876045-0098-4341-9a38-481b301f0340\" (UID: \"7e876045-0098-4341-9a38-481b301f0340\") " Nov 21 21:51:25 crc kubenswrapper[4727]: I1121 21:51:25.725033 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e876045-0098-4341-9a38-481b301f0340-utilities\") pod \"7e876045-0098-4341-9a38-481b301f0340\" (UID: \"7e876045-0098-4341-9a38-481b301f0340\") " Nov 21 21:51:25 crc kubenswrapper[4727]: I1121 21:51:25.725312 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snzq7\" (UniqueName: \"kubernetes.io/projected/7e876045-0098-4341-9a38-481b301f0340-kube-api-access-snzq7\") pod \"7e876045-0098-4341-9a38-481b301f0340\" (UID: \"7e876045-0098-4341-9a38-481b301f0340\") " Nov 21 21:51:25 crc kubenswrapper[4727]: I1121 21:51:25.726480 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e876045-0098-4341-9a38-481b301f0340-utilities" (OuterVolumeSpecName: 
"utilities") pod "7e876045-0098-4341-9a38-481b301f0340" (UID: "7e876045-0098-4341-9a38-481b301f0340"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 21:51:25 crc kubenswrapper[4727]: I1121 21:51:25.737488 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e876045-0098-4341-9a38-481b301f0340-kube-api-access-snzq7" (OuterVolumeSpecName: "kube-api-access-snzq7") pod "7e876045-0098-4341-9a38-481b301f0340" (UID: "7e876045-0098-4341-9a38-481b301f0340"). InnerVolumeSpecName "kube-api-access-snzq7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 21:51:25 crc kubenswrapper[4727]: I1121 21:51:25.791786 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e876045-0098-4341-9a38-481b301f0340-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7e876045-0098-4341-9a38-481b301f0340" (UID: "7e876045-0098-4341-9a38-481b301f0340"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 21:51:25 crc kubenswrapper[4727]: I1121 21:51:25.828925 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snzq7\" (UniqueName: \"kubernetes.io/projected/7e876045-0098-4341-9a38-481b301f0340-kube-api-access-snzq7\") on node \"crc\" DevicePath \"\"" Nov 21 21:51:25 crc kubenswrapper[4727]: I1121 21:51:25.828985 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e876045-0098-4341-9a38-481b301f0340-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 21:51:25 crc kubenswrapper[4727]: I1121 21:51:25.829000 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e876045-0098-4341-9a38-481b301f0340-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 21:51:25 crc kubenswrapper[4727]: I1121 21:51:25.992460 4727 generic.go:334] "Generic (PLEG): container finished" podID="7e876045-0098-4341-9a38-481b301f0340" containerID="b2a84f20c3e412998c02e62efaeae275b6b205c34464ca778ff21e87f98ca6e4" exitCode=0 Nov 21 21:51:25 crc kubenswrapper[4727]: I1121 21:51:25.992558 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8lr7z" Nov 21 21:51:25 crc kubenswrapper[4727]: I1121 21:51:25.992583 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8lr7z" event={"ID":"7e876045-0098-4341-9a38-481b301f0340","Type":"ContainerDied","Data":"b2a84f20c3e412998c02e62efaeae275b6b205c34464ca778ff21e87f98ca6e4"} Nov 21 21:51:25 crc kubenswrapper[4727]: I1121 21:51:25.993154 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8lr7z" event={"ID":"7e876045-0098-4341-9a38-481b301f0340","Type":"ContainerDied","Data":"1318b3e17fb50f7773e7b91e7de207274a9bd0c908a7d1a5fabb3fbf06414c03"} Nov 21 21:51:25 crc kubenswrapper[4727]: I1121 21:51:25.993176 4727 scope.go:117] "RemoveContainer" containerID="b2a84f20c3e412998c02e62efaeae275b6b205c34464ca778ff21e87f98ca6e4" Nov 21 21:51:26 crc kubenswrapper[4727]: I1121 21:51:26.050396 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8lr7z"] Nov 21 21:51:26 crc kubenswrapper[4727]: I1121 21:51:26.061330 4727 scope.go:117] "RemoveContainer" containerID="5ae53aec0670934994d577c175b84ffeba48aca91ab6e8262b4761d6c5b676b2" Nov 21 21:51:26 crc kubenswrapper[4727]: I1121 21:51:26.068706 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8lr7z"] Nov 21 21:51:26 crc kubenswrapper[4727]: I1121 21:51:26.096844 4727 scope.go:117] "RemoveContainer" containerID="1c38da59eb7ddbe26892dddf3c8513a156c5cfaaacc52938999629ea8ce33e71" Nov 21 21:51:26 crc kubenswrapper[4727]: I1121 21:51:26.158669 4727 scope.go:117] "RemoveContainer" containerID="b2a84f20c3e412998c02e62efaeae275b6b205c34464ca778ff21e87f98ca6e4" Nov 21 21:51:26 crc kubenswrapper[4727]: E1121 21:51:26.159500 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"b2a84f20c3e412998c02e62efaeae275b6b205c34464ca778ff21e87f98ca6e4\": container with ID starting with b2a84f20c3e412998c02e62efaeae275b6b205c34464ca778ff21e87f98ca6e4 not found: ID does not exist" containerID="b2a84f20c3e412998c02e62efaeae275b6b205c34464ca778ff21e87f98ca6e4" Nov 21 21:51:26 crc kubenswrapper[4727]: I1121 21:51:26.159557 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2a84f20c3e412998c02e62efaeae275b6b205c34464ca778ff21e87f98ca6e4"} err="failed to get container status \"b2a84f20c3e412998c02e62efaeae275b6b205c34464ca778ff21e87f98ca6e4\": rpc error: code = NotFound desc = could not find container \"b2a84f20c3e412998c02e62efaeae275b6b205c34464ca778ff21e87f98ca6e4\": container with ID starting with b2a84f20c3e412998c02e62efaeae275b6b205c34464ca778ff21e87f98ca6e4 not found: ID does not exist" Nov 21 21:51:26 crc kubenswrapper[4727]: I1121 21:51:26.159588 4727 scope.go:117] "RemoveContainer" containerID="5ae53aec0670934994d577c175b84ffeba48aca91ab6e8262b4761d6c5b676b2" Nov 21 21:51:26 crc kubenswrapper[4727]: E1121 21:51:26.160547 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ae53aec0670934994d577c175b84ffeba48aca91ab6e8262b4761d6c5b676b2\": container with ID starting with 5ae53aec0670934994d577c175b84ffeba48aca91ab6e8262b4761d6c5b676b2 not found: ID does not exist" containerID="5ae53aec0670934994d577c175b84ffeba48aca91ab6e8262b4761d6c5b676b2" Nov 21 21:51:26 crc kubenswrapper[4727]: I1121 21:51:26.160595 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ae53aec0670934994d577c175b84ffeba48aca91ab6e8262b4761d6c5b676b2"} err="failed to get container status \"5ae53aec0670934994d577c175b84ffeba48aca91ab6e8262b4761d6c5b676b2\": rpc error: code = NotFound desc = could not find container \"5ae53aec0670934994d577c175b84ffeba48aca91ab6e8262b4761d6c5b676b2\": container with ID 
starting with 5ae53aec0670934994d577c175b84ffeba48aca91ab6e8262b4761d6c5b676b2 not found: ID does not exist" Nov 21 21:51:26 crc kubenswrapper[4727]: I1121 21:51:26.160626 4727 scope.go:117] "RemoveContainer" containerID="1c38da59eb7ddbe26892dddf3c8513a156c5cfaaacc52938999629ea8ce33e71" Nov 21 21:51:26 crc kubenswrapper[4727]: E1121 21:51:26.161191 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c38da59eb7ddbe26892dddf3c8513a156c5cfaaacc52938999629ea8ce33e71\": container with ID starting with 1c38da59eb7ddbe26892dddf3c8513a156c5cfaaacc52938999629ea8ce33e71 not found: ID does not exist" containerID="1c38da59eb7ddbe26892dddf3c8513a156c5cfaaacc52938999629ea8ce33e71" Nov 21 21:51:26 crc kubenswrapper[4727]: I1121 21:51:26.161220 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c38da59eb7ddbe26892dddf3c8513a156c5cfaaacc52938999629ea8ce33e71"} err="failed to get container status \"1c38da59eb7ddbe26892dddf3c8513a156c5cfaaacc52938999629ea8ce33e71\": rpc error: code = NotFound desc = could not find container \"1c38da59eb7ddbe26892dddf3c8513a156c5cfaaacc52938999629ea8ce33e71\": container with ID starting with 1c38da59eb7ddbe26892dddf3c8513a156c5cfaaacc52938999629ea8ce33e71 not found: ID does not exist" Nov 21 21:51:27 crc kubenswrapper[4727]: I1121 21:51:27.528631 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e876045-0098-4341-9a38-481b301f0340" path="/var/lib/kubelet/pods/7e876045-0098-4341-9a38-481b301f0340/volumes" Nov 21 21:51:33 crc kubenswrapper[4727]: E1121 21:51:33.635699 4727 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.179:49720->38.102.83.179:43311: write tcp 38.102.83.179:49720->38.102.83.179:43311: write: broken pipe Nov 21 21:52:43 crc kubenswrapper[4727]: I1121 21:52:43.335614 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 21:52:43 crc kubenswrapper[4727]: I1121 21:52:43.336602 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 21:53:13 crc kubenswrapper[4727]: I1121 21:53:13.336747 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 21:53:13 crc kubenswrapper[4727]: I1121 21:53:13.337790 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 21:53:43 crc kubenswrapper[4727]: I1121 21:53:43.335816 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 21:53:43 crc kubenswrapper[4727]: I1121 21:53:43.336813 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 21:53:43 crc kubenswrapper[4727]: I1121 21:53:43.336886 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" Nov 21 21:53:43 crc kubenswrapper[4727]: I1121 21:53:43.337916 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"976f8c3044d73ef3cad2da771f4d7a96e16e3a29c21be656da90340aeebddb83"} pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 21:53:43 crc kubenswrapper[4727]: I1121 21:53:43.338054 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" containerID="cri-o://976f8c3044d73ef3cad2da771f4d7a96e16e3a29c21be656da90340aeebddb83" gracePeriod=600 Nov 21 21:53:43 crc kubenswrapper[4727]: E1121 21:53:43.512125 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:53:44 crc kubenswrapper[4727]: I1121 21:53:44.381469 4727 generic.go:334] "Generic (PLEG): container finished" podID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerID="976f8c3044d73ef3cad2da771f4d7a96e16e3a29c21be656da90340aeebddb83" exitCode=0 Nov 21 21:53:44 crc kubenswrapper[4727]: I1121 21:53:44.381565 4727 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" event={"ID":"b58aef8f-f223-47d8-a2e6-4a80aeeeec42","Type":"ContainerDied","Data":"976f8c3044d73ef3cad2da771f4d7a96e16e3a29c21be656da90340aeebddb83"} Nov 21 21:53:44 crc kubenswrapper[4727]: I1121 21:53:44.381993 4727 scope.go:117] "RemoveContainer" containerID="fe6d2511e1d89189edbe7d9b94e069ef4261c4388aba56dcc414a3c022fd4190" Nov 21 21:53:44 crc kubenswrapper[4727]: I1121 21:53:44.383609 4727 scope.go:117] "RemoveContainer" containerID="976f8c3044d73ef3cad2da771f4d7a96e16e3a29c21be656da90340aeebddb83" Nov 21 21:53:44 crc kubenswrapper[4727]: E1121 21:53:44.384244 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:53:58 crc kubenswrapper[4727]: I1121 21:53:58.500045 4727 scope.go:117] "RemoveContainer" containerID="976f8c3044d73ef3cad2da771f4d7a96e16e3a29c21be656da90340aeebddb83" Nov 21 21:53:58 crc kubenswrapper[4727]: E1121 21:53:58.500941 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:54:09 crc kubenswrapper[4727]: I1121 21:54:09.685403 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-c4tcl"] Nov 21 21:54:09 crc kubenswrapper[4727]: E1121 21:54:09.687241 
4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e876045-0098-4341-9a38-481b301f0340" containerName="registry-server" Nov 21 21:54:09 crc kubenswrapper[4727]: I1121 21:54:09.687268 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e876045-0098-4341-9a38-481b301f0340" containerName="registry-server" Nov 21 21:54:09 crc kubenswrapper[4727]: E1121 21:54:09.687355 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e876045-0098-4341-9a38-481b301f0340" containerName="extract-content" Nov 21 21:54:09 crc kubenswrapper[4727]: I1121 21:54:09.687369 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e876045-0098-4341-9a38-481b301f0340" containerName="extract-content" Nov 21 21:54:09 crc kubenswrapper[4727]: E1121 21:54:09.687432 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e876045-0098-4341-9a38-481b301f0340" containerName="extract-utilities" Nov 21 21:54:09 crc kubenswrapper[4727]: I1121 21:54:09.687445 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e876045-0098-4341-9a38-481b301f0340" containerName="extract-utilities" Nov 21 21:54:09 crc kubenswrapper[4727]: I1121 21:54:09.687929 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e876045-0098-4341-9a38-481b301f0340" containerName="registry-server" Nov 21 21:54:09 crc kubenswrapper[4727]: I1121 21:54:09.691321 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c4tcl" Nov 21 21:54:09 crc kubenswrapper[4727]: I1121 21:54:09.714688 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c4tcl"] Nov 21 21:54:09 crc kubenswrapper[4727]: I1121 21:54:09.869251 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c231156-c29e-4661-a446-fb4bdf4ed652-catalog-content\") pod \"community-operators-c4tcl\" (UID: \"5c231156-c29e-4661-a446-fb4bdf4ed652\") " pod="openshift-marketplace/community-operators-c4tcl" Nov 21 21:54:09 crc kubenswrapper[4727]: I1121 21:54:09.869373 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sqxc\" (UniqueName: \"kubernetes.io/projected/5c231156-c29e-4661-a446-fb4bdf4ed652-kube-api-access-2sqxc\") pod \"community-operators-c4tcl\" (UID: \"5c231156-c29e-4661-a446-fb4bdf4ed652\") " pod="openshift-marketplace/community-operators-c4tcl" Nov 21 21:54:09 crc kubenswrapper[4727]: I1121 21:54:09.869433 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c231156-c29e-4661-a446-fb4bdf4ed652-utilities\") pod \"community-operators-c4tcl\" (UID: \"5c231156-c29e-4661-a446-fb4bdf4ed652\") " pod="openshift-marketplace/community-operators-c4tcl" Nov 21 21:54:09 crc kubenswrapper[4727]: I1121 21:54:09.973375 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c231156-c29e-4661-a446-fb4bdf4ed652-catalog-content\") pod \"community-operators-c4tcl\" (UID: \"5c231156-c29e-4661-a446-fb4bdf4ed652\") " pod="openshift-marketplace/community-operators-c4tcl" Nov 21 21:54:09 crc kubenswrapper[4727]: I1121 21:54:09.973522 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-2sqxc\" (UniqueName: \"kubernetes.io/projected/5c231156-c29e-4661-a446-fb4bdf4ed652-kube-api-access-2sqxc\") pod \"community-operators-c4tcl\" (UID: \"5c231156-c29e-4661-a446-fb4bdf4ed652\") " pod="openshift-marketplace/community-operators-c4tcl" Nov 21 21:54:09 crc kubenswrapper[4727]: I1121 21:54:09.973634 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c231156-c29e-4661-a446-fb4bdf4ed652-utilities\") pod \"community-operators-c4tcl\" (UID: \"5c231156-c29e-4661-a446-fb4bdf4ed652\") " pod="openshift-marketplace/community-operators-c4tcl" Nov 21 21:54:09 crc kubenswrapper[4727]: I1121 21:54:09.974648 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c231156-c29e-4661-a446-fb4bdf4ed652-utilities\") pod \"community-operators-c4tcl\" (UID: \"5c231156-c29e-4661-a446-fb4bdf4ed652\") " pod="openshift-marketplace/community-operators-c4tcl" Nov 21 21:54:09 crc kubenswrapper[4727]: I1121 21:54:09.975188 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c231156-c29e-4661-a446-fb4bdf4ed652-catalog-content\") pod \"community-operators-c4tcl\" (UID: \"5c231156-c29e-4661-a446-fb4bdf4ed652\") " pod="openshift-marketplace/community-operators-c4tcl" Nov 21 21:54:09 crc kubenswrapper[4727]: I1121 21:54:09.998696 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sqxc\" (UniqueName: \"kubernetes.io/projected/5c231156-c29e-4661-a446-fb4bdf4ed652-kube-api-access-2sqxc\") pod \"community-operators-c4tcl\" (UID: \"5c231156-c29e-4661-a446-fb4bdf4ed652\") " pod="openshift-marketplace/community-operators-c4tcl" Nov 21 21:54:10 crc kubenswrapper[4727]: I1121 21:54:10.032999 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c4tcl" Nov 21 21:54:10 crc kubenswrapper[4727]: I1121 21:54:10.595558 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c4tcl"] Nov 21 21:54:10 crc kubenswrapper[4727]: I1121 21:54:10.808907 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c4tcl" event={"ID":"5c231156-c29e-4661-a446-fb4bdf4ed652","Type":"ContainerStarted","Data":"8a22c41a7e809a69dd3be6f542fca768489e6376050e435c4989d5f8e89a2e27"} Nov 21 21:54:11 crc kubenswrapper[4727]: I1121 21:54:11.830266 4727 generic.go:334] "Generic (PLEG): container finished" podID="5c231156-c29e-4661-a446-fb4bdf4ed652" containerID="c6c27754d4019136338c099467c7a816b469a1c26ec83c25a055102f6e89631e" exitCode=0 Nov 21 21:54:11 crc kubenswrapper[4727]: I1121 21:54:11.830407 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c4tcl" event={"ID":"5c231156-c29e-4661-a446-fb4bdf4ed652","Type":"ContainerDied","Data":"c6c27754d4019136338c099467c7a816b469a1c26ec83c25a055102f6e89631e"} Nov 21 21:54:11 crc kubenswrapper[4727]: I1121 21:54:11.834618 4727 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 21 21:54:12 crc kubenswrapper[4727]: I1121 21:54:12.499810 4727 scope.go:117] "RemoveContainer" containerID="976f8c3044d73ef3cad2da771f4d7a96e16e3a29c21be656da90340aeebddb83" Nov 21 21:54:12 crc kubenswrapper[4727]: E1121 21:54:12.500933 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 
21:54:12 crc kubenswrapper[4727]: I1121 21:54:12.864794 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c4tcl" event={"ID":"5c231156-c29e-4661-a446-fb4bdf4ed652","Type":"ContainerStarted","Data":"35e134b44ba912a898cb319c299981c8bb524d7ff05f0dbc45340f5c7f86d53a"} Nov 21 21:54:14 crc kubenswrapper[4727]: I1121 21:54:14.903780 4727 generic.go:334] "Generic (PLEG): container finished" podID="5c231156-c29e-4661-a446-fb4bdf4ed652" containerID="35e134b44ba912a898cb319c299981c8bb524d7ff05f0dbc45340f5c7f86d53a" exitCode=0 Nov 21 21:54:14 crc kubenswrapper[4727]: I1121 21:54:14.903860 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c4tcl" event={"ID":"5c231156-c29e-4661-a446-fb4bdf4ed652","Type":"ContainerDied","Data":"35e134b44ba912a898cb319c299981c8bb524d7ff05f0dbc45340f5c7f86d53a"} Nov 21 21:54:15 crc kubenswrapper[4727]: I1121 21:54:15.928438 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c4tcl" event={"ID":"5c231156-c29e-4661-a446-fb4bdf4ed652","Type":"ContainerStarted","Data":"12b579ae202fcbfd212ff4e9a9a1dc4273c26408dc8d7f30f106531d7542e3c3"} Nov 21 21:54:15 crc kubenswrapper[4727]: I1121 21:54:15.960407 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-c4tcl" podStartSLOduration=3.3349093180000002 podStartE2EDuration="6.960381085s" podCreationTimestamp="2025-11-21 21:54:09 +0000 UTC" firstStartedPulling="2025-11-21 21:54:11.834159731 +0000 UTC m=+6457.020344815" lastFinishedPulling="2025-11-21 21:54:15.459631498 +0000 UTC m=+6460.645816582" observedRunningTime="2025-11-21 21:54:15.952711991 +0000 UTC m=+6461.138897045" watchObservedRunningTime="2025-11-21 21:54:15.960381085 +0000 UTC m=+6461.146566129" Nov 21 21:54:20 crc kubenswrapper[4727]: I1121 21:54:20.034018 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-c4tcl" Nov 21 21:54:20 crc kubenswrapper[4727]: I1121 21:54:20.034954 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-c4tcl" Nov 21 21:54:21 crc kubenswrapper[4727]: I1121 21:54:21.104172 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-c4tcl" podUID="5c231156-c29e-4661-a446-fb4bdf4ed652" containerName="registry-server" probeResult="failure" output=< Nov 21 21:54:21 crc kubenswrapper[4727]: timeout: failed to connect service ":50051" within 1s Nov 21 21:54:21 crc kubenswrapper[4727]: > Nov 21 21:54:26 crc kubenswrapper[4727]: I1121 21:54:26.500316 4727 scope.go:117] "RemoveContainer" containerID="976f8c3044d73ef3cad2da771f4d7a96e16e3a29c21be656da90340aeebddb83" Nov 21 21:54:26 crc kubenswrapper[4727]: E1121 21:54:26.501880 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:54:30 crc kubenswrapper[4727]: I1121 21:54:30.111381 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-c4tcl" Nov 21 21:54:30 crc kubenswrapper[4727]: I1121 21:54:30.190495 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-c4tcl" Nov 21 21:54:30 crc kubenswrapper[4727]: I1121 21:54:30.378367 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c4tcl"] Nov 21 21:54:31 crc kubenswrapper[4727]: I1121 21:54:31.176860 4727 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-marketplace/community-operators-c4tcl" podUID="5c231156-c29e-4661-a446-fb4bdf4ed652" containerName="registry-server" containerID="cri-o://12b579ae202fcbfd212ff4e9a9a1dc4273c26408dc8d7f30f106531d7542e3c3" gracePeriod=2 Nov 21 21:54:31 crc kubenswrapper[4727]: I1121 21:54:31.838278 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c4tcl" Nov 21 21:54:31 crc kubenswrapper[4727]: I1121 21:54:31.971769 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2sqxc\" (UniqueName: \"kubernetes.io/projected/5c231156-c29e-4661-a446-fb4bdf4ed652-kube-api-access-2sqxc\") pod \"5c231156-c29e-4661-a446-fb4bdf4ed652\" (UID: \"5c231156-c29e-4661-a446-fb4bdf4ed652\") " Nov 21 21:54:31 crc kubenswrapper[4727]: I1121 21:54:31.972266 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c231156-c29e-4661-a446-fb4bdf4ed652-catalog-content\") pod \"5c231156-c29e-4661-a446-fb4bdf4ed652\" (UID: \"5c231156-c29e-4661-a446-fb4bdf4ed652\") " Nov 21 21:54:31 crc kubenswrapper[4727]: I1121 21:54:31.972341 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c231156-c29e-4661-a446-fb4bdf4ed652-utilities\") pod \"5c231156-c29e-4661-a446-fb4bdf4ed652\" (UID: \"5c231156-c29e-4661-a446-fb4bdf4ed652\") " Nov 21 21:54:31 crc kubenswrapper[4727]: I1121 21:54:31.973872 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c231156-c29e-4661-a446-fb4bdf4ed652-utilities" (OuterVolumeSpecName: "utilities") pod "5c231156-c29e-4661-a446-fb4bdf4ed652" (UID: "5c231156-c29e-4661-a446-fb4bdf4ed652"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 21:54:31 crc kubenswrapper[4727]: I1121 21:54:31.980290 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c231156-c29e-4661-a446-fb4bdf4ed652-kube-api-access-2sqxc" (OuterVolumeSpecName: "kube-api-access-2sqxc") pod "5c231156-c29e-4661-a446-fb4bdf4ed652" (UID: "5c231156-c29e-4661-a446-fb4bdf4ed652"). InnerVolumeSpecName "kube-api-access-2sqxc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 21:54:32 crc kubenswrapper[4727]: I1121 21:54:32.034605 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c231156-c29e-4661-a446-fb4bdf4ed652-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5c231156-c29e-4661-a446-fb4bdf4ed652" (UID: "5c231156-c29e-4661-a446-fb4bdf4ed652"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 21:54:32 crc kubenswrapper[4727]: I1121 21:54:32.075294 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c231156-c29e-4661-a446-fb4bdf4ed652-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 21:54:32 crc kubenswrapper[4727]: I1121 21:54:32.075323 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c231156-c29e-4661-a446-fb4bdf4ed652-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 21:54:32 crc kubenswrapper[4727]: I1121 21:54:32.075334 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2sqxc\" (UniqueName: \"kubernetes.io/projected/5c231156-c29e-4661-a446-fb4bdf4ed652-kube-api-access-2sqxc\") on node \"crc\" DevicePath \"\"" Nov 21 21:54:32 crc kubenswrapper[4727]: I1121 21:54:32.194789 4727 generic.go:334] "Generic (PLEG): container finished" podID="5c231156-c29e-4661-a446-fb4bdf4ed652" 
containerID="12b579ae202fcbfd212ff4e9a9a1dc4273c26408dc8d7f30f106531d7542e3c3" exitCode=0 Nov 21 21:54:32 crc kubenswrapper[4727]: I1121 21:54:32.194860 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c4tcl" event={"ID":"5c231156-c29e-4661-a446-fb4bdf4ed652","Type":"ContainerDied","Data":"12b579ae202fcbfd212ff4e9a9a1dc4273c26408dc8d7f30f106531d7542e3c3"} Nov 21 21:54:32 crc kubenswrapper[4727]: I1121 21:54:32.194889 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c4tcl" Nov 21 21:54:32 crc kubenswrapper[4727]: I1121 21:54:32.194905 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c4tcl" event={"ID":"5c231156-c29e-4661-a446-fb4bdf4ed652","Type":"ContainerDied","Data":"8a22c41a7e809a69dd3be6f542fca768489e6376050e435c4989d5f8e89a2e27"} Nov 21 21:54:32 crc kubenswrapper[4727]: I1121 21:54:32.194938 4727 scope.go:117] "RemoveContainer" containerID="12b579ae202fcbfd212ff4e9a9a1dc4273c26408dc8d7f30f106531d7542e3c3" Nov 21 21:54:32 crc kubenswrapper[4727]: I1121 21:54:32.240229 4727 scope.go:117] "RemoveContainer" containerID="35e134b44ba912a898cb319c299981c8bb524d7ff05f0dbc45340f5c7f86d53a" Nov 21 21:54:32 crc kubenswrapper[4727]: I1121 21:54:32.255864 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c4tcl"] Nov 21 21:54:32 crc kubenswrapper[4727]: I1121 21:54:32.274901 4727 scope.go:117] "RemoveContainer" containerID="c6c27754d4019136338c099467c7a816b469a1c26ec83c25a055102f6e89631e" Nov 21 21:54:32 crc kubenswrapper[4727]: I1121 21:54:32.279576 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-c4tcl"] Nov 21 21:54:32 crc kubenswrapper[4727]: I1121 21:54:32.360889 4727 scope.go:117] "RemoveContainer" containerID="12b579ae202fcbfd212ff4e9a9a1dc4273c26408dc8d7f30f106531d7542e3c3" Nov 21 
21:54:32 crc kubenswrapper[4727]: E1121 21:54:32.361857 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12b579ae202fcbfd212ff4e9a9a1dc4273c26408dc8d7f30f106531d7542e3c3\": container with ID starting with 12b579ae202fcbfd212ff4e9a9a1dc4273c26408dc8d7f30f106531d7542e3c3 not found: ID does not exist" containerID="12b579ae202fcbfd212ff4e9a9a1dc4273c26408dc8d7f30f106531d7542e3c3" Nov 21 21:54:32 crc kubenswrapper[4727]: I1121 21:54:32.361943 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12b579ae202fcbfd212ff4e9a9a1dc4273c26408dc8d7f30f106531d7542e3c3"} err="failed to get container status \"12b579ae202fcbfd212ff4e9a9a1dc4273c26408dc8d7f30f106531d7542e3c3\": rpc error: code = NotFound desc = could not find container \"12b579ae202fcbfd212ff4e9a9a1dc4273c26408dc8d7f30f106531d7542e3c3\": container with ID starting with 12b579ae202fcbfd212ff4e9a9a1dc4273c26408dc8d7f30f106531d7542e3c3 not found: ID does not exist" Nov 21 21:54:32 crc kubenswrapper[4727]: I1121 21:54:32.362018 4727 scope.go:117] "RemoveContainer" containerID="35e134b44ba912a898cb319c299981c8bb524d7ff05f0dbc45340f5c7f86d53a" Nov 21 21:54:32 crc kubenswrapper[4727]: E1121 21:54:32.362870 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35e134b44ba912a898cb319c299981c8bb524d7ff05f0dbc45340f5c7f86d53a\": container with ID starting with 35e134b44ba912a898cb319c299981c8bb524d7ff05f0dbc45340f5c7f86d53a not found: ID does not exist" containerID="35e134b44ba912a898cb319c299981c8bb524d7ff05f0dbc45340f5c7f86d53a" Nov 21 21:54:32 crc kubenswrapper[4727]: I1121 21:54:32.362938 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35e134b44ba912a898cb319c299981c8bb524d7ff05f0dbc45340f5c7f86d53a"} err="failed to get container status 
\"35e134b44ba912a898cb319c299981c8bb524d7ff05f0dbc45340f5c7f86d53a\": rpc error: code = NotFound desc = could not find container \"35e134b44ba912a898cb319c299981c8bb524d7ff05f0dbc45340f5c7f86d53a\": container with ID starting with 35e134b44ba912a898cb319c299981c8bb524d7ff05f0dbc45340f5c7f86d53a not found: ID does not exist" Nov 21 21:54:32 crc kubenswrapper[4727]: I1121 21:54:32.363029 4727 scope.go:117] "RemoveContainer" containerID="c6c27754d4019136338c099467c7a816b469a1c26ec83c25a055102f6e89631e" Nov 21 21:54:32 crc kubenswrapper[4727]: E1121 21:54:32.363478 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6c27754d4019136338c099467c7a816b469a1c26ec83c25a055102f6e89631e\": container with ID starting with c6c27754d4019136338c099467c7a816b469a1c26ec83c25a055102f6e89631e not found: ID does not exist" containerID="c6c27754d4019136338c099467c7a816b469a1c26ec83c25a055102f6e89631e" Nov 21 21:54:32 crc kubenswrapper[4727]: I1121 21:54:32.363549 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6c27754d4019136338c099467c7a816b469a1c26ec83c25a055102f6e89631e"} err="failed to get container status \"c6c27754d4019136338c099467c7a816b469a1c26ec83c25a055102f6e89631e\": rpc error: code = NotFound desc = could not find container \"c6c27754d4019136338c099467c7a816b469a1c26ec83c25a055102f6e89631e\": container with ID starting with c6c27754d4019136338c099467c7a816b469a1c26ec83c25a055102f6e89631e not found: ID does not exist" Nov 21 21:54:33 crc kubenswrapper[4727]: I1121 21:54:33.514704 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c231156-c29e-4661-a446-fb4bdf4ed652" path="/var/lib/kubelet/pods/5c231156-c29e-4661-a446-fb4bdf4ed652/volumes" Nov 21 21:54:34 crc kubenswrapper[4727]: I1121 21:54:34.640825 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-n2gcs"] Nov 21 21:54:34 crc 
kubenswrapper[4727]: E1121 21:54:34.641906 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c231156-c29e-4661-a446-fb4bdf4ed652" containerName="extract-content" Nov 21 21:54:34 crc kubenswrapper[4727]: I1121 21:54:34.641924 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c231156-c29e-4661-a446-fb4bdf4ed652" containerName="extract-content" Nov 21 21:54:34 crc kubenswrapper[4727]: E1121 21:54:34.641949 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c231156-c29e-4661-a446-fb4bdf4ed652" containerName="extract-utilities" Nov 21 21:54:34 crc kubenswrapper[4727]: I1121 21:54:34.641974 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c231156-c29e-4661-a446-fb4bdf4ed652" containerName="extract-utilities" Nov 21 21:54:34 crc kubenswrapper[4727]: E1121 21:54:34.642042 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c231156-c29e-4661-a446-fb4bdf4ed652" containerName="registry-server" Nov 21 21:54:34 crc kubenswrapper[4727]: I1121 21:54:34.642051 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c231156-c29e-4661-a446-fb4bdf4ed652" containerName="registry-server" Nov 21 21:54:34 crc kubenswrapper[4727]: I1121 21:54:34.642356 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c231156-c29e-4661-a446-fb4bdf4ed652" containerName="registry-server" Nov 21 21:54:34 crc kubenswrapper[4727]: I1121 21:54:34.644788 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-n2gcs" Nov 21 21:54:34 crc kubenswrapper[4727]: I1121 21:54:34.657643 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37b384df-d8e9-4cb3-8d32-a2e1f4672d9e-combined-ca-bundle\") pod \"heat-db-sync-n2gcs\" (UID: \"37b384df-d8e9-4cb3-8d32-a2e1f4672d9e\") " pod="openstack/heat-db-sync-n2gcs" Nov 21 21:54:34 crc kubenswrapper[4727]: I1121 21:54:34.658039 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdm6h\" (UniqueName: \"kubernetes.io/projected/37b384df-d8e9-4cb3-8d32-a2e1f4672d9e-kube-api-access-vdm6h\") pod \"heat-db-sync-n2gcs\" (UID: \"37b384df-d8e9-4cb3-8d32-a2e1f4672d9e\") " pod="openstack/heat-db-sync-n2gcs" Nov 21 21:54:34 crc kubenswrapper[4727]: I1121 21:54:34.658166 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37b384df-d8e9-4cb3-8d32-a2e1f4672d9e-config-data\") pod \"heat-db-sync-n2gcs\" (UID: \"37b384df-d8e9-4cb3-8d32-a2e1f4672d9e\") " pod="openstack/heat-db-sync-n2gcs" Nov 21 21:54:34 crc kubenswrapper[4727]: I1121 21:54:34.669363 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-n2gcs"] Nov 21 21:54:34 crc kubenswrapper[4727]: I1121 21:54:34.760901 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37b384df-d8e9-4cb3-8d32-a2e1f4672d9e-combined-ca-bundle\") pod \"heat-db-sync-n2gcs\" (UID: \"37b384df-d8e9-4cb3-8d32-a2e1f4672d9e\") " pod="openstack/heat-db-sync-n2gcs" Nov 21 21:54:34 crc kubenswrapper[4727]: I1121 21:54:34.761157 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdm6h\" (UniqueName: 
\"kubernetes.io/projected/37b384df-d8e9-4cb3-8d32-a2e1f4672d9e-kube-api-access-vdm6h\") pod \"heat-db-sync-n2gcs\" (UID: \"37b384df-d8e9-4cb3-8d32-a2e1f4672d9e\") " pod="openstack/heat-db-sync-n2gcs" Nov 21 21:54:34 crc kubenswrapper[4727]: I1121 21:54:34.761222 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37b384df-d8e9-4cb3-8d32-a2e1f4672d9e-config-data\") pod \"heat-db-sync-n2gcs\" (UID: \"37b384df-d8e9-4cb3-8d32-a2e1f4672d9e\") " pod="openstack/heat-db-sync-n2gcs" Nov 21 21:54:34 crc kubenswrapper[4727]: I1121 21:54:34.769471 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37b384df-d8e9-4cb3-8d32-a2e1f4672d9e-config-data\") pod \"heat-db-sync-n2gcs\" (UID: \"37b384df-d8e9-4cb3-8d32-a2e1f4672d9e\") " pod="openstack/heat-db-sync-n2gcs" Nov 21 21:54:34 crc kubenswrapper[4727]: I1121 21:54:34.769718 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37b384df-d8e9-4cb3-8d32-a2e1f4672d9e-combined-ca-bundle\") pod \"heat-db-sync-n2gcs\" (UID: \"37b384df-d8e9-4cb3-8d32-a2e1f4672d9e\") " pod="openstack/heat-db-sync-n2gcs" Nov 21 21:54:34 crc kubenswrapper[4727]: I1121 21:54:34.782449 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdm6h\" (UniqueName: \"kubernetes.io/projected/37b384df-d8e9-4cb3-8d32-a2e1f4672d9e-kube-api-access-vdm6h\") pod \"heat-db-sync-n2gcs\" (UID: \"37b384df-d8e9-4cb3-8d32-a2e1f4672d9e\") " pod="openstack/heat-db-sync-n2gcs" Nov 21 21:54:34 crc kubenswrapper[4727]: I1121 21:54:34.970733 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-n2gcs" Nov 21 21:54:35 crc kubenswrapper[4727]: I1121 21:54:35.538229 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-n2gcs"] Nov 21 21:54:36 crc kubenswrapper[4727]: I1121 21:54:36.294094 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-n2gcs" event={"ID":"37b384df-d8e9-4cb3-8d32-a2e1f4672d9e","Type":"ContainerStarted","Data":"898e545627da171ad98a501d4ae0dcb8a5d5744c3e1f848ab4464d6ea1656b6c"} Nov 21 21:54:37 crc kubenswrapper[4727]: I1121 21:54:37.023555 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 21 21:54:37 crc kubenswrapper[4727]: I1121 21:54:37.023918 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a92f8b6a-9ebe-4c1e-b63e-ad94ef056066" containerName="ceilometer-central-agent" containerID="cri-o://34e183ac2231b3f9af169cb9a5783cd233e17fff3ff60aed1b6ad88ca8e30f86" gracePeriod=30 Nov 21 21:54:37 crc kubenswrapper[4727]: I1121 21:54:37.024009 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a92f8b6a-9ebe-4c1e-b63e-ad94ef056066" containerName="ceilometer-notification-agent" containerID="cri-o://ad6e2845894e023261d606765ae67f8256a1293ae3572c818c9f7e65fa4b918c" gracePeriod=30 Nov 21 21:54:37 crc kubenswrapper[4727]: I1121 21:54:37.024028 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a92f8b6a-9ebe-4c1e-b63e-ad94ef056066" containerName="sg-core" containerID="cri-o://623b6c1c3db29076c2af847c60d3f31159b97646027ca53a85aa214fdb93e9bd" gracePeriod=30 Nov 21 21:54:37 crc kubenswrapper[4727]: I1121 21:54:37.024090 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a92f8b6a-9ebe-4c1e-b63e-ad94ef056066" containerName="proxy-httpd" 
containerID="cri-o://251e76d1bc2a9664c28f2b51d051181e2a801f49a2295d681b1f165f489691bf" gracePeriod=30 Nov 21 21:54:37 crc kubenswrapper[4727]: I1121 21:54:37.313271 4727 generic.go:334] "Generic (PLEG): container finished" podID="a92f8b6a-9ebe-4c1e-b63e-ad94ef056066" containerID="623b6c1c3db29076c2af847c60d3f31159b97646027ca53a85aa214fdb93e9bd" exitCode=2 Nov 21 21:54:37 crc kubenswrapper[4727]: I1121 21:54:37.313459 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a92f8b6a-9ebe-4c1e-b63e-ad94ef056066","Type":"ContainerDied","Data":"623b6c1c3db29076c2af847c60d3f31159b97646027ca53a85aa214fdb93e9bd"} Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.267869 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.329162 4727 generic.go:334] "Generic (PLEG): container finished" podID="a92f8b6a-9ebe-4c1e-b63e-ad94ef056066" containerID="251e76d1bc2a9664c28f2b51d051181e2a801f49a2295d681b1f165f489691bf" exitCode=0 Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.329195 4727 generic.go:334] "Generic (PLEG): container finished" podID="a92f8b6a-9ebe-4c1e-b63e-ad94ef056066" containerID="ad6e2845894e023261d606765ae67f8256a1293ae3572c818c9f7e65fa4b918c" exitCode=0 Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.329203 4727 generic.go:334] "Generic (PLEG): container finished" podID="a92f8b6a-9ebe-4c1e-b63e-ad94ef056066" containerID="34e183ac2231b3f9af169cb9a5783cd233e17fff3ff60aed1b6ad88ca8e30f86" exitCode=0 Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.329230 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a92f8b6a-9ebe-4c1e-b63e-ad94ef056066","Type":"ContainerDied","Data":"251e76d1bc2a9664c28f2b51d051181e2a801f49a2295d681b1f165f489691bf"} Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.329268 4727 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/ceilometer-0" event={"ID":"a92f8b6a-9ebe-4c1e-b63e-ad94ef056066","Type":"ContainerDied","Data":"ad6e2845894e023261d606765ae67f8256a1293ae3572c818c9f7e65fa4b918c"} Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.329277 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a92f8b6a-9ebe-4c1e-b63e-ad94ef056066","Type":"ContainerDied","Data":"34e183ac2231b3f9af169cb9a5783cd233e17fff3ff60aed1b6ad88ca8e30f86"} Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.329287 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a92f8b6a-9ebe-4c1e-b63e-ad94ef056066","Type":"ContainerDied","Data":"170f3afcf1490ec08e69672d63ec227623a487b28a0a9ba60e27aa20a063b577"} Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.329305 4727 scope.go:117] "RemoveContainer" containerID="623b6c1c3db29076c2af847c60d3f31159b97646027ca53a85aa214fdb93e9bd" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.329529 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.369384 4727 scope.go:117] "RemoveContainer" containerID="251e76d1bc2a9664c28f2b51d051181e2a801f49a2295d681b1f165f489691bf" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.374118 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a92f8b6a-9ebe-4c1e-b63e-ad94ef056066-scripts\") pod \"a92f8b6a-9ebe-4c1e-b63e-ad94ef056066\" (UID: \"a92f8b6a-9ebe-4c1e-b63e-ad94ef056066\") " Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.374173 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a92f8b6a-9ebe-4c1e-b63e-ad94ef056066-ceilometer-tls-certs\") pod \"a92f8b6a-9ebe-4c1e-b63e-ad94ef056066\" (UID: \"a92f8b6a-9ebe-4c1e-b63e-ad94ef056066\") " Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.374202 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a92f8b6a-9ebe-4c1e-b63e-ad94ef056066-sg-core-conf-yaml\") pod \"a92f8b6a-9ebe-4c1e-b63e-ad94ef056066\" (UID: \"a92f8b6a-9ebe-4c1e-b63e-ad94ef056066\") " Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.374365 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a92f8b6a-9ebe-4c1e-b63e-ad94ef056066-combined-ca-bundle\") pod \"a92f8b6a-9ebe-4c1e-b63e-ad94ef056066\" (UID: \"a92f8b6a-9ebe-4c1e-b63e-ad94ef056066\") " Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.374480 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a92f8b6a-9ebe-4c1e-b63e-ad94ef056066-run-httpd\") pod \"a92f8b6a-9ebe-4c1e-b63e-ad94ef056066\" (UID: \"a92f8b6a-9ebe-4c1e-b63e-ad94ef056066\") " Nov 21 
21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.374731 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjljj\" (UniqueName: \"kubernetes.io/projected/a92f8b6a-9ebe-4c1e-b63e-ad94ef056066-kube-api-access-vjljj\") pod \"a92f8b6a-9ebe-4c1e-b63e-ad94ef056066\" (UID: \"a92f8b6a-9ebe-4c1e-b63e-ad94ef056066\") " Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.374796 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a92f8b6a-9ebe-4c1e-b63e-ad94ef056066-config-data\") pod \"a92f8b6a-9ebe-4c1e-b63e-ad94ef056066\" (UID: \"a92f8b6a-9ebe-4c1e-b63e-ad94ef056066\") " Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.374844 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a92f8b6a-9ebe-4c1e-b63e-ad94ef056066-log-httpd\") pod \"a92f8b6a-9ebe-4c1e-b63e-ad94ef056066\" (UID: \"a92f8b6a-9ebe-4c1e-b63e-ad94ef056066\") " Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.378020 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a92f8b6a-9ebe-4c1e-b63e-ad94ef056066-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a92f8b6a-9ebe-4c1e-b63e-ad94ef056066" (UID: "a92f8b6a-9ebe-4c1e-b63e-ad94ef056066"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.378364 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a92f8b6a-9ebe-4c1e-b63e-ad94ef056066-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a92f8b6a-9ebe-4c1e-b63e-ad94ef056066" (UID: "a92f8b6a-9ebe-4c1e-b63e-ad94ef056066"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.383374 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a92f8b6a-9ebe-4c1e-b63e-ad94ef056066-scripts" (OuterVolumeSpecName: "scripts") pod "a92f8b6a-9ebe-4c1e-b63e-ad94ef056066" (UID: "a92f8b6a-9ebe-4c1e-b63e-ad94ef056066"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.386140 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a92f8b6a-9ebe-4c1e-b63e-ad94ef056066-kube-api-access-vjljj" (OuterVolumeSpecName: "kube-api-access-vjljj") pod "a92f8b6a-9ebe-4c1e-b63e-ad94ef056066" (UID: "a92f8b6a-9ebe-4c1e-b63e-ad94ef056066"). InnerVolumeSpecName "kube-api-access-vjljj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.419275 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a92f8b6a-9ebe-4c1e-b63e-ad94ef056066-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a92f8b6a-9ebe-4c1e-b63e-ad94ef056066" (UID: "a92f8b6a-9ebe-4c1e-b63e-ad94ef056066"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.445212 4727 scope.go:117] "RemoveContainer" containerID="ad6e2845894e023261d606765ae67f8256a1293ae3572c818c9f7e65fa4b918c" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.474463 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a92f8b6a-9ebe-4c1e-b63e-ad94ef056066-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "a92f8b6a-9ebe-4c1e-b63e-ad94ef056066" (UID: "a92f8b6a-9ebe-4c1e-b63e-ad94ef056066"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.478866 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjljj\" (UniqueName: \"kubernetes.io/projected/a92f8b6a-9ebe-4c1e-b63e-ad94ef056066-kube-api-access-vjljj\") on node \"crc\" DevicePath \"\"" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.478930 4727 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a92f8b6a-9ebe-4c1e-b63e-ad94ef056066-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.478941 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a92f8b6a-9ebe-4c1e-b63e-ad94ef056066-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.478996 4727 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a92f8b6a-9ebe-4c1e-b63e-ad94ef056066-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.479006 4727 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a92f8b6a-9ebe-4c1e-b63e-ad94ef056066-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.479016 4727 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a92f8b6a-9ebe-4c1e-b63e-ad94ef056066-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.491264 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a92f8b6a-9ebe-4c1e-b63e-ad94ef056066-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a92f8b6a-9ebe-4c1e-b63e-ad94ef056066" (UID: 
"a92f8b6a-9ebe-4c1e-b63e-ad94ef056066"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.539495 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a92f8b6a-9ebe-4c1e-b63e-ad94ef056066-config-data" (OuterVolumeSpecName: "config-data") pod "a92f8b6a-9ebe-4c1e-b63e-ad94ef056066" (UID: "a92f8b6a-9ebe-4c1e-b63e-ad94ef056066"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.581907 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a92f8b6a-9ebe-4c1e-b63e-ad94ef056066-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.581950 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a92f8b6a-9ebe-4c1e-b63e-ad94ef056066-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.591803 4727 scope.go:117] "RemoveContainer" containerID="34e183ac2231b3f9af169cb9a5783cd233e17fff3ff60aed1b6ad88ca8e30f86" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.634070 4727 scope.go:117] "RemoveContainer" containerID="623b6c1c3db29076c2af847c60d3f31159b97646027ca53a85aa214fdb93e9bd" Nov 21 21:54:38 crc kubenswrapper[4727]: E1121 21:54:38.636034 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"623b6c1c3db29076c2af847c60d3f31159b97646027ca53a85aa214fdb93e9bd\": container with ID starting with 623b6c1c3db29076c2af847c60d3f31159b97646027ca53a85aa214fdb93e9bd not found: ID does not exist" containerID="623b6c1c3db29076c2af847c60d3f31159b97646027ca53a85aa214fdb93e9bd" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.636071 4727 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"623b6c1c3db29076c2af847c60d3f31159b97646027ca53a85aa214fdb93e9bd"} err="failed to get container status \"623b6c1c3db29076c2af847c60d3f31159b97646027ca53a85aa214fdb93e9bd\": rpc error: code = NotFound desc = could not find container \"623b6c1c3db29076c2af847c60d3f31159b97646027ca53a85aa214fdb93e9bd\": container with ID starting with 623b6c1c3db29076c2af847c60d3f31159b97646027ca53a85aa214fdb93e9bd not found: ID does not exist" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.636103 4727 scope.go:117] "RemoveContainer" containerID="251e76d1bc2a9664c28f2b51d051181e2a801f49a2295d681b1f165f489691bf" Nov 21 21:54:38 crc kubenswrapper[4727]: E1121 21:54:38.637611 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"251e76d1bc2a9664c28f2b51d051181e2a801f49a2295d681b1f165f489691bf\": container with ID starting with 251e76d1bc2a9664c28f2b51d051181e2a801f49a2295d681b1f165f489691bf not found: ID does not exist" containerID="251e76d1bc2a9664c28f2b51d051181e2a801f49a2295d681b1f165f489691bf" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.637666 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"251e76d1bc2a9664c28f2b51d051181e2a801f49a2295d681b1f165f489691bf"} err="failed to get container status \"251e76d1bc2a9664c28f2b51d051181e2a801f49a2295d681b1f165f489691bf\": rpc error: code = NotFound desc = could not find container \"251e76d1bc2a9664c28f2b51d051181e2a801f49a2295d681b1f165f489691bf\": container with ID starting with 251e76d1bc2a9664c28f2b51d051181e2a801f49a2295d681b1f165f489691bf not found: ID does not exist" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.637696 4727 scope.go:117] "RemoveContainer" containerID="ad6e2845894e023261d606765ae67f8256a1293ae3572c818c9f7e65fa4b918c" Nov 21 21:54:38 crc kubenswrapper[4727]: E1121 
21:54:38.638588 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad6e2845894e023261d606765ae67f8256a1293ae3572c818c9f7e65fa4b918c\": container with ID starting with ad6e2845894e023261d606765ae67f8256a1293ae3572c818c9f7e65fa4b918c not found: ID does not exist" containerID="ad6e2845894e023261d606765ae67f8256a1293ae3572c818c9f7e65fa4b918c" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.638617 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad6e2845894e023261d606765ae67f8256a1293ae3572c818c9f7e65fa4b918c"} err="failed to get container status \"ad6e2845894e023261d606765ae67f8256a1293ae3572c818c9f7e65fa4b918c\": rpc error: code = NotFound desc = could not find container \"ad6e2845894e023261d606765ae67f8256a1293ae3572c818c9f7e65fa4b918c\": container with ID starting with ad6e2845894e023261d606765ae67f8256a1293ae3572c818c9f7e65fa4b918c not found: ID does not exist" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.638632 4727 scope.go:117] "RemoveContainer" containerID="34e183ac2231b3f9af169cb9a5783cd233e17fff3ff60aed1b6ad88ca8e30f86" Nov 21 21:54:38 crc kubenswrapper[4727]: E1121 21:54:38.639025 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34e183ac2231b3f9af169cb9a5783cd233e17fff3ff60aed1b6ad88ca8e30f86\": container with ID starting with 34e183ac2231b3f9af169cb9a5783cd233e17fff3ff60aed1b6ad88ca8e30f86 not found: ID does not exist" containerID="34e183ac2231b3f9af169cb9a5783cd233e17fff3ff60aed1b6ad88ca8e30f86" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.639048 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34e183ac2231b3f9af169cb9a5783cd233e17fff3ff60aed1b6ad88ca8e30f86"} err="failed to get container status \"34e183ac2231b3f9af169cb9a5783cd233e17fff3ff60aed1b6ad88ca8e30f86\": rpc 
error: code = NotFound desc = could not find container \"34e183ac2231b3f9af169cb9a5783cd233e17fff3ff60aed1b6ad88ca8e30f86\": container with ID starting with 34e183ac2231b3f9af169cb9a5783cd233e17fff3ff60aed1b6ad88ca8e30f86 not found: ID does not exist" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.639064 4727 scope.go:117] "RemoveContainer" containerID="623b6c1c3db29076c2af847c60d3f31159b97646027ca53a85aa214fdb93e9bd" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.639677 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"623b6c1c3db29076c2af847c60d3f31159b97646027ca53a85aa214fdb93e9bd"} err="failed to get container status \"623b6c1c3db29076c2af847c60d3f31159b97646027ca53a85aa214fdb93e9bd\": rpc error: code = NotFound desc = could not find container \"623b6c1c3db29076c2af847c60d3f31159b97646027ca53a85aa214fdb93e9bd\": container with ID starting with 623b6c1c3db29076c2af847c60d3f31159b97646027ca53a85aa214fdb93e9bd not found: ID does not exist" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.639703 4727 scope.go:117] "RemoveContainer" containerID="251e76d1bc2a9664c28f2b51d051181e2a801f49a2295d681b1f165f489691bf" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.639918 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"251e76d1bc2a9664c28f2b51d051181e2a801f49a2295d681b1f165f489691bf"} err="failed to get container status \"251e76d1bc2a9664c28f2b51d051181e2a801f49a2295d681b1f165f489691bf\": rpc error: code = NotFound desc = could not find container \"251e76d1bc2a9664c28f2b51d051181e2a801f49a2295d681b1f165f489691bf\": container with ID starting with 251e76d1bc2a9664c28f2b51d051181e2a801f49a2295d681b1f165f489691bf not found: ID does not exist" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.639936 4727 scope.go:117] "RemoveContainer" containerID="ad6e2845894e023261d606765ae67f8256a1293ae3572c818c9f7e65fa4b918c" Nov 21 21:54:38 crc 
kubenswrapper[4727]: I1121 21:54:38.640377 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad6e2845894e023261d606765ae67f8256a1293ae3572c818c9f7e65fa4b918c"} err="failed to get container status \"ad6e2845894e023261d606765ae67f8256a1293ae3572c818c9f7e65fa4b918c\": rpc error: code = NotFound desc = could not find container \"ad6e2845894e023261d606765ae67f8256a1293ae3572c818c9f7e65fa4b918c\": container with ID starting with ad6e2845894e023261d606765ae67f8256a1293ae3572c818c9f7e65fa4b918c not found: ID does not exist" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.640430 4727 scope.go:117] "RemoveContainer" containerID="34e183ac2231b3f9af169cb9a5783cd233e17fff3ff60aed1b6ad88ca8e30f86" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.641310 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34e183ac2231b3f9af169cb9a5783cd233e17fff3ff60aed1b6ad88ca8e30f86"} err="failed to get container status \"34e183ac2231b3f9af169cb9a5783cd233e17fff3ff60aed1b6ad88ca8e30f86\": rpc error: code = NotFound desc = could not find container \"34e183ac2231b3f9af169cb9a5783cd233e17fff3ff60aed1b6ad88ca8e30f86\": container with ID starting with 34e183ac2231b3f9af169cb9a5783cd233e17fff3ff60aed1b6ad88ca8e30f86 not found: ID does not exist" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.641339 4727 scope.go:117] "RemoveContainer" containerID="623b6c1c3db29076c2af847c60d3f31159b97646027ca53a85aa214fdb93e9bd" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.641638 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"623b6c1c3db29076c2af847c60d3f31159b97646027ca53a85aa214fdb93e9bd"} err="failed to get container status \"623b6c1c3db29076c2af847c60d3f31159b97646027ca53a85aa214fdb93e9bd\": rpc error: code = NotFound desc = could not find container \"623b6c1c3db29076c2af847c60d3f31159b97646027ca53a85aa214fdb93e9bd\": container 
with ID starting with 623b6c1c3db29076c2af847c60d3f31159b97646027ca53a85aa214fdb93e9bd not found: ID does not exist" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.641659 4727 scope.go:117] "RemoveContainer" containerID="251e76d1bc2a9664c28f2b51d051181e2a801f49a2295d681b1f165f489691bf" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.642106 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"251e76d1bc2a9664c28f2b51d051181e2a801f49a2295d681b1f165f489691bf"} err="failed to get container status \"251e76d1bc2a9664c28f2b51d051181e2a801f49a2295d681b1f165f489691bf\": rpc error: code = NotFound desc = could not find container \"251e76d1bc2a9664c28f2b51d051181e2a801f49a2295d681b1f165f489691bf\": container with ID starting with 251e76d1bc2a9664c28f2b51d051181e2a801f49a2295d681b1f165f489691bf not found: ID does not exist" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.642131 4727 scope.go:117] "RemoveContainer" containerID="ad6e2845894e023261d606765ae67f8256a1293ae3572c818c9f7e65fa4b918c" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.642744 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad6e2845894e023261d606765ae67f8256a1293ae3572c818c9f7e65fa4b918c"} err="failed to get container status \"ad6e2845894e023261d606765ae67f8256a1293ae3572c818c9f7e65fa4b918c\": rpc error: code = NotFound desc = could not find container \"ad6e2845894e023261d606765ae67f8256a1293ae3572c818c9f7e65fa4b918c\": container with ID starting with ad6e2845894e023261d606765ae67f8256a1293ae3572c818c9f7e65fa4b918c not found: ID does not exist" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.642771 4727 scope.go:117] "RemoveContainer" containerID="34e183ac2231b3f9af169cb9a5783cd233e17fff3ff60aed1b6ad88ca8e30f86" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.644533 4727 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"34e183ac2231b3f9af169cb9a5783cd233e17fff3ff60aed1b6ad88ca8e30f86"} err="failed to get container status \"34e183ac2231b3f9af169cb9a5783cd233e17fff3ff60aed1b6ad88ca8e30f86\": rpc error: code = NotFound desc = could not find container \"34e183ac2231b3f9af169cb9a5783cd233e17fff3ff60aed1b6ad88ca8e30f86\": container with ID starting with 34e183ac2231b3f9af169cb9a5783cd233e17fff3ff60aed1b6ad88ca8e30f86 not found: ID does not exist" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.681328 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.698136 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.713827 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 21 21:54:38 crc kubenswrapper[4727]: E1121 21:54:38.714780 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a92f8b6a-9ebe-4c1e-b63e-ad94ef056066" containerName="ceilometer-central-agent" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.714801 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="a92f8b6a-9ebe-4c1e-b63e-ad94ef056066" containerName="ceilometer-central-agent" Nov 21 21:54:38 crc kubenswrapper[4727]: E1121 21:54:38.714817 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a92f8b6a-9ebe-4c1e-b63e-ad94ef056066" containerName="ceilometer-notification-agent" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.714846 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="a92f8b6a-9ebe-4c1e-b63e-ad94ef056066" containerName="ceilometer-notification-agent" Nov 21 21:54:38 crc kubenswrapper[4727]: E1121 21:54:38.714877 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a92f8b6a-9ebe-4c1e-b63e-ad94ef056066" containerName="sg-core" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 
21:54:38.714903 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="a92f8b6a-9ebe-4c1e-b63e-ad94ef056066" containerName="sg-core" Nov 21 21:54:38 crc kubenswrapper[4727]: E1121 21:54:38.714920 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a92f8b6a-9ebe-4c1e-b63e-ad94ef056066" containerName="proxy-httpd" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.714928 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="a92f8b6a-9ebe-4c1e-b63e-ad94ef056066" containerName="proxy-httpd" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.715304 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="a92f8b6a-9ebe-4c1e-b63e-ad94ef056066" containerName="sg-core" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.715335 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="a92f8b6a-9ebe-4c1e-b63e-ad94ef056066" containerName="ceilometer-central-agent" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.715347 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="a92f8b6a-9ebe-4c1e-b63e-ad94ef056066" containerName="proxy-httpd" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.715384 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="a92f8b6a-9ebe-4c1e-b63e-ad94ef056066" containerName="ceilometer-notification-agent" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.720056 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.722652 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.722838 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.722867 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.726880 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.888378 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/038d5bff-73d1-4dd6-a5ca-6501e07617a1-scripts\") pod \"ceilometer-0\" (UID: \"038d5bff-73d1-4dd6-a5ca-6501e07617a1\") " pod="openstack/ceilometer-0" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.888429 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x4cz\" (UniqueName: \"kubernetes.io/projected/038d5bff-73d1-4dd6-a5ca-6501e07617a1-kube-api-access-8x4cz\") pod \"ceilometer-0\" (UID: \"038d5bff-73d1-4dd6-a5ca-6501e07617a1\") " pod="openstack/ceilometer-0" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.888824 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/038d5bff-73d1-4dd6-a5ca-6501e07617a1-log-httpd\") pod \"ceilometer-0\" (UID: \"038d5bff-73d1-4dd6-a5ca-6501e07617a1\") " pod="openstack/ceilometer-0" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.888906 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/038d5bff-73d1-4dd6-a5ca-6501e07617a1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"038d5bff-73d1-4dd6-a5ca-6501e07617a1\") " pod="openstack/ceilometer-0" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.888932 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/038d5bff-73d1-4dd6-a5ca-6501e07617a1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"038d5bff-73d1-4dd6-a5ca-6501e07617a1\") " pod="openstack/ceilometer-0" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.888991 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/038d5bff-73d1-4dd6-a5ca-6501e07617a1-run-httpd\") pod \"ceilometer-0\" (UID: \"038d5bff-73d1-4dd6-a5ca-6501e07617a1\") " pod="openstack/ceilometer-0" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.889076 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/038d5bff-73d1-4dd6-a5ca-6501e07617a1-config-data\") pod \"ceilometer-0\" (UID: \"038d5bff-73d1-4dd6-a5ca-6501e07617a1\") " pod="openstack/ceilometer-0" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.889256 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/038d5bff-73d1-4dd6-a5ca-6501e07617a1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"038d5bff-73d1-4dd6-a5ca-6501e07617a1\") " pod="openstack/ceilometer-0" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.991813 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/038d5bff-73d1-4dd6-a5ca-6501e07617a1-scripts\") pod \"ceilometer-0\" (UID: 
\"038d5bff-73d1-4dd6-a5ca-6501e07617a1\") " pod="openstack/ceilometer-0" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.991879 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8x4cz\" (UniqueName: \"kubernetes.io/projected/038d5bff-73d1-4dd6-a5ca-6501e07617a1-kube-api-access-8x4cz\") pod \"ceilometer-0\" (UID: \"038d5bff-73d1-4dd6-a5ca-6501e07617a1\") " pod="openstack/ceilometer-0" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.991936 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/038d5bff-73d1-4dd6-a5ca-6501e07617a1-log-httpd\") pod \"ceilometer-0\" (UID: \"038d5bff-73d1-4dd6-a5ca-6501e07617a1\") " pod="openstack/ceilometer-0" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.992023 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/038d5bff-73d1-4dd6-a5ca-6501e07617a1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"038d5bff-73d1-4dd6-a5ca-6501e07617a1\") " pod="openstack/ceilometer-0" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.992047 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/038d5bff-73d1-4dd6-a5ca-6501e07617a1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"038d5bff-73d1-4dd6-a5ca-6501e07617a1\") " pod="openstack/ceilometer-0" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.992086 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/038d5bff-73d1-4dd6-a5ca-6501e07617a1-run-httpd\") pod \"ceilometer-0\" (UID: \"038d5bff-73d1-4dd6-a5ca-6501e07617a1\") " pod="openstack/ceilometer-0" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.992149 4727 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/038d5bff-73d1-4dd6-a5ca-6501e07617a1-config-data\") pod \"ceilometer-0\" (UID: \"038d5bff-73d1-4dd6-a5ca-6501e07617a1\") " pod="openstack/ceilometer-0" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.992212 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/038d5bff-73d1-4dd6-a5ca-6501e07617a1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"038d5bff-73d1-4dd6-a5ca-6501e07617a1\") " pod="openstack/ceilometer-0" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.993136 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/038d5bff-73d1-4dd6-a5ca-6501e07617a1-log-httpd\") pod \"ceilometer-0\" (UID: \"038d5bff-73d1-4dd6-a5ca-6501e07617a1\") " pod="openstack/ceilometer-0" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.993136 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/038d5bff-73d1-4dd6-a5ca-6501e07617a1-run-httpd\") pod \"ceilometer-0\" (UID: \"038d5bff-73d1-4dd6-a5ca-6501e07617a1\") " pod="openstack/ceilometer-0" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.997039 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/038d5bff-73d1-4dd6-a5ca-6501e07617a1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"038d5bff-73d1-4dd6-a5ca-6501e07617a1\") " pod="openstack/ceilometer-0" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 21:54:38.997557 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/038d5bff-73d1-4dd6-a5ca-6501e07617a1-scripts\") pod \"ceilometer-0\" (UID: \"038d5bff-73d1-4dd6-a5ca-6501e07617a1\") " pod="openstack/ceilometer-0" Nov 21 21:54:38 crc kubenswrapper[4727]: I1121 
21:54:38.999611 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/038d5bff-73d1-4dd6-a5ca-6501e07617a1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"038d5bff-73d1-4dd6-a5ca-6501e07617a1\") " pod="openstack/ceilometer-0" Nov 21 21:54:39 crc kubenswrapper[4727]: I1121 21:54:39.004145 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/038d5bff-73d1-4dd6-a5ca-6501e07617a1-config-data\") pod \"ceilometer-0\" (UID: \"038d5bff-73d1-4dd6-a5ca-6501e07617a1\") " pod="openstack/ceilometer-0" Nov 21 21:54:39 crc kubenswrapper[4727]: I1121 21:54:39.004863 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/038d5bff-73d1-4dd6-a5ca-6501e07617a1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"038d5bff-73d1-4dd6-a5ca-6501e07617a1\") " pod="openstack/ceilometer-0" Nov 21 21:54:39 crc kubenswrapper[4727]: I1121 21:54:39.015214 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8x4cz\" (UniqueName: \"kubernetes.io/projected/038d5bff-73d1-4dd6-a5ca-6501e07617a1-kube-api-access-8x4cz\") pod \"ceilometer-0\" (UID: \"038d5bff-73d1-4dd6-a5ca-6501e07617a1\") " pod="openstack/ceilometer-0" Nov 21 21:54:39 crc kubenswrapper[4727]: I1121 21:54:39.047638 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 21 21:54:39 crc kubenswrapper[4727]: I1121 21:54:39.523702 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a92f8b6a-9ebe-4c1e-b63e-ad94ef056066" path="/var/lib/kubelet/pods/a92f8b6a-9ebe-4c1e-b63e-ad94ef056066/volumes" Nov 21 21:54:39 crc kubenswrapper[4727]: I1121 21:54:39.574440 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 21:54:39 crc kubenswrapper[4727]: W1121 21:54:39.586228 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod038d5bff_73d1_4dd6_a5ca_6501e07617a1.slice/crio-a01160c004c6a234f3112b126f3880f801441f39af097cd9a4f7a5f439571ef0 WatchSource:0}: Error finding container a01160c004c6a234f3112b126f3880f801441f39af097cd9a4f7a5f439571ef0: Status 404 returned error can't find the container with id a01160c004c6a234f3112b126f3880f801441f39af097cd9a4f7a5f439571ef0 Nov 21 21:54:40 crc kubenswrapper[4727]: I1121 21:54:40.409772 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"038d5bff-73d1-4dd6-a5ca-6501e07617a1","Type":"ContainerStarted","Data":"a01160c004c6a234f3112b126f3880f801441f39af097cd9a4f7a5f439571ef0"} Nov 21 21:54:41 crc kubenswrapper[4727]: I1121 21:54:41.006229 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Nov 21 21:54:41 crc kubenswrapper[4727]: I1121 21:54:41.008690 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 21 21:54:41 crc kubenswrapper[4727]: I1121 21:54:41.013212 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Nov 21 21:54:41 crc kubenswrapper[4727]: I1121 21:54:41.013482 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-vbccx" Nov 21 21:54:41 crc kubenswrapper[4727]: I1121 21:54:41.013532 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Nov 21 21:54:41 crc kubenswrapper[4727]: I1121 21:54:41.013653 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Nov 21 21:54:41 crc kubenswrapper[4727]: I1121 21:54:41.018105 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Nov 21 21:54:41 crc kubenswrapper[4727]: I1121 21:54:41.056607 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9523e234-9365-483e-8548-43a9c312692e-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"9523e234-9365-483e-8548-43a9c312692e\") " pod="openstack/tempest-tests-tempest" Nov 21 21:54:41 crc kubenswrapper[4727]: I1121 21:54:41.056662 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/9523e234-9365-483e-8548-43a9c312692e-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"9523e234-9365-483e-8548-43a9c312692e\") " pod="openstack/tempest-tests-tempest" Nov 21 21:54:41 crc kubenswrapper[4727]: I1121 21:54:41.056760 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9523e234-9365-483e-8548-43a9c312692e-ssh-key\") pod \"tempest-tests-tempest\" 
(UID: \"9523e234-9365-483e-8548-43a9c312692e\") " pod="openstack/tempest-tests-tempest" Nov 21 21:54:41 crc kubenswrapper[4727]: I1121 21:54:41.056785 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/9523e234-9365-483e-8548-43a9c312692e-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"9523e234-9365-483e-8548-43a9c312692e\") " pod="openstack/tempest-tests-tempest" Nov 21 21:54:41 crc kubenswrapper[4727]: I1121 21:54:41.056835 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9523e234-9365-483e-8548-43a9c312692e-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"9523e234-9365-483e-8548-43a9c312692e\") " pod="openstack/tempest-tests-tempest" Nov 21 21:54:41 crc kubenswrapper[4727]: I1121 21:54:41.056938 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest\" (UID: \"9523e234-9365-483e-8548-43a9c312692e\") " pod="openstack/tempest-tests-tempest" Nov 21 21:54:41 crc kubenswrapper[4727]: I1121 21:54:41.056997 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/9523e234-9365-483e-8548-43a9c312692e-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"9523e234-9365-483e-8548-43a9c312692e\") " pod="openstack/tempest-tests-tempest" Nov 21 21:54:41 crc kubenswrapper[4727]: I1121 21:54:41.057037 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92rg5\" (UniqueName: 
\"kubernetes.io/projected/9523e234-9365-483e-8548-43a9c312692e-kube-api-access-92rg5\") pod \"tempest-tests-tempest\" (UID: \"9523e234-9365-483e-8548-43a9c312692e\") " pod="openstack/tempest-tests-tempest" Nov 21 21:54:41 crc kubenswrapper[4727]: I1121 21:54:41.057066 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9523e234-9365-483e-8548-43a9c312692e-config-data\") pod \"tempest-tests-tempest\" (UID: \"9523e234-9365-483e-8548-43a9c312692e\") " pod="openstack/tempest-tests-tempest" Nov 21 21:54:41 crc kubenswrapper[4727]: I1121 21:54:41.158760 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest\" (UID: \"9523e234-9365-483e-8548-43a9c312692e\") " pod="openstack/tempest-tests-tempest" Nov 21 21:54:41 crc kubenswrapper[4727]: I1121 21:54:41.158836 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/9523e234-9365-483e-8548-43a9c312692e-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"9523e234-9365-483e-8548-43a9c312692e\") " pod="openstack/tempest-tests-tempest" Nov 21 21:54:41 crc kubenswrapper[4727]: I1121 21:54:41.158896 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92rg5\" (UniqueName: \"kubernetes.io/projected/9523e234-9365-483e-8548-43a9c312692e-kube-api-access-92rg5\") pod \"tempest-tests-tempest\" (UID: \"9523e234-9365-483e-8548-43a9c312692e\") " pod="openstack/tempest-tests-tempest" Nov 21 21:54:41 crc kubenswrapper[4727]: I1121 21:54:41.158927 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9523e234-9365-483e-8548-43a9c312692e-config-data\") 
pod \"tempest-tests-tempest\" (UID: \"9523e234-9365-483e-8548-43a9c312692e\") " pod="openstack/tempest-tests-tempest" Nov 21 21:54:41 crc kubenswrapper[4727]: I1121 21:54:41.158987 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9523e234-9365-483e-8548-43a9c312692e-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"9523e234-9365-483e-8548-43a9c312692e\") " pod="openstack/tempest-tests-tempest" Nov 21 21:54:41 crc kubenswrapper[4727]: I1121 21:54:41.159011 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/9523e234-9365-483e-8548-43a9c312692e-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"9523e234-9365-483e-8548-43a9c312692e\") " pod="openstack/tempest-tests-tempest" Nov 21 21:54:41 crc kubenswrapper[4727]: I1121 21:54:41.159090 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9523e234-9365-483e-8548-43a9c312692e-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"9523e234-9365-483e-8548-43a9c312692e\") " pod="openstack/tempest-tests-tempest" Nov 21 21:54:41 crc kubenswrapper[4727]: I1121 21:54:41.159113 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/9523e234-9365-483e-8548-43a9c312692e-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"9523e234-9365-483e-8548-43a9c312692e\") " pod="openstack/tempest-tests-tempest" Nov 21 21:54:41 crc kubenswrapper[4727]: I1121 21:54:41.159173 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9523e234-9365-483e-8548-43a9c312692e-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"9523e234-9365-483e-8548-43a9c312692e\") " 
pod="openstack/tempest-tests-tempest" Nov 21 21:54:41 crc kubenswrapper[4727]: I1121 21:54:41.160649 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/9523e234-9365-483e-8548-43a9c312692e-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"9523e234-9365-483e-8548-43a9c312692e\") " pod="openstack/tempest-tests-tempest" Nov 21 21:54:41 crc kubenswrapper[4727]: I1121 21:54:41.161127 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest\" (UID: \"9523e234-9365-483e-8548-43a9c312692e\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/tempest-tests-tempest" Nov 21 21:54:41 crc kubenswrapper[4727]: I1121 21:54:41.161914 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9523e234-9365-483e-8548-43a9c312692e-config-data\") pod \"tempest-tests-tempest\" (UID: \"9523e234-9365-483e-8548-43a9c312692e\") " pod="openstack/tempest-tests-tempest" Nov 21 21:54:41 crc kubenswrapper[4727]: I1121 21:54:41.163203 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9523e234-9365-483e-8548-43a9c312692e-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"9523e234-9365-483e-8548-43a9c312692e\") " pod="openstack/tempest-tests-tempest" Nov 21 21:54:41 crc kubenswrapper[4727]: I1121 21:54:41.163564 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/9523e234-9365-483e-8548-43a9c312692e-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"9523e234-9365-483e-8548-43a9c312692e\") " pod="openstack/tempest-tests-tempest" Nov 21 21:54:41 crc 
kubenswrapper[4727]: I1121 21:54:41.167920 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/9523e234-9365-483e-8548-43a9c312692e-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"9523e234-9365-483e-8548-43a9c312692e\") " pod="openstack/tempest-tests-tempest" Nov 21 21:54:41 crc kubenswrapper[4727]: I1121 21:54:41.173470 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9523e234-9365-483e-8548-43a9c312692e-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"9523e234-9365-483e-8548-43a9c312692e\") " pod="openstack/tempest-tests-tempest" Nov 21 21:54:41 crc kubenswrapper[4727]: I1121 21:54:41.174780 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9523e234-9365-483e-8548-43a9c312692e-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"9523e234-9365-483e-8548-43a9c312692e\") " pod="openstack/tempest-tests-tempest" Nov 21 21:54:41 crc kubenswrapper[4727]: I1121 21:54:41.183707 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92rg5\" (UniqueName: \"kubernetes.io/projected/9523e234-9365-483e-8548-43a9c312692e-kube-api-access-92rg5\") pod \"tempest-tests-tempest\" (UID: \"9523e234-9365-483e-8548-43a9c312692e\") " pod="openstack/tempest-tests-tempest" Nov 21 21:54:41 crc kubenswrapper[4727]: I1121 21:54:41.196223 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest\" (UID: \"9523e234-9365-483e-8548-43a9c312692e\") " pod="openstack/tempest-tests-tempest" Nov 21 21:54:41 crc kubenswrapper[4727]: I1121 21:54:41.343067 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 21 21:54:41 crc kubenswrapper[4727]: I1121 21:54:41.500319 4727 scope.go:117] "RemoveContainer" containerID="976f8c3044d73ef3cad2da771f4d7a96e16e3a29c21be656da90340aeebddb83" Nov 21 21:54:41 crc kubenswrapper[4727]: E1121 21:54:41.500584 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:54:41 crc kubenswrapper[4727]: I1121 21:54:41.833806 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Nov 21 21:54:41 crc kubenswrapper[4727]: W1121 21:54:41.840280 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9523e234_9365_483e_8548_43a9c312692e.slice/crio-a36cef4edf466319cf2c52f9eec405f5ec742711ef8f0f9ae9b168184c8d2d3f WatchSource:0}: Error finding container a36cef4edf466319cf2c52f9eec405f5ec742711ef8f0f9ae9b168184c8d2d3f: Status 404 returned error can't find the container with id a36cef4edf466319cf2c52f9eec405f5ec742711ef8f0f9ae9b168184c8d2d3f Nov 21 21:54:42 crc kubenswrapper[4727]: I1121 21:54:42.442771 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"9523e234-9365-483e-8548-43a9c312692e","Type":"ContainerStarted","Data":"a36cef4edf466319cf2c52f9eec405f5ec742711ef8f0f9ae9b168184c8d2d3f"} Nov 21 21:54:52 crc kubenswrapper[4727]: I1121 21:54:52.505417 4727 scope.go:117] "RemoveContainer" containerID="976f8c3044d73ef3cad2da771f4d7a96e16e3a29c21be656da90340aeebddb83" Nov 21 21:54:52 crc kubenswrapper[4727]: E1121 21:54:52.506783 4727 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:54:58 crc kubenswrapper[4727]: E1121 21:54:58.737874 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Nov 21 21:54:58 crc kubenswrapper[4727]: E1121 21:54:58.738771 4727 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Nov 21 21:54:58 crc kubenswrapper[4727]: E1121 21:54:58.740840 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vdm6h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-n2gcs_openstack(37b384df-d8e9-4cb3-8d32-a2e1f4672d9e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 
21 21:54:58 crc kubenswrapper[4727]: E1121 21:54:58.742554 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-n2gcs" podUID="37b384df-d8e9-4cb3-8d32-a2e1f4672d9e" Nov 21 21:54:59 crc kubenswrapper[4727]: I1121 21:54:59.696755 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"038d5bff-73d1-4dd6-a5ca-6501e07617a1","Type":"ContainerStarted","Data":"972a9055209f4c647ee3c320c03484bbb28a992e33b1f966baa8803153f6b908"} Nov 21 21:54:59 crc kubenswrapper[4727]: E1121 21:54:59.701533 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-n2gcs" podUID="37b384df-d8e9-4cb3-8d32-a2e1f4672d9e" Nov 21 21:55:00 crc kubenswrapper[4727]: I1121 21:55:00.713888 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"038d5bff-73d1-4dd6-a5ca-6501e07617a1","Type":"ContainerStarted","Data":"35b9190c779bbd564fe42a876eeceb3eadbcc9000588ce7ce8bfe8d76e1570b8"} Nov 21 21:55:06 crc kubenswrapper[4727]: I1121 21:55:06.502176 4727 scope.go:117] "RemoveContainer" containerID="976f8c3044d73ef3cad2da771f4d7a96e16e3a29c21be656da90340aeebddb83" Nov 21 21:55:06 crc kubenswrapper[4727]: E1121 21:55:06.503759 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" 
podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:55:19 crc kubenswrapper[4727]: I1121 21:55:19.500676 4727 scope.go:117] "RemoveContainer" containerID="976f8c3044d73ef3cad2da771f4d7a96e16e3a29c21be656da90340aeebddb83" Nov 21 21:55:19 crc kubenswrapper[4727]: E1121 21:55:19.502248 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:55:28 crc kubenswrapper[4727]: E1121 21:55:28.405410 4727 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Nov 21 21:55:28 crc kubenswrapper[4727]: E1121 21:55:28.407178 4727 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-92rg5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:n
il,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(9523e234-9365-483e-8548-43a9c312692e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 21 21:55:28 crc kubenswrapper[4727]: E1121 21:55:28.408322 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="9523e234-9365-483e-8548-43a9c312692e" Nov 21 21:55:29 crc kubenswrapper[4727]: I1121 21:55:29.157104 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-n2gcs" event={"ID":"37b384df-d8e9-4cb3-8d32-a2e1f4672d9e","Type":"ContainerStarted","Data":"f3c76fd0e1b7f7705fbfdba091891faffe160684f6d5835abc798180181ba511"} Nov 21 21:55:29 crc kubenswrapper[4727]: I1121 21:55:29.161622 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"038d5bff-73d1-4dd6-a5ca-6501e07617a1","Type":"ContainerStarted","Data":"137a6487f34e6b0bbe4a8afa12db8c3453f45e2aa9d2bc0a200679770344e5ba"} Nov 21 21:55:29 crc kubenswrapper[4727]: E1121 21:55:29.164071 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="9523e234-9365-483e-8548-43a9c312692e" Nov 21 21:55:29 crc kubenswrapper[4727]: I1121 21:55:29.174214 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-n2gcs" podStartSLOduration=2.299125754 podStartE2EDuration="55.174194549s" podCreationTimestamp="2025-11-21 21:54:34 +0000 UTC" firstStartedPulling="2025-11-21 21:54:35.546159463 +0000 UTC m=+6480.732344517" lastFinishedPulling="2025-11-21 21:55:28.421228268 +0000 UTC m=+6533.607413312" observedRunningTime="2025-11-21 21:55:29.171615147 +0000 UTC m=+6534.357800191" watchObservedRunningTime="2025-11-21 21:55:29.174194549 +0000 UTC m=+6534.360379593" Nov 21 21:55:31 crc kubenswrapper[4727]: I1121 21:55:31.364690 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"038d5bff-73d1-4dd6-a5ca-6501e07617a1","Type":"ContainerStarted","Data":"c2eead5ca5431296f62a9da3d8470ae695edc048066dced38821f616eaea6571"} Nov 21 21:55:31 crc kubenswrapper[4727]: I1121 21:55:31.365955 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 21 21:55:31 crc kubenswrapper[4727]: I1121 21:55:31.412519 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.66048127 podStartE2EDuration="53.412482195s" podCreationTimestamp="2025-11-21 21:54:38 +0000 UTC" firstStartedPulling="2025-11-21 
21:54:39.589485457 +0000 UTC m=+6484.775670501" lastFinishedPulling="2025-11-21 21:55:30.341486362 +0000 UTC m=+6535.527671426" observedRunningTime="2025-11-21 21:55:31.392574827 +0000 UTC m=+6536.578759901" watchObservedRunningTime="2025-11-21 21:55:31.412482195 +0000 UTC m=+6536.598667309" Nov 21 21:55:33 crc kubenswrapper[4727]: I1121 21:55:33.394328 4727 generic.go:334] "Generic (PLEG): container finished" podID="37b384df-d8e9-4cb3-8d32-a2e1f4672d9e" containerID="f3c76fd0e1b7f7705fbfdba091891faffe160684f6d5835abc798180181ba511" exitCode=0 Nov 21 21:55:33 crc kubenswrapper[4727]: I1121 21:55:33.394573 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-n2gcs" event={"ID":"37b384df-d8e9-4cb3-8d32-a2e1f4672d9e","Type":"ContainerDied","Data":"f3c76fd0e1b7f7705fbfdba091891faffe160684f6d5835abc798180181ba511"} Nov 21 21:55:33 crc kubenswrapper[4727]: I1121 21:55:33.501139 4727 scope.go:117] "RemoveContainer" containerID="976f8c3044d73ef3cad2da771f4d7a96e16e3a29c21be656da90340aeebddb83" Nov 21 21:55:33 crc kubenswrapper[4727]: E1121 21:55:33.501749 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:55:35 crc kubenswrapper[4727]: I1121 21:55:35.020579 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-n2gcs" Nov 21 21:55:35 crc kubenswrapper[4727]: I1121 21:55:35.063123 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37b384df-d8e9-4cb3-8d32-a2e1f4672d9e-combined-ca-bundle\") pod \"37b384df-d8e9-4cb3-8d32-a2e1f4672d9e\" (UID: \"37b384df-d8e9-4cb3-8d32-a2e1f4672d9e\") " Nov 21 21:55:35 crc kubenswrapper[4727]: I1121 21:55:35.063205 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdm6h\" (UniqueName: \"kubernetes.io/projected/37b384df-d8e9-4cb3-8d32-a2e1f4672d9e-kube-api-access-vdm6h\") pod \"37b384df-d8e9-4cb3-8d32-a2e1f4672d9e\" (UID: \"37b384df-d8e9-4cb3-8d32-a2e1f4672d9e\") " Nov 21 21:55:35 crc kubenswrapper[4727]: I1121 21:55:35.063285 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37b384df-d8e9-4cb3-8d32-a2e1f4672d9e-config-data\") pod \"37b384df-d8e9-4cb3-8d32-a2e1f4672d9e\" (UID: \"37b384df-d8e9-4cb3-8d32-a2e1f4672d9e\") " Nov 21 21:55:35 crc kubenswrapper[4727]: I1121 21:55:35.072860 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37b384df-d8e9-4cb3-8d32-a2e1f4672d9e-kube-api-access-vdm6h" (OuterVolumeSpecName: "kube-api-access-vdm6h") pod "37b384df-d8e9-4cb3-8d32-a2e1f4672d9e" (UID: "37b384df-d8e9-4cb3-8d32-a2e1f4672d9e"). InnerVolumeSpecName "kube-api-access-vdm6h". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 21:55:35 crc kubenswrapper[4727]: I1121 21:55:35.120296 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37b384df-d8e9-4cb3-8d32-a2e1f4672d9e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "37b384df-d8e9-4cb3-8d32-a2e1f4672d9e" (UID: "37b384df-d8e9-4cb3-8d32-a2e1f4672d9e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 21:55:35 crc kubenswrapper[4727]: I1121 21:55:35.167659 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37b384df-d8e9-4cb3-8d32-a2e1f4672d9e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 21:55:35 crc kubenswrapper[4727]: I1121 21:55:35.167713 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdm6h\" (UniqueName: \"kubernetes.io/projected/37b384df-d8e9-4cb3-8d32-a2e1f4672d9e-kube-api-access-vdm6h\") on node \"crc\" DevicePath \"\"" Nov 21 21:55:35 crc kubenswrapper[4727]: I1121 21:55:35.189888 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37b384df-d8e9-4cb3-8d32-a2e1f4672d9e-config-data" (OuterVolumeSpecName: "config-data") pod "37b384df-d8e9-4cb3-8d32-a2e1f4672d9e" (UID: "37b384df-d8e9-4cb3-8d32-a2e1f4672d9e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 21:55:35 crc kubenswrapper[4727]: I1121 21:55:35.270081 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37b384df-d8e9-4cb3-8d32-a2e1f4672d9e-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 21:55:35 crc kubenswrapper[4727]: I1121 21:55:35.438776 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-n2gcs" event={"ID":"37b384df-d8e9-4cb3-8d32-a2e1f4672d9e","Type":"ContainerDied","Data":"898e545627da171ad98a501d4ae0dcb8a5d5744c3e1f848ab4464d6ea1656b6c"} Nov 21 21:55:35 crc kubenswrapper[4727]: I1121 21:55:35.438855 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="898e545627da171ad98a501d4ae0dcb8a5d5744c3e1f848ab4464d6ea1656b6c" Nov 21 21:55:35 crc kubenswrapper[4727]: I1121 21:55:35.439055 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-n2gcs" Nov 21 21:55:36 crc kubenswrapper[4727]: I1121 21:55:36.839651 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-58f8c4fcbf-cqdrx"] Nov 21 21:55:36 crc kubenswrapper[4727]: E1121 21:55:36.840862 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37b384df-d8e9-4cb3-8d32-a2e1f4672d9e" containerName="heat-db-sync" Nov 21 21:55:36 crc kubenswrapper[4727]: I1121 21:55:36.840881 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="37b384df-d8e9-4cb3-8d32-a2e1f4672d9e" containerName="heat-db-sync" Nov 21 21:55:36 crc kubenswrapper[4727]: I1121 21:55:36.841175 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="37b384df-d8e9-4cb3-8d32-a2e1f4672d9e" containerName="heat-db-sync" Nov 21 21:55:36 crc kubenswrapper[4727]: I1121 21:55:36.842192 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-58f8c4fcbf-cqdrx" Nov 21 21:55:36 crc kubenswrapper[4727]: I1121 21:55:36.851159 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-77cbcd7b74-wdstp"] Nov 21 21:55:36 crc kubenswrapper[4727]: I1121 21:55:36.852945 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-77cbcd7b74-wdstp" Nov 21 21:55:36 crc kubenswrapper[4727]: I1121 21:55:36.928090 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7tk7\" (UniqueName: \"kubernetes.io/projected/df9499f8-a2a6-4d5c-89f7-6119cc985cc2-kube-api-access-h7tk7\") pod \"heat-engine-58f8c4fcbf-cqdrx\" (UID: \"df9499f8-a2a6-4d5c-89f7-6119cc985cc2\") " pod="openstack/heat-engine-58f8c4fcbf-cqdrx" Nov 21 21:55:36 crc kubenswrapper[4727]: I1121 21:55:36.928726 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3923d481-b8b8-4b23-b8eb-c3c1589cdaf5-config-data\") pod \"heat-api-77cbcd7b74-wdstp\" (UID: \"3923d481-b8b8-4b23-b8eb-c3c1589cdaf5\") " pod="openstack/heat-api-77cbcd7b74-wdstp" Nov 21 21:55:36 crc kubenswrapper[4727]: I1121 21:55:36.928854 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/df9499f8-a2a6-4d5c-89f7-6119cc985cc2-config-data-custom\") pod \"heat-engine-58f8c4fcbf-cqdrx\" (UID: \"df9499f8-a2a6-4d5c-89f7-6119cc985cc2\") " pod="openstack/heat-engine-58f8c4fcbf-cqdrx" Nov 21 21:55:36 crc kubenswrapper[4727]: I1121 21:55:36.929008 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df9499f8-a2a6-4d5c-89f7-6119cc985cc2-combined-ca-bundle\") pod \"heat-engine-58f8c4fcbf-cqdrx\" (UID: \"df9499f8-a2a6-4d5c-89f7-6119cc985cc2\") " pod="openstack/heat-engine-58f8c4fcbf-cqdrx" Nov 21 21:55:36 crc kubenswrapper[4727]: I1121 21:55:36.929095 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3923d481-b8b8-4b23-b8eb-c3c1589cdaf5-config-data-custom\") pod 
\"heat-api-77cbcd7b74-wdstp\" (UID: \"3923d481-b8b8-4b23-b8eb-c3c1589cdaf5\") " pod="openstack/heat-api-77cbcd7b74-wdstp" Nov 21 21:55:36 crc kubenswrapper[4727]: I1121 21:55:36.929278 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df9499f8-a2a6-4d5c-89f7-6119cc985cc2-config-data\") pod \"heat-engine-58f8c4fcbf-cqdrx\" (UID: \"df9499f8-a2a6-4d5c-89f7-6119cc985cc2\") " pod="openstack/heat-engine-58f8c4fcbf-cqdrx" Nov 21 21:55:36 crc kubenswrapper[4727]: I1121 21:55:36.929398 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gk8s4\" (UniqueName: \"kubernetes.io/projected/3923d481-b8b8-4b23-b8eb-c3c1589cdaf5-kube-api-access-gk8s4\") pod \"heat-api-77cbcd7b74-wdstp\" (UID: \"3923d481-b8b8-4b23-b8eb-c3c1589cdaf5\") " pod="openstack/heat-api-77cbcd7b74-wdstp" Nov 21 21:55:36 crc kubenswrapper[4727]: I1121 21:55:36.929472 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3923d481-b8b8-4b23-b8eb-c3c1589cdaf5-combined-ca-bundle\") pod \"heat-api-77cbcd7b74-wdstp\" (UID: \"3923d481-b8b8-4b23-b8eb-c3c1589cdaf5\") " pod="openstack/heat-api-77cbcd7b74-wdstp" Nov 21 21:55:36 crc kubenswrapper[4727]: I1121 21:55:36.929570 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3923d481-b8b8-4b23-b8eb-c3c1589cdaf5-public-tls-certs\") pod \"heat-api-77cbcd7b74-wdstp\" (UID: \"3923d481-b8b8-4b23-b8eb-c3c1589cdaf5\") " pod="openstack/heat-api-77cbcd7b74-wdstp" Nov 21 21:55:36 crc kubenswrapper[4727]: I1121 21:55:36.929654 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/3923d481-b8b8-4b23-b8eb-c3c1589cdaf5-internal-tls-certs\") pod \"heat-api-77cbcd7b74-wdstp\" (UID: \"3923d481-b8b8-4b23-b8eb-c3c1589cdaf5\") " pod="openstack/heat-api-77cbcd7b74-wdstp" Nov 21 21:55:36 crc kubenswrapper[4727]: I1121 21:55:36.953033 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-77cbcd7b74-wdstp"] Nov 21 21:55:36 crc kubenswrapper[4727]: I1121 21:55:36.978675 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-5dc877fb6-76mc5"] Nov 21 21:55:36 crc kubenswrapper[4727]: I1121 21:55:36.980472 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5dc877fb6-76mc5" Nov 21 21:55:37 crc kubenswrapper[4727]: I1121 21:55:37.033555 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df9499f8-a2a6-4d5c-89f7-6119cc985cc2-config-data\") pod \"heat-engine-58f8c4fcbf-cqdrx\" (UID: \"df9499f8-a2a6-4d5c-89f7-6119cc985cc2\") " pod="openstack/heat-engine-58f8c4fcbf-cqdrx" Nov 21 21:55:37 crc kubenswrapper[4727]: I1121 21:55:37.033928 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/67535f0b-486b-4aa4-973e-32c3dd01d514-internal-tls-certs\") pod \"heat-cfnapi-5dc877fb6-76mc5\" (UID: \"67535f0b-486b-4aa4-973e-32c3dd01d514\") " pod="openstack/heat-cfnapi-5dc877fb6-76mc5" Nov 21 21:55:37 crc kubenswrapper[4727]: I1121 21:55:37.034069 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gk8s4\" (UniqueName: \"kubernetes.io/projected/3923d481-b8b8-4b23-b8eb-c3c1589cdaf5-kube-api-access-gk8s4\") pod \"heat-api-77cbcd7b74-wdstp\" (UID: \"3923d481-b8b8-4b23-b8eb-c3c1589cdaf5\") " pod="openstack/heat-api-77cbcd7b74-wdstp" Nov 21 21:55:37 crc kubenswrapper[4727]: I1121 21:55:37.034263 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3923d481-b8b8-4b23-b8eb-c3c1589cdaf5-combined-ca-bundle\") pod \"heat-api-77cbcd7b74-wdstp\" (UID: \"3923d481-b8b8-4b23-b8eb-c3c1589cdaf5\") " pod="openstack/heat-api-77cbcd7b74-wdstp" Nov 21 21:55:37 crc kubenswrapper[4727]: I1121 21:55:37.034434 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3923d481-b8b8-4b23-b8eb-c3c1589cdaf5-public-tls-certs\") pod \"heat-api-77cbcd7b74-wdstp\" (UID: \"3923d481-b8b8-4b23-b8eb-c3c1589cdaf5\") " pod="openstack/heat-api-77cbcd7b74-wdstp" Nov 21 21:55:37 crc kubenswrapper[4727]: I1121 21:55:37.034555 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3923d481-b8b8-4b23-b8eb-c3c1589cdaf5-internal-tls-certs\") pod \"heat-api-77cbcd7b74-wdstp\" (UID: \"3923d481-b8b8-4b23-b8eb-c3c1589cdaf5\") " pod="openstack/heat-api-77cbcd7b74-wdstp" Nov 21 21:55:37 crc kubenswrapper[4727]: I1121 21:55:37.034662 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/67535f0b-486b-4aa4-973e-32c3dd01d514-public-tls-certs\") pod \"heat-cfnapi-5dc877fb6-76mc5\" (UID: \"67535f0b-486b-4aa4-973e-32c3dd01d514\") " pod="openstack/heat-cfnapi-5dc877fb6-76mc5" Nov 21 21:55:37 crc kubenswrapper[4727]: I1121 21:55:37.034770 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7tk7\" (UniqueName: \"kubernetes.io/projected/df9499f8-a2a6-4d5c-89f7-6119cc985cc2-kube-api-access-h7tk7\") pod \"heat-engine-58f8c4fcbf-cqdrx\" (UID: \"df9499f8-a2a6-4d5c-89f7-6119cc985cc2\") " pod="openstack/heat-engine-58f8c4fcbf-cqdrx" Nov 21 21:55:37 crc kubenswrapper[4727]: I1121 21:55:37.034859 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3923d481-b8b8-4b23-b8eb-c3c1589cdaf5-config-data\") pod \"heat-api-77cbcd7b74-wdstp\" (UID: \"3923d481-b8b8-4b23-b8eb-c3c1589cdaf5\") " pod="openstack/heat-api-77cbcd7b74-wdstp" Nov 21 21:55:37 crc kubenswrapper[4727]: I1121 21:55:37.035062 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/df9499f8-a2a6-4d5c-89f7-6119cc985cc2-config-data-custom\") pod \"heat-engine-58f8c4fcbf-cqdrx\" (UID: \"df9499f8-a2a6-4d5c-89f7-6119cc985cc2\") " pod="openstack/heat-engine-58f8c4fcbf-cqdrx" Nov 21 21:55:37 crc kubenswrapper[4727]: I1121 21:55:37.035180 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5trdw\" (UniqueName: \"kubernetes.io/projected/67535f0b-486b-4aa4-973e-32c3dd01d514-kube-api-access-5trdw\") pod \"heat-cfnapi-5dc877fb6-76mc5\" (UID: \"67535f0b-486b-4aa4-973e-32c3dd01d514\") " pod="openstack/heat-cfnapi-5dc877fb6-76mc5" Nov 21 21:55:37 crc kubenswrapper[4727]: I1121 21:55:37.035339 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67535f0b-486b-4aa4-973e-32c3dd01d514-config-data\") pod \"heat-cfnapi-5dc877fb6-76mc5\" (UID: \"67535f0b-486b-4aa4-973e-32c3dd01d514\") " pod="openstack/heat-cfnapi-5dc877fb6-76mc5" Nov 21 21:55:37 crc kubenswrapper[4727]: I1121 21:55:37.035536 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df9499f8-a2a6-4d5c-89f7-6119cc985cc2-combined-ca-bundle\") pod \"heat-engine-58f8c4fcbf-cqdrx\" (UID: \"df9499f8-a2a6-4d5c-89f7-6119cc985cc2\") " pod="openstack/heat-engine-58f8c4fcbf-cqdrx" Nov 21 21:55:37 crc kubenswrapper[4727]: I1121 21:55:37.035643 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67535f0b-486b-4aa4-973e-32c3dd01d514-combined-ca-bundle\") pod \"heat-cfnapi-5dc877fb6-76mc5\" (UID: \"67535f0b-486b-4aa4-973e-32c3dd01d514\") " pod="openstack/heat-cfnapi-5dc877fb6-76mc5" Nov 21 21:55:37 crc kubenswrapper[4727]: I1121 21:55:37.035744 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3923d481-b8b8-4b23-b8eb-c3c1589cdaf5-config-data-custom\") pod \"heat-api-77cbcd7b74-wdstp\" (UID: \"3923d481-b8b8-4b23-b8eb-c3c1589cdaf5\") " pod="openstack/heat-api-77cbcd7b74-wdstp" Nov 21 21:55:37 crc kubenswrapper[4727]: I1121 21:55:37.035835 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/67535f0b-486b-4aa4-973e-32c3dd01d514-config-data-custom\") pod \"heat-cfnapi-5dc877fb6-76mc5\" (UID: \"67535f0b-486b-4aa4-973e-32c3dd01d514\") " pod="openstack/heat-cfnapi-5dc877fb6-76mc5" Nov 21 21:55:37 crc kubenswrapper[4727]: I1121 21:55:37.042038 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5dc877fb6-76mc5"] Nov 21 21:55:37 crc kubenswrapper[4727]: I1121 21:55:37.047657 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3923d481-b8b8-4b23-b8eb-c3c1589cdaf5-combined-ca-bundle\") pod \"heat-api-77cbcd7b74-wdstp\" (UID: \"3923d481-b8b8-4b23-b8eb-c3c1589cdaf5\") " pod="openstack/heat-api-77cbcd7b74-wdstp" Nov 21 21:55:37 crc kubenswrapper[4727]: I1121 21:55:37.072103 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/df9499f8-a2a6-4d5c-89f7-6119cc985cc2-config-data-custom\") pod \"heat-engine-58f8c4fcbf-cqdrx\" (UID: \"df9499f8-a2a6-4d5c-89f7-6119cc985cc2\") " 
pod="openstack/heat-engine-58f8c4fcbf-cqdrx" Nov 21 21:55:37 crc kubenswrapper[4727]: I1121 21:55:37.092117 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3923d481-b8b8-4b23-b8eb-c3c1589cdaf5-internal-tls-certs\") pod \"heat-api-77cbcd7b74-wdstp\" (UID: \"3923d481-b8b8-4b23-b8eb-c3c1589cdaf5\") " pod="openstack/heat-api-77cbcd7b74-wdstp" Nov 21 21:55:37 crc kubenswrapper[4727]: I1121 21:55:37.092124 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3923d481-b8b8-4b23-b8eb-c3c1589cdaf5-config-data\") pod \"heat-api-77cbcd7b74-wdstp\" (UID: \"3923d481-b8b8-4b23-b8eb-c3c1589cdaf5\") " pod="openstack/heat-api-77cbcd7b74-wdstp" Nov 21 21:55:37 crc kubenswrapper[4727]: I1121 21:55:37.094389 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df9499f8-a2a6-4d5c-89f7-6119cc985cc2-config-data\") pod \"heat-engine-58f8c4fcbf-cqdrx\" (UID: \"df9499f8-a2a6-4d5c-89f7-6119cc985cc2\") " pod="openstack/heat-engine-58f8c4fcbf-cqdrx" Nov 21 21:55:37 crc kubenswrapper[4727]: I1121 21:55:37.094562 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3923d481-b8b8-4b23-b8eb-c3c1589cdaf5-config-data-custom\") pod \"heat-api-77cbcd7b74-wdstp\" (UID: \"3923d481-b8b8-4b23-b8eb-c3c1589cdaf5\") " pod="openstack/heat-api-77cbcd7b74-wdstp" Nov 21 21:55:37 crc kubenswrapper[4727]: I1121 21:55:37.095878 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df9499f8-a2a6-4d5c-89f7-6119cc985cc2-combined-ca-bundle\") pod \"heat-engine-58f8c4fcbf-cqdrx\" (UID: \"df9499f8-a2a6-4d5c-89f7-6119cc985cc2\") " pod="openstack/heat-engine-58f8c4fcbf-cqdrx" Nov 21 21:55:37 crc kubenswrapper[4727]: I1121 21:55:37.096641 4727 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3923d481-b8b8-4b23-b8eb-c3c1589cdaf5-public-tls-certs\") pod \"heat-api-77cbcd7b74-wdstp\" (UID: \"3923d481-b8b8-4b23-b8eb-c3c1589cdaf5\") " pod="openstack/heat-api-77cbcd7b74-wdstp" Nov 21 21:55:37 crc kubenswrapper[4727]: I1121 21:55:37.121841 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7tk7\" (UniqueName: \"kubernetes.io/projected/df9499f8-a2a6-4d5c-89f7-6119cc985cc2-kube-api-access-h7tk7\") pod \"heat-engine-58f8c4fcbf-cqdrx\" (UID: \"df9499f8-a2a6-4d5c-89f7-6119cc985cc2\") " pod="openstack/heat-engine-58f8c4fcbf-cqdrx" Nov 21 21:55:37 crc kubenswrapper[4727]: I1121 21:55:37.139350 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67535f0b-486b-4aa4-973e-32c3dd01d514-combined-ca-bundle\") pod \"heat-cfnapi-5dc877fb6-76mc5\" (UID: \"67535f0b-486b-4aa4-973e-32c3dd01d514\") " pod="openstack/heat-cfnapi-5dc877fb6-76mc5" Nov 21 21:55:37 crc kubenswrapper[4727]: I1121 21:55:37.139404 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/67535f0b-486b-4aa4-973e-32c3dd01d514-config-data-custom\") pod \"heat-cfnapi-5dc877fb6-76mc5\" (UID: \"67535f0b-486b-4aa4-973e-32c3dd01d514\") " pod="openstack/heat-cfnapi-5dc877fb6-76mc5" Nov 21 21:55:37 crc kubenswrapper[4727]: I1121 21:55:37.139515 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/67535f0b-486b-4aa4-973e-32c3dd01d514-internal-tls-certs\") pod \"heat-cfnapi-5dc877fb6-76mc5\" (UID: \"67535f0b-486b-4aa4-973e-32c3dd01d514\") " pod="openstack/heat-cfnapi-5dc877fb6-76mc5" Nov 21 21:55:37 crc kubenswrapper[4727]: I1121 21:55:37.139575 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/67535f0b-486b-4aa4-973e-32c3dd01d514-public-tls-certs\") pod \"heat-cfnapi-5dc877fb6-76mc5\" (UID: \"67535f0b-486b-4aa4-973e-32c3dd01d514\") " pod="openstack/heat-cfnapi-5dc877fb6-76mc5" Nov 21 21:55:37 crc kubenswrapper[4727]: I1121 21:55:37.139635 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5trdw\" (UniqueName: \"kubernetes.io/projected/67535f0b-486b-4aa4-973e-32c3dd01d514-kube-api-access-5trdw\") pod \"heat-cfnapi-5dc877fb6-76mc5\" (UID: \"67535f0b-486b-4aa4-973e-32c3dd01d514\") " pod="openstack/heat-cfnapi-5dc877fb6-76mc5" Nov 21 21:55:37 crc kubenswrapper[4727]: I1121 21:55:37.139676 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67535f0b-486b-4aa4-973e-32c3dd01d514-config-data\") pod \"heat-cfnapi-5dc877fb6-76mc5\" (UID: \"67535f0b-486b-4aa4-973e-32c3dd01d514\") " pod="openstack/heat-cfnapi-5dc877fb6-76mc5" Nov 21 21:55:37 crc kubenswrapper[4727]: I1121 21:55:37.160505 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67535f0b-486b-4aa4-973e-32c3dd01d514-combined-ca-bundle\") pod \"heat-cfnapi-5dc877fb6-76mc5\" (UID: \"67535f0b-486b-4aa4-973e-32c3dd01d514\") " pod="openstack/heat-cfnapi-5dc877fb6-76mc5" Nov 21 21:55:37 crc kubenswrapper[4727]: I1121 21:55:37.160592 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67535f0b-486b-4aa4-973e-32c3dd01d514-config-data\") pod \"heat-cfnapi-5dc877fb6-76mc5\" (UID: \"67535f0b-486b-4aa4-973e-32c3dd01d514\") " pod="openstack/heat-cfnapi-5dc877fb6-76mc5" Nov 21 21:55:37 crc kubenswrapper[4727]: I1121 21:55:37.161648 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/67535f0b-486b-4aa4-973e-32c3dd01d514-public-tls-certs\") pod \"heat-cfnapi-5dc877fb6-76mc5\" (UID: \"67535f0b-486b-4aa4-973e-32c3dd01d514\") " pod="openstack/heat-cfnapi-5dc877fb6-76mc5" Nov 21 21:55:37 crc kubenswrapper[4727]: I1121 21:55:37.167881 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/67535f0b-486b-4aa4-973e-32c3dd01d514-internal-tls-certs\") pod \"heat-cfnapi-5dc877fb6-76mc5\" (UID: \"67535f0b-486b-4aa4-973e-32c3dd01d514\") " pod="openstack/heat-cfnapi-5dc877fb6-76mc5" Nov 21 21:55:37 crc kubenswrapper[4727]: I1121 21:55:37.168702 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-58f8c4fcbf-cqdrx" Nov 21 21:55:37 crc kubenswrapper[4727]: I1121 21:55:37.171184 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gk8s4\" (UniqueName: \"kubernetes.io/projected/3923d481-b8b8-4b23-b8eb-c3c1589cdaf5-kube-api-access-gk8s4\") pod \"heat-api-77cbcd7b74-wdstp\" (UID: \"3923d481-b8b8-4b23-b8eb-c3c1589cdaf5\") " pod="openstack/heat-api-77cbcd7b74-wdstp" Nov 21 21:55:37 crc kubenswrapper[4727]: I1121 21:55:37.181711 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5trdw\" (UniqueName: \"kubernetes.io/projected/67535f0b-486b-4aa4-973e-32c3dd01d514-kube-api-access-5trdw\") pod \"heat-cfnapi-5dc877fb6-76mc5\" (UID: \"67535f0b-486b-4aa4-973e-32c3dd01d514\") " pod="openstack/heat-cfnapi-5dc877fb6-76mc5" Nov 21 21:55:37 crc kubenswrapper[4727]: I1121 21:55:37.211815 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-77cbcd7b74-wdstp" Nov 21 21:55:37 crc kubenswrapper[4727]: I1121 21:55:37.226355 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/67535f0b-486b-4aa4-973e-32c3dd01d514-config-data-custom\") pod \"heat-cfnapi-5dc877fb6-76mc5\" (UID: \"67535f0b-486b-4aa4-973e-32c3dd01d514\") " pod="openstack/heat-cfnapi-5dc877fb6-76mc5" Nov 21 21:55:37 crc kubenswrapper[4727]: I1121 21:55:37.226625 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-58f8c4fcbf-cqdrx"] Nov 21 21:55:37 crc kubenswrapper[4727]: I1121 21:55:37.408829 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5dc877fb6-76mc5" Nov 21 21:55:37 crc kubenswrapper[4727]: I1121 21:55:37.806351 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-58f8c4fcbf-cqdrx"] Nov 21 21:55:37 crc kubenswrapper[4727]: W1121 21:55:37.819942 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3923d481_b8b8_4b23_b8eb_c3c1589cdaf5.slice/crio-37f7393c968048947b1c2d9f8bbc8e773938fee3cafdc5afef60d63f399f35e4 WatchSource:0}: Error finding container 37f7393c968048947b1c2d9f8bbc8e773938fee3cafdc5afef60d63f399f35e4: Status 404 returned error can't find the container with id 37f7393c968048947b1c2d9f8bbc8e773938fee3cafdc5afef60d63f399f35e4 Nov 21 21:55:37 crc kubenswrapper[4727]: W1121 21:55:37.821985 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf9499f8_a2a6_4d5c_89f7_6119cc985cc2.slice/crio-333cd3c5c9607fa1cf082588c71f1dbf68191d2f49483793ca7fe30eef6c4665 WatchSource:0}: Error finding container 333cd3c5c9607fa1cf082588c71f1dbf68191d2f49483793ca7fe30eef6c4665: Status 404 returned error can't find the container with id 
333cd3c5c9607fa1cf082588c71f1dbf68191d2f49483793ca7fe30eef6c4665 Nov 21 21:55:37 crc kubenswrapper[4727]: I1121 21:55:37.824700 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-77cbcd7b74-wdstp"] Nov 21 21:55:37 crc kubenswrapper[4727]: I1121 21:55:37.957979 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5dc877fb6-76mc5"] Nov 21 21:55:37 crc kubenswrapper[4727]: W1121 21:55:37.968166 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod67535f0b_486b_4aa4_973e_32c3dd01d514.slice/crio-3e0932b51d4c1dbaa5584f2b3034b945a9df4b04db88e5103a96952148deb9da WatchSource:0}: Error finding container 3e0932b51d4c1dbaa5584f2b3034b945a9df4b04db88e5103a96952148deb9da: Status 404 returned error can't find the container with id 3e0932b51d4c1dbaa5584f2b3034b945a9df4b04db88e5103a96952148deb9da Nov 21 21:55:38 crc kubenswrapper[4727]: I1121 21:55:38.489358 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-58f8c4fcbf-cqdrx" event={"ID":"df9499f8-a2a6-4d5c-89f7-6119cc985cc2","Type":"ContainerStarted","Data":"1c9a3a4289416e2d00f32c0d5a2fd19b46c39fcd0dedacec281c829a50dcf6a4"} Nov 21 21:55:38 crc kubenswrapper[4727]: I1121 21:55:38.489816 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-58f8c4fcbf-cqdrx" Nov 21 21:55:38 crc kubenswrapper[4727]: I1121 21:55:38.489829 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-58f8c4fcbf-cqdrx" event={"ID":"df9499f8-a2a6-4d5c-89f7-6119cc985cc2","Type":"ContainerStarted","Data":"333cd3c5c9607fa1cf082588c71f1dbf68191d2f49483793ca7fe30eef6c4665"} Nov 21 21:55:38 crc kubenswrapper[4727]: I1121 21:55:38.491549 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-77cbcd7b74-wdstp" 
event={"ID":"3923d481-b8b8-4b23-b8eb-c3c1589cdaf5","Type":"ContainerStarted","Data":"37f7393c968048947b1c2d9f8bbc8e773938fee3cafdc5afef60d63f399f35e4"} Nov 21 21:55:38 crc kubenswrapper[4727]: I1121 21:55:38.492902 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5dc877fb6-76mc5" event={"ID":"67535f0b-486b-4aa4-973e-32c3dd01d514","Type":"ContainerStarted","Data":"3e0932b51d4c1dbaa5584f2b3034b945a9df4b04db88e5103a96952148deb9da"} Nov 21 21:55:38 crc kubenswrapper[4727]: I1121 21:55:38.522760 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-58f8c4fcbf-cqdrx" podStartSLOduration=2.522732101 podStartE2EDuration="2.522732101s" podCreationTimestamp="2025-11-21 21:55:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 21:55:38.506764339 +0000 UTC m=+6543.692949383" watchObservedRunningTime="2025-11-21 21:55:38.522732101 +0000 UTC m=+6543.708917145" Nov 21 21:55:40 crc kubenswrapper[4727]: I1121 21:55:40.545185 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-77cbcd7b74-wdstp" event={"ID":"3923d481-b8b8-4b23-b8eb-c3c1589cdaf5","Type":"ContainerStarted","Data":"975f1e5d594fd3c2e627f1ab8c2a189cca0db67b6595d46901afd9c9069125fc"} Nov 21 21:55:40 crc kubenswrapper[4727]: I1121 21:55:40.546401 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-77cbcd7b74-wdstp" Nov 21 21:55:40 crc kubenswrapper[4727]: I1121 21:55:40.548476 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5dc877fb6-76mc5" event={"ID":"67535f0b-486b-4aa4-973e-32c3dd01d514","Type":"ContainerStarted","Data":"f416c6e0efd44f294fa5435f690fc39502ed1eed9cda6aeeb22ccea595dee188"} Nov 21 21:55:40 crc kubenswrapper[4727]: I1121 21:55:40.549116 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/heat-cfnapi-5dc877fb6-76mc5" Nov 21 21:55:40 crc kubenswrapper[4727]: I1121 21:55:40.579207 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-77cbcd7b74-wdstp" podStartSLOduration=3.047410204 podStartE2EDuration="4.579183714s" podCreationTimestamp="2025-11-21 21:55:36 +0000 UTC" firstStartedPulling="2025-11-21 21:55:37.825374646 +0000 UTC m=+6543.011559690" lastFinishedPulling="2025-11-21 21:55:39.357148156 +0000 UTC m=+6544.543333200" observedRunningTime="2025-11-21 21:55:40.564558053 +0000 UTC m=+6545.750743097" watchObservedRunningTime="2025-11-21 21:55:40.579183714 +0000 UTC m=+6545.765368758" Nov 21 21:55:40 crc kubenswrapper[4727]: I1121 21:55:40.600037 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-5dc877fb6-76mc5" podStartSLOduration=3.211809179 podStartE2EDuration="4.600015864s" podCreationTimestamp="2025-11-21 21:55:36 +0000 UTC" firstStartedPulling="2025-11-21 21:55:37.971415781 +0000 UTC m=+6543.157600825" lastFinishedPulling="2025-11-21 21:55:39.359622466 +0000 UTC m=+6544.545807510" observedRunningTime="2025-11-21 21:55:40.58526666 +0000 UTC m=+6545.771451704" watchObservedRunningTime="2025-11-21 21:55:40.600015864 +0000 UTC m=+6545.786200898" Nov 21 21:55:40 crc kubenswrapper[4727]: I1121 21:55:40.994932 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Nov 21 21:55:43 crc kubenswrapper[4727]: I1121 21:55:43.595273 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"9523e234-9365-483e-8548-43a9c312692e","Type":"ContainerStarted","Data":"8691e7ee08979a8661beb3ad66c5d9cc4bb269ff004e8374673963321a32e12d"} Nov 21 21:55:43 crc kubenswrapper[4727]: I1121 21:55:43.622822 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=5.475545087 
podStartE2EDuration="1m4.622804487s" podCreationTimestamp="2025-11-21 21:54:39 +0000 UTC" firstStartedPulling="2025-11-21 21:54:41.843411739 +0000 UTC m=+6487.029596783" lastFinishedPulling="2025-11-21 21:55:40.990671099 +0000 UTC m=+6546.176856183" observedRunningTime="2025-11-21 21:55:43.61539915 +0000 UTC m=+6548.801584194" watchObservedRunningTime="2025-11-21 21:55:43.622804487 +0000 UTC m=+6548.808989531" Nov 21 21:55:46 crc kubenswrapper[4727]: I1121 21:55:46.500169 4727 scope.go:117] "RemoveContainer" containerID="976f8c3044d73ef3cad2da771f4d7a96e16e3a29c21be656da90340aeebddb83" Nov 21 21:55:46 crc kubenswrapper[4727]: E1121 21:55:46.501572 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:55:48 crc kubenswrapper[4727]: I1121 21:55:48.926527 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-5dc877fb6-76mc5" Nov 21 21:55:48 crc kubenswrapper[4727]: I1121 21:55:48.930035 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-77cbcd7b74-wdstp" Nov 21 21:55:49 crc kubenswrapper[4727]: I1121 21:55:49.027741 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7d97974c6c-c2ppt"] Nov 21 21:55:49 crc kubenswrapper[4727]: I1121 21:55:49.028076 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-7d97974c6c-c2ppt" podUID="c887cd90-b585-43da-98ff-f5c1e8fc3f70" containerName="heat-cfnapi" containerID="cri-o://c2b6c5c3a83ef7c15d6843ed76abe0cfac2e6f72594e609e298981fb62083225" gracePeriod=60 Nov 21 21:55:49 crc 
kubenswrapper[4727]: I1121 21:55:49.070582 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-5f6b556667-ldkfv"] Nov 21 21:55:49 crc kubenswrapper[4727]: I1121 21:55:49.071255 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-5f6b556667-ldkfv" podUID="bcab1adc-1286-4341-aebc-4a4c0821aba8" containerName="heat-api" containerID="cri-o://03596a81d87b234950cc47c01d71b0c36edf7f8e7420dec9e3e84eab02eaae31" gracePeriod=60 Nov 21 21:55:52 crc kubenswrapper[4727]: I1121 21:55:52.726004 4727 generic.go:334] "Generic (PLEG): container finished" podID="bcab1adc-1286-4341-aebc-4a4c0821aba8" containerID="03596a81d87b234950cc47c01d71b0c36edf7f8e7420dec9e3e84eab02eaae31" exitCode=0 Nov 21 21:55:52 crc kubenswrapper[4727]: I1121 21:55:52.727256 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5f6b556667-ldkfv" event={"ID":"bcab1adc-1286-4341-aebc-4a4c0821aba8","Type":"ContainerDied","Data":"03596a81d87b234950cc47c01d71b0c36edf7f8e7420dec9e3e84eab02eaae31"} Nov 21 21:55:52 crc kubenswrapper[4727]: I1121 21:55:52.727332 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5f6b556667-ldkfv" event={"ID":"bcab1adc-1286-4341-aebc-4a4c0821aba8","Type":"ContainerDied","Data":"e35938289c869c28e1294f063e9699e17dee55d82dd6367726c9d285572257ac"} Nov 21 21:55:52 crc kubenswrapper[4727]: I1121 21:55:52.727348 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e35938289c869c28e1294f063e9699e17dee55d82dd6367726c9d285572257ac" Nov 21 21:55:52 crc kubenswrapper[4727]: I1121 21:55:52.728941 4727 generic.go:334] "Generic (PLEG): container finished" podID="c887cd90-b585-43da-98ff-f5c1e8fc3f70" containerID="c2b6c5c3a83ef7c15d6843ed76abe0cfac2e6f72594e609e298981fb62083225" exitCode=0 Nov 21 21:55:52 crc kubenswrapper[4727]: I1121 21:55:52.728986 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/heat-cfnapi-7d97974c6c-c2ppt" event={"ID":"c887cd90-b585-43da-98ff-f5c1e8fc3f70","Type":"ContainerDied","Data":"c2b6c5c3a83ef7c15d6843ed76abe0cfac2e6f72594e609e298981fb62083225"} Nov 21 21:55:52 crc kubenswrapper[4727]: I1121 21:55:52.779154 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5f6b556667-ldkfv" Nov 21 21:55:52 crc kubenswrapper[4727]: I1121 21:55:52.851325 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bcab1adc-1286-4341-aebc-4a4c0821aba8-public-tls-certs\") pod \"bcab1adc-1286-4341-aebc-4a4c0821aba8\" (UID: \"bcab1adc-1286-4341-aebc-4a4c0821aba8\") " Nov 21 21:55:52 crc kubenswrapper[4727]: I1121 21:55:52.851410 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcab1adc-1286-4341-aebc-4a4c0821aba8-combined-ca-bundle\") pod \"bcab1adc-1286-4341-aebc-4a4c0821aba8\" (UID: \"bcab1adc-1286-4341-aebc-4a4c0821aba8\") " Nov 21 21:55:52 crc kubenswrapper[4727]: I1121 21:55:52.851509 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bcab1adc-1286-4341-aebc-4a4c0821aba8-internal-tls-certs\") pod \"bcab1adc-1286-4341-aebc-4a4c0821aba8\" (UID: \"bcab1adc-1286-4341-aebc-4a4c0821aba8\") " Nov 21 21:55:52 crc kubenswrapper[4727]: I1121 21:55:52.851547 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bcab1adc-1286-4341-aebc-4a4c0821aba8-config-data-custom\") pod \"bcab1adc-1286-4341-aebc-4a4c0821aba8\" (UID: \"bcab1adc-1286-4341-aebc-4a4c0821aba8\") " Nov 21 21:55:52 crc kubenswrapper[4727]: I1121 21:55:52.851710 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/bcab1adc-1286-4341-aebc-4a4c0821aba8-config-data\") pod \"bcab1adc-1286-4341-aebc-4a4c0821aba8\" (UID: \"bcab1adc-1286-4341-aebc-4a4c0821aba8\") " Nov 21 21:55:52 crc kubenswrapper[4727]: I1121 21:55:52.852073 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdr4w\" (UniqueName: \"kubernetes.io/projected/bcab1adc-1286-4341-aebc-4a4c0821aba8-kube-api-access-qdr4w\") pod \"bcab1adc-1286-4341-aebc-4a4c0821aba8\" (UID: \"bcab1adc-1286-4341-aebc-4a4c0821aba8\") " Nov 21 21:55:52 crc kubenswrapper[4727]: I1121 21:55:52.869514 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcab1adc-1286-4341-aebc-4a4c0821aba8-kube-api-access-qdr4w" (OuterVolumeSpecName: "kube-api-access-qdr4w") pod "bcab1adc-1286-4341-aebc-4a4c0821aba8" (UID: "bcab1adc-1286-4341-aebc-4a4c0821aba8"). InnerVolumeSpecName "kube-api-access-qdr4w". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 21:55:52 crc kubenswrapper[4727]: I1121 21:55:52.869970 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcab1adc-1286-4341-aebc-4a4c0821aba8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "bcab1adc-1286-4341-aebc-4a4c0821aba8" (UID: "bcab1adc-1286-4341-aebc-4a4c0821aba8"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 21:55:52 crc kubenswrapper[4727]: I1121 21:55:52.909602 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcab1adc-1286-4341-aebc-4a4c0821aba8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bcab1adc-1286-4341-aebc-4a4c0821aba8" (UID: "bcab1adc-1286-4341-aebc-4a4c0821aba8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 21:55:52 crc kubenswrapper[4727]: I1121 21:55:52.934690 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7d97974c6c-c2ppt" Nov 21 21:55:52 crc kubenswrapper[4727]: I1121 21:55:52.955082 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c887cd90-b585-43da-98ff-f5c1e8fc3f70-internal-tls-certs\") pod \"c887cd90-b585-43da-98ff-f5c1e8fc3f70\" (UID: \"c887cd90-b585-43da-98ff-f5c1e8fc3f70\") " Nov 21 21:55:52 crc kubenswrapper[4727]: I1121 21:55:52.955177 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c887cd90-b585-43da-98ff-f5c1e8fc3f70-public-tls-certs\") pod \"c887cd90-b585-43da-98ff-f5c1e8fc3f70\" (UID: \"c887cd90-b585-43da-98ff-f5c1e8fc3f70\") " Nov 21 21:55:52 crc kubenswrapper[4727]: I1121 21:55:52.955292 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c887cd90-b585-43da-98ff-f5c1e8fc3f70-config-data\") pod \"c887cd90-b585-43da-98ff-f5c1e8fc3f70\" (UID: \"c887cd90-b585-43da-98ff-f5c1e8fc3f70\") " Nov 21 21:55:52 crc kubenswrapper[4727]: I1121 21:55:52.955334 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c887cd90-b585-43da-98ff-f5c1e8fc3f70-config-data-custom\") pod \"c887cd90-b585-43da-98ff-f5c1e8fc3f70\" (UID: \"c887cd90-b585-43da-98ff-f5c1e8fc3f70\") " Nov 21 21:55:52 crc kubenswrapper[4727]: I1121 21:55:52.955465 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zngdl\" (UniqueName: \"kubernetes.io/projected/c887cd90-b585-43da-98ff-f5c1e8fc3f70-kube-api-access-zngdl\") pod \"c887cd90-b585-43da-98ff-f5c1e8fc3f70\" (UID: 
\"c887cd90-b585-43da-98ff-f5c1e8fc3f70\") " Nov 21 21:55:52 crc kubenswrapper[4727]: I1121 21:55:52.955537 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c887cd90-b585-43da-98ff-f5c1e8fc3f70-combined-ca-bundle\") pod \"c887cd90-b585-43da-98ff-f5c1e8fc3f70\" (UID: \"c887cd90-b585-43da-98ff-f5c1e8fc3f70\") " Nov 21 21:55:52 crc kubenswrapper[4727]: I1121 21:55:52.956099 4727 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bcab1adc-1286-4341-aebc-4a4c0821aba8-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 21 21:55:52 crc kubenswrapper[4727]: I1121 21:55:52.956189 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdr4w\" (UniqueName: \"kubernetes.io/projected/bcab1adc-1286-4341-aebc-4a4c0821aba8-kube-api-access-qdr4w\") on node \"crc\" DevicePath \"\"" Nov 21 21:55:52 crc kubenswrapper[4727]: I1121 21:55:52.956220 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcab1adc-1286-4341-aebc-4a4c0821aba8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 21:55:52 crc kubenswrapper[4727]: I1121 21:55:52.964433 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c887cd90-b585-43da-98ff-f5c1e8fc3f70-kube-api-access-zngdl" (OuterVolumeSpecName: "kube-api-access-zngdl") pod "c887cd90-b585-43da-98ff-f5c1e8fc3f70" (UID: "c887cd90-b585-43da-98ff-f5c1e8fc3f70"). InnerVolumeSpecName "kube-api-access-zngdl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 21:55:52 crc kubenswrapper[4727]: I1121 21:55:52.964642 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c887cd90-b585-43da-98ff-f5c1e8fc3f70-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c887cd90-b585-43da-98ff-f5c1e8fc3f70" (UID: "c887cd90-b585-43da-98ff-f5c1e8fc3f70"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 21:55:52 crc kubenswrapper[4727]: I1121 21:55:52.970124 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcab1adc-1286-4341-aebc-4a4c0821aba8-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "bcab1adc-1286-4341-aebc-4a4c0821aba8" (UID: "bcab1adc-1286-4341-aebc-4a4c0821aba8"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 21:55:52 crc kubenswrapper[4727]: I1121 21:55:52.979007 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcab1adc-1286-4341-aebc-4a4c0821aba8-config-data" (OuterVolumeSpecName: "config-data") pod "bcab1adc-1286-4341-aebc-4a4c0821aba8" (UID: "bcab1adc-1286-4341-aebc-4a4c0821aba8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 21:55:52 crc kubenswrapper[4727]: I1121 21:55:52.979942 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcab1adc-1286-4341-aebc-4a4c0821aba8-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "bcab1adc-1286-4341-aebc-4a4c0821aba8" (UID: "bcab1adc-1286-4341-aebc-4a4c0821aba8"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 21:55:53 crc kubenswrapper[4727]: I1121 21:55:53.024940 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c887cd90-b585-43da-98ff-f5c1e8fc3f70-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c887cd90-b585-43da-98ff-f5c1e8fc3f70" (UID: "c887cd90-b585-43da-98ff-f5c1e8fc3f70"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 21:55:53 crc kubenswrapper[4727]: I1121 21:55:53.038845 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c887cd90-b585-43da-98ff-f5c1e8fc3f70-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c887cd90-b585-43da-98ff-f5c1e8fc3f70" (UID: "c887cd90-b585-43da-98ff-f5c1e8fc3f70"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 21:55:53 crc kubenswrapper[4727]: I1121 21:55:53.047219 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c887cd90-b585-43da-98ff-f5c1e8fc3f70-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c887cd90-b585-43da-98ff-f5c1e8fc3f70" (UID: "c887cd90-b585-43da-98ff-f5c1e8fc3f70"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 21:55:53 crc kubenswrapper[4727]: I1121 21:55:53.057924 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zngdl\" (UniqueName: \"kubernetes.io/projected/c887cd90-b585-43da-98ff-f5c1e8fc3f70-kube-api-access-zngdl\") on node \"crc\" DevicePath \"\"" Nov 21 21:55:53 crc kubenswrapper[4727]: I1121 21:55:53.058013 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c887cd90-b585-43da-98ff-f5c1e8fc3f70-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 21:55:53 crc kubenswrapper[4727]: I1121 21:55:53.058026 4727 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bcab1adc-1286-4341-aebc-4a4c0821aba8-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 21:55:53 crc kubenswrapper[4727]: I1121 21:55:53.058036 4727 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bcab1adc-1286-4341-aebc-4a4c0821aba8-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 21:55:53 crc kubenswrapper[4727]: I1121 21:55:53.058049 4727 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c887cd90-b585-43da-98ff-f5c1e8fc3f70-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 21:55:53 crc kubenswrapper[4727]: I1121 21:55:53.058060 4727 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c887cd90-b585-43da-98ff-f5c1e8fc3f70-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 21:55:53 crc kubenswrapper[4727]: I1121 21:55:53.058070 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcab1adc-1286-4341-aebc-4a4c0821aba8-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 21:55:53 crc kubenswrapper[4727]: I1121 
21:55:53.058082 4727 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c887cd90-b585-43da-98ff-f5c1e8fc3f70-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 21 21:55:53 crc kubenswrapper[4727]: I1121 21:55:53.066362 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c887cd90-b585-43da-98ff-f5c1e8fc3f70-config-data" (OuterVolumeSpecName: "config-data") pod "c887cd90-b585-43da-98ff-f5c1e8fc3f70" (UID: "c887cd90-b585-43da-98ff-f5c1e8fc3f70"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 21:55:53 crc kubenswrapper[4727]: I1121 21:55:53.161428 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c887cd90-b585-43da-98ff-f5c1e8fc3f70-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 21:55:53 crc kubenswrapper[4727]: I1121 21:55:53.742250 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5f6b556667-ldkfv" Nov 21 21:55:53 crc kubenswrapper[4727]: I1121 21:55:53.742250 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7d97974c6c-c2ppt" event={"ID":"c887cd90-b585-43da-98ff-f5c1e8fc3f70","Type":"ContainerDied","Data":"320a498659ab3252dafe37c71e7ac7f64589e0f4af03de195cf07226fb0f4659"} Nov 21 21:55:53 crc kubenswrapper[4727]: I1121 21:55:53.742925 4727 scope.go:117] "RemoveContainer" containerID="c2b6c5c3a83ef7c15d6843ed76abe0cfac2e6f72594e609e298981fb62083225" Nov 21 21:55:53 crc kubenswrapper[4727]: I1121 21:55:53.742296 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-7d97974c6c-c2ppt" Nov 21 21:55:53 crc kubenswrapper[4727]: I1121 21:55:53.788000 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7d97974c6c-c2ppt"] Nov 21 21:55:53 crc kubenswrapper[4727]: I1121 21:55:53.800411 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-7d97974c6c-c2ppt"] Nov 21 21:55:53 crc kubenswrapper[4727]: I1121 21:55:53.818292 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-5f6b556667-ldkfv"] Nov 21 21:55:53 crc kubenswrapper[4727]: I1121 21:55:53.827633 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-5f6b556667-ldkfv"] Nov 21 21:55:55 crc kubenswrapper[4727]: I1121 21:55:55.525213 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcab1adc-1286-4341-aebc-4a4c0821aba8" path="/var/lib/kubelet/pods/bcab1adc-1286-4341-aebc-4a4c0821aba8/volumes" Nov 21 21:55:55 crc kubenswrapper[4727]: I1121 21:55:55.526949 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c887cd90-b585-43da-98ff-f5c1e8fc3f70" path="/var/lib/kubelet/pods/c887cd90-b585-43da-98ff-f5c1e8fc3f70/volumes" Nov 21 21:55:57 crc kubenswrapper[4727]: I1121 21:55:57.233654 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-58f8c4fcbf-cqdrx" Nov 21 21:55:57 crc kubenswrapper[4727]: I1121 21:55:57.299081 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-6974c7d5d8-thpd4"] Nov 21 21:55:57 crc kubenswrapper[4727]: I1121 21:55:57.299647 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-6974c7d5d8-thpd4" podUID="c702948c-e12e-47a2-a3b4-4d768c50ab5f" containerName="heat-engine" containerID="cri-o://b73962e9affac724b5d922238a68fd5a4caa7f1ceb51c830e57734b3f013658f" gracePeriod=60 Nov 21 21:55:57 crc kubenswrapper[4727]: I1121 21:55:57.500250 4727 
scope.go:117] "RemoveContainer" containerID="976f8c3044d73ef3cad2da771f4d7a96e16e3a29c21be656da90340aeebddb83" Nov 21 21:55:57 crc kubenswrapper[4727]: E1121 21:55:57.501131 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:55:58 crc kubenswrapper[4727]: I1121 21:55:58.852565 4727 scope.go:117] "RemoveContainer" containerID="03596a81d87b234950cc47c01d71b0c36edf7f8e7420dec9e3e84eab02eaae31" Nov 21 21:55:59 crc kubenswrapper[4727]: E1121 21:55:59.799979 4727 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b73962e9affac724b5d922238a68fd5a4caa7f1ceb51c830e57734b3f013658f" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 21 21:55:59 crc kubenswrapper[4727]: E1121 21:55:59.801717 4727 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b73962e9affac724b5d922238a68fd5a4caa7f1ceb51c830e57734b3f013658f" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 21 21:55:59 crc kubenswrapper[4727]: E1121 21:55:59.804715 4727 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b73962e9affac724b5d922238a68fd5a4caa7f1ceb51c830e57734b3f013658f" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 21 21:55:59 crc kubenswrapper[4727]: E1121 
21:55:59.804811 4727 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-6974c7d5d8-thpd4" podUID="c702948c-e12e-47a2-a3b4-4d768c50ab5f" containerName="heat-engine" Nov 21 21:56:04 crc kubenswrapper[4727]: I1121 21:56:04.756561 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-dzf87"] Nov 21 21:56:04 crc kubenswrapper[4727]: E1121 21:56:04.757993 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcab1adc-1286-4341-aebc-4a4c0821aba8" containerName="heat-api" Nov 21 21:56:04 crc kubenswrapper[4727]: I1121 21:56:04.758013 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcab1adc-1286-4341-aebc-4a4c0821aba8" containerName="heat-api" Nov 21 21:56:04 crc kubenswrapper[4727]: E1121 21:56:04.758064 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c887cd90-b585-43da-98ff-f5c1e8fc3f70" containerName="heat-cfnapi" Nov 21 21:56:04 crc kubenswrapper[4727]: I1121 21:56:04.758075 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="c887cd90-b585-43da-98ff-f5c1e8fc3f70" containerName="heat-cfnapi" Nov 21 21:56:04 crc kubenswrapper[4727]: I1121 21:56:04.758351 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="c887cd90-b585-43da-98ff-f5c1e8fc3f70" containerName="heat-cfnapi" Nov 21 21:56:04 crc kubenswrapper[4727]: I1121 21:56:04.758395 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcab1adc-1286-4341-aebc-4a4c0821aba8" containerName="heat-api" Nov 21 21:56:04 crc kubenswrapper[4727]: I1121 21:56:04.759437 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-dzf87" Nov 21 21:56:04 crc kubenswrapper[4727]: I1121 21:56:04.762796 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 21 21:56:04 crc kubenswrapper[4727]: I1121 21:56:04.783689 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-dzf87"] Nov 21 21:56:04 crc kubenswrapper[4727]: I1121 21:56:04.946248 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/297ffe25-5b73-4b41-b197-7016c31cb16b-scripts\") pod \"aodh-db-sync-dzf87\" (UID: \"297ffe25-5b73-4b41-b197-7016c31cb16b\") " pod="openstack/aodh-db-sync-dzf87" Nov 21 21:56:04 crc kubenswrapper[4727]: I1121 21:56:04.946794 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92p6r\" (UniqueName: \"kubernetes.io/projected/297ffe25-5b73-4b41-b197-7016c31cb16b-kube-api-access-92p6r\") pod \"aodh-db-sync-dzf87\" (UID: \"297ffe25-5b73-4b41-b197-7016c31cb16b\") " pod="openstack/aodh-db-sync-dzf87" Nov 21 21:56:04 crc kubenswrapper[4727]: I1121 21:56:04.946842 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/297ffe25-5b73-4b41-b197-7016c31cb16b-combined-ca-bundle\") pod \"aodh-db-sync-dzf87\" (UID: \"297ffe25-5b73-4b41-b197-7016c31cb16b\") " pod="openstack/aodh-db-sync-dzf87" Nov 21 21:56:04 crc kubenswrapper[4727]: I1121 21:56:04.947018 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/297ffe25-5b73-4b41-b197-7016c31cb16b-config-data\") pod \"aodh-db-sync-dzf87\" (UID: \"297ffe25-5b73-4b41-b197-7016c31cb16b\") " pod="openstack/aodh-db-sync-dzf87" Nov 21 21:56:05 crc kubenswrapper[4727]: I1121 21:56:05.049284 4727 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/297ffe25-5b73-4b41-b197-7016c31cb16b-config-data\") pod \"aodh-db-sync-dzf87\" (UID: \"297ffe25-5b73-4b41-b197-7016c31cb16b\") " pod="openstack/aodh-db-sync-dzf87" Nov 21 21:56:05 crc kubenswrapper[4727]: I1121 21:56:05.049366 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/297ffe25-5b73-4b41-b197-7016c31cb16b-scripts\") pod \"aodh-db-sync-dzf87\" (UID: \"297ffe25-5b73-4b41-b197-7016c31cb16b\") " pod="openstack/aodh-db-sync-dzf87" Nov 21 21:56:05 crc kubenswrapper[4727]: I1121 21:56:05.049481 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92p6r\" (UniqueName: \"kubernetes.io/projected/297ffe25-5b73-4b41-b197-7016c31cb16b-kube-api-access-92p6r\") pod \"aodh-db-sync-dzf87\" (UID: \"297ffe25-5b73-4b41-b197-7016c31cb16b\") " pod="openstack/aodh-db-sync-dzf87" Nov 21 21:56:05 crc kubenswrapper[4727]: I1121 21:56:05.049530 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/297ffe25-5b73-4b41-b197-7016c31cb16b-combined-ca-bundle\") pod \"aodh-db-sync-dzf87\" (UID: \"297ffe25-5b73-4b41-b197-7016c31cb16b\") " pod="openstack/aodh-db-sync-dzf87" Nov 21 21:56:05 crc kubenswrapper[4727]: I1121 21:56:05.058048 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/297ffe25-5b73-4b41-b197-7016c31cb16b-combined-ca-bundle\") pod \"aodh-db-sync-dzf87\" (UID: \"297ffe25-5b73-4b41-b197-7016c31cb16b\") " pod="openstack/aodh-db-sync-dzf87" Nov 21 21:56:05 crc kubenswrapper[4727]: I1121 21:56:05.058996 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/297ffe25-5b73-4b41-b197-7016c31cb16b-scripts\") 
pod \"aodh-db-sync-dzf87\" (UID: \"297ffe25-5b73-4b41-b197-7016c31cb16b\") " pod="openstack/aodh-db-sync-dzf87" Nov 21 21:56:05 crc kubenswrapper[4727]: I1121 21:56:05.059313 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/297ffe25-5b73-4b41-b197-7016c31cb16b-config-data\") pod \"aodh-db-sync-dzf87\" (UID: \"297ffe25-5b73-4b41-b197-7016c31cb16b\") " pod="openstack/aodh-db-sync-dzf87" Nov 21 21:56:05 crc kubenswrapper[4727]: I1121 21:56:05.075254 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92p6r\" (UniqueName: \"kubernetes.io/projected/297ffe25-5b73-4b41-b197-7016c31cb16b-kube-api-access-92p6r\") pod \"aodh-db-sync-dzf87\" (UID: \"297ffe25-5b73-4b41-b197-7016c31cb16b\") " pod="openstack/aodh-db-sync-dzf87" Nov 21 21:56:05 crc kubenswrapper[4727]: I1121 21:56:05.084818 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-dzf87" Nov 21 21:56:05 crc kubenswrapper[4727]: I1121 21:56:05.690665 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-dzf87"] Nov 21 21:56:05 crc kubenswrapper[4727]: I1121 21:56:05.932323 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-dzf87" event={"ID":"297ffe25-5b73-4b41-b197-7016c31cb16b","Type":"ContainerStarted","Data":"8b6611fcba219786957d4e8ec177ab8805477de5f48c8b3b0ac0b394134e5252"} Nov 21 21:56:09 crc kubenswrapper[4727]: I1121 21:56:09.080328 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 21 21:56:09 crc kubenswrapper[4727]: E1121 21:56:09.793043 4727 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b73962e9affac724b5d922238a68fd5a4caa7f1ceb51c830e57734b3f013658f is running failed: container process not found" 
containerID="b73962e9affac724b5d922238a68fd5a4caa7f1ceb51c830e57734b3f013658f" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 21 21:56:09 crc kubenswrapper[4727]: E1121 21:56:09.793794 4727 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b73962e9affac724b5d922238a68fd5a4caa7f1ceb51c830e57734b3f013658f is running failed: container process not found" containerID="b73962e9affac724b5d922238a68fd5a4caa7f1ceb51c830e57734b3f013658f" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 21 21:56:09 crc kubenswrapper[4727]: E1121 21:56:09.794598 4727 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b73962e9affac724b5d922238a68fd5a4caa7f1ceb51c830e57734b3f013658f is running failed: container process not found" containerID="b73962e9affac724b5d922238a68fd5a4caa7f1ceb51c830e57734b3f013658f" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 21 21:56:09 crc kubenswrapper[4727]: E1121 21:56:09.794693 4727 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b73962e9affac724b5d922238a68fd5a4caa7f1ceb51c830e57734b3f013658f is running failed: container process not found" probeType="Readiness" pod="openstack/heat-engine-6974c7d5d8-thpd4" podUID="c702948c-e12e-47a2-a3b4-4d768c50ab5f" containerName="heat-engine" Nov 21 21:56:10 crc kubenswrapper[4727]: I1121 21:56:10.025683 4727 generic.go:334] "Generic (PLEG): container finished" podID="c702948c-e12e-47a2-a3b4-4d768c50ab5f" containerID="b73962e9affac724b5d922238a68fd5a4caa7f1ceb51c830e57734b3f013658f" exitCode=0 Nov 21 21:56:10 crc kubenswrapper[4727]: I1121 21:56:10.025891 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6974c7d5d8-thpd4" 
event={"ID":"c702948c-e12e-47a2-a3b4-4d768c50ab5f","Type":"ContainerDied","Data":"b73962e9affac724b5d922238a68fd5a4caa7f1ceb51c830e57734b3f013658f"} Nov 21 21:56:10 crc kubenswrapper[4727]: I1121 21:56:10.760925 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6974c7d5d8-thpd4" Nov 21 21:56:10 crc kubenswrapper[4727]: I1121 21:56:10.942656 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c702948c-e12e-47a2-a3b4-4d768c50ab5f-config-data-custom\") pod \"c702948c-e12e-47a2-a3b4-4d768c50ab5f\" (UID: \"c702948c-e12e-47a2-a3b4-4d768c50ab5f\") " Nov 21 21:56:10 crc kubenswrapper[4727]: I1121 21:56:10.942771 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7z79\" (UniqueName: \"kubernetes.io/projected/c702948c-e12e-47a2-a3b4-4d768c50ab5f-kube-api-access-c7z79\") pod \"c702948c-e12e-47a2-a3b4-4d768c50ab5f\" (UID: \"c702948c-e12e-47a2-a3b4-4d768c50ab5f\") " Nov 21 21:56:10 crc kubenswrapper[4727]: I1121 21:56:10.942899 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c702948c-e12e-47a2-a3b4-4d768c50ab5f-config-data\") pod \"c702948c-e12e-47a2-a3b4-4d768c50ab5f\" (UID: \"c702948c-e12e-47a2-a3b4-4d768c50ab5f\") " Nov 21 21:56:10 crc kubenswrapper[4727]: I1121 21:56:10.942941 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c702948c-e12e-47a2-a3b4-4d768c50ab5f-combined-ca-bundle\") pod \"c702948c-e12e-47a2-a3b4-4d768c50ab5f\" (UID: \"c702948c-e12e-47a2-a3b4-4d768c50ab5f\") " Nov 21 21:56:10 crc kubenswrapper[4727]: I1121 21:56:10.950849 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c702948c-e12e-47a2-a3b4-4d768c50ab5f-kube-api-access-c7z79" 
(OuterVolumeSpecName: "kube-api-access-c7z79") pod "c702948c-e12e-47a2-a3b4-4d768c50ab5f" (UID: "c702948c-e12e-47a2-a3b4-4d768c50ab5f"). InnerVolumeSpecName "kube-api-access-c7z79". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 21:56:10 crc kubenswrapper[4727]: I1121 21:56:10.951194 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c702948c-e12e-47a2-a3b4-4d768c50ab5f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c702948c-e12e-47a2-a3b4-4d768c50ab5f" (UID: "c702948c-e12e-47a2-a3b4-4d768c50ab5f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 21:56:10 crc kubenswrapper[4727]: I1121 21:56:10.988276 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c702948c-e12e-47a2-a3b4-4d768c50ab5f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c702948c-e12e-47a2-a3b4-4d768c50ab5f" (UID: "c702948c-e12e-47a2-a3b4-4d768c50ab5f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 21:56:11 crc kubenswrapper[4727]: I1121 21:56:11.039059 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-dzf87" event={"ID":"297ffe25-5b73-4b41-b197-7016c31cb16b","Type":"ContainerStarted","Data":"a609a5125c36ec5a3a0177b3d6cd5fe534501c20e9576bb5e3ffd40437a9e1a0"} Nov 21 21:56:11 crc kubenswrapper[4727]: I1121 21:56:11.040715 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6974c7d5d8-thpd4" event={"ID":"c702948c-e12e-47a2-a3b4-4d768c50ab5f","Type":"ContainerDied","Data":"68253a82991d8808430f12430a15875b4e4ea4afcd896bd74edd5f541d718f24"} Nov 21 21:56:11 crc kubenswrapper[4727]: I1121 21:56:11.040860 4727 scope.go:117] "RemoveContainer" containerID="b73962e9affac724b5d922238a68fd5a4caa7f1ceb51c830e57734b3f013658f" Nov 21 21:56:11 crc kubenswrapper[4727]: I1121 21:56:11.041086 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6974c7d5d8-thpd4" Nov 21 21:56:11 crc kubenswrapper[4727]: I1121 21:56:11.046664 4727 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c702948c-e12e-47a2-a3b4-4d768c50ab5f-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 21 21:56:11 crc kubenswrapper[4727]: I1121 21:56:11.046707 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7z79\" (UniqueName: \"kubernetes.io/projected/c702948c-e12e-47a2-a3b4-4d768c50ab5f-kube-api-access-c7z79\") on node \"crc\" DevicePath \"\"" Nov 21 21:56:11 crc kubenswrapper[4727]: I1121 21:56:11.046721 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c702948c-e12e-47a2-a3b4-4d768c50ab5f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 21:56:11 crc kubenswrapper[4727]: I1121 21:56:11.053838 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/c702948c-e12e-47a2-a3b4-4d768c50ab5f-config-data" (OuterVolumeSpecName: "config-data") pod "c702948c-e12e-47a2-a3b4-4d768c50ab5f" (UID: "c702948c-e12e-47a2-a3b4-4d768c50ab5f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 21:56:11 crc kubenswrapper[4727]: I1121 21:56:11.066832 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-dzf87" podStartSLOduration=2.3465838310000002 podStartE2EDuration="7.06680737s" podCreationTimestamp="2025-11-21 21:56:04 +0000 UTC" firstStartedPulling="2025-11-21 21:56:05.695440605 +0000 UTC m=+6570.881625649" lastFinishedPulling="2025-11-21 21:56:10.415664144 +0000 UTC m=+6575.601849188" observedRunningTime="2025-11-21 21:56:11.058315817 +0000 UTC m=+6576.244500871" watchObservedRunningTime="2025-11-21 21:56:11.06680737 +0000 UTC m=+6576.252992424" Nov 21 21:56:11 crc kubenswrapper[4727]: I1121 21:56:11.149846 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c702948c-e12e-47a2-a3b4-4d768c50ab5f-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 21:56:11 crc kubenswrapper[4727]: I1121 21:56:11.412441 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-6974c7d5d8-thpd4"] Nov 21 21:56:11 crc kubenswrapper[4727]: I1121 21:56:11.421560 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-6974c7d5d8-thpd4"] Nov 21 21:56:11 crc kubenswrapper[4727]: I1121 21:56:11.499475 4727 scope.go:117] "RemoveContainer" containerID="976f8c3044d73ef3cad2da771f4d7a96e16e3a29c21be656da90340aeebddb83" Nov 21 21:56:11 crc kubenswrapper[4727]: E1121 21:56:11.499782 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:56:11 crc kubenswrapper[4727]: I1121 21:56:11.513848 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c702948c-e12e-47a2-a3b4-4d768c50ab5f" path="/var/lib/kubelet/pods/c702948c-e12e-47a2-a3b4-4d768c50ab5f/volumes" Nov 21 21:56:14 crc kubenswrapper[4727]: I1121 21:56:14.105533 4727 generic.go:334] "Generic (PLEG): container finished" podID="297ffe25-5b73-4b41-b197-7016c31cb16b" containerID="a609a5125c36ec5a3a0177b3d6cd5fe534501c20e9576bb5e3ffd40437a9e1a0" exitCode=0 Nov 21 21:56:14 crc kubenswrapper[4727]: I1121 21:56:14.105638 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-dzf87" event={"ID":"297ffe25-5b73-4b41-b197-7016c31cb16b","Type":"ContainerDied","Data":"a609a5125c36ec5a3a0177b3d6cd5fe534501c20e9576bb5e3ffd40437a9e1a0"} Nov 21 21:56:15 crc kubenswrapper[4727]: I1121 21:56:15.603828 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-dzf87" Nov 21 21:56:15 crc kubenswrapper[4727]: I1121 21:56:15.691347 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/297ffe25-5b73-4b41-b197-7016c31cb16b-scripts\") pod \"297ffe25-5b73-4b41-b197-7016c31cb16b\" (UID: \"297ffe25-5b73-4b41-b197-7016c31cb16b\") " Nov 21 21:56:15 crc kubenswrapper[4727]: I1121 21:56:15.691691 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92p6r\" (UniqueName: \"kubernetes.io/projected/297ffe25-5b73-4b41-b197-7016c31cb16b-kube-api-access-92p6r\") pod \"297ffe25-5b73-4b41-b197-7016c31cb16b\" (UID: \"297ffe25-5b73-4b41-b197-7016c31cb16b\") " Nov 21 21:56:15 crc kubenswrapper[4727]: I1121 21:56:15.691724 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/297ffe25-5b73-4b41-b197-7016c31cb16b-combined-ca-bundle\") pod \"297ffe25-5b73-4b41-b197-7016c31cb16b\" (UID: \"297ffe25-5b73-4b41-b197-7016c31cb16b\") " Nov 21 21:56:15 crc kubenswrapper[4727]: I1121 21:56:15.691816 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/297ffe25-5b73-4b41-b197-7016c31cb16b-config-data\") pod \"297ffe25-5b73-4b41-b197-7016c31cb16b\" (UID: \"297ffe25-5b73-4b41-b197-7016c31cb16b\") " Nov 21 21:56:15 crc kubenswrapper[4727]: I1121 21:56:15.704892 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/297ffe25-5b73-4b41-b197-7016c31cb16b-scripts" (OuterVolumeSpecName: "scripts") pod "297ffe25-5b73-4b41-b197-7016c31cb16b" (UID: "297ffe25-5b73-4b41-b197-7016c31cb16b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 21:56:15 crc kubenswrapper[4727]: I1121 21:56:15.722388 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/297ffe25-5b73-4b41-b197-7016c31cb16b-kube-api-access-92p6r" (OuterVolumeSpecName: "kube-api-access-92p6r") pod "297ffe25-5b73-4b41-b197-7016c31cb16b" (UID: "297ffe25-5b73-4b41-b197-7016c31cb16b"). InnerVolumeSpecName "kube-api-access-92p6r". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 21:56:15 crc kubenswrapper[4727]: I1121 21:56:15.774159 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/297ffe25-5b73-4b41-b197-7016c31cb16b-config-data" (OuterVolumeSpecName: "config-data") pod "297ffe25-5b73-4b41-b197-7016c31cb16b" (UID: "297ffe25-5b73-4b41-b197-7016c31cb16b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 21:56:15 crc kubenswrapper[4727]: I1121 21:56:15.797188 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/297ffe25-5b73-4b41-b197-7016c31cb16b-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 21:56:15 crc kubenswrapper[4727]: I1121 21:56:15.797245 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92p6r\" (UniqueName: \"kubernetes.io/projected/297ffe25-5b73-4b41-b197-7016c31cb16b-kube-api-access-92p6r\") on node \"crc\" DevicePath \"\"" Nov 21 21:56:15 crc kubenswrapper[4727]: I1121 21:56:15.797260 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/297ffe25-5b73-4b41-b197-7016c31cb16b-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 21:56:15 crc kubenswrapper[4727]: I1121 21:56:15.887944 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/297ffe25-5b73-4b41-b197-7016c31cb16b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") 
pod "297ffe25-5b73-4b41-b197-7016c31cb16b" (UID: "297ffe25-5b73-4b41-b197-7016c31cb16b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 21:56:15 crc kubenswrapper[4727]: I1121 21:56:15.900737 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/297ffe25-5b73-4b41-b197-7016c31cb16b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 21:56:16 crc kubenswrapper[4727]: I1121 21:56:16.128586 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-dzf87" event={"ID":"297ffe25-5b73-4b41-b197-7016c31cb16b","Type":"ContainerDied","Data":"8b6611fcba219786957d4e8ec177ab8805477de5f48c8b3b0ac0b394134e5252"} Nov 21 21:56:16 crc kubenswrapper[4727]: I1121 21:56:16.128644 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b6611fcba219786957d4e8ec177ab8805477de5f48c8b3b0ac0b394134e5252" Nov 21 21:56:16 crc kubenswrapper[4727]: I1121 21:56:16.128649 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-dzf87" Nov 21 21:56:19 crc kubenswrapper[4727]: I1121 21:56:19.860213 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Nov 21 21:56:19 crc kubenswrapper[4727]: I1121 21:56:19.861152 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="9e00f1d6-6b39-4c43-9459-1cc521827332" containerName="aodh-api" containerID="cri-o://b321ba7151fcf2ebdc29a561f372eaed7e210db0194e95bf47c23d911f4af23c" gracePeriod=30 Nov 21 21:56:19 crc kubenswrapper[4727]: I1121 21:56:19.861635 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="9e00f1d6-6b39-4c43-9459-1cc521827332" containerName="aodh-notifier" containerID="cri-o://c1e6f55e6a61eb65b1905a21ead5000b9e5b47b2fb286c73233b066058f29a73" gracePeriod=30 Nov 21 21:56:19 crc kubenswrapper[4727]: I1121 21:56:19.861695 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="9e00f1d6-6b39-4c43-9459-1cc521827332" containerName="aodh-evaluator" containerID="cri-o://1c84862119ef78331528e13cdd5a0f4a615f84ed8851eb27b6bd7e92b8ab6c9b" gracePeriod=30 Nov 21 21:56:19 crc kubenswrapper[4727]: I1121 21:56:19.861742 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="9e00f1d6-6b39-4c43-9459-1cc521827332" containerName="aodh-listener" containerID="cri-o://af2bbf0f5d4af6ed81b39a50c6c390a50909d0462289492ce92d0de400f921ae" gracePeriod=30 Nov 21 21:56:21 crc kubenswrapper[4727]: I1121 21:56:21.194637 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"9e00f1d6-6b39-4c43-9459-1cc521827332","Type":"ContainerDied","Data":"1c84862119ef78331528e13cdd5a0f4a615f84ed8851eb27b6bd7e92b8ab6c9b"} Nov 21 21:56:21 crc kubenswrapper[4727]: I1121 21:56:21.194579 4727 generic.go:334] "Generic (PLEG): container finished" 
podID="9e00f1d6-6b39-4c43-9459-1cc521827332" containerID="1c84862119ef78331528e13cdd5a0f4a615f84ed8851eb27b6bd7e92b8ab6c9b" exitCode=0 Nov 21 21:56:23 crc kubenswrapper[4727]: I1121 21:56:23.217508 4727 generic.go:334] "Generic (PLEG): container finished" podID="9e00f1d6-6b39-4c43-9459-1cc521827332" containerID="af2bbf0f5d4af6ed81b39a50c6c390a50909d0462289492ce92d0de400f921ae" exitCode=0 Nov 21 21:56:23 crc kubenswrapper[4727]: I1121 21:56:23.218581 4727 generic.go:334] "Generic (PLEG): container finished" podID="9e00f1d6-6b39-4c43-9459-1cc521827332" containerID="b321ba7151fcf2ebdc29a561f372eaed7e210db0194e95bf47c23d911f4af23c" exitCode=0 Nov 21 21:56:23 crc kubenswrapper[4727]: I1121 21:56:23.217728 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"9e00f1d6-6b39-4c43-9459-1cc521827332","Type":"ContainerDied","Data":"af2bbf0f5d4af6ed81b39a50c6c390a50909d0462289492ce92d0de400f921ae"} Nov 21 21:56:23 crc kubenswrapper[4727]: I1121 21:56:23.218633 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"9e00f1d6-6b39-4c43-9459-1cc521827332","Type":"ContainerDied","Data":"b321ba7151fcf2ebdc29a561f372eaed7e210db0194e95bf47c23d911f4af23c"} Nov 21 21:56:24 crc kubenswrapper[4727]: I1121 21:56:24.268464 4727 generic.go:334] "Generic (PLEG): container finished" podID="9e00f1d6-6b39-4c43-9459-1cc521827332" containerID="c1e6f55e6a61eb65b1905a21ead5000b9e5b47b2fb286c73233b066058f29a73" exitCode=0 Nov 21 21:56:24 crc kubenswrapper[4727]: I1121 21:56:24.268520 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"9e00f1d6-6b39-4c43-9459-1cc521827332","Type":"ContainerDied","Data":"c1e6f55e6a61eb65b1905a21ead5000b9e5b47b2fb286c73233b066058f29a73"} Nov 21 21:56:24 crc kubenswrapper[4727]: I1121 21:56:24.385551 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 21 21:56:24 crc kubenswrapper[4727]: I1121 21:56:24.444831 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e00f1d6-6b39-4c43-9459-1cc521827332-scripts\") pod \"9e00f1d6-6b39-4c43-9459-1cc521827332\" (UID: \"9e00f1d6-6b39-4c43-9459-1cc521827332\") " Nov 21 21:56:24 crc kubenswrapper[4727]: I1121 21:56:24.445019 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e00f1d6-6b39-4c43-9459-1cc521827332-config-data\") pod \"9e00f1d6-6b39-4c43-9459-1cc521827332\" (UID: \"9e00f1d6-6b39-4c43-9459-1cc521827332\") " Nov 21 21:56:24 crc kubenswrapper[4727]: I1121 21:56:24.445180 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e00f1d6-6b39-4c43-9459-1cc521827332-public-tls-certs\") pod \"9e00f1d6-6b39-4c43-9459-1cc521827332\" (UID: \"9e00f1d6-6b39-4c43-9459-1cc521827332\") " Nov 21 21:56:24 crc kubenswrapper[4727]: I1121 21:56:24.445391 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9t94q\" (UniqueName: \"kubernetes.io/projected/9e00f1d6-6b39-4c43-9459-1cc521827332-kube-api-access-9t94q\") pod \"9e00f1d6-6b39-4c43-9459-1cc521827332\" (UID: \"9e00f1d6-6b39-4c43-9459-1cc521827332\") " Nov 21 21:56:24 crc kubenswrapper[4727]: I1121 21:56:24.445473 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e00f1d6-6b39-4c43-9459-1cc521827332-combined-ca-bundle\") pod \"9e00f1d6-6b39-4c43-9459-1cc521827332\" (UID: \"9e00f1d6-6b39-4c43-9459-1cc521827332\") " Nov 21 21:56:24 crc kubenswrapper[4727]: I1121 21:56:24.445538 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/9e00f1d6-6b39-4c43-9459-1cc521827332-internal-tls-certs\") pod \"9e00f1d6-6b39-4c43-9459-1cc521827332\" (UID: \"9e00f1d6-6b39-4c43-9459-1cc521827332\") " Nov 21 21:56:24 crc kubenswrapper[4727]: I1121 21:56:24.465671 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e00f1d6-6b39-4c43-9459-1cc521827332-scripts" (OuterVolumeSpecName: "scripts") pod "9e00f1d6-6b39-4c43-9459-1cc521827332" (UID: "9e00f1d6-6b39-4c43-9459-1cc521827332"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 21:56:24 crc kubenswrapper[4727]: I1121 21:56:24.483775 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e00f1d6-6b39-4c43-9459-1cc521827332-kube-api-access-9t94q" (OuterVolumeSpecName: "kube-api-access-9t94q") pod "9e00f1d6-6b39-4c43-9459-1cc521827332" (UID: "9e00f1d6-6b39-4c43-9459-1cc521827332"). InnerVolumeSpecName "kube-api-access-9t94q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 21:56:24 crc kubenswrapper[4727]: I1121 21:56:24.549948 4727 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e00f1d6-6b39-4c43-9459-1cc521827332-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 21:56:24 crc kubenswrapper[4727]: I1121 21:56:24.549989 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9t94q\" (UniqueName: \"kubernetes.io/projected/9e00f1d6-6b39-4c43-9459-1cc521827332-kube-api-access-9t94q\") on node \"crc\" DevicePath \"\"" Nov 21 21:56:24 crc kubenswrapper[4727]: I1121 21:56:24.562917 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e00f1d6-6b39-4c43-9459-1cc521827332-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "9e00f1d6-6b39-4c43-9459-1cc521827332" (UID: "9e00f1d6-6b39-4c43-9459-1cc521827332"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 21:56:24 crc kubenswrapper[4727]: I1121 21:56:24.660810 4727 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e00f1d6-6b39-4c43-9459-1cc521827332-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 21:56:24 crc kubenswrapper[4727]: I1121 21:56:24.748198 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e00f1d6-6b39-4c43-9459-1cc521827332-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9e00f1d6-6b39-4c43-9459-1cc521827332" (UID: "9e00f1d6-6b39-4c43-9459-1cc521827332"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 21:56:24 crc kubenswrapper[4727]: I1121 21:56:24.758185 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e00f1d6-6b39-4c43-9459-1cc521827332-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "9e00f1d6-6b39-4c43-9459-1cc521827332" (UID: "9e00f1d6-6b39-4c43-9459-1cc521827332"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 21:56:24 crc kubenswrapper[4727]: I1121 21:56:24.775339 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e00f1d6-6b39-4c43-9459-1cc521827332-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 21:56:24 crc kubenswrapper[4727]: I1121 21:56:24.775391 4727 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e00f1d6-6b39-4c43-9459-1cc521827332-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 21:56:24 crc kubenswrapper[4727]: I1121 21:56:24.855153 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e00f1d6-6b39-4c43-9459-1cc521827332-config-data" (OuterVolumeSpecName: "config-data") pod "9e00f1d6-6b39-4c43-9459-1cc521827332" (UID: "9e00f1d6-6b39-4c43-9459-1cc521827332"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 21:56:24 crc kubenswrapper[4727]: I1121 21:56:24.883822 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e00f1d6-6b39-4c43-9459-1cc521827332-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 21:56:25 crc kubenswrapper[4727]: I1121 21:56:25.287266 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"9e00f1d6-6b39-4c43-9459-1cc521827332","Type":"ContainerDied","Data":"6eef736a3f6d4b34ee84fe85a2969576998735d4fea691cc1d3a6d6e432ab80e"} Nov 21 21:56:25 crc kubenswrapper[4727]: I1121 21:56:25.287342 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 21 21:56:25 crc kubenswrapper[4727]: I1121 21:56:25.287377 4727 scope.go:117] "RemoveContainer" containerID="c1e6f55e6a61eb65b1905a21ead5000b9e5b47b2fb286c73233b066058f29a73" Nov 21 21:56:25 crc kubenswrapper[4727]: I1121 21:56:25.330424 4727 scope.go:117] "RemoveContainer" containerID="1c84862119ef78331528e13cdd5a0f4a615f84ed8851eb27b6bd7e92b8ab6c9b" Nov 21 21:56:25 crc kubenswrapper[4727]: I1121 21:56:25.336624 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Nov 21 21:56:25 crc kubenswrapper[4727]: I1121 21:56:25.360279 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Nov 21 21:56:25 crc kubenswrapper[4727]: I1121 21:56:25.376930 4727 scope.go:117] "RemoveContainer" containerID="af2bbf0f5d4af6ed81b39a50c6c390a50909d0462289492ce92d0de400f921ae" Nov 21 21:56:25 crc kubenswrapper[4727]: I1121 21:56:25.385079 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Nov 21 21:56:25 crc kubenswrapper[4727]: E1121 21:56:25.385842 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="297ffe25-5b73-4b41-b197-7016c31cb16b" containerName="aodh-db-sync" Nov 21 21:56:25 crc kubenswrapper[4727]: I1121 21:56:25.385863 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="297ffe25-5b73-4b41-b197-7016c31cb16b" containerName="aodh-db-sync" Nov 21 21:56:25 crc kubenswrapper[4727]: E1121 21:56:25.385878 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e00f1d6-6b39-4c43-9459-1cc521827332" containerName="aodh-api" Nov 21 21:56:25 crc kubenswrapper[4727]: I1121 21:56:25.385886 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e00f1d6-6b39-4c43-9459-1cc521827332" containerName="aodh-api" Nov 21 21:56:25 crc kubenswrapper[4727]: E1121 21:56:25.385912 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c702948c-e12e-47a2-a3b4-4d768c50ab5f" containerName="heat-engine" Nov 21 21:56:25 
crc kubenswrapper[4727]: I1121 21:56:25.385918 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="c702948c-e12e-47a2-a3b4-4d768c50ab5f" containerName="heat-engine" Nov 21 21:56:25 crc kubenswrapper[4727]: E1121 21:56:25.385938 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e00f1d6-6b39-4c43-9459-1cc521827332" containerName="aodh-notifier" Nov 21 21:56:25 crc kubenswrapper[4727]: I1121 21:56:25.385945 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e00f1d6-6b39-4c43-9459-1cc521827332" containerName="aodh-notifier" Nov 21 21:56:25 crc kubenswrapper[4727]: E1121 21:56:25.385988 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e00f1d6-6b39-4c43-9459-1cc521827332" containerName="aodh-evaluator" Nov 21 21:56:25 crc kubenswrapper[4727]: I1121 21:56:25.385996 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e00f1d6-6b39-4c43-9459-1cc521827332" containerName="aodh-evaluator" Nov 21 21:56:25 crc kubenswrapper[4727]: E1121 21:56:25.386011 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e00f1d6-6b39-4c43-9459-1cc521827332" containerName="aodh-listener" Nov 21 21:56:25 crc kubenswrapper[4727]: I1121 21:56:25.386018 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e00f1d6-6b39-4c43-9459-1cc521827332" containerName="aodh-listener" Nov 21 21:56:25 crc kubenswrapper[4727]: I1121 21:56:25.386263 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="297ffe25-5b73-4b41-b197-7016c31cb16b" containerName="aodh-db-sync" Nov 21 21:56:25 crc kubenswrapper[4727]: I1121 21:56:25.386285 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e00f1d6-6b39-4c43-9459-1cc521827332" containerName="aodh-api" Nov 21 21:56:25 crc kubenswrapper[4727]: I1121 21:56:25.386299 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e00f1d6-6b39-4c43-9459-1cc521827332" containerName="aodh-evaluator" Nov 21 21:56:25 crc kubenswrapper[4727]: I1121 
21:56:25.386313 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="c702948c-e12e-47a2-a3b4-4d768c50ab5f" containerName="heat-engine" Nov 21 21:56:25 crc kubenswrapper[4727]: I1121 21:56:25.386325 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e00f1d6-6b39-4c43-9459-1cc521827332" containerName="aodh-listener" Nov 21 21:56:25 crc kubenswrapper[4727]: I1121 21:56:25.386341 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e00f1d6-6b39-4c43-9459-1cc521827332" containerName="aodh-notifier" Nov 21 21:56:25 crc kubenswrapper[4727]: I1121 21:56:25.392623 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 21 21:56:25 crc kubenswrapper[4727]: I1121 21:56:25.396742 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Nov 21 21:56:25 crc kubenswrapper[4727]: I1121 21:56:25.396928 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Nov 21 21:56:25 crc kubenswrapper[4727]: I1121 21:56:25.397344 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-5w2vq" Nov 21 21:56:25 crc kubenswrapper[4727]: I1121 21:56:25.397705 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Nov 21 21:56:25 crc kubenswrapper[4727]: I1121 21:56:25.400543 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 21 21:56:25 crc kubenswrapper[4727]: I1121 21:56:25.407182 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Nov 21 21:56:25 crc kubenswrapper[4727]: I1121 21:56:25.476646 4727 scope.go:117] "RemoveContainer" containerID="b321ba7151fcf2ebdc29a561f372eaed7e210db0194e95bf47c23d911f4af23c" Nov 21 21:56:25 crc kubenswrapper[4727]: I1121 21:56:25.514352 4727 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c944475-f104-4512-8a86-1139d57d331f-combined-ca-bundle\") pod \"aodh-0\" (UID: \"5c944475-f104-4512-8a86-1139d57d331f\") " pod="openstack/aodh-0" Nov 21 21:56:25 crc kubenswrapper[4727]: I1121 21:56:25.514419 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c944475-f104-4512-8a86-1139d57d331f-internal-tls-certs\") pod \"aodh-0\" (UID: \"5c944475-f104-4512-8a86-1139d57d331f\") " pod="openstack/aodh-0" Nov 21 21:56:25 crc kubenswrapper[4727]: I1121 21:56:25.514467 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgb6r\" (UniqueName: \"kubernetes.io/projected/5c944475-f104-4512-8a86-1139d57d331f-kube-api-access-lgb6r\") pod \"aodh-0\" (UID: \"5c944475-f104-4512-8a86-1139d57d331f\") " pod="openstack/aodh-0" Nov 21 21:56:25 crc kubenswrapper[4727]: I1121 21:56:25.514493 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c944475-f104-4512-8a86-1139d57d331f-public-tls-certs\") pod \"aodh-0\" (UID: \"5c944475-f104-4512-8a86-1139d57d331f\") " pod="openstack/aodh-0" Nov 21 21:56:25 crc kubenswrapper[4727]: I1121 21:56:25.514511 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c944475-f104-4512-8a86-1139d57d331f-config-data\") pod \"aodh-0\" (UID: \"5c944475-f104-4512-8a86-1139d57d331f\") " pod="openstack/aodh-0" Nov 21 21:56:25 crc kubenswrapper[4727]: I1121 21:56:25.514608 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/5c944475-f104-4512-8a86-1139d57d331f-scripts\") pod \"aodh-0\" (UID: \"5c944475-f104-4512-8a86-1139d57d331f\") " pod="openstack/aodh-0" Nov 21 21:56:25 crc kubenswrapper[4727]: I1121 21:56:25.529718 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e00f1d6-6b39-4c43-9459-1cc521827332" path="/var/lib/kubelet/pods/9e00f1d6-6b39-4c43-9459-1cc521827332/volumes" Nov 21 21:56:25 crc kubenswrapper[4727]: I1121 21:56:25.617188 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c944475-f104-4512-8a86-1139d57d331f-internal-tls-certs\") pod \"aodh-0\" (UID: \"5c944475-f104-4512-8a86-1139d57d331f\") " pod="openstack/aodh-0" Nov 21 21:56:25 crc kubenswrapper[4727]: I1121 21:56:25.617321 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgb6r\" (UniqueName: \"kubernetes.io/projected/5c944475-f104-4512-8a86-1139d57d331f-kube-api-access-lgb6r\") pod \"aodh-0\" (UID: \"5c944475-f104-4512-8a86-1139d57d331f\") " pod="openstack/aodh-0" Nov 21 21:56:25 crc kubenswrapper[4727]: I1121 21:56:25.617378 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c944475-f104-4512-8a86-1139d57d331f-public-tls-certs\") pod \"aodh-0\" (UID: \"5c944475-f104-4512-8a86-1139d57d331f\") " pod="openstack/aodh-0" Nov 21 21:56:25 crc kubenswrapper[4727]: I1121 21:56:25.617398 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c944475-f104-4512-8a86-1139d57d331f-config-data\") pod \"aodh-0\" (UID: \"5c944475-f104-4512-8a86-1139d57d331f\") " pod="openstack/aodh-0" Nov 21 21:56:25 crc kubenswrapper[4727]: I1121 21:56:25.617645 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/5c944475-f104-4512-8a86-1139d57d331f-scripts\") pod \"aodh-0\" (UID: \"5c944475-f104-4512-8a86-1139d57d331f\") " pod="openstack/aodh-0" Nov 21 21:56:25 crc kubenswrapper[4727]: I1121 21:56:25.617728 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c944475-f104-4512-8a86-1139d57d331f-combined-ca-bundle\") pod \"aodh-0\" (UID: \"5c944475-f104-4512-8a86-1139d57d331f\") " pod="openstack/aodh-0" Nov 21 21:56:25 crc kubenswrapper[4727]: I1121 21:56:25.637662 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c944475-f104-4512-8a86-1139d57d331f-public-tls-certs\") pod \"aodh-0\" (UID: \"5c944475-f104-4512-8a86-1139d57d331f\") " pod="openstack/aodh-0" Nov 21 21:56:25 crc kubenswrapper[4727]: I1121 21:56:25.637742 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c944475-f104-4512-8a86-1139d57d331f-scripts\") pod \"aodh-0\" (UID: \"5c944475-f104-4512-8a86-1139d57d331f\") " pod="openstack/aodh-0" Nov 21 21:56:25 crc kubenswrapper[4727]: I1121 21:56:25.638021 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c944475-f104-4512-8a86-1139d57d331f-internal-tls-certs\") pod \"aodh-0\" (UID: \"5c944475-f104-4512-8a86-1139d57d331f\") " pod="openstack/aodh-0" Nov 21 21:56:25 crc kubenswrapper[4727]: I1121 21:56:25.638653 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c944475-f104-4512-8a86-1139d57d331f-config-data\") pod \"aodh-0\" (UID: \"5c944475-f104-4512-8a86-1139d57d331f\") " pod="openstack/aodh-0" Nov 21 21:56:25 crc kubenswrapper[4727]: I1121 21:56:25.643170 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/5c944475-f104-4512-8a86-1139d57d331f-combined-ca-bundle\") pod \"aodh-0\" (UID: \"5c944475-f104-4512-8a86-1139d57d331f\") " pod="openstack/aodh-0" Nov 21 21:56:25 crc kubenswrapper[4727]: I1121 21:56:25.645254 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgb6r\" (UniqueName: \"kubernetes.io/projected/5c944475-f104-4512-8a86-1139d57d331f-kube-api-access-lgb6r\") pod \"aodh-0\" (UID: \"5c944475-f104-4512-8a86-1139d57d331f\") " pod="openstack/aodh-0" Nov 21 21:56:25 crc kubenswrapper[4727]: I1121 21:56:25.719665 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 21 21:56:26 crc kubenswrapper[4727]: I1121 21:56:26.500590 4727 scope.go:117] "RemoveContainer" containerID="976f8c3044d73ef3cad2da771f4d7a96e16e3a29c21be656da90340aeebddb83" Nov 21 21:56:26 crc kubenswrapper[4727]: E1121 21:56:26.501564 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:56:26 crc kubenswrapper[4727]: I1121 21:56:26.892293 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 21 21:56:27 crc kubenswrapper[4727]: I1121 21:56:27.313119 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"5c944475-f104-4512-8a86-1139d57d331f","Type":"ContainerStarted","Data":"f6ef977c42c4fa2b053f241335e99e2f5f56c69419574c3a1a42d1f41080f1f6"} Nov 21 21:56:28 crc kubenswrapper[4727]: I1121 21:56:28.328684 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" 
event={"ID":"5c944475-f104-4512-8a86-1139d57d331f","Type":"ContainerStarted","Data":"7a99a36348dabf1b5e27c6233aca1ea7906ea5f32b2fdb61b398254f3682fdbc"} Nov 21 21:56:29 crc kubenswrapper[4727]: I1121 21:56:29.346044 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"5c944475-f104-4512-8a86-1139d57d331f","Type":"ContainerStarted","Data":"f50d8815ff1971c7ed8e3e252f9aa098192ecc10fc3b03e2df71aae7b78dc51f"} Nov 21 21:56:31 crc kubenswrapper[4727]: I1121 21:56:31.378764 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"5c944475-f104-4512-8a86-1139d57d331f","Type":"ContainerStarted","Data":"aa82ae1ce0600f018fe4ba0725906cdbf3fac33205f920b6dabec901ae435d40"} Nov 21 21:56:32 crc kubenswrapper[4727]: I1121 21:56:32.393843 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"5c944475-f104-4512-8a86-1139d57d331f","Type":"ContainerStarted","Data":"27a235afb90d9dc71eb753d022c58202afeb6e3cd31b4f9e15d6ff05aec8d273"} Nov 21 21:56:32 crc kubenswrapper[4727]: I1121 21:56:32.433694 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.780383975 podStartE2EDuration="7.433668348s" podCreationTimestamp="2025-11-21 21:56:25 +0000 UTC" firstStartedPulling="2025-11-21 21:56:26.860490719 +0000 UTC m=+6592.046675763" lastFinishedPulling="2025-11-21 21:56:31.513775092 +0000 UTC m=+6596.699960136" observedRunningTime="2025-11-21 21:56:32.421281852 +0000 UTC m=+6597.607466886" watchObservedRunningTime="2025-11-21 21:56:32.433668348 +0000 UTC m=+6597.619853392" Nov 21 21:56:41 crc kubenswrapper[4727]: I1121 21:56:41.500542 4727 scope.go:117] "RemoveContainer" containerID="976f8c3044d73ef3cad2da771f4d7a96e16e3a29c21be656da90340aeebddb83" Nov 21 21:56:41 crc kubenswrapper[4727]: E1121 21:56:41.501895 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:56:54 crc kubenswrapper[4727]: I1121 21:56:54.500163 4727 scope.go:117] "RemoveContainer" containerID="976f8c3044d73ef3cad2da771f4d7a96e16e3a29c21be656da90340aeebddb83" Nov 21 21:56:54 crc kubenswrapper[4727]: E1121 21:56:54.501637 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:57:08 crc kubenswrapper[4727]: I1121 21:57:08.500877 4727 scope.go:117] "RemoveContainer" containerID="976f8c3044d73ef3cad2da771f4d7a96e16e3a29c21be656da90340aeebddb83" Nov 21 21:57:08 crc kubenswrapper[4727]: E1121 21:57:08.502023 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:57:21 crc kubenswrapper[4727]: I1121 21:57:21.500339 4727 scope.go:117] "RemoveContainer" containerID="976f8c3044d73ef3cad2da771f4d7a96e16e3a29c21be656da90340aeebddb83" Nov 21 21:57:21 crc kubenswrapper[4727]: E1121 21:57:21.501871 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:57:32 crc kubenswrapper[4727]: I1121 21:57:32.500603 4727 scope.go:117] "RemoveContainer" containerID="976f8c3044d73ef3cad2da771f4d7a96e16e3a29c21be656da90340aeebddb83" Nov 21 21:57:32 crc kubenswrapper[4727]: E1121 21:57:32.502128 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:57:47 crc kubenswrapper[4727]: I1121 21:57:47.499480 4727 scope.go:117] "RemoveContainer" containerID="976f8c3044d73ef3cad2da771f4d7a96e16e3a29c21be656da90340aeebddb83" Nov 21 21:57:47 crc kubenswrapper[4727]: E1121 21:57:47.500471 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:58:01 crc kubenswrapper[4727]: I1121 21:58:01.548572 4727 scope.go:117] "RemoveContainer" containerID="976f8c3044d73ef3cad2da771f4d7a96e16e3a29c21be656da90340aeebddb83" Nov 21 21:58:01 crc kubenswrapper[4727]: E1121 21:58:01.564148 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:58:13 crc kubenswrapper[4727]: I1121 21:58:13.499252 4727 scope.go:117] "RemoveContainer" containerID="976f8c3044d73ef3cad2da771f4d7a96e16e3a29c21be656da90340aeebddb83" Nov 21 21:58:13 crc kubenswrapper[4727]: E1121 21:58:13.500939 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:58:24 crc kubenswrapper[4727]: I1121 21:58:24.500408 4727 scope.go:117] "RemoveContainer" containerID="976f8c3044d73ef3cad2da771f4d7a96e16e3a29c21be656da90340aeebddb83" Nov 21 21:58:24 crc kubenswrapper[4727]: E1121 21:58:24.501643 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:58:35 crc kubenswrapper[4727]: I1121 21:58:35.526827 4727 scope.go:117] "RemoveContainer" containerID="976f8c3044d73ef3cad2da771f4d7a96e16e3a29c21be656da90340aeebddb83" Nov 21 21:58:35 crc kubenswrapper[4727]: E1121 21:58:35.528294 4727 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 21:58:46 crc kubenswrapper[4727]: I1121 21:58:46.498893 4727 scope.go:117] "RemoveContainer" containerID="976f8c3044d73ef3cad2da771f4d7a96e16e3a29c21be656da90340aeebddb83" Nov 21 21:58:47 crc kubenswrapper[4727]: I1121 21:58:47.150173 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" event={"ID":"b58aef8f-f223-47d8-a2e6-4a80aeeeec42","Type":"ContainerStarted","Data":"d924f42ebf9868b5bc79cf8d2700faf02bb037f988baf408255bee572edc3591"} Nov 21 21:58:54 crc kubenswrapper[4727]: I1121 21:58:54.989330 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hbfmf"] Nov 21 21:58:54 crc kubenswrapper[4727]: I1121 21:58:54.996745 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hbfmf" Nov 21 21:58:55 crc kubenswrapper[4727]: I1121 21:58:55.007793 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hbfmf"] Nov 21 21:58:55 crc kubenswrapper[4727]: I1121 21:58:55.084574 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvdxw\" (UniqueName: \"kubernetes.io/projected/aa0e8152-86a7-4cf9-b7c4-18d7df71f7cb-kube-api-access-vvdxw\") pod \"redhat-marketplace-hbfmf\" (UID: \"aa0e8152-86a7-4cf9-b7c4-18d7df71f7cb\") " pod="openshift-marketplace/redhat-marketplace-hbfmf" Nov 21 21:58:55 crc kubenswrapper[4727]: I1121 21:58:55.084672 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa0e8152-86a7-4cf9-b7c4-18d7df71f7cb-catalog-content\") pod \"redhat-marketplace-hbfmf\" (UID: \"aa0e8152-86a7-4cf9-b7c4-18d7df71f7cb\") " pod="openshift-marketplace/redhat-marketplace-hbfmf" Nov 21 21:58:55 crc kubenswrapper[4727]: I1121 21:58:55.085057 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa0e8152-86a7-4cf9-b7c4-18d7df71f7cb-utilities\") pod \"redhat-marketplace-hbfmf\" (UID: \"aa0e8152-86a7-4cf9-b7c4-18d7df71f7cb\") " pod="openshift-marketplace/redhat-marketplace-hbfmf" Nov 21 21:58:55 crc kubenswrapper[4727]: I1121 21:58:55.189000 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvdxw\" (UniqueName: \"kubernetes.io/projected/aa0e8152-86a7-4cf9-b7c4-18d7df71f7cb-kube-api-access-vvdxw\") pod \"redhat-marketplace-hbfmf\" (UID: \"aa0e8152-86a7-4cf9-b7c4-18d7df71f7cb\") " pod="openshift-marketplace/redhat-marketplace-hbfmf" Nov 21 21:58:55 crc kubenswrapper[4727]: I1121 21:58:55.189123 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa0e8152-86a7-4cf9-b7c4-18d7df71f7cb-catalog-content\") pod \"redhat-marketplace-hbfmf\" (UID: \"aa0e8152-86a7-4cf9-b7c4-18d7df71f7cb\") " pod="openshift-marketplace/redhat-marketplace-hbfmf" Nov 21 21:58:55 crc kubenswrapper[4727]: I1121 21:58:55.189276 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa0e8152-86a7-4cf9-b7c4-18d7df71f7cb-utilities\") pod \"redhat-marketplace-hbfmf\" (UID: \"aa0e8152-86a7-4cf9-b7c4-18d7df71f7cb\") " pod="openshift-marketplace/redhat-marketplace-hbfmf" Nov 21 21:58:55 crc kubenswrapper[4727]: I1121 21:58:55.190584 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa0e8152-86a7-4cf9-b7c4-18d7df71f7cb-catalog-content\") pod \"redhat-marketplace-hbfmf\" (UID: \"aa0e8152-86a7-4cf9-b7c4-18d7df71f7cb\") " pod="openshift-marketplace/redhat-marketplace-hbfmf" Nov 21 21:58:55 crc kubenswrapper[4727]: I1121 21:58:55.190647 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa0e8152-86a7-4cf9-b7c4-18d7df71f7cb-utilities\") pod \"redhat-marketplace-hbfmf\" (UID: \"aa0e8152-86a7-4cf9-b7c4-18d7df71f7cb\") " pod="openshift-marketplace/redhat-marketplace-hbfmf" Nov 21 21:58:55 crc kubenswrapper[4727]: I1121 21:58:55.223662 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvdxw\" (UniqueName: \"kubernetes.io/projected/aa0e8152-86a7-4cf9-b7c4-18d7df71f7cb-kube-api-access-vvdxw\") pod \"redhat-marketplace-hbfmf\" (UID: \"aa0e8152-86a7-4cf9-b7c4-18d7df71f7cb\") " pod="openshift-marketplace/redhat-marketplace-hbfmf" Nov 21 21:58:55 crc kubenswrapper[4727]: I1121 21:58:55.337657 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hbfmf" Nov 21 21:58:56 crc kubenswrapper[4727]: I1121 21:58:56.360287 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hbfmf"] Nov 21 21:58:56 crc kubenswrapper[4727]: W1121 21:58:56.365016 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa0e8152_86a7_4cf9_b7c4_18d7df71f7cb.slice/crio-440f1d9c62c39e0d80bec0a9e543069267abcaaf06776499d84547d46b2f6dcd WatchSource:0}: Error finding container 440f1d9c62c39e0d80bec0a9e543069267abcaaf06776499d84547d46b2f6dcd: Status 404 returned error can't find the container with id 440f1d9c62c39e0d80bec0a9e543069267abcaaf06776499d84547d46b2f6dcd Nov 21 21:58:57 crc kubenswrapper[4727]: I1121 21:58:57.282690 4727 generic.go:334] "Generic (PLEG): container finished" podID="aa0e8152-86a7-4cf9-b7c4-18d7df71f7cb" containerID="a5c312e18f4120425b401e3493b3bd4d5178cb0f7f592d1652bff5c704dc849f" exitCode=0 Nov 21 21:58:57 crc kubenswrapper[4727]: I1121 21:58:57.282799 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hbfmf" event={"ID":"aa0e8152-86a7-4cf9-b7c4-18d7df71f7cb","Type":"ContainerDied","Data":"a5c312e18f4120425b401e3493b3bd4d5178cb0f7f592d1652bff5c704dc849f"} Nov 21 21:58:57 crc kubenswrapper[4727]: I1121 21:58:57.283144 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hbfmf" event={"ID":"aa0e8152-86a7-4cf9-b7c4-18d7df71f7cb","Type":"ContainerStarted","Data":"440f1d9c62c39e0d80bec0a9e543069267abcaaf06776499d84547d46b2f6dcd"} Nov 21 21:58:58 crc kubenswrapper[4727]: I1121 21:58:58.297015 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hbfmf" 
event={"ID":"aa0e8152-86a7-4cf9-b7c4-18d7df71f7cb","Type":"ContainerStarted","Data":"6376f80270a231a313876cdaf9dfa4f4e092836c6dd304f05f58d35ccb458e64"} Nov 21 21:58:59 crc kubenswrapper[4727]: I1121 21:58:59.313036 4727 generic.go:334] "Generic (PLEG): container finished" podID="aa0e8152-86a7-4cf9-b7c4-18d7df71f7cb" containerID="6376f80270a231a313876cdaf9dfa4f4e092836c6dd304f05f58d35ccb458e64" exitCode=0 Nov 21 21:58:59 crc kubenswrapper[4727]: I1121 21:58:59.313108 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hbfmf" event={"ID":"aa0e8152-86a7-4cf9-b7c4-18d7df71f7cb","Type":"ContainerDied","Data":"6376f80270a231a313876cdaf9dfa4f4e092836c6dd304f05f58d35ccb458e64"} Nov 21 21:59:00 crc kubenswrapper[4727]: I1121 21:59:00.323348 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hbfmf" event={"ID":"aa0e8152-86a7-4cf9-b7c4-18d7df71f7cb","Type":"ContainerStarted","Data":"fbebbb099ee5743a17c0783e8b98cb1550393ddf972938c5b3ceced0dda378ff"} Nov 21 21:59:00 crc kubenswrapper[4727]: I1121 21:59:00.348784 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hbfmf" podStartSLOduration=3.904990748 podStartE2EDuration="6.348760222s" podCreationTimestamp="2025-11-21 21:58:54 +0000 UTC" firstStartedPulling="2025-11-21 21:58:57.286277137 +0000 UTC m=+6742.472462181" lastFinishedPulling="2025-11-21 21:58:59.730046611 +0000 UTC m=+6744.916231655" observedRunningTime="2025-11-21 21:59:00.33874421 +0000 UTC m=+6745.524929254" watchObservedRunningTime="2025-11-21 21:59:00.348760222 +0000 UTC m=+6745.534945266" Nov 21 21:59:05 crc kubenswrapper[4727]: I1121 21:59:05.338056 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hbfmf" Nov 21 21:59:05 crc kubenswrapper[4727]: I1121 21:59:05.338627 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hbfmf" Nov 21 21:59:05 crc kubenswrapper[4727]: I1121 21:59:05.410205 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hbfmf" Nov 21 21:59:05 crc kubenswrapper[4727]: I1121 21:59:05.481401 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hbfmf" Nov 21 21:59:05 crc kubenswrapper[4727]: I1121 21:59:05.660694 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hbfmf"] Nov 21 21:59:07 crc kubenswrapper[4727]: I1121 21:59:07.404850 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hbfmf" podUID="aa0e8152-86a7-4cf9-b7c4-18d7df71f7cb" containerName="registry-server" containerID="cri-o://fbebbb099ee5743a17c0783e8b98cb1550393ddf972938c5b3ceced0dda378ff" gracePeriod=2 Nov 21 21:59:08 crc kubenswrapper[4727]: I1121 21:59:08.128424 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hbfmf" Nov 21 21:59:08 crc kubenswrapper[4727]: I1121 21:59:08.221855 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa0e8152-86a7-4cf9-b7c4-18d7df71f7cb-utilities\") pod \"aa0e8152-86a7-4cf9-b7c4-18d7df71f7cb\" (UID: \"aa0e8152-86a7-4cf9-b7c4-18d7df71f7cb\") " Nov 21 21:59:08 crc kubenswrapper[4727]: I1121 21:59:08.222306 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa0e8152-86a7-4cf9-b7c4-18d7df71f7cb-catalog-content\") pod \"aa0e8152-86a7-4cf9-b7c4-18d7df71f7cb\" (UID: \"aa0e8152-86a7-4cf9-b7c4-18d7df71f7cb\") " Nov 21 21:59:08 crc kubenswrapper[4727]: I1121 21:59:08.222430 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvdxw\" (UniqueName: \"kubernetes.io/projected/aa0e8152-86a7-4cf9-b7c4-18d7df71f7cb-kube-api-access-vvdxw\") pod \"aa0e8152-86a7-4cf9-b7c4-18d7df71f7cb\" (UID: \"aa0e8152-86a7-4cf9-b7c4-18d7df71f7cb\") " Nov 21 21:59:08 crc kubenswrapper[4727]: I1121 21:59:08.222764 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa0e8152-86a7-4cf9-b7c4-18d7df71f7cb-utilities" (OuterVolumeSpecName: "utilities") pod "aa0e8152-86a7-4cf9-b7c4-18d7df71f7cb" (UID: "aa0e8152-86a7-4cf9-b7c4-18d7df71f7cb"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 21:59:08 crc kubenswrapper[4727]: I1121 21:59:08.223602 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa0e8152-86a7-4cf9-b7c4-18d7df71f7cb-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 21:59:08 crc kubenswrapper[4727]: I1121 21:59:08.234587 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa0e8152-86a7-4cf9-b7c4-18d7df71f7cb-kube-api-access-vvdxw" (OuterVolumeSpecName: "kube-api-access-vvdxw") pod "aa0e8152-86a7-4cf9-b7c4-18d7df71f7cb" (UID: "aa0e8152-86a7-4cf9-b7c4-18d7df71f7cb"). InnerVolumeSpecName "kube-api-access-vvdxw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 21:59:08 crc kubenswrapper[4727]: I1121 21:59:08.246930 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa0e8152-86a7-4cf9-b7c4-18d7df71f7cb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aa0e8152-86a7-4cf9-b7c4-18d7df71f7cb" (UID: "aa0e8152-86a7-4cf9-b7c4-18d7df71f7cb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 21:59:08 crc kubenswrapper[4727]: I1121 21:59:08.326172 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa0e8152-86a7-4cf9-b7c4-18d7df71f7cb-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 21:59:08 crc kubenswrapper[4727]: I1121 21:59:08.326211 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvdxw\" (UniqueName: \"kubernetes.io/projected/aa0e8152-86a7-4cf9-b7c4-18d7df71f7cb-kube-api-access-vvdxw\") on node \"crc\" DevicePath \"\"" Nov 21 21:59:08 crc kubenswrapper[4727]: I1121 21:59:08.423550 4727 generic.go:334] "Generic (PLEG): container finished" podID="aa0e8152-86a7-4cf9-b7c4-18d7df71f7cb" containerID="fbebbb099ee5743a17c0783e8b98cb1550393ddf972938c5b3ceced0dda378ff" exitCode=0 Nov 21 21:59:08 crc kubenswrapper[4727]: I1121 21:59:08.423630 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hbfmf" event={"ID":"aa0e8152-86a7-4cf9-b7c4-18d7df71f7cb","Type":"ContainerDied","Data":"fbebbb099ee5743a17c0783e8b98cb1550393ddf972938c5b3ceced0dda378ff"} Nov 21 21:59:08 crc kubenswrapper[4727]: I1121 21:59:08.423747 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hbfmf" event={"ID":"aa0e8152-86a7-4cf9-b7c4-18d7df71f7cb","Type":"ContainerDied","Data":"440f1d9c62c39e0d80bec0a9e543069267abcaaf06776499d84547d46b2f6dcd"} Nov 21 21:59:08 crc kubenswrapper[4727]: I1121 21:59:08.423783 4727 scope.go:117] "RemoveContainer" containerID="fbebbb099ee5743a17c0783e8b98cb1550393ddf972938c5b3ceced0dda378ff" Nov 21 21:59:08 crc kubenswrapper[4727]: I1121 21:59:08.424049 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hbfmf" Nov 21 21:59:08 crc kubenswrapper[4727]: I1121 21:59:08.457245 4727 scope.go:117] "RemoveContainer" containerID="6376f80270a231a313876cdaf9dfa4f4e092836c6dd304f05f58d35ccb458e64" Nov 21 21:59:08 crc kubenswrapper[4727]: I1121 21:59:08.485190 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hbfmf"] Nov 21 21:59:08 crc kubenswrapper[4727]: I1121 21:59:08.496122 4727 scope.go:117] "RemoveContainer" containerID="a5c312e18f4120425b401e3493b3bd4d5178cb0f7f592d1652bff5c704dc849f" Nov 21 21:59:08 crc kubenswrapper[4727]: I1121 21:59:08.504213 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hbfmf"] Nov 21 21:59:08 crc kubenswrapper[4727]: I1121 21:59:08.541991 4727 scope.go:117] "RemoveContainer" containerID="fbebbb099ee5743a17c0783e8b98cb1550393ddf972938c5b3ceced0dda378ff" Nov 21 21:59:08 crc kubenswrapper[4727]: E1121 21:59:08.542616 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbebbb099ee5743a17c0783e8b98cb1550393ddf972938c5b3ceced0dda378ff\": container with ID starting with fbebbb099ee5743a17c0783e8b98cb1550393ddf972938c5b3ceced0dda378ff not found: ID does not exist" containerID="fbebbb099ee5743a17c0783e8b98cb1550393ddf972938c5b3ceced0dda378ff" Nov 21 21:59:08 crc kubenswrapper[4727]: I1121 21:59:08.542695 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbebbb099ee5743a17c0783e8b98cb1550393ddf972938c5b3ceced0dda378ff"} err="failed to get container status \"fbebbb099ee5743a17c0783e8b98cb1550393ddf972938c5b3ceced0dda378ff\": rpc error: code = NotFound desc = could not find container \"fbebbb099ee5743a17c0783e8b98cb1550393ddf972938c5b3ceced0dda378ff\": container with ID starting with fbebbb099ee5743a17c0783e8b98cb1550393ddf972938c5b3ceced0dda378ff not found: 
ID does not exist" Nov 21 21:59:08 crc kubenswrapper[4727]: I1121 21:59:08.542732 4727 scope.go:117] "RemoveContainer" containerID="6376f80270a231a313876cdaf9dfa4f4e092836c6dd304f05f58d35ccb458e64" Nov 21 21:59:08 crc kubenswrapper[4727]: E1121 21:59:08.543237 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6376f80270a231a313876cdaf9dfa4f4e092836c6dd304f05f58d35ccb458e64\": container with ID starting with 6376f80270a231a313876cdaf9dfa4f4e092836c6dd304f05f58d35ccb458e64 not found: ID does not exist" containerID="6376f80270a231a313876cdaf9dfa4f4e092836c6dd304f05f58d35ccb458e64" Nov 21 21:59:08 crc kubenswrapper[4727]: I1121 21:59:08.543282 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6376f80270a231a313876cdaf9dfa4f4e092836c6dd304f05f58d35ccb458e64"} err="failed to get container status \"6376f80270a231a313876cdaf9dfa4f4e092836c6dd304f05f58d35ccb458e64\": rpc error: code = NotFound desc = could not find container \"6376f80270a231a313876cdaf9dfa4f4e092836c6dd304f05f58d35ccb458e64\": container with ID starting with 6376f80270a231a313876cdaf9dfa4f4e092836c6dd304f05f58d35ccb458e64 not found: ID does not exist" Nov 21 21:59:08 crc kubenswrapper[4727]: I1121 21:59:08.543314 4727 scope.go:117] "RemoveContainer" containerID="a5c312e18f4120425b401e3493b3bd4d5178cb0f7f592d1652bff5c704dc849f" Nov 21 21:59:08 crc kubenswrapper[4727]: E1121 21:59:08.543637 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5c312e18f4120425b401e3493b3bd4d5178cb0f7f592d1652bff5c704dc849f\": container with ID starting with a5c312e18f4120425b401e3493b3bd4d5178cb0f7f592d1652bff5c704dc849f not found: ID does not exist" containerID="a5c312e18f4120425b401e3493b3bd4d5178cb0f7f592d1652bff5c704dc849f" Nov 21 21:59:08 crc kubenswrapper[4727]: I1121 21:59:08.543666 4727 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5c312e18f4120425b401e3493b3bd4d5178cb0f7f592d1652bff5c704dc849f"} err="failed to get container status \"a5c312e18f4120425b401e3493b3bd4d5178cb0f7f592d1652bff5c704dc849f\": rpc error: code = NotFound desc = could not find container \"a5c312e18f4120425b401e3493b3bd4d5178cb0f7f592d1652bff5c704dc849f\": container with ID starting with a5c312e18f4120425b401e3493b3bd4d5178cb0f7f592d1652bff5c704dc849f not found: ID does not exist" Nov 21 21:59:09 crc kubenswrapper[4727]: I1121 21:59:09.514199 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa0e8152-86a7-4cf9-b7c4-18d7df71f7cb" path="/var/lib/kubelet/pods/aa0e8152-86a7-4cf9-b7c4-18d7df71f7cb/volumes" Nov 21 22:00:00 crc kubenswrapper[4727]: I1121 22:00:00.192500 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396040-mw8gn"] Nov 21 22:00:00 crc kubenswrapper[4727]: E1121 22:00:00.194660 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa0e8152-86a7-4cf9-b7c4-18d7df71f7cb" containerName="extract-utilities" Nov 21 22:00:00 crc kubenswrapper[4727]: I1121 22:00:00.194690 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa0e8152-86a7-4cf9-b7c4-18d7df71f7cb" containerName="extract-utilities" Nov 21 22:00:00 crc kubenswrapper[4727]: E1121 22:00:00.194703 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa0e8152-86a7-4cf9-b7c4-18d7df71f7cb" containerName="extract-content" Nov 21 22:00:00 crc kubenswrapper[4727]: I1121 22:00:00.194712 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa0e8152-86a7-4cf9-b7c4-18d7df71f7cb" containerName="extract-content" Nov 21 22:00:00 crc kubenswrapper[4727]: E1121 22:00:00.194810 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa0e8152-86a7-4cf9-b7c4-18d7df71f7cb" containerName="registry-server" Nov 21 22:00:00 crc kubenswrapper[4727]: I1121 
22:00:00.194819 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa0e8152-86a7-4cf9-b7c4-18d7df71f7cb" containerName="registry-server" Nov 21 22:00:00 crc kubenswrapper[4727]: I1121 22:00:00.195210 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa0e8152-86a7-4cf9-b7c4-18d7df71f7cb" containerName="registry-server" Nov 21 22:00:00 crc kubenswrapper[4727]: I1121 22:00:00.196808 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396040-mw8gn" Nov 21 22:00:00 crc kubenswrapper[4727]: I1121 22:00:00.201517 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 21 22:00:00 crc kubenswrapper[4727]: I1121 22:00:00.202173 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 21 22:00:00 crc kubenswrapper[4727]: I1121 22:00:00.206384 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396040-mw8gn"] Nov 21 22:00:00 crc kubenswrapper[4727]: I1121 22:00:00.243274 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbpzw\" (UniqueName: \"kubernetes.io/projected/88a15a04-1237-49c5-bdd6-1aadf3eb3cb6-kube-api-access-fbpzw\") pod \"collect-profiles-29396040-mw8gn\" (UID: \"88a15a04-1237-49c5-bdd6-1aadf3eb3cb6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396040-mw8gn" Nov 21 22:00:00 crc kubenswrapper[4727]: I1121 22:00:00.243434 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/88a15a04-1237-49c5-bdd6-1aadf3eb3cb6-config-volume\") pod \"collect-profiles-29396040-mw8gn\" (UID: \"88a15a04-1237-49c5-bdd6-1aadf3eb3cb6\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29396040-mw8gn" Nov 21 22:00:00 crc kubenswrapper[4727]: I1121 22:00:00.243712 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/88a15a04-1237-49c5-bdd6-1aadf3eb3cb6-secret-volume\") pod \"collect-profiles-29396040-mw8gn\" (UID: \"88a15a04-1237-49c5-bdd6-1aadf3eb3cb6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396040-mw8gn" Nov 21 22:00:00 crc kubenswrapper[4727]: I1121 22:00:00.346475 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbpzw\" (UniqueName: \"kubernetes.io/projected/88a15a04-1237-49c5-bdd6-1aadf3eb3cb6-kube-api-access-fbpzw\") pod \"collect-profiles-29396040-mw8gn\" (UID: \"88a15a04-1237-49c5-bdd6-1aadf3eb3cb6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396040-mw8gn" Nov 21 22:00:00 crc kubenswrapper[4727]: I1121 22:00:00.346600 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/88a15a04-1237-49c5-bdd6-1aadf3eb3cb6-config-volume\") pod \"collect-profiles-29396040-mw8gn\" (UID: \"88a15a04-1237-49c5-bdd6-1aadf3eb3cb6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396040-mw8gn" Nov 21 22:00:00 crc kubenswrapper[4727]: I1121 22:00:00.346678 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/88a15a04-1237-49c5-bdd6-1aadf3eb3cb6-secret-volume\") pod \"collect-profiles-29396040-mw8gn\" (UID: \"88a15a04-1237-49c5-bdd6-1aadf3eb3cb6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396040-mw8gn" Nov 21 22:00:00 crc kubenswrapper[4727]: I1121 22:00:00.347796 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/88a15a04-1237-49c5-bdd6-1aadf3eb3cb6-config-volume\") pod \"collect-profiles-29396040-mw8gn\" (UID: \"88a15a04-1237-49c5-bdd6-1aadf3eb3cb6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396040-mw8gn" Nov 21 22:00:00 crc kubenswrapper[4727]: I1121 22:00:00.355859 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/88a15a04-1237-49c5-bdd6-1aadf3eb3cb6-secret-volume\") pod \"collect-profiles-29396040-mw8gn\" (UID: \"88a15a04-1237-49c5-bdd6-1aadf3eb3cb6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396040-mw8gn" Nov 21 22:00:00 crc kubenswrapper[4727]: I1121 22:00:00.381744 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbpzw\" (UniqueName: \"kubernetes.io/projected/88a15a04-1237-49c5-bdd6-1aadf3eb3cb6-kube-api-access-fbpzw\") pod \"collect-profiles-29396040-mw8gn\" (UID: \"88a15a04-1237-49c5-bdd6-1aadf3eb3cb6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396040-mw8gn" Nov 21 22:00:00 crc kubenswrapper[4727]: I1121 22:00:00.525243 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396040-mw8gn" Nov 21 22:00:01 crc kubenswrapper[4727]: I1121 22:00:01.072567 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396040-mw8gn"] Nov 21 22:00:01 crc kubenswrapper[4727]: I1121 22:00:01.230627 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396040-mw8gn" event={"ID":"88a15a04-1237-49c5-bdd6-1aadf3eb3cb6","Type":"ContainerStarted","Data":"867b34262cff94e45c234b0900747768208cb106636be4164a1f0b4a129199f0"} Nov 21 22:00:02 crc kubenswrapper[4727]: I1121 22:00:02.244155 4727 generic.go:334] "Generic (PLEG): container finished" podID="88a15a04-1237-49c5-bdd6-1aadf3eb3cb6" containerID="4c259bf7b7899ccd3677aacfc62ecbfc59409e4071e8f2c7379b38bd6fb49873" exitCode=0 Nov 21 22:00:02 crc kubenswrapper[4727]: I1121 22:00:02.244273 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396040-mw8gn" event={"ID":"88a15a04-1237-49c5-bdd6-1aadf3eb3cb6","Type":"ContainerDied","Data":"4c259bf7b7899ccd3677aacfc62ecbfc59409e4071e8f2c7379b38bd6fb49873"} Nov 21 22:00:03 crc kubenswrapper[4727]: I1121 22:00:03.802707 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396040-mw8gn" Nov 21 22:00:03 crc kubenswrapper[4727]: I1121 22:00:03.894869 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/88a15a04-1237-49c5-bdd6-1aadf3eb3cb6-config-volume\") pod \"88a15a04-1237-49c5-bdd6-1aadf3eb3cb6\" (UID: \"88a15a04-1237-49c5-bdd6-1aadf3eb3cb6\") " Nov 21 22:00:03 crc kubenswrapper[4727]: I1121 22:00:03.895852 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/88a15a04-1237-49c5-bdd6-1aadf3eb3cb6-secret-volume\") pod \"88a15a04-1237-49c5-bdd6-1aadf3eb3cb6\" (UID: \"88a15a04-1237-49c5-bdd6-1aadf3eb3cb6\") " Nov 21 22:00:03 crc kubenswrapper[4727]: I1121 22:00:03.896040 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88a15a04-1237-49c5-bdd6-1aadf3eb3cb6-config-volume" (OuterVolumeSpecName: "config-volume") pod "88a15a04-1237-49c5-bdd6-1aadf3eb3cb6" (UID: "88a15a04-1237-49c5-bdd6-1aadf3eb3cb6"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 22:00:03 crc kubenswrapper[4727]: I1121 22:00:03.896221 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbpzw\" (UniqueName: \"kubernetes.io/projected/88a15a04-1237-49c5-bdd6-1aadf3eb3cb6-kube-api-access-fbpzw\") pod \"88a15a04-1237-49c5-bdd6-1aadf3eb3cb6\" (UID: \"88a15a04-1237-49c5-bdd6-1aadf3eb3cb6\") " Nov 21 22:00:03 crc kubenswrapper[4727]: I1121 22:00:03.897832 4727 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/88a15a04-1237-49c5-bdd6-1aadf3eb3cb6-config-volume\") on node \"crc\" DevicePath \"\"" Nov 21 22:00:03 crc kubenswrapper[4727]: I1121 22:00:03.906734 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88a15a04-1237-49c5-bdd6-1aadf3eb3cb6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "88a15a04-1237-49c5-bdd6-1aadf3eb3cb6" (UID: "88a15a04-1237-49c5-bdd6-1aadf3eb3cb6"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 22:00:03 crc kubenswrapper[4727]: I1121 22:00:03.906750 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88a15a04-1237-49c5-bdd6-1aadf3eb3cb6-kube-api-access-fbpzw" (OuterVolumeSpecName: "kube-api-access-fbpzw") pod "88a15a04-1237-49c5-bdd6-1aadf3eb3cb6" (UID: "88a15a04-1237-49c5-bdd6-1aadf3eb3cb6"). InnerVolumeSpecName "kube-api-access-fbpzw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 22:00:04 crc kubenswrapper[4727]: I1121 22:00:04.000261 4727 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/88a15a04-1237-49c5-bdd6-1aadf3eb3cb6-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 21 22:00:04 crc kubenswrapper[4727]: I1121 22:00:04.000303 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbpzw\" (UniqueName: \"kubernetes.io/projected/88a15a04-1237-49c5-bdd6-1aadf3eb3cb6-kube-api-access-fbpzw\") on node \"crc\" DevicePath \"\"" Nov 21 22:00:04 crc kubenswrapper[4727]: I1121 22:00:04.269844 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396040-mw8gn" event={"ID":"88a15a04-1237-49c5-bdd6-1aadf3eb3cb6","Type":"ContainerDied","Data":"867b34262cff94e45c234b0900747768208cb106636be4164a1f0b4a129199f0"} Nov 21 22:00:04 crc kubenswrapper[4727]: I1121 22:00:04.270331 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="867b34262cff94e45c234b0900747768208cb106636be4164a1f0b4a129199f0" Nov 21 22:00:04 crc kubenswrapper[4727]: I1121 22:00:04.269903 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396040-mw8gn" Nov 21 22:00:04 crc kubenswrapper[4727]: I1121 22:00:04.917467 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395995-wp6hg"] Nov 21 22:00:04 crc kubenswrapper[4727]: I1121 22:00:04.930057 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395995-wp6hg"] Nov 21 22:00:05 crc kubenswrapper[4727]: I1121 22:00:05.526238 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8353550-dcdf-48fb-87fb-9b708e03b58e" path="/var/lib/kubelet/pods/d8353550-dcdf-48fb-87fb-9b708e03b58e/volumes" Nov 21 22:00:24 crc kubenswrapper[4727]: I1121 22:00:24.299565 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-r98jm"] Nov 21 22:00:24 crc kubenswrapper[4727]: E1121 22:00:24.301785 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88a15a04-1237-49c5-bdd6-1aadf3eb3cb6" containerName="collect-profiles" Nov 21 22:00:24 crc kubenswrapper[4727]: I1121 22:00:24.301899 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="88a15a04-1237-49c5-bdd6-1aadf3eb3cb6" containerName="collect-profiles" Nov 21 22:00:24 crc kubenswrapper[4727]: I1121 22:00:24.304405 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="88a15a04-1237-49c5-bdd6-1aadf3eb3cb6" containerName="collect-profiles" Nov 21 22:00:24 crc kubenswrapper[4727]: I1121 22:00:24.316081 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r98jm"] Nov 21 22:00:24 crc kubenswrapper[4727]: I1121 22:00:24.316190 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-r98jm" Nov 21 22:00:24 crc kubenswrapper[4727]: I1121 22:00:24.503446 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxg4t\" (UniqueName: \"kubernetes.io/projected/158ac92d-a4aa-4d4e-9940-392b64c904c9-kube-api-access-nxg4t\") pod \"redhat-operators-r98jm\" (UID: \"158ac92d-a4aa-4d4e-9940-392b64c904c9\") " pod="openshift-marketplace/redhat-operators-r98jm" Nov 21 22:00:24 crc kubenswrapper[4727]: I1121 22:00:24.503631 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/158ac92d-a4aa-4d4e-9940-392b64c904c9-utilities\") pod \"redhat-operators-r98jm\" (UID: \"158ac92d-a4aa-4d4e-9940-392b64c904c9\") " pod="openshift-marketplace/redhat-operators-r98jm" Nov 21 22:00:24 crc kubenswrapper[4727]: I1121 22:00:24.503828 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/158ac92d-a4aa-4d4e-9940-392b64c904c9-catalog-content\") pod \"redhat-operators-r98jm\" (UID: \"158ac92d-a4aa-4d4e-9940-392b64c904c9\") " pod="openshift-marketplace/redhat-operators-r98jm" Nov 21 22:00:24 crc kubenswrapper[4727]: I1121 22:00:24.607329 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/158ac92d-a4aa-4d4e-9940-392b64c904c9-utilities\") pod \"redhat-operators-r98jm\" (UID: \"158ac92d-a4aa-4d4e-9940-392b64c904c9\") " pod="openshift-marketplace/redhat-operators-r98jm" Nov 21 22:00:24 crc kubenswrapper[4727]: I1121 22:00:24.608065 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/158ac92d-a4aa-4d4e-9940-392b64c904c9-utilities\") pod \"redhat-operators-r98jm\" (UID: 
\"158ac92d-a4aa-4d4e-9940-392b64c904c9\") " pod="openshift-marketplace/redhat-operators-r98jm" Nov 21 22:00:24 crc kubenswrapper[4727]: I1121 22:00:24.608670 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/158ac92d-a4aa-4d4e-9940-392b64c904c9-catalog-content\") pod \"redhat-operators-r98jm\" (UID: \"158ac92d-a4aa-4d4e-9940-392b64c904c9\") " pod="openshift-marketplace/redhat-operators-r98jm" Nov 21 22:00:24 crc kubenswrapper[4727]: I1121 22:00:24.609027 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxg4t\" (UniqueName: \"kubernetes.io/projected/158ac92d-a4aa-4d4e-9940-392b64c904c9-kube-api-access-nxg4t\") pod \"redhat-operators-r98jm\" (UID: \"158ac92d-a4aa-4d4e-9940-392b64c904c9\") " pod="openshift-marketplace/redhat-operators-r98jm" Nov 21 22:00:24 crc kubenswrapper[4727]: I1121 22:00:24.609339 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/158ac92d-a4aa-4d4e-9940-392b64c904c9-catalog-content\") pod \"redhat-operators-r98jm\" (UID: \"158ac92d-a4aa-4d4e-9940-392b64c904c9\") " pod="openshift-marketplace/redhat-operators-r98jm" Nov 21 22:00:24 crc kubenswrapper[4727]: I1121 22:00:24.634778 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxg4t\" (UniqueName: \"kubernetes.io/projected/158ac92d-a4aa-4d4e-9940-392b64c904c9-kube-api-access-nxg4t\") pod \"redhat-operators-r98jm\" (UID: \"158ac92d-a4aa-4d4e-9940-392b64c904c9\") " pod="openshift-marketplace/redhat-operators-r98jm" Nov 21 22:00:24 crc kubenswrapper[4727]: I1121 22:00:24.639837 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-r98jm" Nov 21 22:00:25 crc kubenswrapper[4727]: I1121 22:00:25.100707 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r98jm"] Nov 21 22:00:25 crc kubenswrapper[4727]: I1121 22:00:25.561236 4727 generic.go:334] "Generic (PLEG): container finished" podID="158ac92d-a4aa-4d4e-9940-392b64c904c9" containerID="b354463605752cc14ee66e58abbf37df8a29684b49278ca4bd5acd27020798ed" exitCode=0 Nov 21 22:00:25 crc kubenswrapper[4727]: I1121 22:00:25.561557 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r98jm" event={"ID":"158ac92d-a4aa-4d4e-9940-392b64c904c9","Type":"ContainerDied","Data":"b354463605752cc14ee66e58abbf37df8a29684b49278ca4bd5acd27020798ed"} Nov 21 22:00:25 crc kubenswrapper[4727]: I1121 22:00:25.561605 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r98jm" event={"ID":"158ac92d-a4aa-4d4e-9940-392b64c904c9","Type":"ContainerStarted","Data":"05e19c9c7530745a0759633333e3190e4ccb6afe52c1e5eb865d148c111b1c01"} Nov 21 22:00:25 crc kubenswrapper[4727]: I1121 22:00:25.564514 4727 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 21 22:00:26 crc kubenswrapper[4727]: I1121 22:00:26.578335 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r98jm" event={"ID":"158ac92d-a4aa-4d4e-9940-392b64c904c9","Type":"ContainerStarted","Data":"15a2dff28dbfd95bdd43309c5aa11572e082ac10b4216704400e69899c150212"} Nov 21 22:00:32 crc kubenswrapper[4727]: I1121 22:00:32.190519 4727 generic.go:334] "Generic (PLEG): container finished" podID="158ac92d-a4aa-4d4e-9940-392b64c904c9" containerID="15a2dff28dbfd95bdd43309c5aa11572e082ac10b4216704400e69899c150212" exitCode=0 Nov 21 22:00:32 crc kubenswrapper[4727]: I1121 22:00:32.190590 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-r98jm" event={"ID":"158ac92d-a4aa-4d4e-9940-392b64c904c9","Type":"ContainerDied","Data":"15a2dff28dbfd95bdd43309c5aa11572e082ac10b4216704400e69899c150212"} Nov 21 22:00:33 crc kubenswrapper[4727]: I1121 22:00:33.216046 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r98jm" event={"ID":"158ac92d-a4aa-4d4e-9940-392b64c904c9","Type":"ContainerStarted","Data":"74186aa9bf74f8c58f2937c3af4f05631634aedbd5124b60368808dc1f57a045"} Nov 21 22:00:33 crc kubenswrapper[4727]: I1121 22:00:33.243935 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-r98jm" podStartSLOduration=2.116758255 podStartE2EDuration="9.243910244s" podCreationTimestamp="2025-11-21 22:00:24 +0000 UTC" firstStartedPulling="2025-11-21 22:00:25.563207341 +0000 UTC m=+6830.749392385" lastFinishedPulling="2025-11-21 22:00:32.69035932 +0000 UTC m=+6837.876544374" observedRunningTime="2025-11-21 22:00:33.237186552 +0000 UTC m=+6838.423371596" watchObservedRunningTime="2025-11-21 22:00:33.243910244 +0000 UTC m=+6838.430095288" Nov 21 22:00:34 crc kubenswrapper[4727]: I1121 22:00:34.640125 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-r98jm" Nov 21 22:00:34 crc kubenswrapper[4727]: I1121 22:00:34.640754 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-r98jm" Nov 21 22:00:35 crc kubenswrapper[4727]: I1121 22:00:35.707504 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-r98jm" podUID="158ac92d-a4aa-4d4e-9940-392b64c904c9" containerName="registry-server" probeResult="failure" output=< Nov 21 22:00:35 crc kubenswrapper[4727]: timeout: failed to connect service ":50051" within 1s Nov 21 22:00:35 crc kubenswrapper[4727]: > Nov 21 22:00:45 crc kubenswrapper[4727]: I1121 
22:00:45.707561 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-r98jm" podUID="158ac92d-a4aa-4d4e-9940-392b64c904c9" containerName="registry-server" probeResult="failure" output=< Nov 21 22:00:45 crc kubenswrapper[4727]: timeout: failed to connect service ":50051" within 1s Nov 21 22:00:45 crc kubenswrapper[4727]: > Nov 21 22:00:55 crc kubenswrapper[4727]: I1121 22:00:55.702657 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-r98jm" podUID="158ac92d-a4aa-4d4e-9940-392b64c904c9" containerName="registry-server" probeResult="failure" output=< Nov 21 22:00:55 crc kubenswrapper[4727]: timeout: failed to connect service ":50051" within 1s Nov 21 22:00:55 crc kubenswrapper[4727]: > Nov 21 22:00:59 crc kubenswrapper[4727]: I1121 22:00:59.258179 4727 scope.go:117] "RemoveContainer" containerID="c8cb5300e8470047b2f65a78e066b53c6791b1160363909f75c0fc1e575ebc0a" Nov 21 22:01:00 crc kubenswrapper[4727]: I1121 22:01:00.166857 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29396041-g92qj"] Nov 21 22:01:00 crc kubenswrapper[4727]: I1121 22:01:00.169991 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29396041-g92qj" Nov 21 22:01:00 crc kubenswrapper[4727]: I1121 22:01:00.181514 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29396041-g92qj"] Nov 21 22:01:00 crc kubenswrapper[4727]: I1121 22:01:00.363781 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bb7ab91c-96fc-4420-bd6d-e6840b74e1fc-fernet-keys\") pod \"keystone-cron-29396041-g92qj\" (UID: \"bb7ab91c-96fc-4420-bd6d-e6840b74e1fc\") " pod="openstack/keystone-cron-29396041-g92qj" Nov 21 22:01:00 crc kubenswrapper[4727]: I1121 22:01:00.363829 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb7ab91c-96fc-4420-bd6d-e6840b74e1fc-config-data\") pod \"keystone-cron-29396041-g92qj\" (UID: \"bb7ab91c-96fc-4420-bd6d-e6840b74e1fc\") " pod="openstack/keystone-cron-29396041-g92qj" Nov 21 22:01:00 crc kubenswrapper[4727]: I1121 22:01:00.364589 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb7ab91c-96fc-4420-bd6d-e6840b74e1fc-combined-ca-bundle\") pod \"keystone-cron-29396041-g92qj\" (UID: \"bb7ab91c-96fc-4420-bd6d-e6840b74e1fc\") " pod="openstack/keystone-cron-29396041-g92qj" Nov 21 22:01:00 crc kubenswrapper[4727]: I1121 22:01:00.364918 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jzvh\" (UniqueName: \"kubernetes.io/projected/bb7ab91c-96fc-4420-bd6d-e6840b74e1fc-kube-api-access-9jzvh\") pod \"keystone-cron-29396041-g92qj\" (UID: \"bb7ab91c-96fc-4420-bd6d-e6840b74e1fc\") " pod="openstack/keystone-cron-29396041-g92qj" Nov 21 22:01:00 crc kubenswrapper[4727]: I1121 22:01:00.467460 4727 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-9jzvh\" (UniqueName: \"kubernetes.io/projected/bb7ab91c-96fc-4420-bd6d-e6840b74e1fc-kube-api-access-9jzvh\") pod \"keystone-cron-29396041-g92qj\" (UID: \"bb7ab91c-96fc-4420-bd6d-e6840b74e1fc\") " pod="openstack/keystone-cron-29396041-g92qj" Nov 21 22:01:00 crc kubenswrapper[4727]: I1121 22:01:00.467514 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bb7ab91c-96fc-4420-bd6d-e6840b74e1fc-fernet-keys\") pod \"keystone-cron-29396041-g92qj\" (UID: \"bb7ab91c-96fc-4420-bd6d-e6840b74e1fc\") " pod="openstack/keystone-cron-29396041-g92qj" Nov 21 22:01:00 crc kubenswrapper[4727]: I1121 22:01:00.467539 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb7ab91c-96fc-4420-bd6d-e6840b74e1fc-config-data\") pod \"keystone-cron-29396041-g92qj\" (UID: \"bb7ab91c-96fc-4420-bd6d-e6840b74e1fc\") " pod="openstack/keystone-cron-29396041-g92qj" Nov 21 22:01:00 crc kubenswrapper[4727]: I1121 22:01:00.467624 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb7ab91c-96fc-4420-bd6d-e6840b74e1fc-combined-ca-bundle\") pod \"keystone-cron-29396041-g92qj\" (UID: \"bb7ab91c-96fc-4420-bd6d-e6840b74e1fc\") " pod="openstack/keystone-cron-29396041-g92qj" Nov 21 22:01:00 crc kubenswrapper[4727]: I1121 22:01:00.476381 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb7ab91c-96fc-4420-bd6d-e6840b74e1fc-config-data\") pod \"keystone-cron-29396041-g92qj\" (UID: \"bb7ab91c-96fc-4420-bd6d-e6840b74e1fc\") " pod="openstack/keystone-cron-29396041-g92qj" Nov 21 22:01:00 crc kubenswrapper[4727]: I1121 22:01:00.477938 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bb7ab91c-96fc-4420-bd6d-e6840b74e1fc-combined-ca-bundle\") pod \"keystone-cron-29396041-g92qj\" (UID: \"bb7ab91c-96fc-4420-bd6d-e6840b74e1fc\") " pod="openstack/keystone-cron-29396041-g92qj" Nov 21 22:01:00 crc kubenswrapper[4727]: I1121 22:01:00.478398 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bb7ab91c-96fc-4420-bd6d-e6840b74e1fc-fernet-keys\") pod \"keystone-cron-29396041-g92qj\" (UID: \"bb7ab91c-96fc-4420-bd6d-e6840b74e1fc\") " pod="openstack/keystone-cron-29396041-g92qj" Nov 21 22:01:00 crc kubenswrapper[4727]: I1121 22:01:00.488161 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jzvh\" (UniqueName: \"kubernetes.io/projected/bb7ab91c-96fc-4420-bd6d-e6840b74e1fc-kube-api-access-9jzvh\") pod \"keystone-cron-29396041-g92qj\" (UID: \"bb7ab91c-96fc-4420-bd6d-e6840b74e1fc\") " pod="openstack/keystone-cron-29396041-g92qj" Nov 21 22:01:00 crc kubenswrapper[4727]: I1121 22:01:00.509187 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29396041-g92qj" Nov 21 22:01:01 crc kubenswrapper[4727]: I1121 22:01:01.029460 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29396041-g92qj"] Nov 21 22:01:01 crc kubenswrapper[4727]: I1121 22:01:01.570676 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29396041-g92qj" event={"ID":"bb7ab91c-96fc-4420-bd6d-e6840b74e1fc","Type":"ContainerStarted","Data":"a82e91dfbb7e5d7103e60399810312bb41d6aba101756f6dcaf30bb64748034c"} Nov 21 22:01:01 crc kubenswrapper[4727]: I1121 22:01:01.571055 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29396041-g92qj" event={"ID":"bb7ab91c-96fc-4420-bd6d-e6840b74e1fc","Type":"ContainerStarted","Data":"eaf7b7a442e3aadbe86fc0764a07a475470cfc853d41b28b3b93bea7e3de4644"} Nov 21 22:01:01 crc kubenswrapper[4727]: I1121 22:01:01.597696 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29396041-g92qj" podStartSLOduration=1.5976761449999999 podStartE2EDuration="1.597676145s" podCreationTimestamp="2025-11-21 22:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 22:01:01.585458999 +0000 UTC m=+6866.771644053" watchObservedRunningTime="2025-11-21 22:01:01.597676145 +0000 UTC m=+6866.783861199" Nov 21 22:01:04 crc kubenswrapper[4727]: I1121 22:01:04.729307 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-r98jm" Nov 21 22:01:04 crc kubenswrapper[4727]: I1121 22:01:04.789148 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-r98jm" Nov 21 22:01:04 crc kubenswrapper[4727]: I1121 22:01:04.987525 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r98jm"] Nov 21 
22:01:05 crc kubenswrapper[4727]: I1121 22:01:05.647366 4727 generic.go:334] "Generic (PLEG): container finished" podID="bb7ab91c-96fc-4420-bd6d-e6840b74e1fc" containerID="a82e91dfbb7e5d7103e60399810312bb41d6aba101756f6dcaf30bb64748034c" exitCode=0 Nov 21 22:01:05 crc kubenswrapper[4727]: I1121 22:01:05.647495 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29396041-g92qj" event={"ID":"bb7ab91c-96fc-4420-bd6d-e6840b74e1fc","Type":"ContainerDied","Data":"a82e91dfbb7e5d7103e60399810312bb41d6aba101756f6dcaf30bb64748034c"} Nov 21 22:01:06 crc kubenswrapper[4727]: I1121 22:01:06.662678 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-r98jm" podUID="158ac92d-a4aa-4d4e-9940-392b64c904c9" containerName="registry-server" containerID="cri-o://74186aa9bf74f8c58f2937c3af4f05631634aedbd5124b60368808dc1f57a045" gracePeriod=2 Nov 21 22:01:07 crc kubenswrapper[4727]: I1121 22:01:07.233224 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29396041-g92qj" Nov 21 22:01:07 crc kubenswrapper[4727]: I1121 22:01:07.241559 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-r98jm" Nov 21 22:01:07 crc kubenswrapper[4727]: I1121 22:01:07.340620 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb7ab91c-96fc-4420-bd6d-e6840b74e1fc-combined-ca-bundle\") pod \"bb7ab91c-96fc-4420-bd6d-e6840b74e1fc\" (UID: \"bb7ab91c-96fc-4420-bd6d-e6840b74e1fc\") " Nov 21 22:01:07 crc kubenswrapper[4727]: I1121 22:01:07.340979 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/158ac92d-a4aa-4d4e-9940-392b64c904c9-catalog-content\") pod \"158ac92d-a4aa-4d4e-9940-392b64c904c9\" (UID: \"158ac92d-a4aa-4d4e-9940-392b64c904c9\") " Nov 21 22:01:07 crc kubenswrapper[4727]: I1121 22:01:07.341719 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb7ab91c-96fc-4420-bd6d-e6840b74e1fc-config-data\") pod \"bb7ab91c-96fc-4420-bd6d-e6840b74e1fc\" (UID: \"bb7ab91c-96fc-4420-bd6d-e6840b74e1fc\") " Nov 21 22:01:07 crc kubenswrapper[4727]: I1121 22:01:07.341835 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/158ac92d-a4aa-4d4e-9940-392b64c904c9-utilities\") pod \"158ac92d-a4aa-4d4e-9940-392b64c904c9\" (UID: \"158ac92d-a4aa-4d4e-9940-392b64c904c9\") " Nov 21 22:01:07 crc kubenswrapper[4727]: I1121 22:01:07.342041 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9jzvh\" (UniqueName: \"kubernetes.io/projected/bb7ab91c-96fc-4420-bd6d-e6840b74e1fc-kube-api-access-9jzvh\") pod \"bb7ab91c-96fc-4420-bd6d-e6840b74e1fc\" (UID: \"bb7ab91c-96fc-4420-bd6d-e6840b74e1fc\") " Nov 21 22:01:07 crc kubenswrapper[4727]: I1121 22:01:07.342144 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-nxg4t\" (UniqueName: \"kubernetes.io/projected/158ac92d-a4aa-4d4e-9940-392b64c904c9-kube-api-access-nxg4t\") pod \"158ac92d-a4aa-4d4e-9940-392b64c904c9\" (UID: \"158ac92d-a4aa-4d4e-9940-392b64c904c9\") " Nov 21 22:01:07 crc kubenswrapper[4727]: I1121 22:01:07.342432 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bb7ab91c-96fc-4420-bd6d-e6840b74e1fc-fernet-keys\") pod \"bb7ab91c-96fc-4420-bd6d-e6840b74e1fc\" (UID: \"bb7ab91c-96fc-4420-bd6d-e6840b74e1fc\") " Nov 21 22:01:07 crc kubenswrapper[4727]: I1121 22:01:07.343503 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/158ac92d-a4aa-4d4e-9940-392b64c904c9-utilities" (OuterVolumeSpecName: "utilities") pod "158ac92d-a4aa-4d4e-9940-392b64c904c9" (UID: "158ac92d-a4aa-4d4e-9940-392b64c904c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 22:01:07 crc kubenswrapper[4727]: I1121 22:01:07.345638 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/158ac92d-a4aa-4d4e-9940-392b64c904c9-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 22:01:07 crc kubenswrapper[4727]: I1121 22:01:07.348169 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb7ab91c-96fc-4420-bd6d-e6840b74e1fc-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "bb7ab91c-96fc-4420-bd6d-e6840b74e1fc" (UID: "bb7ab91c-96fc-4420-bd6d-e6840b74e1fc"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 22:01:07 crc kubenswrapper[4727]: I1121 22:01:07.350874 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/158ac92d-a4aa-4d4e-9940-392b64c904c9-kube-api-access-nxg4t" (OuterVolumeSpecName: "kube-api-access-nxg4t") pod "158ac92d-a4aa-4d4e-9940-392b64c904c9" (UID: "158ac92d-a4aa-4d4e-9940-392b64c904c9"). InnerVolumeSpecName "kube-api-access-nxg4t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 22:01:07 crc kubenswrapper[4727]: I1121 22:01:07.356291 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb7ab91c-96fc-4420-bd6d-e6840b74e1fc-kube-api-access-9jzvh" (OuterVolumeSpecName: "kube-api-access-9jzvh") pod "bb7ab91c-96fc-4420-bd6d-e6840b74e1fc" (UID: "bb7ab91c-96fc-4420-bd6d-e6840b74e1fc"). InnerVolumeSpecName "kube-api-access-9jzvh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 22:01:07 crc kubenswrapper[4727]: I1121 22:01:07.386481 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb7ab91c-96fc-4420-bd6d-e6840b74e1fc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bb7ab91c-96fc-4420-bd6d-e6840b74e1fc" (UID: "bb7ab91c-96fc-4420-bd6d-e6840b74e1fc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 22:01:07 crc kubenswrapper[4727]: I1121 22:01:07.415125 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb7ab91c-96fc-4420-bd6d-e6840b74e1fc-config-data" (OuterVolumeSpecName: "config-data") pod "bb7ab91c-96fc-4420-bd6d-e6840b74e1fc" (UID: "bb7ab91c-96fc-4420-bd6d-e6840b74e1fc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 22:01:07 crc kubenswrapper[4727]: I1121 22:01:07.445065 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/158ac92d-a4aa-4d4e-9940-392b64c904c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "158ac92d-a4aa-4d4e-9940-392b64c904c9" (UID: "158ac92d-a4aa-4d4e-9940-392b64c904c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 22:01:07 crc kubenswrapper[4727]: I1121 22:01:07.447994 4727 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb7ab91c-96fc-4420-bd6d-e6840b74e1fc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 22:01:07 crc kubenswrapper[4727]: I1121 22:01:07.448032 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/158ac92d-a4aa-4d4e-9940-392b64c904c9-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 22:01:07 crc kubenswrapper[4727]: I1121 22:01:07.448048 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb7ab91c-96fc-4420-bd6d-e6840b74e1fc-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 22:01:07 crc kubenswrapper[4727]: I1121 22:01:07.448062 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9jzvh\" (UniqueName: \"kubernetes.io/projected/bb7ab91c-96fc-4420-bd6d-e6840b74e1fc-kube-api-access-9jzvh\") on node \"crc\" DevicePath \"\"" Nov 21 22:01:07 crc kubenswrapper[4727]: I1121 22:01:07.448078 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxg4t\" (UniqueName: \"kubernetes.io/projected/158ac92d-a4aa-4d4e-9940-392b64c904c9-kube-api-access-nxg4t\") on node \"crc\" DevicePath \"\"" Nov 21 22:01:07 crc kubenswrapper[4727]: I1121 22:01:07.448090 4727 reconciler_common.go:293] "Volume detached for volume 
\"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bb7ab91c-96fc-4420-bd6d-e6840b74e1fc-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 21 22:01:07 crc kubenswrapper[4727]: I1121 22:01:07.676719 4727 generic.go:334] "Generic (PLEG): container finished" podID="158ac92d-a4aa-4d4e-9940-392b64c904c9" containerID="74186aa9bf74f8c58f2937c3af4f05631634aedbd5124b60368808dc1f57a045" exitCode=0 Nov 21 22:01:07 crc kubenswrapper[4727]: I1121 22:01:07.676774 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r98jm" event={"ID":"158ac92d-a4aa-4d4e-9940-392b64c904c9","Type":"ContainerDied","Data":"74186aa9bf74f8c58f2937c3af4f05631634aedbd5124b60368808dc1f57a045"} Nov 21 22:01:07 crc kubenswrapper[4727]: I1121 22:01:07.676826 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r98jm" Nov 21 22:01:07 crc kubenswrapper[4727]: I1121 22:01:07.676853 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r98jm" event={"ID":"158ac92d-a4aa-4d4e-9940-392b64c904c9","Type":"ContainerDied","Data":"05e19c9c7530745a0759633333e3190e4ccb6afe52c1e5eb865d148c111b1c01"} Nov 21 22:01:07 crc kubenswrapper[4727]: I1121 22:01:07.676896 4727 scope.go:117] "RemoveContainer" containerID="74186aa9bf74f8c58f2937c3af4f05631634aedbd5124b60368808dc1f57a045" Nov 21 22:01:07 crc kubenswrapper[4727]: I1121 22:01:07.678802 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29396041-g92qj" event={"ID":"bb7ab91c-96fc-4420-bd6d-e6840b74e1fc","Type":"ContainerDied","Data":"eaf7b7a442e3aadbe86fc0764a07a475470cfc853d41b28b3b93bea7e3de4644"} Nov 21 22:01:07 crc kubenswrapper[4727]: I1121 22:01:07.678848 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eaf7b7a442e3aadbe86fc0764a07a475470cfc853d41b28b3b93bea7e3de4644" Nov 21 22:01:07 crc kubenswrapper[4727]: I1121 22:01:07.678930 
4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29396041-g92qj" Nov 21 22:01:07 crc kubenswrapper[4727]: I1121 22:01:07.710330 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r98jm"] Nov 21 22:01:07 crc kubenswrapper[4727]: I1121 22:01:07.719940 4727 scope.go:117] "RemoveContainer" containerID="15a2dff28dbfd95bdd43309c5aa11572e082ac10b4216704400e69899c150212" Nov 21 22:01:07 crc kubenswrapper[4727]: I1121 22:01:07.730599 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-r98jm"] Nov 21 22:01:07 crc kubenswrapper[4727]: I1121 22:01:07.746770 4727 scope.go:117] "RemoveContainer" containerID="b354463605752cc14ee66e58abbf37df8a29684b49278ca4bd5acd27020798ed" Nov 21 22:01:07 crc kubenswrapper[4727]: I1121 22:01:07.771517 4727 scope.go:117] "RemoveContainer" containerID="74186aa9bf74f8c58f2937c3af4f05631634aedbd5124b60368808dc1f57a045" Nov 21 22:01:07 crc kubenswrapper[4727]: E1121 22:01:07.772196 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74186aa9bf74f8c58f2937c3af4f05631634aedbd5124b60368808dc1f57a045\": container with ID starting with 74186aa9bf74f8c58f2937c3af4f05631634aedbd5124b60368808dc1f57a045 not found: ID does not exist" containerID="74186aa9bf74f8c58f2937c3af4f05631634aedbd5124b60368808dc1f57a045" Nov 21 22:01:07 crc kubenswrapper[4727]: I1121 22:01:07.772246 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74186aa9bf74f8c58f2937c3af4f05631634aedbd5124b60368808dc1f57a045"} err="failed to get container status \"74186aa9bf74f8c58f2937c3af4f05631634aedbd5124b60368808dc1f57a045\": rpc error: code = NotFound desc = could not find container \"74186aa9bf74f8c58f2937c3af4f05631634aedbd5124b60368808dc1f57a045\": container with ID starting with 
74186aa9bf74f8c58f2937c3af4f05631634aedbd5124b60368808dc1f57a045 not found: ID does not exist" Nov 21 22:01:07 crc kubenswrapper[4727]: I1121 22:01:07.772274 4727 scope.go:117] "RemoveContainer" containerID="15a2dff28dbfd95bdd43309c5aa11572e082ac10b4216704400e69899c150212" Nov 21 22:01:07 crc kubenswrapper[4727]: E1121 22:01:07.772574 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15a2dff28dbfd95bdd43309c5aa11572e082ac10b4216704400e69899c150212\": container with ID starting with 15a2dff28dbfd95bdd43309c5aa11572e082ac10b4216704400e69899c150212 not found: ID does not exist" containerID="15a2dff28dbfd95bdd43309c5aa11572e082ac10b4216704400e69899c150212" Nov 21 22:01:07 crc kubenswrapper[4727]: I1121 22:01:07.772616 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15a2dff28dbfd95bdd43309c5aa11572e082ac10b4216704400e69899c150212"} err="failed to get container status \"15a2dff28dbfd95bdd43309c5aa11572e082ac10b4216704400e69899c150212\": rpc error: code = NotFound desc = could not find container \"15a2dff28dbfd95bdd43309c5aa11572e082ac10b4216704400e69899c150212\": container with ID starting with 15a2dff28dbfd95bdd43309c5aa11572e082ac10b4216704400e69899c150212 not found: ID does not exist" Nov 21 22:01:07 crc kubenswrapper[4727]: I1121 22:01:07.772644 4727 scope.go:117] "RemoveContainer" containerID="b354463605752cc14ee66e58abbf37df8a29684b49278ca4bd5acd27020798ed" Nov 21 22:01:07 crc kubenswrapper[4727]: E1121 22:01:07.772871 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b354463605752cc14ee66e58abbf37df8a29684b49278ca4bd5acd27020798ed\": container with ID starting with b354463605752cc14ee66e58abbf37df8a29684b49278ca4bd5acd27020798ed not found: ID does not exist" containerID="b354463605752cc14ee66e58abbf37df8a29684b49278ca4bd5acd27020798ed" Nov 21 22:01:07 crc 
kubenswrapper[4727]: I1121 22:01:07.772897 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b354463605752cc14ee66e58abbf37df8a29684b49278ca4bd5acd27020798ed"} err="failed to get container status \"b354463605752cc14ee66e58abbf37df8a29684b49278ca4bd5acd27020798ed\": rpc error: code = NotFound desc = could not find container \"b354463605752cc14ee66e58abbf37df8a29684b49278ca4bd5acd27020798ed\": container with ID starting with b354463605752cc14ee66e58abbf37df8a29684b49278ca4bd5acd27020798ed not found: ID does not exist" Nov 21 22:01:09 crc kubenswrapper[4727]: I1121 22:01:09.512810 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="158ac92d-a4aa-4d4e-9940-392b64c904c9" path="/var/lib/kubelet/pods/158ac92d-a4aa-4d4e-9940-392b64c904c9/volumes" Nov 21 22:01:13 crc kubenswrapper[4727]: I1121 22:01:13.335696 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 22:01:13 crc kubenswrapper[4727]: I1121 22:01:13.336820 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 22:01:43 crc kubenswrapper[4727]: I1121 22:01:43.335532 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 22:01:43 crc kubenswrapper[4727]: I1121 22:01:43.336324 4727 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 22:02:13 crc kubenswrapper[4727]: I1121 22:02:13.335254 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 22:02:13 crc kubenswrapper[4727]: I1121 22:02:13.336368 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 22:02:13 crc kubenswrapper[4727]: I1121 22:02:13.336448 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" Nov 21 22:02:13 crc kubenswrapper[4727]: I1121 22:02:13.337679 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d924f42ebf9868b5bc79cf8d2700faf02bb037f988baf408255bee572edc3591"} pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 22:02:13 crc kubenswrapper[4727]: I1121 22:02:13.337806 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" 
containerName="machine-config-daemon" containerID="cri-o://d924f42ebf9868b5bc79cf8d2700faf02bb037f988baf408255bee572edc3591" gracePeriod=600 Nov 21 22:02:13 crc kubenswrapper[4727]: I1121 22:02:13.589759 4727 generic.go:334] "Generic (PLEG): container finished" podID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerID="d924f42ebf9868b5bc79cf8d2700faf02bb037f988baf408255bee572edc3591" exitCode=0 Nov 21 22:02:13 crc kubenswrapper[4727]: I1121 22:02:13.590045 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" event={"ID":"b58aef8f-f223-47d8-a2e6-4a80aeeeec42","Type":"ContainerDied","Data":"d924f42ebf9868b5bc79cf8d2700faf02bb037f988baf408255bee572edc3591"} Nov 21 22:02:13 crc kubenswrapper[4727]: I1121 22:02:13.590307 4727 scope.go:117] "RemoveContainer" containerID="976f8c3044d73ef3cad2da771f4d7a96e16e3a29c21be656da90340aeebddb83" Nov 21 22:02:14 crc kubenswrapper[4727]: I1121 22:02:14.615539 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" event={"ID":"b58aef8f-f223-47d8-a2e6-4a80aeeeec42","Type":"ContainerStarted","Data":"1d3a2f1c6ffa70c9bd0d4db5cfec82543efb579835ea1abea605a3349157de5d"} Nov 21 22:02:23 crc kubenswrapper[4727]: I1121 22:02:23.298179 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hsrx5"] Nov 21 22:02:23 crc kubenswrapper[4727]: E1121 22:02:23.299657 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="158ac92d-a4aa-4d4e-9940-392b64c904c9" containerName="extract-content" Nov 21 22:02:23 crc kubenswrapper[4727]: I1121 22:02:23.299682 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="158ac92d-a4aa-4d4e-9940-392b64c904c9" containerName="extract-content" Nov 21 22:02:23 crc kubenswrapper[4727]: E1121 22:02:23.299720 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="158ac92d-a4aa-4d4e-9940-392b64c904c9" 
containerName="extract-utilities" Nov 21 22:02:23 crc kubenswrapper[4727]: I1121 22:02:23.299731 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="158ac92d-a4aa-4d4e-9940-392b64c904c9" containerName="extract-utilities" Nov 21 22:02:23 crc kubenswrapper[4727]: E1121 22:02:23.299775 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="158ac92d-a4aa-4d4e-9940-392b64c904c9" containerName="registry-server" Nov 21 22:02:23 crc kubenswrapper[4727]: I1121 22:02:23.299787 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="158ac92d-a4aa-4d4e-9940-392b64c904c9" containerName="registry-server" Nov 21 22:02:23 crc kubenswrapper[4727]: E1121 22:02:23.299824 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb7ab91c-96fc-4420-bd6d-e6840b74e1fc" containerName="keystone-cron" Nov 21 22:02:23 crc kubenswrapper[4727]: I1121 22:02:23.299835 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb7ab91c-96fc-4420-bd6d-e6840b74e1fc" containerName="keystone-cron" Nov 21 22:02:23 crc kubenswrapper[4727]: I1121 22:02:23.300344 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="158ac92d-a4aa-4d4e-9940-392b64c904c9" containerName="registry-server" Nov 21 22:02:23 crc kubenswrapper[4727]: I1121 22:02:23.300401 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb7ab91c-96fc-4420-bd6d-e6840b74e1fc" containerName="keystone-cron" Nov 21 22:02:23 crc kubenswrapper[4727]: I1121 22:02:23.302874 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hsrx5" Nov 21 22:02:23 crc kubenswrapper[4727]: I1121 22:02:23.318671 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hsrx5"] Nov 21 22:02:23 crc kubenswrapper[4727]: I1121 22:02:23.319449 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blmvp\" (UniqueName: \"kubernetes.io/projected/85340074-0c5a-4e9a-98da-a50a819a24dc-kube-api-access-blmvp\") pod \"certified-operators-hsrx5\" (UID: \"85340074-0c5a-4e9a-98da-a50a819a24dc\") " pod="openshift-marketplace/certified-operators-hsrx5" Nov 21 22:02:23 crc kubenswrapper[4727]: I1121 22:02:23.319535 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85340074-0c5a-4e9a-98da-a50a819a24dc-catalog-content\") pod \"certified-operators-hsrx5\" (UID: \"85340074-0c5a-4e9a-98da-a50a819a24dc\") " pod="openshift-marketplace/certified-operators-hsrx5" Nov 21 22:02:23 crc kubenswrapper[4727]: I1121 22:02:23.319591 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85340074-0c5a-4e9a-98da-a50a819a24dc-utilities\") pod \"certified-operators-hsrx5\" (UID: \"85340074-0c5a-4e9a-98da-a50a819a24dc\") " pod="openshift-marketplace/certified-operators-hsrx5" Nov 21 22:02:23 crc kubenswrapper[4727]: I1121 22:02:23.421968 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blmvp\" (UniqueName: \"kubernetes.io/projected/85340074-0c5a-4e9a-98da-a50a819a24dc-kube-api-access-blmvp\") pod \"certified-operators-hsrx5\" (UID: \"85340074-0c5a-4e9a-98da-a50a819a24dc\") " pod="openshift-marketplace/certified-operators-hsrx5" Nov 21 22:02:23 crc kubenswrapper[4727]: I1121 22:02:23.422040 4727 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85340074-0c5a-4e9a-98da-a50a819a24dc-catalog-content\") pod \"certified-operators-hsrx5\" (UID: \"85340074-0c5a-4e9a-98da-a50a819a24dc\") " pod="openshift-marketplace/certified-operators-hsrx5" Nov 21 22:02:23 crc kubenswrapper[4727]: I1121 22:02:23.422107 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85340074-0c5a-4e9a-98da-a50a819a24dc-utilities\") pod \"certified-operators-hsrx5\" (UID: \"85340074-0c5a-4e9a-98da-a50a819a24dc\") " pod="openshift-marketplace/certified-operators-hsrx5" Nov 21 22:02:23 crc kubenswrapper[4727]: I1121 22:02:23.422570 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85340074-0c5a-4e9a-98da-a50a819a24dc-utilities\") pod \"certified-operators-hsrx5\" (UID: \"85340074-0c5a-4e9a-98da-a50a819a24dc\") " pod="openshift-marketplace/certified-operators-hsrx5" Nov 21 22:02:23 crc kubenswrapper[4727]: I1121 22:02:23.422906 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85340074-0c5a-4e9a-98da-a50a819a24dc-catalog-content\") pod \"certified-operators-hsrx5\" (UID: \"85340074-0c5a-4e9a-98da-a50a819a24dc\") " pod="openshift-marketplace/certified-operators-hsrx5" Nov 21 22:02:23 crc kubenswrapper[4727]: I1121 22:02:23.446629 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blmvp\" (UniqueName: \"kubernetes.io/projected/85340074-0c5a-4e9a-98da-a50a819a24dc-kube-api-access-blmvp\") pod \"certified-operators-hsrx5\" (UID: \"85340074-0c5a-4e9a-98da-a50a819a24dc\") " pod="openshift-marketplace/certified-operators-hsrx5" Nov 21 22:02:23 crc kubenswrapper[4727]: I1121 22:02:23.634864 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hsrx5" Nov 21 22:02:24 crc kubenswrapper[4727]: I1121 22:02:24.111566 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hsrx5"] Nov 21 22:02:24 crc kubenswrapper[4727]: I1121 22:02:24.749587 4727 generic.go:334] "Generic (PLEG): container finished" podID="85340074-0c5a-4e9a-98da-a50a819a24dc" containerID="9004713f22c22a9cd92a2aeb96cc091f05fe3e4f17fefdf922a04ab7d2ef03f3" exitCode=0 Nov 21 22:02:24 crc kubenswrapper[4727]: I1121 22:02:24.749723 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hsrx5" event={"ID":"85340074-0c5a-4e9a-98da-a50a819a24dc","Type":"ContainerDied","Data":"9004713f22c22a9cd92a2aeb96cc091f05fe3e4f17fefdf922a04ab7d2ef03f3"} Nov 21 22:02:24 crc kubenswrapper[4727]: I1121 22:02:24.750020 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hsrx5" event={"ID":"85340074-0c5a-4e9a-98da-a50a819a24dc","Type":"ContainerStarted","Data":"5f0f49e0e238ac85b10ff03c33305b7c4f4bd24b68301ac0c278f86df8b219d6"} Nov 21 22:02:26 crc kubenswrapper[4727]: I1121 22:02:26.777727 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hsrx5" event={"ID":"85340074-0c5a-4e9a-98da-a50a819a24dc","Type":"ContainerStarted","Data":"f58eb14cad9ab2b345db3afd3d329038687bccf6afc97674786663af9ababcdc"} Nov 21 22:02:27 crc kubenswrapper[4727]: I1121 22:02:27.794322 4727 generic.go:334] "Generic (PLEG): container finished" podID="85340074-0c5a-4e9a-98da-a50a819a24dc" containerID="f58eb14cad9ab2b345db3afd3d329038687bccf6afc97674786663af9ababcdc" exitCode=0 Nov 21 22:02:27 crc kubenswrapper[4727]: I1121 22:02:27.794886 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hsrx5" 
event={"ID":"85340074-0c5a-4e9a-98da-a50a819a24dc","Type":"ContainerDied","Data":"f58eb14cad9ab2b345db3afd3d329038687bccf6afc97674786663af9ababcdc"}
Nov 21 22:02:28 crc kubenswrapper[4727]: I1121 22:02:28.805727 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hsrx5" event={"ID":"85340074-0c5a-4e9a-98da-a50a819a24dc","Type":"ContainerStarted","Data":"bac6e87b25da555814f6a75564d4edbda2b5332e203f63f42e606bd685aae059"}
Nov 21 22:02:28 crc kubenswrapper[4727]: I1121 22:02:28.835754 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hsrx5" podStartSLOduration=2.413311137 podStartE2EDuration="5.83572359s" podCreationTimestamp="2025-11-21 22:02:23 +0000 UTC" firstStartedPulling="2025-11-21 22:02:24.753465249 +0000 UTC m=+6949.939650303" lastFinishedPulling="2025-11-21 22:02:28.175877672 +0000 UTC m=+6953.362062756" observedRunningTime="2025-11-21 22:02:28.828629358 +0000 UTC m=+6954.014814392" watchObservedRunningTime="2025-11-21 22:02:28.83572359 +0000 UTC m=+6954.021908634"
Nov 21 22:02:33 crc kubenswrapper[4727]: I1121 22:02:33.635319 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hsrx5"
Nov 21 22:02:33 crc kubenswrapper[4727]: I1121 22:02:33.635758 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hsrx5"
Nov 21 22:02:33 crc kubenswrapper[4727]: I1121 22:02:33.706816 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hsrx5"
Nov 21 22:02:33 crc kubenswrapper[4727]: I1121 22:02:33.940497 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hsrx5"
Nov 21 22:02:34 crc kubenswrapper[4727]: I1121 22:02:34.003725 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hsrx5"]
Nov 21 22:02:35 crc kubenswrapper[4727]: I1121 22:02:35.892234 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hsrx5" podUID="85340074-0c5a-4e9a-98da-a50a819a24dc" containerName="registry-server" containerID="cri-o://bac6e87b25da555814f6a75564d4edbda2b5332e203f63f42e606bd685aae059" gracePeriod=2
Nov 21 22:02:36 crc kubenswrapper[4727]: I1121 22:02:36.530429 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hsrx5"
Nov 21 22:02:36 crc kubenswrapper[4727]: I1121 22:02:36.643714 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85340074-0c5a-4e9a-98da-a50a819a24dc-utilities\") pod \"85340074-0c5a-4e9a-98da-a50a819a24dc\" (UID: \"85340074-0c5a-4e9a-98da-a50a819a24dc\") "
Nov 21 22:02:36 crc kubenswrapper[4727]: I1121 22:02:36.643990 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-blmvp\" (UniqueName: \"kubernetes.io/projected/85340074-0c5a-4e9a-98da-a50a819a24dc-kube-api-access-blmvp\") pod \"85340074-0c5a-4e9a-98da-a50a819a24dc\" (UID: \"85340074-0c5a-4e9a-98da-a50a819a24dc\") "
Nov 21 22:02:36 crc kubenswrapper[4727]: I1121 22:02:36.644058 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85340074-0c5a-4e9a-98da-a50a819a24dc-catalog-content\") pod \"85340074-0c5a-4e9a-98da-a50a819a24dc\" (UID: \"85340074-0c5a-4e9a-98da-a50a819a24dc\") "
Nov 21 22:02:36 crc kubenswrapper[4727]: I1121 22:02:36.644700 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85340074-0c5a-4e9a-98da-a50a819a24dc-utilities" (OuterVolumeSpecName: "utilities") pod "85340074-0c5a-4e9a-98da-a50a819a24dc" (UID: "85340074-0c5a-4e9a-98da-a50a819a24dc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 21 22:02:36 crc kubenswrapper[4727]: I1121 22:02:36.654188 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85340074-0c5a-4e9a-98da-a50a819a24dc-kube-api-access-blmvp" (OuterVolumeSpecName: "kube-api-access-blmvp") pod "85340074-0c5a-4e9a-98da-a50a819a24dc" (UID: "85340074-0c5a-4e9a-98da-a50a819a24dc"). InnerVolumeSpecName "kube-api-access-blmvp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 22:02:36 crc kubenswrapper[4727]: I1121 22:02:36.705945 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85340074-0c5a-4e9a-98da-a50a819a24dc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "85340074-0c5a-4e9a-98da-a50a819a24dc" (UID: "85340074-0c5a-4e9a-98da-a50a819a24dc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 21 22:02:36 crc kubenswrapper[4727]: I1121 22:02:36.746230 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85340074-0c5a-4e9a-98da-a50a819a24dc-utilities\") on node \"crc\" DevicePath \"\""
Nov 21 22:02:36 crc kubenswrapper[4727]: I1121 22:02:36.746264 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-blmvp\" (UniqueName: \"kubernetes.io/projected/85340074-0c5a-4e9a-98da-a50a819a24dc-kube-api-access-blmvp\") on node \"crc\" DevicePath \"\""
Nov 21 22:02:36 crc kubenswrapper[4727]: I1121 22:02:36.746274 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85340074-0c5a-4e9a-98da-a50a819a24dc-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 21 22:02:36 crc kubenswrapper[4727]: I1121 22:02:36.904010 4727 generic.go:334] "Generic (PLEG): container finished" podID="85340074-0c5a-4e9a-98da-a50a819a24dc" containerID="bac6e87b25da555814f6a75564d4edbda2b5332e203f63f42e606bd685aae059" exitCode=0
Nov 21 22:02:36 crc kubenswrapper[4727]: I1121 22:02:36.904051 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hsrx5" event={"ID":"85340074-0c5a-4e9a-98da-a50a819a24dc","Type":"ContainerDied","Data":"bac6e87b25da555814f6a75564d4edbda2b5332e203f63f42e606bd685aae059"}
Nov 21 22:02:36 crc kubenswrapper[4727]: I1121 22:02:36.904083 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hsrx5" event={"ID":"85340074-0c5a-4e9a-98da-a50a819a24dc","Type":"ContainerDied","Data":"5f0f49e0e238ac85b10ff03c33305b7c4f4bd24b68301ac0c278f86df8b219d6"}
Nov 21 22:02:36 crc kubenswrapper[4727]: I1121 22:02:36.904101 4727 scope.go:117] "RemoveContainer" containerID="bac6e87b25da555814f6a75564d4edbda2b5332e203f63f42e606bd685aae059"
Nov 21 22:02:36 crc kubenswrapper[4727]: I1121 22:02:36.904135 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hsrx5"
Nov 21 22:02:36 crc kubenswrapper[4727]: I1121 22:02:36.956919 4727 scope.go:117] "RemoveContainer" containerID="f58eb14cad9ab2b345db3afd3d329038687bccf6afc97674786663af9ababcdc"
Nov 21 22:02:36 crc kubenswrapper[4727]: I1121 22:02:36.962074 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hsrx5"]
Nov 21 22:02:36 crc kubenswrapper[4727]: I1121 22:02:36.978337 4727 scope.go:117] "RemoveContainer" containerID="9004713f22c22a9cd92a2aeb96cc091f05fe3e4f17fefdf922a04ab7d2ef03f3"
Nov 21 22:02:36 crc kubenswrapper[4727]: I1121 22:02:36.983506 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hsrx5"]
Nov 21 22:02:37 crc kubenswrapper[4727]: E1121 22:02:37.021404 4727 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod85340074_0c5a_4e9a_98da_a50a819a24dc.slice/crio-5f0f49e0e238ac85b10ff03c33305b7c4f4bd24b68301ac0c278f86df8b219d6\": RecentStats: unable to find data in memory cache]"
Nov 21 22:02:37 crc kubenswrapper[4727]: I1121 22:02:37.047187 4727 scope.go:117] "RemoveContainer" containerID="bac6e87b25da555814f6a75564d4edbda2b5332e203f63f42e606bd685aae059"
Nov 21 22:02:37 crc kubenswrapper[4727]: E1121 22:02:37.048738 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bac6e87b25da555814f6a75564d4edbda2b5332e203f63f42e606bd685aae059\": container with ID starting with bac6e87b25da555814f6a75564d4edbda2b5332e203f63f42e606bd685aae059 not found: ID does not exist" containerID="bac6e87b25da555814f6a75564d4edbda2b5332e203f63f42e606bd685aae059"
Nov 21 22:02:37 crc kubenswrapper[4727]: I1121 22:02:37.048768 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bac6e87b25da555814f6a75564d4edbda2b5332e203f63f42e606bd685aae059"} err="failed to get container status \"bac6e87b25da555814f6a75564d4edbda2b5332e203f63f42e606bd685aae059\": rpc error: code = NotFound desc = could not find container \"bac6e87b25da555814f6a75564d4edbda2b5332e203f63f42e606bd685aae059\": container with ID starting with bac6e87b25da555814f6a75564d4edbda2b5332e203f63f42e606bd685aae059 not found: ID does not exist"
Nov 21 22:02:37 crc kubenswrapper[4727]: I1121 22:02:37.048789 4727 scope.go:117] "RemoveContainer" containerID="f58eb14cad9ab2b345db3afd3d329038687bccf6afc97674786663af9ababcdc"
Nov 21 22:02:37 crc kubenswrapper[4727]: E1121 22:02:37.049725 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f58eb14cad9ab2b345db3afd3d329038687bccf6afc97674786663af9ababcdc\": container with ID starting with f58eb14cad9ab2b345db3afd3d329038687bccf6afc97674786663af9ababcdc not found: ID does not exist" containerID="f58eb14cad9ab2b345db3afd3d329038687bccf6afc97674786663af9ababcdc"
Nov 21 22:02:37 crc kubenswrapper[4727]: I1121 22:02:37.049741 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f58eb14cad9ab2b345db3afd3d329038687bccf6afc97674786663af9ababcdc"} err="failed to get container status \"f58eb14cad9ab2b345db3afd3d329038687bccf6afc97674786663af9ababcdc\": rpc error: code = NotFound desc = could not find container \"f58eb14cad9ab2b345db3afd3d329038687bccf6afc97674786663af9ababcdc\": container with ID starting with f58eb14cad9ab2b345db3afd3d329038687bccf6afc97674786663af9ababcdc not found: ID does not exist"
Nov 21 22:02:37 crc kubenswrapper[4727]: I1121 22:02:37.049761 4727 scope.go:117] "RemoveContainer" containerID="9004713f22c22a9cd92a2aeb96cc091f05fe3e4f17fefdf922a04ab7d2ef03f3"
Nov 21 22:02:37 crc kubenswrapper[4727]: E1121 22:02:37.050365 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9004713f22c22a9cd92a2aeb96cc091f05fe3e4f17fefdf922a04ab7d2ef03f3\": container with ID starting with 9004713f22c22a9cd92a2aeb96cc091f05fe3e4f17fefdf922a04ab7d2ef03f3 not found: ID does not exist" containerID="9004713f22c22a9cd92a2aeb96cc091f05fe3e4f17fefdf922a04ab7d2ef03f3"
Nov 21 22:02:37 crc kubenswrapper[4727]: I1121 22:02:37.050414 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9004713f22c22a9cd92a2aeb96cc091f05fe3e4f17fefdf922a04ab7d2ef03f3"} err="failed to get container status \"9004713f22c22a9cd92a2aeb96cc091f05fe3e4f17fefdf922a04ab7d2ef03f3\": rpc error: code = NotFound desc = could not find container \"9004713f22c22a9cd92a2aeb96cc091f05fe3e4f17fefdf922a04ab7d2ef03f3\": container with ID starting with 9004713f22c22a9cd92a2aeb96cc091f05fe3e4f17fefdf922a04ab7d2ef03f3 not found: ID does not exist"
Nov 21 22:02:37 crc kubenswrapper[4727]: I1121 22:02:37.515150 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85340074-0c5a-4e9a-98da-a50a819a24dc" path="/var/lib/kubelet/pods/85340074-0c5a-4e9a-98da-a50a819a24dc/volumes"
Nov 21 22:04:13 crc kubenswrapper[4727]: I1121 22:04:13.335088 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 21 22:04:13 crc kubenswrapper[4727]: I1121 22:04:13.335590 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 21 22:04:34 crc kubenswrapper[4727]: I1121 22:04:34.526549 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cn5mw"]
Nov 21 22:04:34 crc kubenswrapper[4727]: E1121 22:04:34.530673 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85340074-0c5a-4e9a-98da-a50a819a24dc" containerName="registry-server"
Nov 21 22:04:34 crc kubenswrapper[4727]: I1121 22:04:34.530703 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="85340074-0c5a-4e9a-98da-a50a819a24dc" containerName="registry-server"
Nov 21 22:04:34 crc kubenswrapper[4727]: E1121 22:04:34.530748 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85340074-0c5a-4e9a-98da-a50a819a24dc" containerName="extract-utilities"
Nov 21 22:04:34 crc kubenswrapper[4727]: I1121 22:04:34.530757 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="85340074-0c5a-4e9a-98da-a50a819a24dc" containerName="extract-utilities"
Nov 21 22:04:34 crc kubenswrapper[4727]: E1121 22:04:34.530774 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85340074-0c5a-4e9a-98da-a50a819a24dc" containerName="extract-content"
Nov 21 22:04:34 crc kubenswrapper[4727]: I1121 22:04:34.530783 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="85340074-0c5a-4e9a-98da-a50a819a24dc" containerName="extract-content"
Nov 21 22:04:34 crc kubenswrapper[4727]: I1121 22:04:34.531125 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="85340074-0c5a-4e9a-98da-a50a819a24dc" containerName="registry-server"
Nov 21 22:04:34 crc kubenswrapper[4727]: I1121 22:04:34.534944 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cn5mw"
Nov 21 22:04:34 crc kubenswrapper[4727]: I1121 22:04:34.568522 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0244da3-c45a-415a-b21e-ce1d87b1f53c-catalog-content\") pod \"community-operators-cn5mw\" (UID: \"c0244da3-c45a-415a-b21e-ce1d87b1f53c\") " pod="openshift-marketplace/community-operators-cn5mw"
Nov 21 22:04:34 crc kubenswrapper[4727]: I1121 22:04:34.568821 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0244da3-c45a-415a-b21e-ce1d87b1f53c-utilities\") pod \"community-operators-cn5mw\" (UID: \"c0244da3-c45a-415a-b21e-ce1d87b1f53c\") " pod="openshift-marketplace/community-operators-cn5mw"
Nov 21 22:04:34 crc kubenswrapper[4727]: I1121 22:04:34.569200 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fklv\" (UniqueName: \"kubernetes.io/projected/c0244da3-c45a-415a-b21e-ce1d87b1f53c-kube-api-access-8fklv\") pod \"community-operators-cn5mw\" (UID: \"c0244da3-c45a-415a-b21e-ce1d87b1f53c\") " pod="openshift-marketplace/community-operators-cn5mw"
Nov 21 22:04:34 crc kubenswrapper[4727]: I1121 22:04:34.574483 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cn5mw"]
Nov 21 22:04:34 crc kubenswrapper[4727]: I1121 22:04:34.670240 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0244da3-c45a-415a-b21e-ce1d87b1f53c-catalog-content\") pod \"community-operators-cn5mw\" (UID: \"c0244da3-c45a-415a-b21e-ce1d87b1f53c\") " pod="openshift-marketplace/community-operators-cn5mw"
Nov 21 22:04:34 crc kubenswrapper[4727]: I1121 22:04:34.670365 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0244da3-c45a-415a-b21e-ce1d87b1f53c-utilities\") pod \"community-operators-cn5mw\" (UID: \"c0244da3-c45a-415a-b21e-ce1d87b1f53c\") " pod="openshift-marketplace/community-operators-cn5mw"
Nov 21 22:04:34 crc kubenswrapper[4727]: I1121 22:04:34.670487 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fklv\" (UniqueName: \"kubernetes.io/projected/c0244da3-c45a-415a-b21e-ce1d87b1f53c-kube-api-access-8fklv\") pod \"community-operators-cn5mw\" (UID: \"c0244da3-c45a-415a-b21e-ce1d87b1f53c\") " pod="openshift-marketplace/community-operators-cn5mw"
Nov 21 22:04:34 crc kubenswrapper[4727]: I1121 22:04:34.670874 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0244da3-c45a-415a-b21e-ce1d87b1f53c-utilities\") pod \"community-operators-cn5mw\" (UID: \"c0244da3-c45a-415a-b21e-ce1d87b1f53c\") " pod="openshift-marketplace/community-operators-cn5mw"
Nov 21 22:04:34 crc kubenswrapper[4727]: I1121 22:04:34.670875 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0244da3-c45a-415a-b21e-ce1d87b1f53c-catalog-content\") pod \"community-operators-cn5mw\" (UID: \"c0244da3-c45a-415a-b21e-ce1d87b1f53c\") " pod="openshift-marketplace/community-operators-cn5mw"
Nov 21 22:04:34 crc kubenswrapper[4727]: I1121 22:04:34.692618 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fklv\" (UniqueName: \"kubernetes.io/projected/c0244da3-c45a-415a-b21e-ce1d87b1f53c-kube-api-access-8fklv\") pod \"community-operators-cn5mw\" (UID: \"c0244da3-c45a-415a-b21e-ce1d87b1f53c\") " pod="openshift-marketplace/community-operators-cn5mw"
Nov 21 22:04:34 crc kubenswrapper[4727]: I1121 22:04:34.871290 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cn5mw"
Nov 21 22:04:35 crc kubenswrapper[4727]: I1121 22:04:35.330093 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cn5mw"]
Nov 21 22:04:35 crc kubenswrapper[4727]: I1121 22:04:35.475359 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cn5mw" event={"ID":"c0244da3-c45a-415a-b21e-ce1d87b1f53c","Type":"ContainerStarted","Data":"38a7266777a2237f2526573d160bc8bda4e77bff83abfb2e79d3a7549862261d"}
Nov 21 22:04:36 crc kubenswrapper[4727]: I1121 22:04:36.488920 4727 generic.go:334] "Generic (PLEG): container finished" podID="c0244da3-c45a-415a-b21e-ce1d87b1f53c" containerID="12af1014a674995515e0ede5e79db2ef3f43449263a466804c02d522fb77c3bd" exitCode=0
Nov 21 22:04:36 crc kubenswrapper[4727]: I1121 22:04:36.489013 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cn5mw" event={"ID":"c0244da3-c45a-415a-b21e-ce1d87b1f53c","Type":"ContainerDied","Data":"12af1014a674995515e0ede5e79db2ef3f43449263a466804c02d522fb77c3bd"}
Nov 21 22:04:37 crc kubenswrapper[4727]: I1121 22:04:37.513989 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cn5mw" event={"ID":"c0244da3-c45a-415a-b21e-ce1d87b1f53c","Type":"ContainerStarted","Data":"50821a3b7ae0506fccd194188abf3247adc158d5e047b3e4031eda1d133db51d"}
Nov 21 22:04:39 crc kubenswrapper[4727]: I1121 22:04:39.541514 4727 generic.go:334] "Generic (PLEG): container finished" podID="c0244da3-c45a-415a-b21e-ce1d87b1f53c" containerID="50821a3b7ae0506fccd194188abf3247adc158d5e047b3e4031eda1d133db51d" exitCode=0
Nov 21 22:04:39 crc kubenswrapper[4727]: I1121 22:04:39.542084 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cn5mw" event={"ID":"c0244da3-c45a-415a-b21e-ce1d87b1f53c","Type":"ContainerDied","Data":"50821a3b7ae0506fccd194188abf3247adc158d5e047b3e4031eda1d133db51d"}
Nov 21 22:04:40 crc kubenswrapper[4727]: I1121 22:04:40.554984 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cn5mw" event={"ID":"c0244da3-c45a-415a-b21e-ce1d87b1f53c","Type":"ContainerStarted","Data":"1b9ad107c21ef4f534898de0a34eb604f4dbb84cba7a29b3e49e241f8587a9a7"}
Nov 21 22:04:40 crc kubenswrapper[4727]: I1121 22:04:40.578665 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cn5mw" podStartSLOduration=3.10968986 podStartE2EDuration="6.578642861s" podCreationTimestamp="2025-11-21 22:04:34 +0000 UTC" firstStartedPulling="2025-11-21 22:04:36.492902646 +0000 UTC m=+7081.679087720" lastFinishedPulling="2025-11-21 22:04:39.961855637 +0000 UTC m=+7085.148040721" observedRunningTime="2025-11-21 22:04:40.571694073 +0000 UTC m=+7085.757879117" watchObservedRunningTime="2025-11-21 22:04:40.578642861 +0000 UTC m=+7085.764827915"
Nov 21 22:04:43 crc kubenswrapper[4727]: I1121 22:04:43.335296 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 21 22:04:43 crc kubenswrapper[4727]: I1121 22:04:43.335764 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 21 22:04:44 crc kubenswrapper[4727]: I1121 22:04:44.872283 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cn5mw"
Nov 21 22:04:44 crc kubenswrapper[4727]: I1121 22:04:44.872621 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cn5mw"
Nov 21 22:04:44 crc kubenswrapper[4727]: I1121 22:04:44.965879 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cn5mw"
Nov 21 22:04:45 crc kubenswrapper[4727]: I1121 22:04:45.705565 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cn5mw"
Nov 21 22:04:45 crc kubenswrapper[4727]: I1121 22:04:45.794188 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cn5mw"]
Nov 21 22:04:47 crc kubenswrapper[4727]: I1121 22:04:47.642839 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cn5mw" podUID="c0244da3-c45a-415a-b21e-ce1d87b1f53c" containerName="registry-server" containerID="cri-o://1b9ad107c21ef4f534898de0a34eb604f4dbb84cba7a29b3e49e241f8587a9a7" gracePeriod=2
Nov 21 22:04:48 crc kubenswrapper[4727]: I1121 22:04:48.195468 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cn5mw"
Nov 21 22:04:48 crc kubenswrapper[4727]: I1121 22:04:48.348244 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0244da3-c45a-415a-b21e-ce1d87b1f53c-catalog-content\") pod \"c0244da3-c45a-415a-b21e-ce1d87b1f53c\" (UID: \"c0244da3-c45a-415a-b21e-ce1d87b1f53c\") "
Nov 21 22:04:48 crc kubenswrapper[4727]: I1121 22:04:48.348333 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0244da3-c45a-415a-b21e-ce1d87b1f53c-utilities\") pod \"c0244da3-c45a-415a-b21e-ce1d87b1f53c\" (UID: \"c0244da3-c45a-415a-b21e-ce1d87b1f53c\") "
Nov 21 22:04:48 crc kubenswrapper[4727]: I1121 22:04:48.348397 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fklv\" (UniqueName: \"kubernetes.io/projected/c0244da3-c45a-415a-b21e-ce1d87b1f53c-kube-api-access-8fklv\") pod \"c0244da3-c45a-415a-b21e-ce1d87b1f53c\" (UID: \"c0244da3-c45a-415a-b21e-ce1d87b1f53c\") "
Nov 21 22:04:48 crc kubenswrapper[4727]: I1121 22:04:48.349629 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0244da3-c45a-415a-b21e-ce1d87b1f53c-utilities" (OuterVolumeSpecName: "utilities") pod "c0244da3-c45a-415a-b21e-ce1d87b1f53c" (UID: "c0244da3-c45a-415a-b21e-ce1d87b1f53c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 21 22:04:48 crc kubenswrapper[4727]: I1121 22:04:48.360991 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0244da3-c45a-415a-b21e-ce1d87b1f53c-kube-api-access-8fklv" (OuterVolumeSpecName: "kube-api-access-8fklv") pod "c0244da3-c45a-415a-b21e-ce1d87b1f53c" (UID: "c0244da3-c45a-415a-b21e-ce1d87b1f53c"). InnerVolumeSpecName "kube-api-access-8fklv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 22:04:48 crc kubenswrapper[4727]: I1121 22:04:48.406403 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0244da3-c45a-415a-b21e-ce1d87b1f53c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c0244da3-c45a-415a-b21e-ce1d87b1f53c" (UID: "c0244da3-c45a-415a-b21e-ce1d87b1f53c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 21 22:04:48 crc kubenswrapper[4727]: I1121 22:04:48.451161 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fklv\" (UniqueName: \"kubernetes.io/projected/c0244da3-c45a-415a-b21e-ce1d87b1f53c-kube-api-access-8fklv\") on node \"crc\" DevicePath \"\""
Nov 21 22:04:48 crc kubenswrapper[4727]: I1121 22:04:48.451201 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0244da3-c45a-415a-b21e-ce1d87b1f53c-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 21 22:04:48 crc kubenswrapper[4727]: I1121 22:04:48.451230 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0244da3-c45a-415a-b21e-ce1d87b1f53c-utilities\") on node \"crc\" DevicePath \"\""
Nov 21 22:04:48 crc kubenswrapper[4727]: I1121 22:04:48.661468 4727 generic.go:334] "Generic (PLEG): container finished" podID="c0244da3-c45a-415a-b21e-ce1d87b1f53c" containerID="1b9ad107c21ef4f534898de0a34eb604f4dbb84cba7a29b3e49e241f8587a9a7" exitCode=0
Nov 21 22:04:48 crc kubenswrapper[4727]: I1121 22:04:48.661538 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cn5mw" event={"ID":"c0244da3-c45a-415a-b21e-ce1d87b1f53c","Type":"ContainerDied","Data":"1b9ad107c21ef4f534898de0a34eb604f4dbb84cba7a29b3e49e241f8587a9a7"}
Nov 21 22:04:48 crc kubenswrapper[4727]: I1121 22:04:48.661779 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cn5mw" event={"ID":"c0244da3-c45a-415a-b21e-ce1d87b1f53c","Type":"ContainerDied","Data":"38a7266777a2237f2526573d160bc8bda4e77bff83abfb2e79d3a7549862261d"}
Nov 21 22:04:48 crc kubenswrapper[4727]: I1121 22:04:48.661805 4727 scope.go:117] "RemoveContainer" containerID="1b9ad107c21ef4f534898de0a34eb604f4dbb84cba7a29b3e49e241f8587a9a7"
Nov 21 22:04:48 crc kubenswrapper[4727]: I1121 22:04:48.661556 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cn5mw"
Nov 21 22:04:48 crc kubenswrapper[4727]: I1121 22:04:48.691628 4727 scope.go:117] "RemoveContainer" containerID="50821a3b7ae0506fccd194188abf3247adc158d5e047b3e4031eda1d133db51d"
Nov 21 22:04:48 crc kubenswrapper[4727]: I1121 22:04:48.714655 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cn5mw"]
Nov 21 22:04:48 crc kubenswrapper[4727]: I1121 22:04:48.729202 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cn5mw"]
Nov 21 22:04:48 crc kubenswrapper[4727]: I1121 22:04:48.736457 4727 scope.go:117] "RemoveContainer" containerID="12af1014a674995515e0ede5e79db2ef3f43449263a466804c02d522fb77c3bd"
Nov 21 22:04:48 crc kubenswrapper[4727]: I1121 22:04:48.802638 4727 scope.go:117] "RemoveContainer" containerID="1b9ad107c21ef4f534898de0a34eb604f4dbb84cba7a29b3e49e241f8587a9a7"
Nov 21 22:04:48 crc kubenswrapper[4727]: E1121 22:04:48.803207 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b9ad107c21ef4f534898de0a34eb604f4dbb84cba7a29b3e49e241f8587a9a7\": container with ID starting with 1b9ad107c21ef4f534898de0a34eb604f4dbb84cba7a29b3e49e241f8587a9a7 not found: ID does not exist" containerID="1b9ad107c21ef4f534898de0a34eb604f4dbb84cba7a29b3e49e241f8587a9a7"
Nov 21 22:04:48 crc kubenswrapper[4727]: I1121 22:04:48.803238 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b9ad107c21ef4f534898de0a34eb604f4dbb84cba7a29b3e49e241f8587a9a7"} err="failed to get container status \"1b9ad107c21ef4f534898de0a34eb604f4dbb84cba7a29b3e49e241f8587a9a7\": rpc error: code = NotFound desc = could not find container \"1b9ad107c21ef4f534898de0a34eb604f4dbb84cba7a29b3e49e241f8587a9a7\": container with ID starting with 1b9ad107c21ef4f534898de0a34eb604f4dbb84cba7a29b3e49e241f8587a9a7 not found: ID does not exist"
Nov 21 22:04:48 crc kubenswrapper[4727]: I1121 22:04:48.803327 4727 scope.go:117] "RemoveContainer" containerID="50821a3b7ae0506fccd194188abf3247adc158d5e047b3e4031eda1d133db51d"
Nov 21 22:04:48 crc kubenswrapper[4727]: E1121 22:04:48.803582 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50821a3b7ae0506fccd194188abf3247adc158d5e047b3e4031eda1d133db51d\": container with ID starting with 50821a3b7ae0506fccd194188abf3247adc158d5e047b3e4031eda1d133db51d not found: ID does not exist" containerID="50821a3b7ae0506fccd194188abf3247adc158d5e047b3e4031eda1d133db51d"
Nov 21 22:04:48 crc kubenswrapper[4727]: I1121 22:04:48.803603 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50821a3b7ae0506fccd194188abf3247adc158d5e047b3e4031eda1d133db51d"} err="failed to get container status \"50821a3b7ae0506fccd194188abf3247adc158d5e047b3e4031eda1d133db51d\": rpc error: code = NotFound desc = could not find container \"50821a3b7ae0506fccd194188abf3247adc158d5e047b3e4031eda1d133db51d\": container with ID starting with 50821a3b7ae0506fccd194188abf3247adc158d5e047b3e4031eda1d133db51d not found: ID does not exist"
Nov 21 22:04:48 crc kubenswrapper[4727]: I1121 22:04:48.803616 4727 scope.go:117] "RemoveContainer" containerID="12af1014a674995515e0ede5e79db2ef3f43449263a466804c02d522fb77c3bd"
Nov 21 22:04:48 crc kubenswrapper[4727]: E1121 22:04:48.804167 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12af1014a674995515e0ede5e79db2ef3f43449263a466804c02d522fb77c3bd\": container with ID starting with 12af1014a674995515e0ede5e79db2ef3f43449263a466804c02d522fb77c3bd not found: ID does not exist" containerID="12af1014a674995515e0ede5e79db2ef3f43449263a466804c02d522fb77c3bd"
Nov 21 22:04:48 crc kubenswrapper[4727]: I1121 22:04:48.804188 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12af1014a674995515e0ede5e79db2ef3f43449263a466804c02d522fb77c3bd"} err="failed to get container status \"12af1014a674995515e0ede5e79db2ef3f43449263a466804c02d522fb77c3bd\": rpc error: code = NotFound desc = could not find container \"12af1014a674995515e0ede5e79db2ef3f43449263a466804c02d522fb77c3bd\": container with ID starting with 12af1014a674995515e0ede5e79db2ef3f43449263a466804c02d522fb77c3bd not found: ID does not exist"
Nov 21 22:04:49 crc kubenswrapper[4727]: I1121 22:04:49.520212 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0244da3-c45a-415a-b21e-ce1d87b1f53c" path="/var/lib/kubelet/pods/c0244da3-c45a-415a-b21e-ce1d87b1f53c/volumes"
Nov 21 22:05:13 crc kubenswrapper[4727]: I1121 22:05:13.335189 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 21 22:05:13 crc kubenswrapper[4727]: I1121 22:05:13.336062 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 21 22:05:13 crc kubenswrapper[4727]: I1121 22:05:13.336152 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk"
Nov 21 22:05:13 crc kubenswrapper[4727]: I1121 22:05:13.337324 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1d3a2f1c6ffa70c9bd0d4db5cfec82543efb579835ea1abea605a3349157de5d"} pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 21 22:05:13 crc kubenswrapper[4727]: I1121 22:05:13.337451 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" containerID="cri-o://1d3a2f1c6ffa70c9bd0d4db5cfec82543efb579835ea1abea605a3349157de5d" gracePeriod=600
Nov 21 22:05:13 crc kubenswrapper[4727]: E1121 22:05:13.467770 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42"
Nov 21 22:05:14 crc kubenswrapper[4727]: I1121 22:05:14.009928 4727 generic.go:334] "Generic (PLEG): container finished" podID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerID="1d3a2f1c6ffa70c9bd0d4db5cfec82543efb579835ea1abea605a3349157de5d" exitCode=0
Nov 21 22:05:14 crc kubenswrapper[4727]: I1121 22:05:14.010019 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" event={"ID":"b58aef8f-f223-47d8-a2e6-4a80aeeeec42","Type":"ContainerDied","Data":"1d3a2f1c6ffa70c9bd0d4db5cfec82543efb579835ea1abea605a3349157de5d"}
Nov 21 22:05:14 crc kubenswrapper[4727]: I1121 22:05:14.010285 4727 scope.go:117] "RemoveContainer" containerID="d924f42ebf9868b5bc79cf8d2700faf02bb037f988baf408255bee572edc3591"
Nov 21 22:05:14 crc kubenswrapper[4727]: I1121 22:05:14.011200 4727 scope.go:117] "RemoveContainer" containerID="1d3a2f1c6ffa70c9bd0d4db5cfec82543efb579835ea1abea605a3349157de5d"
Nov 21 22:05:14 crc kubenswrapper[4727]: E1121 22:05:14.011476 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42"
Nov 21 22:05:29 crc kubenswrapper[4727]: I1121 22:05:29.501195 4727 scope.go:117] "RemoveContainer" containerID="1d3a2f1c6ffa70c9bd0d4db5cfec82543efb579835ea1abea605a3349157de5d"
Nov 21 22:05:29 crc kubenswrapper[4727]: E1121 22:05:29.502669 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42"
Nov 21 22:05:35 crc kubenswrapper[4727]: I1121 22:05:35.090909 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-n2gcs"]
Nov 21 22:05:35 crc kubenswrapper[4727]: I1121 22:05:35.104752 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-n2gcs"]
Nov 21 22:05:35 crc kubenswrapper[4727]: I1121 22:05:35.524622 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37b384df-d8e9-4cb3-8d32-a2e1f4672d9e" path="/var/lib/kubelet/pods/37b384df-d8e9-4cb3-8d32-a2e1f4672d9e/volumes"
Nov 21 22:05:44 crc kubenswrapper[4727]: I1121 22:05:44.500151 4727 scope.go:117] "RemoveContainer" containerID="1d3a2f1c6ffa70c9bd0d4db5cfec82543efb579835ea1abea605a3349157de5d"
Nov 21 22:05:44 crc kubenswrapper[4727]: E1121 22:05:44.501295 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42"
Nov 21 22:05:59 crc kubenswrapper[4727]: I1121 22:05:59.500299 4727 scope.go:117] "RemoveContainer" containerID="1d3a2f1c6ffa70c9bd0d4db5cfec82543efb579835ea1abea605a3349157de5d"
Nov 21 22:05:59 crc kubenswrapper[4727]: E1121 22:05:59.501802 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42"
Nov 21 22:05:59 crc kubenswrapper[4727]: I1121 22:05:59.549222 4727 scope.go:117] "RemoveContainer" containerID="f3c76fd0e1b7f7705fbfdba091891faffe160684f6d5835abc798180181ba511"
Nov 21 22:06:14 crc kubenswrapper[4727]: I1121 22:06:14.499565 4727 scope.go:117] "RemoveContainer" containerID="1d3a2f1c6ffa70c9bd0d4db5cfec82543efb579835ea1abea605a3349157de5d"
Nov 21 22:06:14 crc kubenswrapper[4727]: E1121 22:06:14.500618 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42"
Nov 21 22:06:16 crc kubenswrapper[4727]: I1121 22:06:16.070123 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-dzf87"]
Nov 21 22:06:16 crc kubenswrapper[4727]: I1121 22:06:16.082053 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-dzf87"]
Nov 21 22:06:17 crc kubenswrapper[4727]: I1121 22:06:17.526549 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="297ffe25-5b73-4b41-b197-7016c31cb16b" path="/var/lib/kubelet/pods/297ffe25-5b73-4b41-b197-7016c31cb16b/volumes"
Nov 21 22:06:28 crc kubenswrapper[4727]: I1121 22:06:28.499909 4727 scope.go:117] "RemoveContainer" containerID="1d3a2f1c6ffa70c9bd0d4db5cfec82543efb579835ea1abea605a3349157de5d"
Nov 21 22:06:28 crc kubenswrapper[4727]: E1121 22:06:28.501076 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42"
Nov 21 22:06:42 crc kubenswrapper[4727]: I1121 22:06:42.500226 4727 scope.go:117] "RemoveContainer" containerID="1d3a2f1c6ffa70c9bd0d4db5cfec82543efb579835ea1abea605a3349157de5d"
Nov 21 22:06:42
crc kubenswrapper[4727]: E1121 22:06:42.501334 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 22:06:56 crc kubenswrapper[4727]: I1121 22:06:56.501214 4727 scope.go:117] "RemoveContainer" containerID="1d3a2f1c6ffa70c9bd0d4db5cfec82543efb579835ea1abea605a3349157de5d" Nov 21 22:06:56 crc kubenswrapper[4727]: E1121 22:06:56.502315 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 22:06:59 crc kubenswrapper[4727]: I1121 22:06:59.653490 4727 scope.go:117] "RemoveContainer" containerID="a609a5125c36ec5a3a0177b3d6cd5fe534501c20e9576bb5e3ffd40437a9e1a0" Nov 21 22:07:09 crc kubenswrapper[4727]: I1121 22:07:09.500507 4727 scope.go:117] "RemoveContainer" containerID="1d3a2f1c6ffa70c9bd0d4db5cfec82543efb579835ea1abea605a3349157de5d" Nov 21 22:07:09 crc kubenswrapper[4727]: E1121 22:07:09.501632 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" 
Nov 21 22:07:24 crc kubenswrapper[4727]: I1121 22:07:24.499297 4727 scope.go:117] "RemoveContainer" containerID="1d3a2f1c6ffa70c9bd0d4db5cfec82543efb579835ea1abea605a3349157de5d" Nov 21 22:07:24 crc kubenswrapper[4727]: E1121 22:07:24.500508 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 22:07:35 crc kubenswrapper[4727]: I1121 22:07:35.517291 4727 scope.go:117] "RemoveContainer" containerID="1d3a2f1c6ffa70c9bd0d4db5cfec82543efb579835ea1abea605a3349157de5d" Nov 21 22:07:35 crc kubenswrapper[4727]: E1121 22:07:35.518373 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 22:07:48 crc kubenswrapper[4727]: I1121 22:07:48.500039 4727 scope.go:117] "RemoveContainer" containerID="1d3a2f1c6ffa70c9bd0d4db5cfec82543efb579835ea1abea605a3349157de5d" Nov 21 22:07:48 crc kubenswrapper[4727]: E1121 22:07:48.501951 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" 
podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 22:08:03 crc kubenswrapper[4727]: I1121 22:08:03.502502 4727 scope.go:117] "RemoveContainer" containerID="1d3a2f1c6ffa70c9bd0d4db5cfec82543efb579835ea1abea605a3349157de5d" Nov 21 22:08:03 crc kubenswrapper[4727]: E1121 22:08:03.504404 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 22:08:14 crc kubenswrapper[4727]: I1121 22:08:14.500180 4727 scope.go:117] "RemoveContainer" containerID="1d3a2f1c6ffa70c9bd0d4db5cfec82543efb579835ea1abea605a3349157de5d" Nov 21 22:08:14 crc kubenswrapper[4727]: E1121 22:08:14.502114 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 22:08:25 crc kubenswrapper[4727]: I1121 22:08:25.515323 4727 scope.go:117] "RemoveContainer" containerID="1d3a2f1c6ffa70c9bd0d4db5cfec82543efb579835ea1abea605a3349157de5d" Nov 21 22:08:25 crc kubenswrapper[4727]: E1121 22:08:25.517127 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 22:08:37 crc kubenswrapper[4727]: I1121 22:08:37.499542 4727 scope.go:117] "RemoveContainer" containerID="1d3a2f1c6ffa70c9bd0d4db5cfec82543efb579835ea1abea605a3349157de5d" Nov 21 22:08:37 crc kubenswrapper[4727]: E1121 22:08:37.500859 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 22:08:48 crc kubenswrapper[4727]: I1121 22:08:48.500053 4727 scope.go:117] "RemoveContainer" containerID="1d3a2f1c6ffa70c9bd0d4db5cfec82543efb579835ea1abea605a3349157de5d" Nov 21 22:08:48 crc kubenswrapper[4727]: E1121 22:08:48.501221 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 22:08:59 crc kubenswrapper[4727]: I1121 22:08:59.499606 4727 scope.go:117] "RemoveContainer" containerID="1d3a2f1c6ffa70c9bd0d4db5cfec82543efb579835ea1abea605a3349157de5d" Nov 21 22:08:59 crc kubenswrapper[4727]: E1121 22:08:59.500698 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 22:09:13 crc kubenswrapper[4727]: I1121 22:09:13.499860 4727 scope.go:117] "RemoveContainer" containerID="1d3a2f1c6ffa70c9bd0d4db5cfec82543efb579835ea1abea605a3349157de5d" Nov 21 22:09:13 crc kubenswrapper[4727]: E1121 22:09:13.501413 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 22:09:25 crc kubenswrapper[4727]: I1121 22:09:25.173158 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6tjts"] Nov 21 22:09:25 crc kubenswrapper[4727]: E1121 22:09:25.174258 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0244da3-c45a-415a-b21e-ce1d87b1f53c" containerName="extract-utilities" Nov 21 22:09:25 crc kubenswrapper[4727]: I1121 22:09:25.174274 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0244da3-c45a-415a-b21e-ce1d87b1f53c" containerName="extract-utilities" Nov 21 22:09:25 crc kubenswrapper[4727]: E1121 22:09:25.174301 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0244da3-c45a-415a-b21e-ce1d87b1f53c" containerName="registry-server" Nov 21 22:09:25 crc kubenswrapper[4727]: I1121 22:09:25.174313 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0244da3-c45a-415a-b21e-ce1d87b1f53c" containerName="registry-server" Nov 21 22:09:25 crc kubenswrapper[4727]: E1121 22:09:25.174329 4727 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c0244da3-c45a-415a-b21e-ce1d87b1f53c" containerName="extract-content" Nov 21 22:09:25 crc kubenswrapper[4727]: I1121 22:09:25.174338 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0244da3-c45a-415a-b21e-ce1d87b1f53c" containerName="extract-content" Nov 21 22:09:25 crc kubenswrapper[4727]: I1121 22:09:25.174646 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0244da3-c45a-415a-b21e-ce1d87b1f53c" containerName="registry-server" Nov 21 22:09:25 crc kubenswrapper[4727]: I1121 22:09:25.176759 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6tjts" Nov 21 22:09:25 crc kubenswrapper[4727]: I1121 22:09:25.192227 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6tjts"] Nov 21 22:09:25 crc kubenswrapper[4727]: I1121 22:09:25.341813 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a16783a-8d1e-4684-af75-d3dcfb4b18e4-utilities\") pod \"redhat-marketplace-6tjts\" (UID: \"6a16783a-8d1e-4684-af75-d3dcfb4b18e4\") " pod="openshift-marketplace/redhat-marketplace-6tjts" Nov 21 22:09:25 crc kubenswrapper[4727]: I1121 22:09:25.342346 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a16783a-8d1e-4684-af75-d3dcfb4b18e4-catalog-content\") pod \"redhat-marketplace-6tjts\" (UID: \"6a16783a-8d1e-4684-af75-d3dcfb4b18e4\") " pod="openshift-marketplace/redhat-marketplace-6tjts" Nov 21 22:09:25 crc kubenswrapper[4727]: I1121 22:09:25.342734 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xnw5\" (UniqueName: \"kubernetes.io/projected/6a16783a-8d1e-4684-af75-d3dcfb4b18e4-kube-api-access-7xnw5\") pod \"redhat-marketplace-6tjts\" (UID: 
\"6a16783a-8d1e-4684-af75-d3dcfb4b18e4\") " pod="openshift-marketplace/redhat-marketplace-6tjts" Nov 21 22:09:25 crc kubenswrapper[4727]: I1121 22:09:25.445523 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xnw5\" (UniqueName: \"kubernetes.io/projected/6a16783a-8d1e-4684-af75-d3dcfb4b18e4-kube-api-access-7xnw5\") pod \"redhat-marketplace-6tjts\" (UID: \"6a16783a-8d1e-4684-af75-d3dcfb4b18e4\") " pod="openshift-marketplace/redhat-marketplace-6tjts" Nov 21 22:09:25 crc kubenswrapper[4727]: I1121 22:09:25.446163 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a16783a-8d1e-4684-af75-d3dcfb4b18e4-utilities\") pod \"redhat-marketplace-6tjts\" (UID: \"6a16783a-8d1e-4684-af75-d3dcfb4b18e4\") " pod="openshift-marketplace/redhat-marketplace-6tjts" Nov 21 22:09:25 crc kubenswrapper[4727]: I1121 22:09:25.446325 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a16783a-8d1e-4684-af75-d3dcfb4b18e4-catalog-content\") pod \"redhat-marketplace-6tjts\" (UID: \"6a16783a-8d1e-4684-af75-d3dcfb4b18e4\") " pod="openshift-marketplace/redhat-marketplace-6tjts" Nov 21 22:09:25 crc kubenswrapper[4727]: I1121 22:09:25.446825 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a16783a-8d1e-4684-af75-d3dcfb4b18e4-catalog-content\") pod \"redhat-marketplace-6tjts\" (UID: \"6a16783a-8d1e-4684-af75-d3dcfb4b18e4\") " pod="openshift-marketplace/redhat-marketplace-6tjts" Nov 21 22:09:25 crc kubenswrapper[4727]: I1121 22:09:25.447049 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a16783a-8d1e-4684-af75-d3dcfb4b18e4-utilities\") pod \"redhat-marketplace-6tjts\" (UID: \"6a16783a-8d1e-4684-af75-d3dcfb4b18e4\") " 
pod="openshift-marketplace/redhat-marketplace-6tjts" Nov 21 22:09:25 crc kubenswrapper[4727]: I1121 22:09:25.467448 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xnw5\" (UniqueName: \"kubernetes.io/projected/6a16783a-8d1e-4684-af75-d3dcfb4b18e4-kube-api-access-7xnw5\") pod \"redhat-marketplace-6tjts\" (UID: \"6a16783a-8d1e-4684-af75-d3dcfb4b18e4\") " pod="openshift-marketplace/redhat-marketplace-6tjts" Nov 21 22:09:25 crc kubenswrapper[4727]: I1121 22:09:25.506009 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6tjts" Nov 21 22:09:26 crc kubenswrapper[4727]: W1121 22:09:26.002050 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a16783a_8d1e_4684_af75_d3dcfb4b18e4.slice/crio-c51178c17140ea37c4ae5aec4e4e99704a8c5ba8e4ec2e4ff5316a7025538aa0 WatchSource:0}: Error finding container c51178c17140ea37c4ae5aec4e4e99704a8c5ba8e4ec2e4ff5316a7025538aa0: Status 404 returned error can't find the container with id c51178c17140ea37c4ae5aec4e4e99704a8c5ba8e4ec2e4ff5316a7025538aa0 Nov 21 22:09:26 crc kubenswrapper[4727]: I1121 22:09:26.002990 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6tjts"] Nov 21 22:09:26 crc kubenswrapper[4727]: I1121 22:09:26.773600 4727 generic.go:334] "Generic (PLEG): container finished" podID="6a16783a-8d1e-4684-af75-d3dcfb4b18e4" containerID="4c0d6bc2da9ba6791124a6bdcce980230f1c3598287e457935775baee3f53d39" exitCode=0 Nov 21 22:09:26 crc kubenswrapper[4727]: I1121 22:09:26.773852 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6tjts" event={"ID":"6a16783a-8d1e-4684-af75-d3dcfb4b18e4","Type":"ContainerDied","Data":"4c0d6bc2da9ba6791124a6bdcce980230f1c3598287e457935775baee3f53d39"} Nov 21 22:09:26 crc kubenswrapper[4727]: I1121 22:09:26.773878 4727 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6tjts" event={"ID":"6a16783a-8d1e-4684-af75-d3dcfb4b18e4","Type":"ContainerStarted","Data":"c51178c17140ea37c4ae5aec4e4e99704a8c5ba8e4ec2e4ff5316a7025538aa0"} Nov 21 22:09:26 crc kubenswrapper[4727]: I1121 22:09:26.776093 4727 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 21 22:09:27 crc kubenswrapper[4727]: I1121 22:09:27.500248 4727 scope.go:117] "RemoveContainer" containerID="1d3a2f1c6ffa70c9bd0d4db5cfec82543efb579835ea1abea605a3349157de5d" Nov 21 22:09:27 crc kubenswrapper[4727]: E1121 22:09:27.501013 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 22:09:27 crc kubenswrapper[4727]: I1121 22:09:27.842021 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6tjts" event={"ID":"6a16783a-8d1e-4684-af75-d3dcfb4b18e4","Type":"ContainerStarted","Data":"4b15590b07a0d3de6de9008281bb425d9a23def38eb68b932f32c55e78de997f"} Nov 21 22:09:29 crc kubenswrapper[4727]: I1121 22:09:29.868809 4727 generic.go:334] "Generic (PLEG): container finished" podID="6a16783a-8d1e-4684-af75-d3dcfb4b18e4" containerID="4b15590b07a0d3de6de9008281bb425d9a23def38eb68b932f32c55e78de997f" exitCode=0 Nov 21 22:09:29 crc kubenswrapper[4727]: I1121 22:09:29.868879 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6tjts" event={"ID":"6a16783a-8d1e-4684-af75-d3dcfb4b18e4","Type":"ContainerDied","Data":"4b15590b07a0d3de6de9008281bb425d9a23def38eb68b932f32c55e78de997f"} Nov 21 
22:09:30 crc kubenswrapper[4727]: I1121 22:09:30.896423 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6tjts" event={"ID":"6a16783a-8d1e-4684-af75-d3dcfb4b18e4","Type":"ContainerStarted","Data":"f82fe3d501971d1dac49d321d89a5d803615c2a7a1233bb9fa5f63275ecceb6f"} Nov 21 22:09:30 crc kubenswrapper[4727]: I1121 22:09:30.929365 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6tjts" podStartSLOduration=2.289685547 podStartE2EDuration="5.929340568s" podCreationTimestamp="2025-11-21 22:09:25 +0000 UTC" firstStartedPulling="2025-11-21 22:09:26.775885904 +0000 UTC m=+7371.962070938" lastFinishedPulling="2025-11-21 22:09:30.415540915 +0000 UTC m=+7375.601725959" observedRunningTime="2025-11-21 22:09:30.922491272 +0000 UTC m=+7376.108676316" watchObservedRunningTime="2025-11-21 22:09:30.929340568 +0000 UTC m=+7376.115525622" Nov 21 22:09:35 crc kubenswrapper[4727]: I1121 22:09:35.527044 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6tjts" Nov 21 22:09:35 crc kubenswrapper[4727]: I1121 22:09:35.528214 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6tjts" Nov 21 22:09:35 crc kubenswrapper[4727]: I1121 22:09:35.592583 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6tjts" Nov 21 22:09:36 crc kubenswrapper[4727]: I1121 22:09:36.059700 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6tjts" Nov 21 22:09:36 crc kubenswrapper[4727]: I1121 22:09:36.134449 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6tjts"] Nov 21 22:09:37 crc kubenswrapper[4727]: I1121 22:09:37.998585 4727 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openshift-marketplace/redhat-marketplace-6tjts" podUID="6a16783a-8d1e-4684-af75-d3dcfb4b18e4" containerName="registry-server" containerID="cri-o://f82fe3d501971d1dac49d321d89a5d803615c2a7a1233bb9fa5f63275ecceb6f" gracePeriod=2 Nov 21 22:09:38 crc kubenswrapper[4727]: I1121 22:09:38.499783 4727 scope.go:117] "RemoveContainer" containerID="1d3a2f1c6ffa70c9bd0d4db5cfec82543efb579835ea1abea605a3349157de5d" Nov 21 22:09:38 crc kubenswrapper[4727]: E1121 22:09:38.500361 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 22:09:38 crc kubenswrapper[4727]: I1121 22:09:38.684024 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6tjts" Nov 21 22:09:38 crc kubenswrapper[4727]: I1121 22:09:38.803004 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xnw5\" (UniqueName: \"kubernetes.io/projected/6a16783a-8d1e-4684-af75-d3dcfb4b18e4-kube-api-access-7xnw5\") pod \"6a16783a-8d1e-4684-af75-d3dcfb4b18e4\" (UID: \"6a16783a-8d1e-4684-af75-d3dcfb4b18e4\") " Nov 21 22:09:38 crc kubenswrapper[4727]: I1121 22:09:38.803115 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a16783a-8d1e-4684-af75-d3dcfb4b18e4-utilities\") pod \"6a16783a-8d1e-4684-af75-d3dcfb4b18e4\" (UID: \"6a16783a-8d1e-4684-af75-d3dcfb4b18e4\") " Nov 21 22:09:38 crc kubenswrapper[4727]: I1121 22:09:38.803240 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a16783a-8d1e-4684-af75-d3dcfb4b18e4-catalog-content\") pod \"6a16783a-8d1e-4684-af75-d3dcfb4b18e4\" (UID: \"6a16783a-8d1e-4684-af75-d3dcfb4b18e4\") " Nov 21 22:09:38 crc kubenswrapper[4727]: I1121 22:09:38.803869 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a16783a-8d1e-4684-af75-d3dcfb4b18e4-utilities" (OuterVolumeSpecName: "utilities") pod "6a16783a-8d1e-4684-af75-d3dcfb4b18e4" (UID: "6a16783a-8d1e-4684-af75-d3dcfb4b18e4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 22:09:38 crc kubenswrapper[4727]: I1121 22:09:38.820515 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a16783a-8d1e-4684-af75-d3dcfb4b18e4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6a16783a-8d1e-4684-af75-d3dcfb4b18e4" (UID: "6a16783a-8d1e-4684-af75-d3dcfb4b18e4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 22:09:38 crc kubenswrapper[4727]: I1121 22:09:38.826425 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a16783a-8d1e-4684-af75-d3dcfb4b18e4-kube-api-access-7xnw5" (OuterVolumeSpecName: "kube-api-access-7xnw5") pod "6a16783a-8d1e-4684-af75-d3dcfb4b18e4" (UID: "6a16783a-8d1e-4684-af75-d3dcfb4b18e4"). InnerVolumeSpecName "kube-api-access-7xnw5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 22:09:38 crc kubenswrapper[4727]: I1121 22:09:38.906314 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a16783a-8d1e-4684-af75-d3dcfb4b18e4-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 22:09:38 crc kubenswrapper[4727]: I1121 22:09:38.906363 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a16783a-8d1e-4684-af75-d3dcfb4b18e4-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 22:09:38 crc kubenswrapper[4727]: I1121 22:09:38.906389 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xnw5\" (UniqueName: \"kubernetes.io/projected/6a16783a-8d1e-4684-af75-d3dcfb4b18e4-kube-api-access-7xnw5\") on node \"crc\" DevicePath \"\"" Nov 21 22:09:39 crc kubenswrapper[4727]: I1121 22:09:39.015874 4727 generic.go:334] "Generic (PLEG): container finished" podID="6a16783a-8d1e-4684-af75-d3dcfb4b18e4" containerID="f82fe3d501971d1dac49d321d89a5d803615c2a7a1233bb9fa5f63275ecceb6f" exitCode=0 Nov 21 22:09:39 crc kubenswrapper[4727]: I1121 22:09:39.015931 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6tjts" event={"ID":"6a16783a-8d1e-4684-af75-d3dcfb4b18e4","Type":"ContainerDied","Data":"f82fe3d501971d1dac49d321d89a5d803615c2a7a1233bb9fa5f63275ecceb6f"} Nov 21 22:09:39 crc kubenswrapper[4727]: I1121 22:09:39.016013 4727 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6tjts" event={"ID":"6a16783a-8d1e-4684-af75-d3dcfb4b18e4","Type":"ContainerDied","Data":"c51178c17140ea37c4ae5aec4e4e99704a8c5ba8e4ec2e4ff5316a7025538aa0"} Nov 21 22:09:39 crc kubenswrapper[4727]: I1121 22:09:39.016018 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6tjts" Nov 21 22:09:39 crc kubenswrapper[4727]: I1121 22:09:39.016046 4727 scope.go:117] "RemoveContainer" containerID="f82fe3d501971d1dac49d321d89a5d803615c2a7a1233bb9fa5f63275ecceb6f" Nov 21 22:09:39 crc kubenswrapper[4727]: I1121 22:09:39.059679 4727 scope.go:117] "RemoveContainer" containerID="4b15590b07a0d3de6de9008281bb425d9a23def38eb68b932f32c55e78de997f" Nov 21 22:09:39 crc kubenswrapper[4727]: I1121 22:09:39.071366 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6tjts"] Nov 21 22:09:39 crc kubenswrapper[4727]: I1121 22:09:39.089368 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6tjts"] Nov 21 22:09:39 crc kubenswrapper[4727]: I1121 22:09:39.090954 4727 scope.go:117] "RemoveContainer" containerID="4c0d6bc2da9ba6791124a6bdcce980230f1c3598287e457935775baee3f53d39" Nov 21 22:09:39 crc kubenswrapper[4727]: I1121 22:09:39.149436 4727 scope.go:117] "RemoveContainer" containerID="f82fe3d501971d1dac49d321d89a5d803615c2a7a1233bb9fa5f63275ecceb6f" Nov 21 22:09:39 crc kubenswrapper[4727]: E1121 22:09:39.150035 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f82fe3d501971d1dac49d321d89a5d803615c2a7a1233bb9fa5f63275ecceb6f\": container with ID starting with f82fe3d501971d1dac49d321d89a5d803615c2a7a1233bb9fa5f63275ecceb6f not found: ID does not exist" containerID="f82fe3d501971d1dac49d321d89a5d803615c2a7a1233bb9fa5f63275ecceb6f" Nov 21 22:09:39 crc kubenswrapper[4727]: I1121 
22:09:39.150077 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f82fe3d501971d1dac49d321d89a5d803615c2a7a1233bb9fa5f63275ecceb6f"} err="failed to get container status \"f82fe3d501971d1dac49d321d89a5d803615c2a7a1233bb9fa5f63275ecceb6f\": rpc error: code = NotFound desc = could not find container \"f82fe3d501971d1dac49d321d89a5d803615c2a7a1233bb9fa5f63275ecceb6f\": container with ID starting with f82fe3d501971d1dac49d321d89a5d803615c2a7a1233bb9fa5f63275ecceb6f not found: ID does not exist" Nov 21 22:09:39 crc kubenswrapper[4727]: I1121 22:09:39.150110 4727 scope.go:117] "RemoveContainer" containerID="4b15590b07a0d3de6de9008281bb425d9a23def38eb68b932f32c55e78de997f" Nov 21 22:09:39 crc kubenswrapper[4727]: E1121 22:09:39.150618 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b15590b07a0d3de6de9008281bb425d9a23def38eb68b932f32c55e78de997f\": container with ID starting with 4b15590b07a0d3de6de9008281bb425d9a23def38eb68b932f32c55e78de997f not found: ID does not exist" containerID="4b15590b07a0d3de6de9008281bb425d9a23def38eb68b932f32c55e78de997f" Nov 21 22:09:39 crc kubenswrapper[4727]: I1121 22:09:39.150658 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b15590b07a0d3de6de9008281bb425d9a23def38eb68b932f32c55e78de997f"} err="failed to get container status \"4b15590b07a0d3de6de9008281bb425d9a23def38eb68b932f32c55e78de997f\": rpc error: code = NotFound desc = could not find container \"4b15590b07a0d3de6de9008281bb425d9a23def38eb68b932f32c55e78de997f\": container with ID starting with 4b15590b07a0d3de6de9008281bb425d9a23def38eb68b932f32c55e78de997f not found: ID does not exist" Nov 21 22:09:39 crc kubenswrapper[4727]: I1121 22:09:39.150690 4727 scope.go:117] "RemoveContainer" containerID="4c0d6bc2da9ba6791124a6bdcce980230f1c3598287e457935775baee3f53d39" Nov 21 22:09:39 crc 
kubenswrapper[4727]: E1121 22:09:39.151232 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c0d6bc2da9ba6791124a6bdcce980230f1c3598287e457935775baee3f53d39\": container with ID starting with 4c0d6bc2da9ba6791124a6bdcce980230f1c3598287e457935775baee3f53d39 not found: ID does not exist" containerID="4c0d6bc2da9ba6791124a6bdcce980230f1c3598287e457935775baee3f53d39" Nov 21 22:09:39 crc kubenswrapper[4727]: I1121 22:09:39.151294 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c0d6bc2da9ba6791124a6bdcce980230f1c3598287e457935775baee3f53d39"} err="failed to get container status \"4c0d6bc2da9ba6791124a6bdcce980230f1c3598287e457935775baee3f53d39\": rpc error: code = NotFound desc = could not find container \"4c0d6bc2da9ba6791124a6bdcce980230f1c3598287e457935775baee3f53d39\": container with ID starting with 4c0d6bc2da9ba6791124a6bdcce980230f1c3598287e457935775baee3f53d39 not found: ID does not exist" Nov 21 22:09:39 crc kubenswrapper[4727]: I1121 22:09:39.522020 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a16783a-8d1e-4684-af75-d3dcfb4b18e4" path="/var/lib/kubelet/pods/6a16783a-8d1e-4684-af75-d3dcfb4b18e4/volumes" Nov 21 22:09:52 crc kubenswrapper[4727]: I1121 22:09:52.500393 4727 scope.go:117] "RemoveContainer" containerID="1d3a2f1c6ffa70c9bd0d4db5cfec82543efb579835ea1abea605a3349157de5d" Nov 21 22:09:52 crc kubenswrapper[4727]: E1121 22:09:52.501509 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 22:10:05 crc 
kubenswrapper[4727]: I1121 22:10:05.514626 4727 scope.go:117] "RemoveContainer" containerID="1d3a2f1c6ffa70c9bd0d4db5cfec82543efb579835ea1abea605a3349157de5d" Nov 21 22:10:05 crc kubenswrapper[4727]: E1121 22:10:05.516630 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 22:10:16 crc kubenswrapper[4727]: I1121 22:10:16.500844 4727 scope.go:117] "RemoveContainer" containerID="1d3a2f1c6ffa70c9bd0d4db5cfec82543efb579835ea1abea605a3349157de5d" Nov 21 22:10:17 crc kubenswrapper[4727]: I1121 22:10:17.582600 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" event={"ID":"b58aef8f-f223-47d8-a2e6-4a80aeeeec42","Type":"ContainerStarted","Data":"b4a3f569da27b516215f907523b106a29bef051168fc2c26df382fff2b9c5db7"} Nov 21 22:10:27 crc kubenswrapper[4727]: I1121 22:10:27.082398 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fk76k"] Nov 21 22:10:27 crc kubenswrapper[4727]: E1121 22:10:27.084255 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a16783a-8d1e-4684-af75-d3dcfb4b18e4" containerName="extract-content" Nov 21 22:10:27 crc kubenswrapper[4727]: I1121 22:10:27.084289 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a16783a-8d1e-4684-af75-d3dcfb4b18e4" containerName="extract-content" Nov 21 22:10:27 crc kubenswrapper[4727]: E1121 22:10:27.084355 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a16783a-8d1e-4684-af75-d3dcfb4b18e4" containerName="registry-server" Nov 21 22:10:27 crc kubenswrapper[4727]: I1121 
22:10:27.084374 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a16783a-8d1e-4684-af75-d3dcfb4b18e4" containerName="registry-server" Nov 21 22:10:27 crc kubenswrapper[4727]: E1121 22:10:27.084438 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a16783a-8d1e-4684-af75-d3dcfb4b18e4" containerName="extract-utilities" Nov 21 22:10:27 crc kubenswrapper[4727]: I1121 22:10:27.084457 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a16783a-8d1e-4684-af75-d3dcfb4b18e4" containerName="extract-utilities" Nov 21 22:10:27 crc kubenswrapper[4727]: I1121 22:10:27.085033 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a16783a-8d1e-4684-af75-d3dcfb4b18e4" containerName="registry-server" Nov 21 22:10:27 crc kubenswrapper[4727]: I1121 22:10:27.090101 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fk76k" Nov 21 22:10:27 crc kubenswrapper[4727]: I1121 22:10:27.094684 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fk76k"] Nov 21 22:10:27 crc kubenswrapper[4727]: I1121 22:10:27.275585 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e191f857-1130-4d37-a182-9e40c1af4380-catalog-content\") pod \"redhat-operators-fk76k\" (UID: \"e191f857-1130-4d37-a182-9e40c1af4380\") " pod="openshift-marketplace/redhat-operators-fk76k" Nov 21 22:10:27 crc kubenswrapper[4727]: I1121 22:10:27.275908 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txj9z\" (UniqueName: \"kubernetes.io/projected/e191f857-1130-4d37-a182-9e40c1af4380-kube-api-access-txj9z\") pod \"redhat-operators-fk76k\" (UID: \"e191f857-1130-4d37-a182-9e40c1af4380\") " pod="openshift-marketplace/redhat-operators-fk76k" Nov 21 22:10:27 crc kubenswrapper[4727]: I1121 
22:10:27.276091 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e191f857-1130-4d37-a182-9e40c1af4380-utilities\") pod \"redhat-operators-fk76k\" (UID: \"e191f857-1130-4d37-a182-9e40c1af4380\") " pod="openshift-marketplace/redhat-operators-fk76k" Nov 21 22:10:27 crc kubenswrapper[4727]: I1121 22:10:27.378930 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e191f857-1130-4d37-a182-9e40c1af4380-catalog-content\") pod \"redhat-operators-fk76k\" (UID: \"e191f857-1130-4d37-a182-9e40c1af4380\") " pod="openshift-marketplace/redhat-operators-fk76k" Nov 21 22:10:27 crc kubenswrapper[4727]: I1121 22:10:27.379266 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txj9z\" (UniqueName: \"kubernetes.io/projected/e191f857-1130-4d37-a182-9e40c1af4380-kube-api-access-txj9z\") pod \"redhat-operators-fk76k\" (UID: \"e191f857-1130-4d37-a182-9e40c1af4380\") " pod="openshift-marketplace/redhat-operators-fk76k" Nov 21 22:10:27 crc kubenswrapper[4727]: I1121 22:10:27.379320 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e191f857-1130-4d37-a182-9e40c1af4380-utilities\") pod \"redhat-operators-fk76k\" (UID: \"e191f857-1130-4d37-a182-9e40c1af4380\") " pod="openshift-marketplace/redhat-operators-fk76k" Nov 21 22:10:27 crc kubenswrapper[4727]: I1121 22:10:27.379744 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e191f857-1130-4d37-a182-9e40c1af4380-catalog-content\") pod \"redhat-operators-fk76k\" (UID: \"e191f857-1130-4d37-a182-9e40c1af4380\") " pod="openshift-marketplace/redhat-operators-fk76k" Nov 21 22:10:27 crc kubenswrapper[4727]: I1121 22:10:27.379871 4727 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e191f857-1130-4d37-a182-9e40c1af4380-utilities\") pod \"redhat-operators-fk76k\" (UID: \"e191f857-1130-4d37-a182-9e40c1af4380\") " pod="openshift-marketplace/redhat-operators-fk76k" Nov 21 22:10:27 crc kubenswrapper[4727]: I1121 22:10:27.401568 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txj9z\" (UniqueName: \"kubernetes.io/projected/e191f857-1130-4d37-a182-9e40c1af4380-kube-api-access-txj9z\") pod \"redhat-operators-fk76k\" (UID: \"e191f857-1130-4d37-a182-9e40c1af4380\") " pod="openshift-marketplace/redhat-operators-fk76k" Nov 21 22:10:27 crc kubenswrapper[4727]: I1121 22:10:27.427760 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fk76k" Nov 21 22:10:27 crc kubenswrapper[4727]: I1121 22:10:27.907939 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fk76k"] Nov 21 22:10:28 crc kubenswrapper[4727]: I1121 22:10:28.736120 4727 generic.go:334] "Generic (PLEG): container finished" podID="e191f857-1130-4d37-a182-9e40c1af4380" containerID="591862a3ec3cd9a7746fd42dae1a87c0701720516778b97e26869b6a5e4e28bc" exitCode=0 Nov 21 22:10:28 crc kubenswrapper[4727]: I1121 22:10:28.736187 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fk76k" event={"ID":"e191f857-1130-4d37-a182-9e40c1af4380","Type":"ContainerDied","Data":"591862a3ec3cd9a7746fd42dae1a87c0701720516778b97e26869b6a5e4e28bc"} Nov 21 22:10:28 crc kubenswrapper[4727]: I1121 22:10:28.736225 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fk76k" event={"ID":"e191f857-1130-4d37-a182-9e40c1af4380","Type":"ContainerStarted","Data":"43e892d1c0d6cde381ec84d048ec05fa55ef81b3216c909c09f5e606db0358c2"} Nov 21 22:10:30 crc kubenswrapper[4727]: I1121 
22:10:30.761195 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fk76k" event={"ID":"e191f857-1130-4d37-a182-9e40c1af4380","Type":"ContainerStarted","Data":"25d0e95e6294f8f223b0e1ee0aebd9b2cead7a1d026a567b8e3f41307d5478bd"} Nov 21 22:10:35 crc kubenswrapper[4727]: I1121 22:10:35.845404 4727 generic.go:334] "Generic (PLEG): container finished" podID="e191f857-1130-4d37-a182-9e40c1af4380" containerID="25d0e95e6294f8f223b0e1ee0aebd9b2cead7a1d026a567b8e3f41307d5478bd" exitCode=0 Nov 21 22:10:35 crc kubenswrapper[4727]: I1121 22:10:35.845540 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fk76k" event={"ID":"e191f857-1130-4d37-a182-9e40c1af4380","Type":"ContainerDied","Data":"25d0e95e6294f8f223b0e1ee0aebd9b2cead7a1d026a567b8e3f41307d5478bd"} Nov 21 22:10:36 crc kubenswrapper[4727]: I1121 22:10:36.860577 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fk76k" event={"ID":"e191f857-1130-4d37-a182-9e40c1af4380","Type":"ContainerStarted","Data":"4e2bb152b51071b249d8311c7f560c85e993fbd29d501543987d62f953ff4a33"} Nov 21 22:10:36 crc kubenswrapper[4727]: I1121 22:10:36.896751 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fk76k" podStartSLOduration=2.329438406 podStartE2EDuration="9.896733305s" podCreationTimestamp="2025-11-21 22:10:27 +0000 UTC" firstStartedPulling="2025-11-21 22:10:28.739340207 +0000 UTC m=+7433.925525271" lastFinishedPulling="2025-11-21 22:10:36.306635086 +0000 UTC m=+7441.492820170" observedRunningTime="2025-11-21 22:10:36.884081209 +0000 UTC m=+7442.070266253" watchObservedRunningTime="2025-11-21 22:10:36.896733305 +0000 UTC m=+7442.082918349" Nov 21 22:10:37 crc kubenswrapper[4727]: I1121 22:10:37.428386 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fk76k" Nov 21 22:10:37 
crc kubenswrapper[4727]: I1121 22:10:37.428443 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fk76k" Nov 21 22:10:38 crc kubenswrapper[4727]: I1121 22:10:38.499972 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fk76k" podUID="e191f857-1130-4d37-a182-9e40c1af4380" containerName="registry-server" probeResult="failure" output=< Nov 21 22:10:38 crc kubenswrapper[4727]: timeout: failed to connect service ":50051" within 1s Nov 21 22:10:38 crc kubenswrapper[4727]: > Nov 21 22:10:48 crc kubenswrapper[4727]: I1121 22:10:48.490680 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fk76k" podUID="e191f857-1130-4d37-a182-9e40c1af4380" containerName="registry-server" probeResult="failure" output=< Nov 21 22:10:48 crc kubenswrapper[4727]: timeout: failed to connect service ":50051" within 1s Nov 21 22:10:48 crc kubenswrapper[4727]: > Nov 21 22:10:58 crc kubenswrapper[4727]: I1121 22:10:58.497343 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fk76k" podUID="e191f857-1130-4d37-a182-9e40c1af4380" containerName="registry-server" probeResult="failure" output=< Nov 21 22:10:58 crc kubenswrapper[4727]: timeout: failed to connect service ":50051" within 1s Nov 21 22:10:58 crc kubenswrapper[4727]: > Nov 21 22:11:07 crc kubenswrapper[4727]: I1121 22:11:07.535343 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fk76k" Nov 21 22:11:07 crc kubenswrapper[4727]: I1121 22:11:07.581418 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fk76k" Nov 21 22:11:07 crc kubenswrapper[4727]: I1121 22:11:07.779267 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fk76k"] Nov 21 22:11:09 crc 
kubenswrapper[4727]: I1121 22:11:09.264744 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fk76k" podUID="e191f857-1130-4d37-a182-9e40c1af4380" containerName="registry-server" containerID="cri-o://4e2bb152b51071b249d8311c7f560c85e993fbd29d501543987d62f953ff4a33" gracePeriod=2 Nov 21 22:11:09 crc kubenswrapper[4727]: I1121 22:11:09.831868 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fk76k" Nov 21 22:11:09 crc kubenswrapper[4727]: I1121 22:11:09.935150 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e191f857-1130-4d37-a182-9e40c1af4380-catalog-content\") pod \"e191f857-1130-4d37-a182-9e40c1af4380\" (UID: \"e191f857-1130-4d37-a182-9e40c1af4380\") " Nov 21 22:11:09 crc kubenswrapper[4727]: I1121 22:11:09.936682 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txj9z\" (UniqueName: \"kubernetes.io/projected/e191f857-1130-4d37-a182-9e40c1af4380-kube-api-access-txj9z\") pod \"e191f857-1130-4d37-a182-9e40c1af4380\" (UID: \"e191f857-1130-4d37-a182-9e40c1af4380\") " Nov 21 22:11:09 crc kubenswrapper[4727]: I1121 22:11:09.936871 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e191f857-1130-4d37-a182-9e40c1af4380-utilities\") pod \"e191f857-1130-4d37-a182-9e40c1af4380\" (UID: \"e191f857-1130-4d37-a182-9e40c1af4380\") " Nov 21 22:11:09 crc kubenswrapper[4727]: I1121 22:11:09.937974 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e191f857-1130-4d37-a182-9e40c1af4380-utilities" (OuterVolumeSpecName: "utilities") pod "e191f857-1130-4d37-a182-9e40c1af4380" (UID: "e191f857-1130-4d37-a182-9e40c1af4380"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 22:11:09 crc kubenswrapper[4727]: I1121 22:11:09.948123 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e191f857-1130-4d37-a182-9e40c1af4380-kube-api-access-txj9z" (OuterVolumeSpecName: "kube-api-access-txj9z") pod "e191f857-1130-4d37-a182-9e40c1af4380" (UID: "e191f857-1130-4d37-a182-9e40c1af4380"). InnerVolumeSpecName "kube-api-access-txj9z". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 22:11:10 crc kubenswrapper[4727]: I1121 22:11:10.034839 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e191f857-1130-4d37-a182-9e40c1af4380-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e191f857-1130-4d37-a182-9e40c1af4380" (UID: "e191f857-1130-4d37-a182-9e40c1af4380"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 22:11:10 crc kubenswrapper[4727]: I1121 22:11:10.040856 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e191f857-1130-4d37-a182-9e40c1af4380-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 22:11:10 crc kubenswrapper[4727]: I1121 22:11:10.040903 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e191f857-1130-4d37-a182-9e40c1af4380-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 22:11:10 crc kubenswrapper[4727]: I1121 22:11:10.040918 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txj9z\" (UniqueName: \"kubernetes.io/projected/e191f857-1130-4d37-a182-9e40c1af4380-kube-api-access-txj9z\") on node \"crc\" DevicePath \"\"" Nov 21 22:11:10 crc kubenswrapper[4727]: I1121 22:11:10.281976 4727 generic.go:334] "Generic (PLEG): container finished" podID="e191f857-1130-4d37-a182-9e40c1af4380" 
containerID="4e2bb152b51071b249d8311c7f560c85e993fbd29d501543987d62f953ff4a33" exitCode=0 Nov 21 22:11:10 crc kubenswrapper[4727]: I1121 22:11:10.282019 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fk76k" event={"ID":"e191f857-1130-4d37-a182-9e40c1af4380","Type":"ContainerDied","Data":"4e2bb152b51071b249d8311c7f560c85e993fbd29d501543987d62f953ff4a33"} Nov 21 22:11:10 crc kubenswrapper[4727]: I1121 22:11:10.282046 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fk76k" event={"ID":"e191f857-1130-4d37-a182-9e40c1af4380","Type":"ContainerDied","Data":"43e892d1c0d6cde381ec84d048ec05fa55ef81b3216c909c09f5e606db0358c2"} Nov 21 22:11:10 crc kubenswrapper[4727]: I1121 22:11:10.282062 4727 scope.go:117] "RemoveContainer" containerID="4e2bb152b51071b249d8311c7f560c85e993fbd29d501543987d62f953ff4a33" Nov 21 22:11:10 crc kubenswrapper[4727]: I1121 22:11:10.282185 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fk76k" Nov 21 22:11:10 crc kubenswrapper[4727]: I1121 22:11:10.329163 4727 scope.go:117] "RemoveContainer" containerID="25d0e95e6294f8f223b0e1ee0aebd9b2cead7a1d026a567b8e3f41307d5478bd" Nov 21 22:11:10 crc kubenswrapper[4727]: I1121 22:11:10.329914 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fk76k"] Nov 21 22:11:10 crc kubenswrapper[4727]: I1121 22:11:10.351266 4727 scope.go:117] "RemoveContainer" containerID="591862a3ec3cd9a7746fd42dae1a87c0701720516778b97e26869b6a5e4e28bc" Nov 21 22:11:10 crc kubenswrapper[4727]: I1121 22:11:10.353216 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fk76k"] Nov 21 22:11:10 crc kubenswrapper[4727]: I1121 22:11:10.594803 4727 scope.go:117] "RemoveContainer" containerID="4e2bb152b51071b249d8311c7f560c85e993fbd29d501543987d62f953ff4a33" Nov 21 22:11:10 crc kubenswrapper[4727]: E1121 22:11:10.595292 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e2bb152b51071b249d8311c7f560c85e993fbd29d501543987d62f953ff4a33\": container with ID starting with 4e2bb152b51071b249d8311c7f560c85e993fbd29d501543987d62f953ff4a33 not found: ID does not exist" containerID="4e2bb152b51071b249d8311c7f560c85e993fbd29d501543987d62f953ff4a33" Nov 21 22:11:10 crc kubenswrapper[4727]: I1121 22:11:10.595324 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e2bb152b51071b249d8311c7f560c85e993fbd29d501543987d62f953ff4a33"} err="failed to get container status \"4e2bb152b51071b249d8311c7f560c85e993fbd29d501543987d62f953ff4a33\": rpc error: code = NotFound desc = could not find container \"4e2bb152b51071b249d8311c7f560c85e993fbd29d501543987d62f953ff4a33\": container with ID starting with 4e2bb152b51071b249d8311c7f560c85e993fbd29d501543987d62f953ff4a33 not found: ID does 
not exist" Nov 21 22:11:10 crc kubenswrapper[4727]: I1121 22:11:10.595346 4727 scope.go:117] "RemoveContainer" containerID="25d0e95e6294f8f223b0e1ee0aebd9b2cead7a1d026a567b8e3f41307d5478bd" Nov 21 22:11:10 crc kubenswrapper[4727]: E1121 22:11:10.595712 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25d0e95e6294f8f223b0e1ee0aebd9b2cead7a1d026a567b8e3f41307d5478bd\": container with ID starting with 25d0e95e6294f8f223b0e1ee0aebd9b2cead7a1d026a567b8e3f41307d5478bd not found: ID does not exist" containerID="25d0e95e6294f8f223b0e1ee0aebd9b2cead7a1d026a567b8e3f41307d5478bd" Nov 21 22:11:10 crc kubenswrapper[4727]: I1121 22:11:10.595739 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25d0e95e6294f8f223b0e1ee0aebd9b2cead7a1d026a567b8e3f41307d5478bd"} err="failed to get container status \"25d0e95e6294f8f223b0e1ee0aebd9b2cead7a1d026a567b8e3f41307d5478bd\": rpc error: code = NotFound desc = could not find container \"25d0e95e6294f8f223b0e1ee0aebd9b2cead7a1d026a567b8e3f41307d5478bd\": container with ID starting with 25d0e95e6294f8f223b0e1ee0aebd9b2cead7a1d026a567b8e3f41307d5478bd not found: ID does not exist" Nov 21 22:11:10 crc kubenswrapper[4727]: I1121 22:11:10.595752 4727 scope.go:117] "RemoveContainer" containerID="591862a3ec3cd9a7746fd42dae1a87c0701720516778b97e26869b6a5e4e28bc" Nov 21 22:11:10 crc kubenswrapper[4727]: E1121 22:11:10.595952 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"591862a3ec3cd9a7746fd42dae1a87c0701720516778b97e26869b6a5e4e28bc\": container with ID starting with 591862a3ec3cd9a7746fd42dae1a87c0701720516778b97e26869b6a5e4e28bc not found: ID does not exist" containerID="591862a3ec3cd9a7746fd42dae1a87c0701720516778b97e26869b6a5e4e28bc" Nov 21 22:11:10 crc kubenswrapper[4727]: I1121 22:11:10.595987 4727 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"591862a3ec3cd9a7746fd42dae1a87c0701720516778b97e26869b6a5e4e28bc"} err="failed to get container status \"591862a3ec3cd9a7746fd42dae1a87c0701720516778b97e26869b6a5e4e28bc\": rpc error: code = NotFound desc = could not find container \"591862a3ec3cd9a7746fd42dae1a87c0701720516778b97e26869b6a5e4e28bc\": container with ID starting with 591862a3ec3cd9a7746fd42dae1a87c0701720516778b97e26869b6a5e4e28bc not found: ID does not exist" Nov 21 22:11:11 crc kubenswrapper[4727]: I1121 22:11:11.521062 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e191f857-1130-4d37-a182-9e40c1af4380" path="/var/lib/kubelet/pods/e191f857-1130-4d37-a182-9e40c1af4380/volumes" Nov 21 22:12:43 crc kubenswrapper[4727]: I1121 22:12:43.336133 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 22:12:43 crc kubenswrapper[4727]: I1121 22:12:43.336715 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 22:13:13 crc kubenswrapper[4727]: I1121 22:13:13.335677 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 22:13:13 crc kubenswrapper[4727]: I1121 22:13:13.336244 4727 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 22:13:43 crc kubenswrapper[4727]: I1121 22:13:43.335874 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 22:13:43 crc kubenswrapper[4727]: I1121 22:13:43.336594 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 22:13:43 crc kubenswrapper[4727]: I1121 22:13:43.336656 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" Nov 21 22:13:43 crc kubenswrapper[4727]: I1121 22:13:43.337793 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b4a3f569da27b516215f907523b106a29bef051168fc2c26df382fff2b9c5db7"} pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 22:13:43 crc kubenswrapper[4727]: I1121 22:13:43.337904 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" 
containerID="cri-o://b4a3f569da27b516215f907523b106a29bef051168fc2c26df382fff2b9c5db7" gracePeriod=600 Nov 21 22:13:43 crc kubenswrapper[4727]: I1121 22:13:43.723567 4727 generic.go:334] "Generic (PLEG): container finished" podID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerID="b4a3f569da27b516215f907523b106a29bef051168fc2c26df382fff2b9c5db7" exitCode=0 Nov 21 22:13:43 crc kubenswrapper[4727]: I1121 22:13:43.723632 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" event={"ID":"b58aef8f-f223-47d8-a2e6-4a80aeeeec42","Type":"ContainerDied","Data":"b4a3f569da27b516215f907523b106a29bef051168fc2c26df382fff2b9c5db7"} Nov 21 22:13:43 crc kubenswrapper[4727]: I1121 22:13:43.723906 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" event={"ID":"b58aef8f-f223-47d8-a2e6-4a80aeeeec42","Type":"ContainerStarted","Data":"5173c9ead4dd963bd522a07cca486adefc9357dff12dfeac94051c26a76649cc"} Nov 21 22:13:43 crc kubenswrapper[4727]: I1121 22:13:43.723928 4727 scope.go:117] "RemoveContainer" containerID="1d3a2f1c6ffa70c9bd0d4db5cfec82543efb579835ea1abea605a3349157de5d" Nov 21 22:14:48 crc kubenswrapper[4727]: I1121 22:14:48.959778 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dgzgc"] Nov 21 22:14:48 crc kubenswrapper[4727]: E1121 22:14:48.961512 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e191f857-1130-4d37-a182-9e40c1af4380" containerName="extract-utilities" Nov 21 22:14:48 crc kubenswrapper[4727]: I1121 22:14:48.961546 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="e191f857-1130-4d37-a182-9e40c1af4380" containerName="extract-utilities" Nov 21 22:14:48 crc kubenswrapper[4727]: E1121 22:14:48.961605 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e191f857-1130-4d37-a182-9e40c1af4380" containerName="extract-content" Nov 21 
22:14:48 crc kubenswrapper[4727]: I1121 22:14:48.961628 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="e191f857-1130-4d37-a182-9e40c1af4380" containerName="extract-content"
Nov 21 22:14:48 crc kubenswrapper[4727]: E1121 22:14:48.961668 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e191f857-1130-4d37-a182-9e40c1af4380" containerName="registry-server"
Nov 21 22:14:48 crc kubenswrapper[4727]: I1121 22:14:48.961689 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="e191f857-1130-4d37-a182-9e40c1af4380" containerName="registry-server"
Nov 21 22:14:48 crc kubenswrapper[4727]: I1121 22:14:48.962230 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="e191f857-1130-4d37-a182-9e40c1af4380" containerName="registry-server"
Nov 21 22:14:48 crc kubenswrapper[4727]: I1121 22:14:48.966206 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dgzgc"
Nov 21 22:14:48 crc kubenswrapper[4727]: I1121 22:14:48.978593 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dgzgc"]
Nov 21 22:14:48 crc kubenswrapper[4727]: I1121 22:14:48.983881 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0be2eb0c-0e51-4165-955a-0af4dffb05c6-utilities\") pod \"community-operators-dgzgc\" (UID: \"0be2eb0c-0e51-4165-955a-0af4dffb05c6\") " pod="openshift-marketplace/community-operators-dgzgc"
Nov 21 22:14:48 crc kubenswrapper[4727]: I1121 22:14:48.984092 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0be2eb0c-0e51-4165-955a-0af4dffb05c6-catalog-content\") pod \"community-operators-dgzgc\" (UID: \"0be2eb0c-0e51-4165-955a-0af4dffb05c6\") " pod="openshift-marketplace/community-operators-dgzgc"
Nov 21 22:14:48 crc kubenswrapper[4727]: I1121 22:14:48.984222 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhqdz\" (UniqueName: \"kubernetes.io/projected/0be2eb0c-0e51-4165-955a-0af4dffb05c6-kube-api-access-mhqdz\") pod \"community-operators-dgzgc\" (UID: \"0be2eb0c-0e51-4165-955a-0af4dffb05c6\") " pod="openshift-marketplace/community-operators-dgzgc"
Nov 21 22:14:49 crc kubenswrapper[4727]: I1121 22:14:49.092933 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0be2eb0c-0e51-4165-955a-0af4dffb05c6-utilities\") pod \"community-operators-dgzgc\" (UID: \"0be2eb0c-0e51-4165-955a-0af4dffb05c6\") " pod="openshift-marketplace/community-operators-dgzgc"
Nov 21 22:14:49 crc kubenswrapper[4727]: I1121 22:14:49.093010 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0be2eb0c-0e51-4165-955a-0af4dffb05c6-catalog-content\") pod \"community-operators-dgzgc\" (UID: \"0be2eb0c-0e51-4165-955a-0af4dffb05c6\") " pod="openshift-marketplace/community-operators-dgzgc"
Nov 21 22:14:49 crc kubenswrapper[4727]: I1121 22:14:49.093105 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhqdz\" (UniqueName: \"kubernetes.io/projected/0be2eb0c-0e51-4165-955a-0af4dffb05c6-kube-api-access-mhqdz\") pod \"community-operators-dgzgc\" (UID: \"0be2eb0c-0e51-4165-955a-0af4dffb05c6\") " pod="openshift-marketplace/community-operators-dgzgc"
Nov 21 22:14:49 crc kubenswrapper[4727]: I1121 22:14:49.093833 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0be2eb0c-0e51-4165-955a-0af4dffb05c6-utilities\") pod \"community-operators-dgzgc\" (UID: \"0be2eb0c-0e51-4165-955a-0af4dffb05c6\") " pod="openshift-marketplace/community-operators-dgzgc"
Nov 21 22:14:49 crc kubenswrapper[4727]: I1121 22:14:49.094309 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0be2eb0c-0e51-4165-955a-0af4dffb05c6-catalog-content\") pod \"community-operators-dgzgc\" (UID: \"0be2eb0c-0e51-4165-955a-0af4dffb05c6\") " pod="openshift-marketplace/community-operators-dgzgc"
Nov 21 22:14:49 crc kubenswrapper[4727]: I1121 22:14:49.125928 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhqdz\" (UniqueName: \"kubernetes.io/projected/0be2eb0c-0e51-4165-955a-0af4dffb05c6-kube-api-access-mhqdz\") pod \"community-operators-dgzgc\" (UID: \"0be2eb0c-0e51-4165-955a-0af4dffb05c6\") " pod="openshift-marketplace/community-operators-dgzgc"
Nov 21 22:14:49 crc kubenswrapper[4727]: I1121 22:14:49.291220 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dgzgc"
Nov 21 22:14:49 crc kubenswrapper[4727]: I1121 22:14:49.847489 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dgzgc"]
Nov 21 22:14:50 crc kubenswrapper[4727]: I1121 22:14:50.653051 4727 generic.go:334] "Generic (PLEG): container finished" podID="0be2eb0c-0e51-4165-955a-0af4dffb05c6" containerID="008976cbb47449bec69ef2239150f045fbab2e18796b15af69278cf7ab824cf4" exitCode=0
Nov 21 22:14:50 crc kubenswrapper[4727]: I1121 22:14:50.653114 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dgzgc" event={"ID":"0be2eb0c-0e51-4165-955a-0af4dffb05c6","Type":"ContainerDied","Data":"008976cbb47449bec69ef2239150f045fbab2e18796b15af69278cf7ab824cf4"}
Nov 21 22:14:50 crc kubenswrapper[4727]: I1121 22:14:50.653679 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dgzgc" event={"ID":"0be2eb0c-0e51-4165-955a-0af4dffb05c6","Type":"ContainerStarted","Data":"ac1445c46d08cd1ccb715d09c94d0efc80b8d783289a3d0d21be1135be343080"}
Nov 21 22:14:50 crc kubenswrapper[4727]: I1121 22:14:50.657138 4727 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 21 22:14:51 crc kubenswrapper[4727]: I1121 22:14:51.667273 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dgzgc" event={"ID":"0be2eb0c-0e51-4165-955a-0af4dffb05c6","Type":"ContainerStarted","Data":"dcb24d4707d5f0111b587af1ca7075f53868b4b7a1a3b1e4ffc5dc2839946e31"}
Nov 21 22:14:53 crc kubenswrapper[4727]: I1121 22:14:53.699098 4727 generic.go:334] "Generic (PLEG): container finished" podID="0be2eb0c-0e51-4165-955a-0af4dffb05c6" containerID="dcb24d4707d5f0111b587af1ca7075f53868b4b7a1a3b1e4ffc5dc2839946e31" exitCode=0
Nov 21 22:14:53 crc kubenswrapper[4727]: I1121 22:14:53.699233 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dgzgc" event={"ID":"0be2eb0c-0e51-4165-955a-0af4dffb05c6","Type":"ContainerDied","Data":"dcb24d4707d5f0111b587af1ca7075f53868b4b7a1a3b1e4ffc5dc2839946e31"}
Nov 21 22:14:54 crc kubenswrapper[4727]: I1121 22:14:54.719382 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dgzgc" event={"ID":"0be2eb0c-0e51-4165-955a-0af4dffb05c6","Type":"ContainerStarted","Data":"5a920219646bb768af4081f81b0ba42857d74dffa0e0121cc87c4f89530fd955"}
Nov 21 22:14:54 crc kubenswrapper[4727]: I1121 22:14:54.756302 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dgzgc" podStartSLOduration=3.315910298 podStartE2EDuration="6.756268276s" podCreationTimestamp="2025-11-21 22:14:48 +0000 UTC" firstStartedPulling="2025-11-21 22:14:50.65681763 +0000 UTC m=+7695.843002674" lastFinishedPulling="2025-11-21 22:14:54.097175578 +0000 UTC m=+7699.283360652" observedRunningTime="2025-11-21 22:14:54.736310173 +0000 UTC m=+7699.922495267" watchObservedRunningTime="2025-11-21 22:14:54.756268276 +0000 UTC m=+7699.942453360"
Nov 21 22:14:59 crc kubenswrapper[4727]: I1121 22:14:59.292307 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dgzgc"
Nov 21 22:14:59 crc kubenswrapper[4727]: I1121 22:14:59.292808 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dgzgc"
Nov 21 22:14:59 crc kubenswrapper[4727]: I1121 22:14:59.368403 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dgzgc"
Nov 21 22:14:59 crc kubenswrapper[4727]: I1121 22:14:59.879133 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dgzgc"
Nov 21 22:14:59 crc kubenswrapper[4727]: I1121 22:14:59.950217 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dgzgc"]
Nov 21 22:15:00 crc kubenswrapper[4727]: I1121 22:15:00.171822 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396055-mbwl2"]
Nov 21 22:15:00 crc kubenswrapper[4727]: I1121 22:15:00.174911 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396055-mbwl2"
Nov 21 22:15:00 crc kubenswrapper[4727]: I1121 22:15:00.177612 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Nov 21 22:15:00 crc kubenswrapper[4727]: I1121 22:15:00.182334 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Nov 21 22:15:00 crc kubenswrapper[4727]: I1121 22:15:00.186423 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396055-mbwl2"]
Nov 21 22:15:00 crc kubenswrapper[4727]: I1121 22:15:00.205919 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4750afe-d1b5-4c3f-b364-c4ef096bdc72-config-volume\") pod \"collect-profiles-29396055-mbwl2\" (UID: \"a4750afe-d1b5-4c3f-b364-c4ef096bdc72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396055-mbwl2"
Nov 21 22:15:00 crc kubenswrapper[4727]: I1121 22:15:00.206203 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcvgl\" (UniqueName: \"kubernetes.io/projected/a4750afe-d1b5-4c3f-b364-c4ef096bdc72-kube-api-access-rcvgl\") pod \"collect-profiles-29396055-mbwl2\" (UID: \"a4750afe-d1b5-4c3f-b364-c4ef096bdc72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396055-mbwl2"
Nov 21 22:15:00 crc kubenswrapper[4727]: I1121 22:15:00.206250 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4750afe-d1b5-4c3f-b364-c4ef096bdc72-secret-volume\") pod \"collect-profiles-29396055-mbwl2\" (UID: \"a4750afe-d1b5-4c3f-b364-c4ef096bdc72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396055-mbwl2"
Nov 21 22:15:00 crc kubenswrapper[4727]: I1121 22:15:00.309732 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcvgl\" (UniqueName: \"kubernetes.io/projected/a4750afe-d1b5-4c3f-b364-c4ef096bdc72-kube-api-access-rcvgl\") pod \"collect-profiles-29396055-mbwl2\" (UID: \"a4750afe-d1b5-4c3f-b364-c4ef096bdc72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396055-mbwl2"
Nov 21 22:15:00 crc kubenswrapper[4727]: I1121 22:15:00.309798 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4750afe-d1b5-4c3f-b364-c4ef096bdc72-secret-volume\") pod \"collect-profiles-29396055-mbwl2\" (UID: \"a4750afe-d1b5-4c3f-b364-c4ef096bdc72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396055-mbwl2"
Nov 21 22:15:00 crc kubenswrapper[4727]: I1121 22:15:00.310014 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4750afe-d1b5-4c3f-b364-c4ef096bdc72-config-volume\") pod \"collect-profiles-29396055-mbwl2\" (UID: \"a4750afe-d1b5-4c3f-b364-c4ef096bdc72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396055-mbwl2"
Nov 21 22:15:00 crc kubenswrapper[4727]: I1121 22:15:00.313822 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4750afe-d1b5-4c3f-b364-c4ef096bdc72-config-volume\") pod \"collect-profiles-29396055-mbwl2\" (UID: \"a4750afe-d1b5-4c3f-b364-c4ef096bdc72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396055-mbwl2"
Nov 21 22:15:00 crc kubenswrapper[4727]: I1121 22:15:00.318294 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4750afe-d1b5-4c3f-b364-c4ef096bdc72-secret-volume\") pod \"collect-profiles-29396055-mbwl2\" (UID: \"a4750afe-d1b5-4c3f-b364-c4ef096bdc72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396055-mbwl2"
Nov 21 22:15:00 crc kubenswrapper[4727]: I1121 22:15:00.342101 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcvgl\" (UniqueName: \"kubernetes.io/projected/a4750afe-d1b5-4c3f-b364-c4ef096bdc72-kube-api-access-rcvgl\") pod \"collect-profiles-29396055-mbwl2\" (UID: \"a4750afe-d1b5-4c3f-b364-c4ef096bdc72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396055-mbwl2"
Nov 21 22:15:00 crc kubenswrapper[4727]: I1121 22:15:00.521687 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396055-mbwl2"
Nov 21 22:15:01 crc kubenswrapper[4727]: I1121 22:15:01.034778 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396055-mbwl2"]
Nov 21 22:15:01 crc kubenswrapper[4727]: I1121 22:15:01.830283 4727 generic.go:334] "Generic (PLEG): container finished" podID="a4750afe-d1b5-4c3f-b364-c4ef096bdc72" containerID="0abaf44f7c7314f1485dd9a5971d5833e0a85cae2cd6787fd3180fa35bbf9e77" exitCode=0
Nov 21 22:15:01 crc kubenswrapper[4727]: I1121 22:15:01.830386 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396055-mbwl2" event={"ID":"a4750afe-d1b5-4c3f-b364-c4ef096bdc72","Type":"ContainerDied","Data":"0abaf44f7c7314f1485dd9a5971d5833e0a85cae2cd6787fd3180fa35bbf9e77"}
Nov 21 22:15:01 crc kubenswrapper[4727]: I1121 22:15:01.830754 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396055-mbwl2" event={"ID":"a4750afe-d1b5-4c3f-b364-c4ef096bdc72","Type":"ContainerStarted","Data":"c4cef3c6056051a292abdfe9ae36f856a627783e1137cc27867088e75f938e45"}
Nov 21 22:15:01 crc kubenswrapper[4727]: I1121 22:15:01.831305 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dgzgc" podUID="0be2eb0c-0e51-4165-955a-0af4dffb05c6" containerName="registry-server" containerID="cri-o://5a920219646bb768af4081f81b0ba42857d74dffa0e0121cc87c4f89530fd955" gracePeriod=2
Nov 21 22:15:02 crc kubenswrapper[4727]: I1121 22:15:02.843093 4727 generic.go:334] "Generic (PLEG): container finished" podID="0be2eb0c-0e51-4165-955a-0af4dffb05c6" containerID="5a920219646bb768af4081f81b0ba42857d74dffa0e0121cc87c4f89530fd955" exitCode=0
Nov 21 22:15:02 crc kubenswrapper[4727]: I1121 22:15:02.843978 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dgzgc" event={"ID":"0be2eb0c-0e51-4165-955a-0af4dffb05c6","Type":"ContainerDied","Data":"5a920219646bb768af4081f81b0ba42857d74dffa0e0121cc87c4f89530fd955"}
Nov 21 22:15:02 crc kubenswrapper[4727]: I1121 22:15:02.844013 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dgzgc" event={"ID":"0be2eb0c-0e51-4165-955a-0af4dffb05c6","Type":"ContainerDied","Data":"ac1445c46d08cd1ccb715d09c94d0efc80b8d783289a3d0d21be1135be343080"}
Nov 21 22:15:02 crc kubenswrapper[4727]: I1121 22:15:02.844030 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac1445c46d08cd1ccb715d09c94d0efc80b8d783289a3d0d21be1135be343080"
Nov 21 22:15:02 crc kubenswrapper[4727]: I1121 22:15:02.921845 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dgzgc"
Nov 21 22:15:03 crc kubenswrapper[4727]: I1121 22:15:03.076012 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0be2eb0c-0e51-4165-955a-0af4dffb05c6-catalog-content\") pod \"0be2eb0c-0e51-4165-955a-0af4dffb05c6\" (UID: \"0be2eb0c-0e51-4165-955a-0af4dffb05c6\") "
Nov 21 22:15:03 crc kubenswrapper[4727]: I1121 22:15:03.076093 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhqdz\" (UniqueName: \"kubernetes.io/projected/0be2eb0c-0e51-4165-955a-0af4dffb05c6-kube-api-access-mhqdz\") pod \"0be2eb0c-0e51-4165-955a-0af4dffb05c6\" (UID: \"0be2eb0c-0e51-4165-955a-0af4dffb05c6\") "
Nov 21 22:15:03 crc kubenswrapper[4727]: I1121 22:15:03.076153 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0be2eb0c-0e51-4165-955a-0af4dffb05c6-utilities\") pod \"0be2eb0c-0e51-4165-955a-0af4dffb05c6\" (UID: \"0be2eb0c-0e51-4165-955a-0af4dffb05c6\") "
Nov 21 22:15:03 crc kubenswrapper[4727]: I1121 22:15:03.077594 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0be2eb0c-0e51-4165-955a-0af4dffb05c6-utilities" (OuterVolumeSpecName: "utilities") pod "0be2eb0c-0e51-4165-955a-0af4dffb05c6" (UID: "0be2eb0c-0e51-4165-955a-0af4dffb05c6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 21 22:15:03 crc kubenswrapper[4727]: I1121 22:15:03.099412 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0be2eb0c-0e51-4165-955a-0af4dffb05c6-kube-api-access-mhqdz" (OuterVolumeSpecName: "kube-api-access-mhqdz") pod "0be2eb0c-0e51-4165-955a-0af4dffb05c6" (UID: "0be2eb0c-0e51-4165-955a-0af4dffb05c6"). InnerVolumeSpecName "kube-api-access-mhqdz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 22:15:03 crc kubenswrapper[4727]: I1121 22:15:03.151429 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0be2eb0c-0e51-4165-955a-0af4dffb05c6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0be2eb0c-0e51-4165-955a-0af4dffb05c6" (UID: "0be2eb0c-0e51-4165-955a-0af4dffb05c6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 21 22:15:03 crc kubenswrapper[4727]: I1121 22:15:03.179347 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0be2eb0c-0e51-4165-955a-0af4dffb05c6-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 21 22:15:03 crc kubenswrapper[4727]: I1121 22:15:03.179377 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mhqdz\" (UniqueName: \"kubernetes.io/projected/0be2eb0c-0e51-4165-955a-0af4dffb05c6-kube-api-access-mhqdz\") on node \"crc\" DevicePath \"\""
Nov 21 22:15:03 crc kubenswrapper[4727]: I1121 22:15:03.179388 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0be2eb0c-0e51-4165-955a-0af4dffb05c6-utilities\") on node \"crc\" DevicePath \"\""
Nov 21 22:15:03 crc kubenswrapper[4727]: I1121 22:15:03.279872 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396055-mbwl2"
Nov 21 22:15:03 crc kubenswrapper[4727]: I1121 22:15:03.386318 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4750afe-d1b5-4c3f-b364-c4ef096bdc72-config-volume\") pod \"a4750afe-d1b5-4c3f-b364-c4ef096bdc72\" (UID: \"a4750afe-d1b5-4c3f-b364-c4ef096bdc72\") "
Nov 21 22:15:03 crc kubenswrapper[4727]: I1121 22:15:03.386383 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4750afe-d1b5-4c3f-b364-c4ef096bdc72-secret-volume\") pod \"a4750afe-d1b5-4c3f-b364-c4ef096bdc72\" (UID: \"a4750afe-d1b5-4c3f-b364-c4ef096bdc72\") "
Nov 21 22:15:03 crc kubenswrapper[4727]: I1121 22:15:03.386423 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcvgl\" (UniqueName: \"kubernetes.io/projected/a4750afe-d1b5-4c3f-b364-c4ef096bdc72-kube-api-access-rcvgl\") pod \"a4750afe-d1b5-4c3f-b364-c4ef096bdc72\" (UID: \"a4750afe-d1b5-4c3f-b364-c4ef096bdc72\") "
Nov 21 22:15:03 crc kubenswrapper[4727]: I1121 22:15:03.390920 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4750afe-d1b5-4c3f-b364-c4ef096bdc72-config-volume" (OuterVolumeSpecName: "config-volume") pod "a4750afe-d1b5-4c3f-b364-c4ef096bdc72" (UID: "a4750afe-d1b5-4c3f-b364-c4ef096bdc72"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 21 22:15:03 crc kubenswrapper[4727]: I1121 22:15:03.411221 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4750afe-d1b5-4c3f-b364-c4ef096bdc72-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a4750afe-d1b5-4c3f-b364-c4ef096bdc72" (UID: "a4750afe-d1b5-4c3f-b364-c4ef096bdc72"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 22:15:03 crc kubenswrapper[4727]: I1121 22:15:03.416210 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4750afe-d1b5-4c3f-b364-c4ef096bdc72-kube-api-access-rcvgl" (OuterVolumeSpecName: "kube-api-access-rcvgl") pod "a4750afe-d1b5-4c3f-b364-c4ef096bdc72" (UID: "a4750afe-d1b5-4c3f-b364-c4ef096bdc72"). InnerVolumeSpecName "kube-api-access-rcvgl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 22:15:03 crc kubenswrapper[4727]: I1121 22:15:03.489486 4727 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4750afe-d1b5-4c3f-b364-c4ef096bdc72-config-volume\") on node \"crc\" DevicePath \"\""
Nov 21 22:15:03 crc kubenswrapper[4727]: I1121 22:15:03.489542 4727 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4750afe-d1b5-4c3f-b364-c4ef096bdc72-secret-volume\") on node \"crc\" DevicePath \"\""
Nov 21 22:15:03 crc kubenswrapper[4727]: I1121 22:15:03.489556 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcvgl\" (UniqueName: \"kubernetes.io/projected/a4750afe-d1b5-4c3f-b364-c4ef096bdc72-kube-api-access-rcvgl\") on node \"crc\" DevicePath \"\""
Nov 21 22:15:03 crc kubenswrapper[4727]: I1121 22:15:03.862560 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dgzgc"
Nov 21 22:15:03 crc kubenswrapper[4727]: I1121 22:15:03.862577 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396055-mbwl2" event={"ID":"a4750afe-d1b5-4c3f-b364-c4ef096bdc72","Type":"ContainerDied","Data":"c4cef3c6056051a292abdfe9ae36f856a627783e1137cc27867088e75f938e45"}
Nov 21 22:15:03 crc kubenswrapper[4727]: I1121 22:15:03.862613 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4cef3c6056051a292abdfe9ae36f856a627783e1137cc27867088e75f938e45"
Nov 21 22:15:03 crc kubenswrapper[4727]: I1121 22:15:03.862630 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396055-mbwl2"
Nov 21 22:15:03 crc kubenswrapper[4727]: I1121 22:15:03.939652 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dgzgc"]
Nov 21 22:15:03 crc kubenswrapper[4727]: I1121 22:15:03.957767 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dgzgc"]
Nov 21 22:15:04 crc kubenswrapper[4727]: I1121 22:15:04.426013 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396010-gn5dt"]
Nov 21 22:15:04 crc kubenswrapper[4727]: I1121 22:15:04.443564 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396010-gn5dt"]
Nov 21 22:15:05 crc kubenswrapper[4727]: I1121 22:15:05.542162 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0be2eb0c-0e51-4165-955a-0af4dffb05c6" path="/var/lib/kubelet/pods/0be2eb0c-0e51-4165-955a-0af4dffb05c6/volumes"
Nov 21 22:15:05 crc kubenswrapper[4727]: I1121 22:15:05.547123 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33167c55-388a-4b7c-91a4-5284c3bf991d" path="/var/lib/kubelet/pods/33167c55-388a-4b7c-91a4-5284c3bf991d/volumes"
Nov 21 22:15:43 crc kubenswrapper[4727]: I1121 22:15:43.335384 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 21 22:15:43 crc kubenswrapper[4727]: I1121 22:15:43.335953 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 21 22:16:00 crc kubenswrapper[4727]: I1121 22:16:00.088315 4727 scope.go:117] "RemoveContainer" containerID="4a9a6b07f21fb06b7fc896d1a70bf9b25fdb0e2651e71c109256f6327df13d54"
Nov 21 22:16:13 crc kubenswrapper[4727]: I1121 22:16:13.335786 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 21 22:16:13 crc kubenswrapper[4727]: I1121 22:16:13.336414 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 21 22:16:43 crc kubenswrapper[4727]: I1121 22:16:43.335705 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 21 22:16:43 crc kubenswrapper[4727]: I1121 22:16:43.336527 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 21 22:16:43 crc kubenswrapper[4727]: I1121 22:16:43.336599 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk"
Nov 21 22:16:43 crc kubenswrapper[4727]: I1121 22:16:43.338215 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5173c9ead4dd963bd522a07cca486adefc9357dff12dfeac94051c26a76649cc"} pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 21 22:16:43 crc kubenswrapper[4727]: I1121 22:16:43.338329 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" containerID="cri-o://5173c9ead4dd963bd522a07cca486adefc9357dff12dfeac94051c26a76649cc" gracePeriod=600
Nov 21 22:16:43 crc kubenswrapper[4727]: E1121 22:16:43.468038 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42"
Nov 21 22:16:44 crc kubenswrapper[4727]: I1121 22:16:44.187073 4727 generic.go:334] "Generic (PLEG): container finished" podID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerID="5173c9ead4dd963bd522a07cca486adefc9357dff12dfeac94051c26a76649cc" exitCode=0
Nov 21 22:16:44 crc kubenswrapper[4727]: I1121 22:16:44.187168 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" event={"ID":"b58aef8f-f223-47d8-a2e6-4a80aeeeec42","Type":"ContainerDied","Data":"5173c9ead4dd963bd522a07cca486adefc9357dff12dfeac94051c26a76649cc"}
Nov 21 22:16:44 crc kubenswrapper[4727]: I1121 22:16:44.187415 4727 scope.go:117] "RemoveContainer" containerID="b4a3f569da27b516215f907523b106a29bef051168fc2c26df382fff2b9c5db7"
Nov 21 22:16:44 crc kubenswrapper[4727]: I1121 22:16:44.188652 4727 scope.go:117] "RemoveContainer" containerID="5173c9ead4dd963bd522a07cca486adefc9357dff12dfeac94051c26a76649cc"
Nov 21 22:16:44 crc kubenswrapper[4727]: E1121 22:16:44.189419 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42"
Nov 21 22:16:59 crc kubenswrapper[4727]: I1121 22:16:59.499723 4727 scope.go:117] "RemoveContainer" containerID="5173c9ead4dd963bd522a07cca486adefc9357dff12dfeac94051c26a76649cc"
Nov 21 22:16:59 crc kubenswrapper[4727]: E1121 22:16:59.500598 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42"
Nov 21 22:17:12 crc kubenswrapper[4727]: I1121 22:17:12.499734 4727 scope.go:117] "RemoveContainer" containerID="5173c9ead4dd963bd522a07cca486adefc9357dff12dfeac94051c26a76649cc"
Nov 21 22:17:12 crc kubenswrapper[4727]: E1121 22:17:12.500618 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42"
Nov 21 22:17:24 crc kubenswrapper[4727]: I1121 22:17:24.499340 4727 scope.go:117] "RemoveContainer" containerID="5173c9ead4dd963bd522a07cca486adefc9357dff12dfeac94051c26a76649cc"
Nov 21 22:17:24 crc kubenswrapper[4727]: E1121 22:17:24.500410 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42"
Nov 21 22:17:37 crc kubenswrapper[4727]: I1121 22:17:37.499483 4727 scope.go:117] "RemoveContainer" containerID="5173c9ead4dd963bd522a07cca486adefc9357dff12dfeac94051c26a76649cc"
Nov 21 22:17:37 crc kubenswrapper[4727]: E1121 22:17:37.500992 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42"
Nov 21 22:17:48 crc kubenswrapper[4727]: I1121 22:17:48.500730 4727 scope.go:117] "RemoveContainer" containerID="5173c9ead4dd963bd522a07cca486adefc9357dff12dfeac94051c26a76649cc"
Nov 21 22:17:48 crc kubenswrapper[4727]: E1121 22:17:48.501792 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42"
Nov 21 22:17:59 crc kubenswrapper[4727]: I1121 22:17:59.501657 4727 scope.go:117] "RemoveContainer" containerID="5173c9ead4dd963bd522a07cca486adefc9357dff12dfeac94051c26a76649cc"
Nov 21 22:17:59 crc kubenswrapper[4727]: E1121 22:17:59.503477 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42"
Nov 21 22:18:13 crc kubenswrapper[4727]: I1121 22:18:13.500370 4727 scope.go:117] "RemoveContainer" containerID="5173c9ead4dd963bd522a07cca486adefc9357dff12dfeac94051c26a76649cc"
Nov 21 22:18:13 crc kubenswrapper[4727]: E1121 22:18:13.502003 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42"
Nov 21 22:18:27 crc kubenswrapper[4727]: I1121 22:18:27.499976 4727 scope.go:117] "RemoveContainer" containerID="5173c9ead4dd963bd522a07cca486adefc9357dff12dfeac94051c26a76649cc"
Nov 21 22:18:27 crc kubenswrapper[4727]: E1121 22:18:27.501040 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42"
Nov 21 22:18:38 crc kubenswrapper[4727]: I1121 22:18:38.500428 4727 scope.go:117] "RemoveContainer" containerID="5173c9ead4dd963bd522a07cca486adefc9357dff12dfeac94051c26a76649cc"
Nov 21 22:18:38 crc kubenswrapper[4727]: E1121 22:18:38.501478 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42"
Nov 21 22:18:52 crc kubenswrapper[4727]: I1121 22:18:52.502296 4727 scope.go:117] "RemoveContainer" containerID="5173c9ead4dd963bd522a07cca486adefc9357dff12dfeac94051c26a76649cc"
Nov 21 22:18:52 crc kubenswrapper[4727]: E1121 22:18:52.503016 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42"
Nov 21 22:19:03 crc kubenswrapper[4727]: I1121 22:19:03.499819 4727 scope.go:117] "RemoveContainer" containerID="5173c9ead4dd963bd522a07cca486adefc9357dff12dfeac94051c26a76649cc"
Nov 21 22:19:03 crc kubenswrapper[4727]: E1121 22:19:03.500710 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42"
Nov 21 22:19:17 crc kubenswrapper[4727]: I1121 22:19:17.500309 4727 scope.go:117] "RemoveContainer" containerID="5173c9ead4dd963bd522a07cca486adefc9357dff12dfeac94051c26a76649cc"
Nov 21 22:19:17 crc kubenswrapper[4727]: E1121 22:19:17.501533 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42"
Nov 21 22:19:28 crc kubenswrapper[4727]: I1121 22:19:28.500453 4727 scope.go:117] "RemoveContainer" containerID="5173c9ead4dd963bd522a07cca486adefc9357dff12dfeac94051c26a76649cc"
Nov 21 22:19:28 crc kubenswrapper[4727]: E1121 22:19:28.501470 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42"
Nov 21 22:19:39 crc kubenswrapper[4727]: I1121 22:19:39.499760 4727 scope.go:117] "RemoveContainer" containerID="5173c9ead4dd963bd522a07cca486adefc9357dff12dfeac94051c26a76649cc"
Nov 21 22:19:39 crc kubenswrapper[4727]: E1121 22:19:39.500576 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42"
Nov 21 22:19:54 crc kubenswrapper[4727]: I1121 22:19:54.500168 4727 scope.go:117] "RemoveContainer" containerID="5173c9ead4dd963bd522a07cca486adefc9357dff12dfeac94051c26a76649cc"
Nov 21 22:19:54 crc kubenswrapper[4727]: E1121 22:19:54.501347 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42"
Nov 21 22:20:05 crc kubenswrapper[4727]: I1121 22:20:05.513154 4727 scope.go:117] "RemoveContainer" containerID="5173c9ead4dd963bd522a07cca486adefc9357dff12dfeac94051c26a76649cc"
Nov 21 22:20:05 crc kubenswrapper[4727]: E1121 22:20:05.513998 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 22:20:09 crc kubenswrapper[4727]: I1121 22:20:09.589995 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-h6fbg"] Nov 21 22:20:09 crc kubenswrapper[4727]: E1121 22:20:09.590930 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0be2eb0c-0e51-4165-955a-0af4dffb05c6" containerName="registry-server" Nov 21 22:20:09 crc kubenswrapper[4727]: I1121 22:20:09.590943 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="0be2eb0c-0e51-4165-955a-0af4dffb05c6" containerName="registry-server" Nov 21 22:20:09 crc kubenswrapper[4727]: E1121 22:20:09.591009 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0be2eb0c-0e51-4165-955a-0af4dffb05c6" containerName="extract-content" Nov 21 22:20:09 crc kubenswrapper[4727]: I1121 22:20:09.591015 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="0be2eb0c-0e51-4165-955a-0af4dffb05c6" containerName="extract-content" Nov 21 22:20:09 crc kubenswrapper[4727]: E1121 22:20:09.591028 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4750afe-d1b5-4c3f-b364-c4ef096bdc72" containerName="collect-profiles" Nov 21 22:20:09 crc kubenswrapper[4727]: I1121 22:20:09.591034 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4750afe-d1b5-4c3f-b364-c4ef096bdc72" containerName="collect-profiles" Nov 21 22:20:09 crc kubenswrapper[4727]: E1121 22:20:09.591047 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0be2eb0c-0e51-4165-955a-0af4dffb05c6" containerName="extract-utilities" Nov 21 22:20:09 crc kubenswrapper[4727]: I1121 22:20:09.591052 4727 
state_mem.go:107] "Deleted CPUSet assignment" podUID="0be2eb0c-0e51-4165-955a-0af4dffb05c6" containerName="extract-utilities" Nov 21 22:20:09 crc kubenswrapper[4727]: I1121 22:20:09.591353 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4750afe-d1b5-4c3f-b364-c4ef096bdc72" containerName="collect-profiles" Nov 21 22:20:09 crc kubenswrapper[4727]: I1121 22:20:09.591366 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="0be2eb0c-0e51-4165-955a-0af4dffb05c6" containerName="registry-server" Nov 21 22:20:09 crc kubenswrapper[4727]: I1121 22:20:09.593200 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h6fbg" Nov 21 22:20:09 crc kubenswrapper[4727]: I1121 22:20:09.615927 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-h6fbg"] Nov 21 22:20:09 crc kubenswrapper[4727]: I1121 22:20:09.648697 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82dc884d-2fa4-45d8-b505-66bce64d129f-utilities\") pod \"redhat-marketplace-h6fbg\" (UID: \"82dc884d-2fa4-45d8-b505-66bce64d129f\") " pod="openshift-marketplace/redhat-marketplace-h6fbg" Nov 21 22:20:09 crc kubenswrapper[4727]: I1121 22:20:09.648753 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82dc884d-2fa4-45d8-b505-66bce64d129f-catalog-content\") pod \"redhat-marketplace-h6fbg\" (UID: \"82dc884d-2fa4-45d8-b505-66bce64d129f\") " pod="openshift-marketplace/redhat-marketplace-h6fbg" Nov 21 22:20:09 crc kubenswrapper[4727]: I1121 22:20:09.648930 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znk8s\" (UniqueName: 
\"kubernetes.io/projected/82dc884d-2fa4-45d8-b505-66bce64d129f-kube-api-access-znk8s\") pod \"redhat-marketplace-h6fbg\" (UID: \"82dc884d-2fa4-45d8-b505-66bce64d129f\") " pod="openshift-marketplace/redhat-marketplace-h6fbg" Nov 21 22:20:09 crc kubenswrapper[4727]: I1121 22:20:09.753700 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82dc884d-2fa4-45d8-b505-66bce64d129f-utilities\") pod \"redhat-marketplace-h6fbg\" (UID: \"82dc884d-2fa4-45d8-b505-66bce64d129f\") " pod="openshift-marketplace/redhat-marketplace-h6fbg" Nov 21 22:20:09 crc kubenswrapper[4727]: I1121 22:20:09.753787 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82dc884d-2fa4-45d8-b505-66bce64d129f-catalog-content\") pod \"redhat-marketplace-h6fbg\" (UID: \"82dc884d-2fa4-45d8-b505-66bce64d129f\") " pod="openshift-marketplace/redhat-marketplace-h6fbg" Nov 21 22:20:09 crc kubenswrapper[4727]: I1121 22:20:09.753924 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znk8s\" (UniqueName: \"kubernetes.io/projected/82dc884d-2fa4-45d8-b505-66bce64d129f-kube-api-access-znk8s\") pod \"redhat-marketplace-h6fbg\" (UID: \"82dc884d-2fa4-45d8-b505-66bce64d129f\") " pod="openshift-marketplace/redhat-marketplace-h6fbg" Nov 21 22:20:09 crc kubenswrapper[4727]: I1121 22:20:09.754139 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82dc884d-2fa4-45d8-b505-66bce64d129f-utilities\") pod \"redhat-marketplace-h6fbg\" (UID: \"82dc884d-2fa4-45d8-b505-66bce64d129f\") " pod="openshift-marketplace/redhat-marketplace-h6fbg" Nov 21 22:20:09 crc kubenswrapper[4727]: I1121 22:20:09.754406 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/82dc884d-2fa4-45d8-b505-66bce64d129f-catalog-content\") pod \"redhat-marketplace-h6fbg\" (UID: \"82dc884d-2fa4-45d8-b505-66bce64d129f\") " pod="openshift-marketplace/redhat-marketplace-h6fbg" Nov 21 22:20:09 crc kubenswrapper[4727]: I1121 22:20:09.778777 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znk8s\" (UniqueName: \"kubernetes.io/projected/82dc884d-2fa4-45d8-b505-66bce64d129f-kube-api-access-znk8s\") pod \"redhat-marketplace-h6fbg\" (UID: \"82dc884d-2fa4-45d8-b505-66bce64d129f\") " pod="openshift-marketplace/redhat-marketplace-h6fbg" Nov 21 22:20:09 crc kubenswrapper[4727]: I1121 22:20:09.930511 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h6fbg" Nov 21 22:20:10 crc kubenswrapper[4727]: I1121 22:20:10.452761 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-h6fbg"] Nov 21 22:20:10 crc kubenswrapper[4727]: W1121 22:20:10.456287 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82dc884d_2fa4_45d8_b505_66bce64d129f.slice/crio-a43853f47d77d1d2e62cc57922650d0a5f217e7a0c2bdcdb029ca4962d97d8be WatchSource:0}: Error finding container a43853f47d77d1d2e62cc57922650d0a5f217e7a0c2bdcdb029ca4962d97d8be: Status 404 returned error can't find the container with id a43853f47d77d1d2e62cc57922650d0a5f217e7a0c2bdcdb029ca4962d97d8be Nov 21 22:20:11 crc kubenswrapper[4727]: I1121 22:20:11.059569 4727 generic.go:334] "Generic (PLEG): container finished" podID="82dc884d-2fa4-45d8-b505-66bce64d129f" containerID="064acf991cc23f1f219994036cbbbf4216e3a92794229503399be59663757814" exitCode=0 Nov 21 22:20:11 crc kubenswrapper[4727]: I1121 22:20:11.059654 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h6fbg" 
event={"ID":"82dc884d-2fa4-45d8-b505-66bce64d129f","Type":"ContainerDied","Data":"064acf991cc23f1f219994036cbbbf4216e3a92794229503399be59663757814"} Nov 21 22:20:11 crc kubenswrapper[4727]: I1121 22:20:11.059863 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h6fbg" event={"ID":"82dc884d-2fa4-45d8-b505-66bce64d129f","Type":"ContainerStarted","Data":"a43853f47d77d1d2e62cc57922650d0a5f217e7a0c2bdcdb029ca4962d97d8be"} Nov 21 22:20:11 crc kubenswrapper[4727]: I1121 22:20:11.062549 4727 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 21 22:20:12 crc kubenswrapper[4727]: I1121 22:20:12.073337 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h6fbg" event={"ID":"82dc884d-2fa4-45d8-b505-66bce64d129f","Type":"ContainerStarted","Data":"c07f3558a3cefe8ea4264264ff55041d8664bd041a6b795efd3d242aa72b3514"} Nov 21 22:20:14 crc kubenswrapper[4727]: I1121 22:20:14.119461 4727 generic.go:334] "Generic (PLEG): container finished" podID="82dc884d-2fa4-45d8-b505-66bce64d129f" containerID="c07f3558a3cefe8ea4264264ff55041d8664bd041a6b795efd3d242aa72b3514" exitCode=0 Nov 21 22:20:14 crc kubenswrapper[4727]: I1121 22:20:14.120042 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h6fbg" event={"ID":"82dc884d-2fa4-45d8-b505-66bce64d129f","Type":"ContainerDied","Data":"c07f3558a3cefe8ea4264264ff55041d8664bd041a6b795efd3d242aa72b3514"} Nov 21 22:20:15 crc kubenswrapper[4727]: I1121 22:20:15.137787 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h6fbg" event={"ID":"82dc884d-2fa4-45d8-b505-66bce64d129f","Type":"ContainerStarted","Data":"6dfc96f877205500f2d2099b64e4e48507b1c7845e889114bc6c0147a2c91bf5"} Nov 21 22:20:15 crc kubenswrapper[4727]: I1121 22:20:15.179207 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-marketplace-h6fbg" podStartSLOduration=2.658628679 podStartE2EDuration="6.179175266s" podCreationTimestamp="2025-11-21 22:20:09 +0000 UTC" firstStartedPulling="2025-11-21 22:20:11.062277774 +0000 UTC m=+8016.248462828" lastFinishedPulling="2025-11-21 22:20:14.582824381 +0000 UTC m=+8019.769009415" observedRunningTime="2025-11-21 22:20:15.159783459 +0000 UTC m=+8020.345968533" watchObservedRunningTime="2025-11-21 22:20:15.179175266 +0000 UTC m=+8020.365360340" Nov 21 22:20:16 crc kubenswrapper[4727]: I1121 22:20:16.499770 4727 scope.go:117] "RemoveContainer" containerID="5173c9ead4dd963bd522a07cca486adefc9357dff12dfeac94051c26a76649cc" Nov 21 22:20:16 crc kubenswrapper[4727]: E1121 22:20:16.502033 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 22:20:19 crc kubenswrapper[4727]: I1121 22:20:19.931142 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-h6fbg" Nov 21 22:20:19 crc kubenswrapper[4727]: I1121 22:20:19.931620 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-h6fbg" Nov 21 22:20:20 crc kubenswrapper[4727]: I1121 22:20:20.020849 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-h6fbg" Nov 21 22:20:20 crc kubenswrapper[4727]: I1121 22:20:20.275261 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-h6fbg" Nov 21 22:20:20 crc kubenswrapper[4727]: I1121 22:20:20.340508 4727 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-h6fbg"] Nov 21 22:20:22 crc kubenswrapper[4727]: I1121 22:20:22.228641 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-h6fbg" podUID="82dc884d-2fa4-45d8-b505-66bce64d129f" containerName="registry-server" containerID="cri-o://6dfc96f877205500f2d2099b64e4e48507b1c7845e889114bc6c0147a2c91bf5" gracePeriod=2 Nov 21 22:20:22 crc kubenswrapper[4727]: I1121 22:20:22.935479 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h6fbg" Nov 21 22:20:23 crc kubenswrapper[4727]: I1121 22:20:23.013448 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82dc884d-2fa4-45d8-b505-66bce64d129f-utilities\") pod \"82dc884d-2fa4-45d8-b505-66bce64d129f\" (UID: \"82dc884d-2fa4-45d8-b505-66bce64d129f\") " Nov 21 22:20:23 crc kubenswrapper[4727]: I1121 22:20:23.013501 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82dc884d-2fa4-45d8-b505-66bce64d129f-catalog-content\") pod \"82dc884d-2fa4-45d8-b505-66bce64d129f\" (UID: \"82dc884d-2fa4-45d8-b505-66bce64d129f\") " Nov 21 22:20:23 crc kubenswrapper[4727]: I1121 22:20:23.013763 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-znk8s\" (UniqueName: \"kubernetes.io/projected/82dc884d-2fa4-45d8-b505-66bce64d129f-kube-api-access-znk8s\") pod \"82dc884d-2fa4-45d8-b505-66bce64d129f\" (UID: \"82dc884d-2fa4-45d8-b505-66bce64d129f\") " Nov 21 22:20:23 crc kubenswrapper[4727]: I1121 22:20:23.015357 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82dc884d-2fa4-45d8-b505-66bce64d129f-utilities" (OuterVolumeSpecName: "utilities") pod 
"82dc884d-2fa4-45d8-b505-66bce64d129f" (UID: "82dc884d-2fa4-45d8-b505-66bce64d129f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 22:20:23 crc kubenswrapper[4727]: I1121 22:20:23.022952 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82dc884d-2fa4-45d8-b505-66bce64d129f-kube-api-access-znk8s" (OuterVolumeSpecName: "kube-api-access-znk8s") pod "82dc884d-2fa4-45d8-b505-66bce64d129f" (UID: "82dc884d-2fa4-45d8-b505-66bce64d129f"). InnerVolumeSpecName "kube-api-access-znk8s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 22:20:23 crc kubenswrapper[4727]: I1121 22:20:23.037287 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82dc884d-2fa4-45d8-b505-66bce64d129f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "82dc884d-2fa4-45d8-b505-66bce64d129f" (UID: "82dc884d-2fa4-45d8-b505-66bce64d129f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 22:20:23 crc kubenswrapper[4727]: I1121 22:20:23.116643 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82dc884d-2fa4-45d8-b505-66bce64d129f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 22:20:23 crc kubenswrapper[4727]: I1121 22:20:23.116701 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-znk8s\" (UniqueName: \"kubernetes.io/projected/82dc884d-2fa4-45d8-b505-66bce64d129f-kube-api-access-znk8s\") on node \"crc\" DevicePath \"\"" Nov 21 22:20:23 crc kubenswrapper[4727]: I1121 22:20:23.116727 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82dc884d-2fa4-45d8-b505-66bce64d129f-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 22:20:23 crc kubenswrapper[4727]: I1121 22:20:23.242320 4727 generic.go:334] "Generic (PLEG): container finished" podID="82dc884d-2fa4-45d8-b505-66bce64d129f" containerID="6dfc96f877205500f2d2099b64e4e48507b1c7845e889114bc6c0147a2c91bf5" exitCode=0 Nov 21 22:20:23 crc kubenswrapper[4727]: I1121 22:20:23.242380 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h6fbg" event={"ID":"82dc884d-2fa4-45d8-b505-66bce64d129f","Type":"ContainerDied","Data":"6dfc96f877205500f2d2099b64e4e48507b1c7845e889114bc6c0147a2c91bf5"} Nov 21 22:20:23 crc kubenswrapper[4727]: I1121 22:20:23.242417 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h6fbg" event={"ID":"82dc884d-2fa4-45d8-b505-66bce64d129f","Type":"ContainerDied","Data":"a43853f47d77d1d2e62cc57922650d0a5f217e7a0c2bdcdb029ca4962d97d8be"} Nov 21 22:20:23 crc kubenswrapper[4727]: I1121 22:20:23.242445 4727 scope.go:117] "RemoveContainer" containerID="6dfc96f877205500f2d2099b64e4e48507b1c7845e889114bc6c0147a2c91bf5" Nov 21 22:20:23 crc kubenswrapper[4727]: I1121 
22:20:23.242645 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h6fbg" Nov 21 22:20:23 crc kubenswrapper[4727]: I1121 22:20:23.269252 4727 scope.go:117] "RemoveContainer" containerID="c07f3558a3cefe8ea4264264ff55041d8664bd041a6b795efd3d242aa72b3514" Nov 21 22:20:23 crc kubenswrapper[4727]: I1121 22:20:23.290909 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-h6fbg"] Nov 21 22:20:23 crc kubenswrapper[4727]: I1121 22:20:23.304105 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-h6fbg"] Nov 21 22:20:23 crc kubenswrapper[4727]: I1121 22:20:23.305046 4727 scope.go:117] "RemoveContainer" containerID="064acf991cc23f1f219994036cbbbf4216e3a92794229503399be59663757814" Nov 21 22:20:23 crc kubenswrapper[4727]: I1121 22:20:23.385824 4727 scope.go:117] "RemoveContainer" containerID="6dfc96f877205500f2d2099b64e4e48507b1c7845e889114bc6c0147a2c91bf5" Nov 21 22:20:23 crc kubenswrapper[4727]: E1121 22:20:23.386575 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6dfc96f877205500f2d2099b64e4e48507b1c7845e889114bc6c0147a2c91bf5\": container with ID starting with 6dfc96f877205500f2d2099b64e4e48507b1c7845e889114bc6c0147a2c91bf5 not found: ID does not exist" containerID="6dfc96f877205500f2d2099b64e4e48507b1c7845e889114bc6c0147a2c91bf5" Nov 21 22:20:23 crc kubenswrapper[4727]: I1121 22:20:23.386621 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6dfc96f877205500f2d2099b64e4e48507b1c7845e889114bc6c0147a2c91bf5"} err="failed to get container status \"6dfc96f877205500f2d2099b64e4e48507b1c7845e889114bc6c0147a2c91bf5\": rpc error: code = NotFound desc = could not find container \"6dfc96f877205500f2d2099b64e4e48507b1c7845e889114bc6c0147a2c91bf5\": container with ID starting with 
6dfc96f877205500f2d2099b64e4e48507b1c7845e889114bc6c0147a2c91bf5 not found: ID does not exist" Nov 21 22:20:23 crc kubenswrapper[4727]: I1121 22:20:23.386649 4727 scope.go:117] "RemoveContainer" containerID="c07f3558a3cefe8ea4264264ff55041d8664bd041a6b795efd3d242aa72b3514" Nov 21 22:20:23 crc kubenswrapper[4727]: E1121 22:20:23.387229 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c07f3558a3cefe8ea4264264ff55041d8664bd041a6b795efd3d242aa72b3514\": container with ID starting with c07f3558a3cefe8ea4264264ff55041d8664bd041a6b795efd3d242aa72b3514 not found: ID does not exist" containerID="c07f3558a3cefe8ea4264264ff55041d8664bd041a6b795efd3d242aa72b3514" Nov 21 22:20:23 crc kubenswrapper[4727]: I1121 22:20:23.387253 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c07f3558a3cefe8ea4264264ff55041d8664bd041a6b795efd3d242aa72b3514"} err="failed to get container status \"c07f3558a3cefe8ea4264264ff55041d8664bd041a6b795efd3d242aa72b3514\": rpc error: code = NotFound desc = could not find container \"c07f3558a3cefe8ea4264264ff55041d8664bd041a6b795efd3d242aa72b3514\": container with ID starting with c07f3558a3cefe8ea4264264ff55041d8664bd041a6b795efd3d242aa72b3514 not found: ID does not exist" Nov 21 22:20:23 crc kubenswrapper[4727]: I1121 22:20:23.387270 4727 scope.go:117] "RemoveContainer" containerID="064acf991cc23f1f219994036cbbbf4216e3a92794229503399be59663757814" Nov 21 22:20:23 crc kubenswrapper[4727]: E1121 22:20:23.387499 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"064acf991cc23f1f219994036cbbbf4216e3a92794229503399be59663757814\": container with ID starting with 064acf991cc23f1f219994036cbbbf4216e3a92794229503399be59663757814 not found: ID does not exist" containerID="064acf991cc23f1f219994036cbbbf4216e3a92794229503399be59663757814" Nov 21 22:20:23 crc 
kubenswrapper[4727]: I1121 22:20:23.387526 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"064acf991cc23f1f219994036cbbbf4216e3a92794229503399be59663757814"} err="failed to get container status \"064acf991cc23f1f219994036cbbbf4216e3a92794229503399be59663757814\": rpc error: code = NotFound desc = could not find container \"064acf991cc23f1f219994036cbbbf4216e3a92794229503399be59663757814\": container with ID starting with 064acf991cc23f1f219994036cbbbf4216e3a92794229503399be59663757814 not found: ID does not exist" Nov 21 22:20:23 crc kubenswrapper[4727]: I1121 22:20:23.519782 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82dc884d-2fa4-45d8-b505-66bce64d129f" path="/var/lib/kubelet/pods/82dc884d-2fa4-45d8-b505-66bce64d129f/volumes" Nov 21 22:20:27 crc kubenswrapper[4727]: I1121 22:20:27.483345 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-phh8g"] Nov 21 22:20:27 crc kubenswrapper[4727]: E1121 22:20:27.484350 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82dc884d-2fa4-45d8-b505-66bce64d129f" containerName="registry-server" Nov 21 22:20:27 crc kubenswrapper[4727]: I1121 22:20:27.484366 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="82dc884d-2fa4-45d8-b505-66bce64d129f" containerName="registry-server" Nov 21 22:20:27 crc kubenswrapper[4727]: E1121 22:20:27.484380 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82dc884d-2fa4-45d8-b505-66bce64d129f" containerName="extract-utilities" Nov 21 22:20:27 crc kubenswrapper[4727]: I1121 22:20:27.484391 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="82dc884d-2fa4-45d8-b505-66bce64d129f" containerName="extract-utilities" Nov 21 22:20:27 crc kubenswrapper[4727]: E1121 22:20:27.484447 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82dc884d-2fa4-45d8-b505-66bce64d129f" containerName="extract-content" Nov 
21 22:20:27 crc kubenswrapper[4727]: I1121 22:20:27.484455 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="82dc884d-2fa4-45d8-b505-66bce64d129f" containerName="extract-content" Nov 21 22:20:27 crc kubenswrapper[4727]: I1121 22:20:27.484715 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="82dc884d-2fa4-45d8-b505-66bce64d129f" containerName="registry-server" Nov 21 22:20:27 crc kubenswrapper[4727]: I1121 22:20:27.486653 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-phh8g" Nov 21 22:20:27 crc kubenswrapper[4727]: I1121 22:20:27.543767 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-phh8g"] Nov 21 22:20:27 crc kubenswrapper[4727]: I1121 22:20:27.563228 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9de74009-5565-4904-aec3-22cc21444569-catalog-content\") pod \"certified-operators-phh8g\" (UID: \"9de74009-5565-4904-aec3-22cc21444569\") " pod="openshift-marketplace/certified-operators-phh8g" Nov 21 22:20:27 crc kubenswrapper[4727]: I1121 22:20:27.563606 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7chz\" (UniqueName: \"kubernetes.io/projected/9de74009-5565-4904-aec3-22cc21444569-kube-api-access-x7chz\") pod \"certified-operators-phh8g\" (UID: \"9de74009-5565-4904-aec3-22cc21444569\") " pod="openshift-marketplace/certified-operators-phh8g" Nov 21 22:20:27 crc kubenswrapper[4727]: I1121 22:20:27.566822 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9de74009-5565-4904-aec3-22cc21444569-utilities\") pod \"certified-operators-phh8g\" (UID: \"9de74009-5565-4904-aec3-22cc21444569\") " 
pod="openshift-marketplace/certified-operators-phh8g" Nov 21 22:20:27 crc kubenswrapper[4727]: I1121 22:20:27.669625 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9de74009-5565-4904-aec3-22cc21444569-utilities\") pod \"certified-operators-phh8g\" (UID: \"9de74009-5565-4904-aec3-22cc21444569\") " pod="openshift-marketplace/certified-operators-phh8g" Nov 21 22:20:27 crc kubenswrapper[4727]: I1121 22:20:27.669745 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9de74009-5565-4904-aec3-22cc21444569-catalog-content\") pod \"certified-operators-phh8g\" (UID: \"9de74009-5565-4904-aec3-22cc21444569\") " pod="openshift-marketplace/certified-operators-phh8g" Nov 21 22:20:27 crc kubenswrapper[4727]: I1121 22:20:27.669805 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7chz\" (UniqueName: \"kubernetes.io/projected/9de74009-5565-4904-aec3-22cc21444569-kube-api-access-x7chz\") pod \"certified-operators-phh8g\" (UID: \"9de74009-5565-4904-aec3-22cc21444569\") " pod="openshift-marketplace/certified-operators-phh8g" Nov 21 22:20:27 crc kubenswrapper[4727]: I1121 22:20:27.670620 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9de74009-5565-4904-aec3-22cc21444569-utilities\") pod \"certified-operators-phh8g\" (UID: \"9de74009-5565-4904-aec3-22cc21444569\") " pod="openshift-marketplace/certified-operators-phh8g" Nov 21 22:20:27 crc kubenswrapper[4727]: I1121 22:20:27.671041 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9de74009-5565-4904-aec3-22cc21444569-catalog-content\") pod \"certified-operators-phh8g\" (UID: \"9de74009-5565-4904-aec3-22cc21444569\") " 
pod="openshift-marketplace/certified-operators-phh8g" Nov 21 22:20:27 crc kubenswrapper[4727]: I1121 22:20:27.688350 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7chz\" (UniqueName: \"kubernetes.io/projected/9de74009-5565-4904-aec3-22cc21444569-kube-api-access-x7chz\") pod \"certified-operators-phh8g\" (UID: \"9de74009-5565-4904-aec3-22cc21444569\") " pod="openshift-marketplace/certified-operators-phh8g" Nov 21 22:20:27 crc kubenswrapper[4727]: I1121 22:20:27.820562 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-phh8g" Nov 21 22:20:28 crc kubenswrapper[4727]: I1121 22:20:28.389121 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-phh8g"] Nov 21 22:20:29 crc kubenswrapper[4727]: I1121 22:20:29.329764 4727 generic.go:334] "Generic (PLEG): container finished" podID="9de74009-5565-4904-aec3-22cc21444569" containerID="8ee8b920e8b7d6cf8caee1575700e2d0233e50776139ac38e2d66b476156d06b" exitCode=0 Nov 21 22:20:29 crc kubenswrapper[4727]: I1121 22:20:29.329840 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-phh8g" event={"ID":"9de74009-5565-4904-aec3-22cc21444569","Type":"ContainerDied","Data":"8ee8b920e8b7d6cf8caee1575700e2d0233e50776139ac38e2d66b476156d06b"} Nov 21 22:20:29 crc kubenswrapper[4727]: I1121 22:20:29.330053 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-phh8g" event={"ID":"9de74009-5565-4904-aec3-22cc21444569","Type":"ContainerStarted","Data":"24e959854b8f8e9f47fe0ae0ad822ff4023da171ed4805ff95df5395a0d6b0cd"} Nov 21 22:20:29 crc kubenswrapper[4727]: I1121 22:20:29.500417 4727 scope.go:117] "RemoveContainer" containerID="5173c9ead4dd963bd522a07cca486adefc9357dff12dfeac94051c26a76649cc" Nov 21 22:20:29 crc kubenswrapper[4727]: E1121 22:20:29.500904 4727 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 22:20:29 crc kubenswrapper[4727]: I1121 22:20:29.883884 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-g9v4p"] Nov 21 22:20:29 crc kubenswrapper[4727]: I1121 22:20:29.886869 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g9v4p" Nov 21 22:20:29 crc kubenswrapper[4727]: I1121 22:20:29.908748 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g9v4p"] Nov 21 22:20:29 crc kubenswrapper[4727]: I1121 22:20:29.933467 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c8dd6a2-5e3a-448e-bb71-12c3cd50c8cb-catalog-content\") pod \"redhat-operators-g9v4p\" (UID: \"3c8dd6a2-5e3a-448e-bb71-12c3cd50c8cb\") " pod="openshift-marketplace/redhat-operators-g9v4p" Nov 21 22:20:29 crc kubenswrapper[4727]: I1121 22:20:29.933550 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c8dd6a2-5e3a-448e-bb71-12c3cd50c8cb-utilities\") pod \"redhat-operators-g9v4p\" (UID: \"3c8dd6a2-5e3a-448e-bb71-12c3cd50c8cb\") " pod="openshift-marketplace/redhat-operators-g9v4p" Nov 21 22:20:29 crc kubenswrapper[4727]: I1121 22:20:29.933734 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rf6w5\" (UniqueName: 
\"kubernetes.io/projected/3c8dd6a2-5e3a-448e-bb71-12c3cd50c8cb-kube-api-access-rf6w5\") pod \"redhat-operators-g9v4p\" (UID: \"3c8dd6a2-5e3a-448e-bb71-12c3cd50c8cb\") " pod="openshift-marketplace/redhat-operators-g9v4p" Nov 21 22:20:30 crc kubenswrapper[4727]: I1121 22:20:30.036808 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c8dd6a2-5e3a-448e-bb71-12c3cd50c8cb-catalog-content\") pod \"redhat-operators-g9v4p\" (UID: \"3c8dd6a2-5e3a-448e-bb71-12c3cd50c8cb\") " pod="openshift-marketplace/redhat-operators-g9v4p" Nov 21 22:20:30 crc kubenswrapper[4727]: I1121 22:20:30.037470 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c8dd6a2-5e3a-448e-bb71-12c3cd50c8cb-utilities\") pod \"redhat-operators-g9v4p\" (UID: \"3c8dd6a2-5e3a-448e-bb71-12c3cd50c8cb\") " pod="openshift-marketplace/redhat-operators-g9v4p" Nov 21 22:20:30 crc kubenswrapper[4727]: I1121 22:20:30.037512 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c8dd6a2-5e3a-448e-bb71-12c3cd50c8cb-catalog-content\") pod \"redhat-operators-g9v4p\" (UID: \"3c8dd6a2-5e3a-448e-bb71-12c3cd50c8cb\") " pod="openshift-marketplace/redhat-operators-g9v4p" Nov 21 22:20:30 crc kubenswrapper[4727]: I1121 22:20:30.037713 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rf6w5\" (UniqueName: \"kubernetes.io/projected/3c8dd6a2-5e3a-448e-bb71-12c3cd50c8cb-kube-api-access-rf6w5\") pod \"redhat-operators-g9v4p\" (UID: \"3c8dd6a2-5e3a-448e-bb71-12c3cd50c8cb\") " pod="openshift-marketplace/redhat-operators-g9v4p" Nov 21 22:20:30 crc kubenswrapper[4727]: I1121 22:20:30.038072 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/3c8dd6a2-5e3a-448e-bb71-12c3cd50c8cb-utilities\") pod \"redhat-operators-g9v4p\" (UID: \"3c8dd6a2-5e3a-448e-bb71-12c3cd50c8cb\") " pod="openshift-marketplace/redhat-operators-g9v4p" Nov 21 22:20:30 crc kubenswrapper[4727]: I1121 22:20:30.060874 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rf6w5\" (UniqueName: \"kubernetes.io/projected/3c8dd6a2-5e3a-448e-bb71-12c3cd50c8cb-kube-api-access-rf6w5\") pod \"redhat-operators-g9v4p\" (UID: \"3c8dd6a2-5e3a-448e-bb71-12c3cd50c8cb\") " pod="openshift-marketplace/redhat-operators-g9v4p" Nov 21 22:20:30 crc kubenswrapper[4727]: I1121 22:20:30.207262 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g9v4p" Nov 21 22:20:31 crc kubenswrapper[4727]: I1121 22:20:31.142368 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g9v4p"] Nov 21 22:20:31 crc kubenswrapper[4727]: I1121 22:20:31.351687 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g9v4p" event={"ID":"3c8dd6a2-5e3a-448e-bb71-12c3cd50c8cb","Type":"ContainerStarted","Data":"3b80c2489ee615361df9cced99628a195abb6d2ff3c872808de0983185e9a16d"} Nov 21 22:20:31 crc kubenswrapper[4727]: I1121 22:20:31.354703 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-phh8g" event={"ID":"9de74009-5565-4904-aec3-22cc21444569","Type":"ContainerStarted","Data":"7a27605701289884452b0d24f396c574fd312d27fa0dc1782acd074e5820e3c6"} Nov 21 22:20:32 crc kubenswrapper[4727]: I1121 22:20:32.368649 4727 generic.go:334] "Generic (PLEG): container finished" podID="3c8dd6a2-5e3a-448e-bb71-12c3cd50c8cb" containerID="b2c60e67e6a1cc46cf26cdbab7855262b0a00f653256dbeb382720ac21fb0a99" exitCode=0 Nov 21 22:20:32 crc kubenswrapper[4727]: I1121 22:20:32.368731 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-g9v4p" event={"ID":"3c8dd6a2-5e3a-448e-bb71-12c3cd50c8cb","Type":"ContainerDied","Data":"b2c60e67e6a1cc46cf26cdbab7855262b0a00f653256dbeb382720ac21fb0a99"} Nov 21 22:20:33 crc kubenswrapper[4727]: I1121 22:20:33.387209 4727 generic.go:334] "Generic (PLEG): container finished" podID="9de74009-5565-4904-aec3-22cc21444569" containerID="7a27605701289884452b0d24f396c574fd312d27fa0dc1782acd074e5820e3c6" exitCode=0 Nov 21 22:20:33 crc kubenswrapper[4727]: I1121 22:20:33.387626 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-phh8g" event={"ID":"9de74009-5565-4904-aec3-22cc21444569","Type":"ContainerDied","Data":"7a27605701289884452b0d24f396c574fd312d27fa0dc1782acd074e5820e3c6"} Nov 21 22:20:34 crc kubenswrapper[4727]: I1121 22:20:34.403248 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g9v4p" event={"ID":"3c8dd6a2-5e3a-448e-bb71-12c3cd50c8cb","Type":"ContainerStarted","Data":"7de63f35c9d7d5008e9e23a799af94099caa2e7a91673d73492ac1c3ef154fae"} Nov 21 22:20:34 crc kubenswrapper[4727]: I1121 22:20:34.408379 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-phh8g" event={"ID":"9de74009-5565-4904-aec3-22cc21444569","Type":"ContainerStarted","Data":"b6dcfa76b48f1ea9e9f48a8a42287c2c918613c595e9c4ab03f59da76dc1f572"} Nov 21 22:20:34 crc kubenswrapper[4727]: I1121 22:20:34.471672 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-phh8g" podStartSLOduration=2.940434636 podStartE2EDuration="7.471651916s" podCreationTimestamp="2025-11-21 22:20:27 +0000 UTC" firstStartedPulling="2025-11-21 22:20:29.332409039 +0000 UTC m=+8034.518594083" lastFinishedPulling="2025-11-21 22:20:33.863626279 +0000 UTC m=+8039.049811363" observedRunningTime="2025-11-21 22:20:34.45686151 +0000 UTC m=+8039.643046564" 
watchObservedRunningTime="2025-11-21 22:20:34.471651916 +0000 UTC m=+8039.657836970" Nov 21 22:20:37 crc kubenswrapper[4727]: I1121 22:20:37.820774 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-phh8g" Nov 21 22:20:37 crc kubenswrapper[4727]: I1121 22:20:37.823390 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-phh8g" Nov 21 22:20:38 crc kubenswrapper[4727]: I1121 22:20:38.881536 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-phh8g" podUID="9de74009-5565-4904-aec3-22cc21444569" containerName="registry-server" probeResult="failure" output=< Nov 21 22:20:38 crc kubenswrapper[4727]: timeout: failed to connect service ":50051" within 1s Nov 21 22:20:38 crc kubenswrapper[4727]: > Nov 21 22:20:39 crc kubenswrapper[4727]: I1121 22:20:39.489481 4727 generic.go:334] "Generic (PLEG): container finished" podID="3c8dd6a2-5e3a-448e-bb71-12c3cd50c8cb" containerID="7de63f35c9d7d5008e9e23a799af94099caa2e7a91673d73492ac1c3ef154fae" exitCode=0 Nov 21 22:20:39 crc kubenswrapper[4727]: I1121 22:20:39.489610 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g9v4p" event={"ID":"3c8dd6a2-5e3a-448e-bb71-12c3cd50c8cb","Type":"ContainerDied","Data":"7de63f35c9d7d5008e9e23a799af94099caa2e7a91673d73492ac1c3ef154fae"} Nov 21 22:20:40 crc kubenswrapper[4727]: I1121 22:20:40.524331 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g9v4p" event={"ID":"3c8dd6a2-5e3a-448e-bb71-12c3cd50c8cb","Type":"ContainerStarted","Data":"35763fc63f59e62619209039d5807a2f70becbbee81e729c3d79c1238e350b54"} Nov 21 22:20:40 crc kubenswrapper[4727]: I1121 22:20:40.557701 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-g9v4p" podStartSLOduration=4.025960906 
podStartE2EDuration="11.557660717s" podCreationTimestamp="2025-11-21 22:20:29 +0000 UTC" firstStartedPulling="2025-11-21 22:20:32.371420307 +0000 UTC m=+8037.557605351" lastFinishedPulling="2025-11-21 22:20:39.903120108 +0000 UTC m=+8045.089305162" observedRunningTime="2025-11-21 22:20:40.548984308 +0000 UTC m=+8045.735169352" watchObservedRunningTime="2025-11-21 22:20:40.557660717 +0000 UTC m=+8045.743845761" Nov 21 22:20:41 crc kubenswrapper[4727]: I1121 22:20:41.499602 4727 scope.go:117] "RemoveContainer" containerID="5173c9ead4dd963bd522a07cca486adefc9357dff12dfeac94051c26a76649cc" Nov 21 22:20:41 crc kubenswrapper[4727]: E1121 22:20:41.500581 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 22:20:47 crc kubenswrapper[4727]: I1121 22:20:47.903656 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-phh8g" Nov 21 22:20:47 crc kubenswrapper[4727]: I1121 22:20:47.978822 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-phh8g" Nov 21 22:20:48 crc kubenswrapper[4727]: I1121 22:20:48.150487 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-phh8g"] Nov 21 22:20:49 crc kubenswrapper[4727]: I1121 22:20:49.655476 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-phh8g" podUID="9de74009-5565-4904-aec3-22cc21444569" containerName="registry-server" containerID="cri-o://b6dcfa76b48f1ea9e9f48a8a42287c2c918613c595e9c4ab03f59da76dc1f572" 
gracePeriod=2 Nov 21 22:20:50 crc kubenswrapper[4727]: I1121 22:20:50.208337 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-g9v4p" Nov 21 22:20:50 crc kubenswrapper[4727]: I1121 22:20:50.208652 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-g9v4p" Nov 21 22:20:50 crc kubenswrapper[4727]: I1121 22:20:50.247159 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-phh8g" Nov 21 22:20:50 crc kubenswrapper[4727]: I1121 22:20:50.365215 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9de74009-5565-4904-aec3-22cc21444569-utilities\") pod \"9de74009-5565-4904-aec3-22cc21444569\" (UID: \"9de74009-5565-4904-aec3-22cc21444569\") " Nov 21 22:20:50 crc kubenswrapper[4727]: I1121 22:20:50.365806 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7chz\" (UniqueName: \"kubernetes.io/projected/9de74009-5565-4904-aec3-22cc21444569-kube-api-access-x7chz\") pod \"9de74009-5565-4904-aec3-22cc21444569\" (UID: \"9de74009-5565-4904-aec3-22cc21444569\") " Nov 21 22:20:50 crc kubenswrapper[4727]: I1121 22:20:50.365861 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9de74009-5565-4904-aec3-22cc21444569-catalog-content\") pod \"9de74009-5565-4904-aec3-22cc21444569\" (UID: \"9de74009-5565-4904-aec3-22cc21444569\") " Nov 21 22:20:50 crc kubenswrapper[4727]: I1121 22:20:50.366451 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9de74009-5565-4904-aec3-22cc21444569-utilities" (OuterVolumeSpecName: "utilities") pod "9de74009-5565-4904-aec3-22cc21444569" (UID: "9de74009-5565-4904-aec3-22cc21444569"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 22:20:50 crc kubenswrapper[4727]: I1121 22:20:50.373590 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9de74009-5565-4904-aec3-22cc21444569-kube-api-access-x7chz" (OuterVolumeSpecName: "kube-api-access-x7chz") pod "9de74009-5565-4904-aec3-22cc21444569" (UID: "9de74009-5565-4904-aec3-22cc21444569"). InnerVolumeSpecName "kube-api-access-x7chz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 22:20:50 crc kubenswrapper[4727]: I1121 22:20:50.415578 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9de74009-5565-4904-aec3-22cc21444569-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9de74009-5565-4904-aec3-22cc21444569" (UID: "9de74009-5565-4904-aec3-22cc21444569"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 22:20:50 crc kubenswrapper[4727]: I1121 22:20:50.468813 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7chz\" (UniqueName: \"kubernetes.io/projected/9de74009-5565-4904-aec3-22cc21444569-kube-api-access-x7chz\") on node \"crc\" DevicePath \"\"" Nov 21 22:20:50 crc kubenswrapper[4727]: I1121 22:20:50.468860 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9de74009-5565-4904-aec3-22cc21444569-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 22:20:50 crc kubenswrapper[4727]: I1121 22:20:50.468877 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9de74009-5565-4904-aec3-22cc21444569-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 22:20:50 crc kubenswrapper[4727]: I1121 22:20:50.671744 4727 generic.go:334] "Generic (PLEG): container finished" podID="9de74009-5565-4904-aec3-22cc21444569" 
containerID="b6dcfa76b48f1ea9e9f48a8a42287c2c918613c595e9c4ab03f59da76dc1f572" exitCode=0 Nov 21 22:20:50 crc kubenswrapper[4727]: I1121 22:20:50.671797 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-phh8g" event={"ID":"9de74009-5565-4904-aec3-22cc21444569","Type":"ContainerDied","Data":"b6dcfa76b48f1ea9e9f48a8a42287c2c918613c595e9c4ab03f59da76dc1f572"} Nov 21 22:20:50 crc kubenswrapper[4727]: I1121 22:20:50.671830 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-phh8g" event={"ID":"9de74009-5565-4904-aec3-22cc21444569","Type":"ContainerDied","Data":"24e959854b8f8e9f47fe0ae0ad822ff4023da171ed4805ff95df5395a0d6b0cd"} Nov 21 22:20:50 crc kubenswrapper[4727]: I1121 22:20:50.671833 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-phh8g" Nov 21 22:20:50 crc kubenswrapper[4727]: I1121 22:20:50.671849 4727 scope.go:117] "RemoveContainer" containerID="b6dcfa76b48f1ea9e9f48a8a42287c2c918613c595e9c4ab03f59da76dc1f572" Nov 21 22:20:50 crc kubenswrapper[4727]: I1121 22:20:50.735348 4727 scope.go:117] "RemoveContainer" containerID="7a27605701289884452b0d24f396c574fd312d27fa0dc1782acd074e5820e3c6" Nov 21 22:20:50 crc kubenswrapper[4727]: I1121 22:20:50.741100 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-phh8g"] Nov 21 22:20:50 crc kubenswrapper[4727]: I1121 22:20:50.763161 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-phh8g"] Nov 21 22:20:50 crc kubenswrapper[4727]: I1121 22:20:50.766116 4727 scope.go:117] "RemoveContainer" containerID="8ee8b920e8b7d6cf8caee1575700e2d0233e50776139ac38e2d66b476156d06b" Nov 21 22:20:50 crc kubenswrapper[4727]: I1121 22:20:50.822799 4727 scope.go:117] "RemoveContainer" containerID="b6dcfa76b48f1ea9e9f48a8a42287c2c918613c595e9c4ab03f59da76dc1f572" Nov 21 
22:20:50 crc kubenswrapper[4727]: E1121 22:20:50.823377 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6dcfa76b48f1ea9e9f48a8a42287c2c918613c595e9c4ab03f59da76dc1f572\": container with ID starting with b6dcfa76b48f1ea9e9f48a8a42287c2c918613c595e9c4ab03f59da76dc1f572 not found: ID does not exist" containerID="b6dcfa76b48f1ea9e9f48a8a42287c2c918613c595e9c4ab03f59da76dc1f572" Nov 21 22:20:50 crc kubenswrapper[4727]: I1121 22:20:50.823419 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6dcfa76b48f1ea9e9f48a8a42287c2c918613c595e9c4ab03f59da76dc1f572"} err="failed to get container status \"b6dcfa76b48f1ea9e9f48a8a42287c2c918613c595e9c4ab03f59da76dc1f572\": rpc error: code = NotFound desc = could not find container \"b6dcfa76b48f1ea9e9f48a8a42287c2c918613c595e9c4ab03f59da76dc1f572\": container with ID starting with b6dcfa76b48f1ea9e9f48a8a42287c2c918613c595e9c4ab03f59da76dc1f572 not found: ID does not exist" Nov 21 22:20:50 crc kubenswrapper[4727]: I1121 22:20:50.823447 4727 scope.go:117] "RemoveContainer" containerID="7a27605701289884452b0d24f396c574fd312d27fa0dc1782acd074e5820e3c6" Nov 21 22:20:50 crc kubenswrapper[4727]: E1121 22:20:50.823838 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a27605701289884452b0d24f396c574fd312d27fa0dc1782acd074e5820e3c6\": container with ID starting with 7a27605701289884452b0d24f396c574fd312d27fa0dc1782acd074e5820e3c6 not found: ID does not exist" containerID="7a27605701289884452b0d24f396c574fd312d27fa0dc1782acd074e5820e3c6" Nov 21 22:20:50 crc kubenswrapper[4727]: I1121 22:20:50.823893 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a27605701289884452b0d24f396c574fd312d27fa0dc1782acd074e5820e3c6"} err="failed to get container status 
\"7a27605701289884452b0d24f396c574fd312d27fa0dc1782acd074e5820e3c6\": rpc error: code = NotFound desc = could not find container \"7a27605701289884452b0d24f396c574fd312d27fa0dc1782acd074e5820e3c6\": container with ID starting with 7a27605701289884452b0d24f396c574fd312d27fa0dc1782acd074e5820e3c6 not found: ID does not exist" Nov 21 22:20:50 crc kubenswrapper[4727]: I1121 22:20:50.823979 4727 scope.go:117] "RemoveContainer" containerID="8ee8b920e8b7d6cf8caee1575700e2d0233e50776139ac38e2d66b476156d06b" Nov 21 22:20:50 crc kubenswrapper[4727]: E1121 22:20:50.826403 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ee8b920e8b7d6cf8caee1575700e2d0233e50776139ac38e2d66b476156d06b\": container with ID starting with 8ee8b920e8b7d6cf8caee1575700e2d0233e50776139ac38e2d66b476156d06b not found: ID does not exist" containerID="8ee8b920e8b7d6cf8caee1575700e2d0233e50776139ac38e2d66b476156d06b" Nov 21 22:20:50 crc kubenswrapper[4727]: I1121 22:20:50.826443 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ee8b920e8b7d6cf8caee1575700e2d0233e50776139ac38e2d66b476156d06b"} err="failed to get container status \"8ee8b920e8b7d6cf8caee1575700e2d0233e50776139ac38e2d66b476156d06b\": rpc error: code = NotFound desc = could not find container \"8ee8b920e8b7d6cf8caee1575700e2d0233e50776139ac38e2d66b476156d06b\": container with ID starting with 8ee8b920e8b7d6cf8caee1575700e2d0233e50776139ac38e2d66b476156d06b not found: ID does not exist" Nov 21 22:20:51 crc kubenswrapper[4727]: I1121 22:20:51.270485 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-g9v4p" podUID="3c8dd6a2-5e3a-448e-bb71-12c3cd50c8cb" containerName="registry-server" probeResult="failure" output=< Nov 21 22:20:51 crc kubenswrapper[4727]: timeout: failed to connect service ":50051" within 1s Nov 21 22:20:51 crc kubenswrapper[4727]: > Nov 21 22:20:51 crc 
kubenswrapper[4727]: I1121 22:20:51.524289 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9de74009-5565-4904-aec3-22cc21444569" path="/var/lib/kubelet/pods/9de74009-5565-4904-aec3-22cc21444569/volumes" Nov 21 22:20:51 crc kubenswrapper[4727]: I1121 22:20:51.686319 4727 generic.go:334] "Generic (PLEG): container finished" podID="9523e234-9365-483e-8548-43a9c312692e" containerID="8691e7ee08979a8661beb3ad66c5d9cc4bb269ff004e8374673963321a32e12d" exitCode=1 Nov 21 22:20:51 crc kubenswrapper[4727]: I1121 22:20:51.686397 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"9523e234-9365-483e-8548-43a9c312692e","Type":"ContainerDied","Data":"8691e7ee08979a8661beb3ad66c5d9cc4bb269ff004e8374673963321a32e12d"} Nov 21 22:20:53 crc kubenswrapper[4727]: I1121 22:20:53.229136 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 21 22:20:53 crc kubenswrapper[4727]: I1121 22:20:53.356349 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9523e234-9365-483e-8548-43a9c312692e-config-data\") pod \"9523e234-9365-483e-8548-43a9c312692e\" (UID: \"9523e234-9365-483e-8548-43a9c312692e\") " Nov 21 22:20:53 crc kubenswrapper[4727]: I1121 22:20:53.356889 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9523e234-9365-483e-8548-43a9c312692e-openstack-config-secret\") pod \"9523e234-9365-483e-8548-43a9c312692e\" (UID: \"9523e234-9365-483e-8548-43a9c312692e\") " Nov 21 22:20:53 crc kubenswrapper[4727]: I1121 22:20:53.356919 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"9523e234-9365-483e-8548-43a9c312692e\" (UID: 
\"9523e234-9365-483e-8548-43a9c312692e\") " Nov 21 22:20:53 crc kubenswrapper[4727]: I1121 22:20:53.357142 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/9523e234-9365-483e-8548-43a9c312692e-ca-certs\") pod \"9523e234-9365-483e-8548-43a9c312692e\" (UID: \"9523e234-9365-483e-8548-43a9c312692e\") " Nov 21 22:20:53 crc kubenswrapper[4727]: I1121 22:20:53.357223 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/9523e234-9365-483e-8548-43a9c312692e-test-operator-ephemeral-workdir\") pod \"9523e234-9365-483e-8548-43a9c312692e\" (UID: \"9523e234-9365-483e-8548-43a9c312692e\") " Nov 21 22:20:53 crc kubenswrapper[4727]: I1121 22:20:53.357418 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9523e234-9365-483e-8548-43a9c312692e-openstack-config\") pod \"9523e234-9365-483e-8548-43a9c312692e\" (UID: \"9523e234-9365-483e-8548-43a9c312692e\") " Nov 21 22:20:53 crc kubenswrapper[4727]: I1121 22:20:53.357583 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92rg5\" (UniqueName: \"kubernetes.io/projected/9523e234-9365-483e-8548-43a9c312692e-kube-api-access-92rg5\") pod \"9523e234-9365-483e-8548-43a9c312692e\" (UID: \"9523e234-9365-483e-8548-43a9c312692e\") " Nov 21 22:20:53 crc kubenswrapper[4727]: I1121 22:20:53.357626 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/9523e234-9365-483e-8548-43a9c312692e-test-operator-ephemeral-temporary\") pod \"9523e234-9365-483e-8548-43a9c312692e\" (UID: \"9523e234-9365-483e-8548-43a9c312692e\") " Nov 21 22:20:53 crc kubenswrapper[4727]: I1121 22:20:53.357693 4727 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9523e234-9365-483e-8548-43a9c312692e-ssh-key\") pod \"9523e234-9365-483e-8548-43a9c312692e\" (UID: \"9523e234-9365-483e-8548-43a9c312692e\") " Nov 21 22:20:53 crc kubenswrapper[4727]: I1121 22:20:53.357875 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9523e234-9365-483e-8548-43a9c312692e-config-data" (OuterVolumeSpecName: "config-data") pod "9523e234-9365-483e-8548-43a9c312692e" (UID: "9523e234-9365-483e-8548-43a9c312692e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 22:20:53 crc kubenswrapper[4727]: I1121 22:20:53.359350 4727 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9523e234-9365-483e-8548-43a9c312692e-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 22:20:53 crc kubenswrapper[4727]: I1121 22:20:53.361482 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9523e234-9365-483e-8548-43a9c312692e-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "9523e234-9365-483e-8548-43a9c312692e" (UID: "9523e234-9365-483e-8548-43a9c312692e"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 22:20:53 crc kubenswrapper[4727]: I1121 22:20:53.366222 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9523e234-9365-483e-8548-43a9c312692e-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "9523e234-9365-483e-8548-43a9c312692e" (UID: "9523e234-9365-483e-8548-43a9c312692e"). InnerVolumeSpecName "test-operator-ephemeral-workdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 22:20:53 crc kubenswrapper[4727]: I1121 22:20:53.366719 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "test-operator-logs") pod "9523e234-9365-483e-8548-43a9c312692e" (UID: "9523e234-9365-483e-8548-43a9c312692e"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 21 22:20:53 crc kubenswrapper[4727]: I1121 22:20:53.366848 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9523e234-9365-483e-8548-43a9c312692e-kube-api-access-92rg5" (OuterVolumeSpecName: "kube-api-access-92rg5") pod "9523e234-9365-483e-8548-43a9c312692e" (UID: "9523e234-9365-483e-8548-43a9c312692e"). InnerVolumeSpecName "kube-api-access-92rg5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 22:20:53 crc kubenswrapper[4727]: I1121 22:20:53.392738 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9523e234-9365-483e-8548-43a9c312692e-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "9523e234-9365-483e-8548-43a9c312692e" (UID: "9523e234-9365-483e-8548-43a9c312692e"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 22:20:53 crc kubenswrapper[4727]: I1121 22:20:53.395411 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9523e234-9365-483e-8548-43a9c312692e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9523e234-9365-483e-8548-43a9c312692e" (UID: "9523e234-9365-483e-8548-43a9c312692e"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 22:20:53 crc kubenswrapper[4727]: I1121 22:20:53.398325 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9523e234-9365-483e-8548-43a9c312692e-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "9523e234-9365-483e-8548-43a9c312692e" (UID: "9523e234-9365-483e-8548-43a9c312692e"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 22:20:53 crc kubenswrapper[4727]: I1121 22:20:53.417213 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9523e234-9365-483e-8548-43a9c312692e-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "9523e234-9365-483e-8548-43a9c312692e" (UID: "9523e234-9365-483e-8548-43a9c312692e"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 22:20:53 crc kubenswrapper[4727]: I1121 22:20:53.461361 4727 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9523e234-9365-483e-8548-43a9c312692e-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 21 22:20:53 crc kubenswrapper[4727]: I1121 22:20:53.461397 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92rg5\" (UniqueName: \"kubernetes.io/projected/9523e234-9365-483e-8548-43a9c312692e-kube-api-access-92rg5\") on node \"crc\" DevicePath \"\"" Nov 21 22:20:53 crc kubenswrapper[4727]: I1121 22:20:53.461409 4727 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/9523e234-9365-483e-8548-43a9c312692e-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Nov 21 22:20:53 crc kubenswrapper[4727]: I1121 22:20:53.461418 4727 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9523e234-9365-483e-8548-43a9c312692e-ssh-key\") on 
node \"crc\" DevicePath \"\"" Nov 21 22:20:53 crc kubenswrapper[4727]: I1121 22:20:53.461428 4727 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9523e234-9365-483e-8548-43a9c312692e-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 21 22:20:53 crc kubenswrapper[4727]: I1121 22:20:53.462036 4727 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Nov 21 22:20:53 crc kubenswrapper[4727]: I1121 22:20:53.462057 4727 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/9523e234-9365-483e-8548-43a9c312692e-ca-certs\") on node \"crc\" DevicePath \"\"" Nov 21 22:20:53 crc kubenswrapper[4727]: I1121 22:20:53.462067 4727 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/9523e234-9365-483e-8548-43a9c312692e-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Nov 21 22:20:53 crc kubenswrapper[4727]: I1121 22:20:53.486695 4727 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Nov 21 22:20:53 crc kubenswrapper[4727]: I1121 22:20:53.565263 4727 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Nov 21 22:20:53 crc kubenswrapper[4727]: I1121 22:20:53.718379 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"9523e234-9365-483e-8548-43a9c312692e","Type":"ContainerDied","Data":"a36cef4edf466319cf2c52f9eec405f5ec742711ef8f0f9ae9b168184c8d2d3f"} Nov 21 22:20:53 crc kubenswrapper[4727]: I1121 22:20:53.718745 4727 pod_container_deletor.go:80] 
"Container not found in pod's containers" containerID="a36cef4edf466319cf2c52f9eec405f5ec742711ef8f0f9ae9b168184c8d2d3f" Nov 21 22:20:53 crc kubenswrapper[4727]: I1121 22:20:53.718556 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 21 22:20:56 crc kubenswrapper[4727]: I1121 22:20:56.498782 4727 scope.go:117] "RemoveContainer" containerID="5173c9ead4dd963bd522a07cca486adefc9357dff12dfeac94051c26a76649cc" Nov 21 22:20:56 crc kubenswrapper[4727]: E1121 22:20:56.499439 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 22:21:00 crc kubenswrapper[4727]: I1121 22:21:00.284061 4727 scope.go:117] "RemoveContainer" containerID="dcb24d4707d5f0111b587af1ca7075f53868b4b7a1a3b1e4ffc5dc2839946e31" Nov 21 22:21:00 crc kubenswrapper[4727]: I1121 22:21:00.314363 4727 scope.go:117] "RemoveContainer" containerID="5a920219646bb768af4081f81b0ba42857d74dffa0e0121cc87c4f89530fd955" Nov 21 22:21:00 crc kubenswrapper[4727]: I1121 22:21:00.382299 4727 scope.go:117] "RemoveContainer" containerID="008976cbb47449bec69ef2239150f045fbab2e18796b15af69278cf7ab824cf4" Nov 21 22:21:00 crc kubenswrapper[4727]: I1121 22:21:00.980353 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 21 22:21:00 crc kubenswrapper[4727]: E1121 22:21:00.981380 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9de74009-5565-4904-aec3-22cc21444569" containerName="registry-server" Nov 21 22:21:00 crc kubenswrapper[4727]: I1121 22:21:00.981426 4727 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="9de74009-5565-4904-aec3-22cc21444569" containerName="registry-server" Nov 21 22:21:00 crc kubenswrapper[4727]: E1121 22:21:00.981502 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9de74009-5565-4904-aec3-22cc21444569" containerName="extract-utilities" Nov 21 22:21:00 crc kubenswrapper[4727]: I1121 22:21:00.981523 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="9de74009-5565-4904-aec3-22cc21444569" containerName="extract-utilities" Nov 21 22:21:00 crc kubenswrapper[4727]: E1121 22:21:00.981561 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9523e234-9365-483e-8548-43a9c312692e" containerName="tempest-tests-tempest-tests-runner" Nov 21 22:21:00 crc kubenswrapper[4727]: I1121 22:21:00.981581 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="9523e234-9365-483e-8548-43a9c312692e" containerName="tempest-tests-tempest-tests-runner" Nov 21 22:21:00 crc kubenswrapper[4727]: E1121 22:21:00.981651 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9de74009-5565-4904-aec3-22cc21444569" containerName="extract-content" Nov 21 22:21:00 crc kubenswrapper[4727]: I1121 22:21:00.981671 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="9de74009-5565-4904-aec3-22cc21444569" containerName="extract-content" Nov 21 22:21:00 crc kubenswrapper[4727]: I1121 22:21:00.982287 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="9de74009-5565-4904-aec3-22cc21444569" containerName="registry-server" Nov 21 22:21:00 crc kubenswrapper[4727]: I1121 22:21:00.982354 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="9523e234-9365-483e-8548-43a9c312692e" containerName="tempest-tests-tempest-tests-runner" Nov 21 22:21:00 crc kubenswrapper[4727]: I1121 22:21:00.984276 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 21 22:21:00 crc kubenswrapper[4727]: I1121 22:21:00.988597 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-vbccx" Nov 21 22:21:00 crc kubenswrapper[4727]: I1121 22:21:00.991671 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 21 22:21:01 crc kubenswrapper[4727]: I1121 22:21:01.066398 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lds5g\" (UniqueName: \"kubernetes.io/projected/b32af989-be86-463d-84c6-55a06a44b77d-kube-api-access-lds5g\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b32af989-be86-463d-84c6-55a06a44b77d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 21 22:21:01 crc kubenswrapper[4727]: I1121 22:21:01.066562 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b32af989-be86-463d-84c6-55a06a44b77d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 21 22:21:01 crc kubenswrapper[4727]: I1121 22:21:01.168590 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b32af989-be86-463d-84c6-55a06a44b77d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 21 22:21:01 crc kubenswrapper[4727]: I1121 22:21:01.168709 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lds5g\" (UniqueName: 
\"kubernetes.io/projected/b32af989-be86-463d-84c6-55a06a44b77d-kube-api-access-lds5g\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b32af989-be86-463d-84c6-55a06a44b77d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 21 22:21:01 crc kubenswrapper[4727]: I1121 22:21:01.170156 4727 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b32af989-be86-463d-84c6-55a06a44b77d\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 21 22:21:01 crc kubenswrapper[4727]: I1121 22:21:01.192270 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lds5g\" (UniqueName: \"kubernetes.io/projected/b32af989-be86-463d-84c6-55a06a44b77d-kube-api-access-lds5g\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b32af989-be86-463d-84c6-55a06a44b77d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 21 22:21:01 crc kubenswrapper[4727]: I1121 22:21:01.226102 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b32af989-be86-463d-84c6-55a06a44b77d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 21 22:21:01 crc kubenswrapper[4727]: I1121 22:21:01.301295 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-g9v4p" podUID="3c8dd6a2-5e3a-448e-bb71-12c3cd50c8cb" containerName="registry-server" probeResult="failure" output=< Nov 21 22:21:01 crc kubenswrapper[4727]: timeout: failed to connect service ":50051" within 1s Nov 21 22:21:01 crc kubenswrapper[4727]: > Nov 21 22:21:01 
crc kubenswrapper[4727]: I1121 22:21:01.314072 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 21 22:21:01 crc kubenswrapper[4727]: I1121 22:21:01.920129 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 21 22:21:02 crc kubenswrapper[4727]: I1121 22:21:02.848320 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"b32af989-be86-463d-84c6-55a06a44b77d","Type":"ContainerStarted","Data":"d17dbda2c4911fc019a9eb03fda9595f30cb550c330ed991e99a7f3be16ce70c"} Nov 21 22:21:03 crc kubenswrapper[4727]: I1121 22:21:03.874516 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"b32af989-be86-463d-84c6-55a06a44b77d","Type":"ContainerStarted","Data":"29a222fb75efecc177b0b038838c8b35e09b0ffecfd0607d0c9ddac30642823e"} Nov 21 22:21:03 crc kubenswrapper[4727]: I1121 22:21:03.889927 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.719873828 podStartE2EDuration="3.889908972s" podCreationTimestamp="2025-11-21 22:21:00 +0000 UTC" firstStartedPulling="2025-11-21 22:21:02.007332561 +0000 UTC m=+8067.193517605" lastFinishedPulling="2025-11-21 22:21:03.177367715 +0000 UTC m=+8068.363552749" observedRunningTime="2025-11-21 22:21:03.887043653 +0000 UTC m=+8069.073228697" watchObservedRunningTime="2025-11-21 22:21:03.889908972 +0000 UTC m=+8069.076094026" Nov 21 22:21:08 crc kubenswrapper[4727]: I1121 22:21:08.499400 4727 scope.go:117] "RemoveContainer" containerID="5173c9ead4dd963bd522a07cca486adefc9357dff12dfeac94051c26a76649cc" Nov 21 22:21:08 crc kubenswrapper[4727]: E1121 22:21:08.500664 4727 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 22:21:10 crc kubenswrapper[4727]: I1121 22:21:10.298091 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-g9v4p" Nov 21 22:21:10 crc kubenswrapper[4727]: I1121 22:21:10.373137 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-g9v4p" Nov 21 22:21:10 crc kubenswrapper[4727]: I1121 22:21:10.543502 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g9v4p"] Nov 21 22:21:12 crc kubenswrapper[4727]: I1121 22:21:12.024737 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-g9v4p" podUID="3c8dd6a2-5e3a-448e-bb71-12c3cd50c8cb" containerName="registry-server" containerID="cri-o://35763fc63f59e62619209039d5807a2f70becbbee81e729c3d79c1238e350b54" gracePeriod=2 Nov 21 22:21:12 crc kubenswrapper[4727]: I1121 22:21:12.625950 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g9v4p" Nov 21 22:21:12 crc kubenswrapper[4727]: I1121 22:21:12.805780 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c8dd6a2-5e3a-448e-bb71-12c3cd50c8cb-catalog-content\") pod \"3c8dd6a2-5e3a-448e-bb71-12c3cd50c8cb\" (UID: \"3c8dd6a2-5e3a-448e-bb71-12c3cd50c8cb\") " Nov 21 22:21:12 crc kubenswrapper[4727]: I1121 22:21:12.805870 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c8dd6a2-5e3a-448e-bb71-12c3cd50c8cb-utilities\") pod \"3c8dd6a2-5e3a-448e-bb71-12c3cd50c8cb\" (UID: \"3c8dd6a2-5e3a-448e-bb71-12c3cd50c8cb\") " Nov 21 22:21:12 crc kubenswrapper[4727]: I1121 22:21:12.806082 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rf6w5\" (UniqueName: \"kubernetes.io/projected/3c8dd6a2-5e3a-448e-bb71-12c3cd50c8cb-kube-api-access-rf6w5\") pod \"3c8dd6a2-5e3a-448e-bb71-12c3cd50c8cb\" (UID: \"3c8dd6a2-5e3a-448e-bb71-12c3cd50c8cb\") " Nov 21 22:21:12 crc kubenswrapper[4727]: I1121 22:21:12.807280 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c8dd6a2-5e3a-448e-bb71-12c3cd50c8cb-utilities" (OuterVolumeSpecName: "utilities") pod "3c8dd6a2-5e3a-448e-bb71-12c3cd50c8cb" (UID: "3c8dd6a2-5e3a-448e-bb71-12c3cd50c8cb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 22:21:12 crc kubenswrapper[4727]: I1121 22:21:12.822310 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c8dd6a2-5e3a-448e-bb71-12c3cd50c8cb-kube-api-access-rf6w5" (OuterVolumeSpecName: "kube-api-access-rf6w5") pod "3c8dd6a2-5e3a-448e-bb71-12c3cd50c8cb" (UID: "3c8dd6a2-5e3a-448e-bb71-12c3cd50c8cb"). InnerVolumeSpecName "kube-api-access-rf6w5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 22:21:12 crc kubenswrapper[4727]: I1121 22:21:12.916099 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rf6w5\" (UniqueName: \"kubernetes.io/projected/3c8dd6a2-5e3a-448e-bb71-12c3cd50c8cb-kube-api-access-rf6w5\") on node \"crc\" DevicePath \"\"" Nov 21 22:21:12 crc kubenswrapper[4727]: I1121 22:21:12.916142 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c8dd6a2-5e3a-448e-bb71-12c3cd50c8cb-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 22:21:12 crc kubenswrapper[4727]: I1121 22:21:12.939878 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c8dd6a2-5e3a-448e-bb71-12c3cd50c8cb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3c8dd6a2-5e3a-448e-bb71-12c3cd50c8cb" (UID: "3c8dd6a2-5e3a-448e-bb71-12c3cd50c8cb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 22:21:13 crc kubenswrapper[4727]: I1121 22:21:13.017995 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c8dd6a2-5e3a-448e-bb71-12c3cd50c8cb-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 22:21:13 crc kubenswrapper[4727]: I1121 22:21:13.035612 4727 generic.go:334] "Generic (PLEG): container finished" podID="3c8dd6a2-5e3a-448e-bb71-12c3cd50c8cb" containerID="35763fc63f59e62619209039d5807a2f70becbbee81e729c3d79c1238e350b54" exitCode=0 Nov 21 22:21:13 crc kubenswrapper[4727]: I1121 22:21:13.035649 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g9v4p" event={"ID":"3c8dd6a2-5e3a-448e-bb71-12c3cd50c8cb","Type":"ContainerDied","Data":"35763fc63f59e62619209039d5807a2f70becbbee81e729c3d79c1238e350b54"} Nov 21 22:21:13 crc kubenswrapper[4727]: I1121 22:21:13.035699 4727 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-g9v4p" event={"ID":"3c8dd6a2-5e3a-448e-bb71-12c3cd50c8cb","Type":"ContainerDied","Data":"3b80c2489ee615361df9cced99628a195abb6d2ff3c872808de0983185e9a16d"} Nov 21 22:21:13 crc kubenswrapper[4727]: I1121 22:21:13.035716 4727 scope.go:117] "RemoveContainer" containerID="35763fc63f59e62619209039d5807a2f70becbbee81e729c3d79c1238e350b54" Nov 21 22:21:13 crc kubenswrapper[4727]: I1121 22:21:13.036309 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g9v4p" Nov 21 22:21:13 crc kubenswrapper[4727]: I1121 22:21:13.073134 4727 scope.go:117] "RemoveContainer" containerID="7de63f35c9d7d5008e9e23a799af94099caa2e7a91673d73492ac1c3ef154fae" Nov 21 22:21:13 crc kubenswrapper[4727]: I1121 22:21:13.091115 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g9v4p"] Nov 21 22:21:13 crc kubenswrapper[4727]: I1121 22:21:13.100873 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-g9v4p"] Nov 21 22:21:13 crc kubenswrapper[4727]: I1121 22:21:13.104273 4727 scope.go:117] "RemoveContainer" containerID="b2c60e67e6a1cc46cf26cdbab7855262b0a00f653256dbeb382720ac21fb0a99" Nov 21 22:21:13 crc kubenswrapper[4727]: I1121 22:21:13.175701 4727 scope.go:117] "RemoveContainer" containerID="35763fc63f59e62619209039d5807a2f70becbbee81e729c3d79c1238e350b54" Nov 21 22:21:13 crc kubenswrapper[4727]: E1121 22:21:13.176217 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35763fc63f59e62619209039d5807a2f70becbbee81e729c3d79c1238e350b54\": container with ID starting with 35763fc63f59e62619209039d5807a2f70becbbee81e729c3d79c1238e350b54 not found: ID does not exist" containerID="35763fc63f59e62619209039d5807a2f70becbbee81e729c3d79c1238e350b54" Nov 21 22:21:13 crc kubenswrapper[4727]: I1121 22:21:13.176275 4727 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35763fc63f59e62619209039d5807a2f70becbbee81e729c3d79c1238e350b54"} err="failed to get container status \"35763fc63f59e62619209039d5807a2f70becbbee81e729c3d79c1238e350b54\": rpc error: code = NotFound desc = could not find container \"35763fc63f59e62619209039d5807a2f70becbbee81e729c3d79c1238e350b54\": container with ID starting with 35763fc63f59e62619209039d5807a2f70becbbee81e729c3d79c1238e350b54 not found: ID does not exist" Nov 21 22:21:13 crc kubenswrapper[4727]: I1121 22:21:13.176309 4727 scope.go:117] "RemoveContainer" containerID="7de63f35c9d7d5008e9e23a799af94099caa2e7a91673d73492ac1c3ef154fae" Nov 21 22:21:13 crc kubenswrapper[4727]: E1121 22:21:13.176852 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7de63f35c9d7d5008e9e23a799af94099caa2e7a91673d73492ac1c3ef154fae\": container with ID starting with 7de63f35c9d7d5008e9e23a799af94099caa2e7a91673d73492ac1c3ef154fae not found: ID does not exist" containerID="7de63f35c9d7d5008e9e23a799af94099caa2e7a91673d73492ac1c3ef154fae" Nov 21 22:21:13 crc kubenswrapper[4727]: I1121 22:21:13.176886 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7de63f35c9d7d5008e9e23a799af94099caa2e7a91673d73492ac1c3ef154fae"} err="failed to get container status \"7de63f35c9d7d5008e9e23a799af94099caa2e7a91673d73492ac1c3ef154fae\": rpc error: code = NotFound desc = could not find container \"7de63f35c9d7d5008e9e23a799af94099caa2e7a91673d73492ac1c3ef154fae\": container with ID starting with 7de63f35c9d7d5008e9e23a799af94099caa2e7a91673d73492ac1c3ef154fae not found: ID does not exist" Nov 21 22:21:13 crc kubenswrapper[4727]: I1121 22:21:13.176924 4727 scope.go:117] "RemoveContainer" containerID="b2c60e67e6a1cc46cf26cdbab7855262b0a00f653256dbeb382720ac21fb0a99" Nov 21 22:21:13 crc kubenswrapper[4727]: E1121 
22:21:13.177375 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2c60e67e6a1cc46cf26cdbab7855262b0a00f653256dbeb382720ac21fb0a99\": container with ID starting with b2c60e67e6a1cc46cf26cdbab7855262b0a00f653256dbeb382720ac21fb0a99 not found: ID does not exist" containerID="b2c60e67e6a1cc46cf26cdbab7855262b0a00f653256dbeb382720ac21fb0a99" Nov 21 22:21:13 crc kubenswrapper[4727]: I1121 22:21:13.177432 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2c60e67e6a1cc46cf26cdbab7855262b0a00f653256dbeb382720ac21fb0a99"} err="failed to get container status \"b2c60e67e6a1cc46cf26cdbab7855262b0a00f653256dbeb382720ac21fb0a99\": rpc error: code = NotFound desc = could not find container \"b2c60e67e6a1cc46cf26cdbab7855262b0a00f653256dbeb382720ac21fb0a99\": container with ID starting with b2c60e67e6a1cc46cf26cdbab7855262b0a00f653256dbeb382720ac21fb0a99 not found: ID does not exist" Nov 21 22:21:13 crc kubenswrapper[4727]: I1121 22:21:13.523885 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c8dd6a2-5e3a-448e-bb71-12c3cd50c8cb" path="/var/lib/kubelet/pods/3c8dd6a2-5e3a-448e-bb71-12c3cd50c8cb/volumes" Nov 21 22:21:23 crc kubenswrapper[4727]: I1121 22:21:23.502259 4727 scope.go:117] "RemoveContainer" containerID="5173c9ead4dd963bd522a07cca486adefc9357dff12dfeac94051c26a76649cc" Nov 21 22:21:23 crc kubenswrapper[4727]: E1121 22:21:23.502880 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 22:21:35 crc kubenswrapper[4727]: I1121 22:21:35.516692 
4727 scope.go:117] "RemoveContainer" containerID="5173c9ead4dd963bd522a07cca486adefc9357dff12dfeac94051c26a76649cc" Nov 21 22:21:35 crc kubenswrapper[4727]: E1121 22:21:35.517910 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 22:21:46 crc kubenswrapper[4727]: I1121 22:21:46.160568 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-lkcrn/must-gather-w9bms"] Nov 21 22:21:46 crc kubenswrapper[4727]: E1121 22:21:46.161418 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c8dd6a2-5e3a-448e-bb71-12c3cd50c8cb" containerName="extract-utilities" Nov 21 22:21:46 crc kubenswrapper[4727]: I1121 22:21:46.161430 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c8dd6a2-5e3a-448e-bb71-12c3cd50c8cb" containerName="extract-utilities" Nov 21 22:21:46 crc kubenswrapper[4727]: E1121 22:21:46.161462 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c8dd6a2-5e3a-448e-bb71-12c3cd50c8cb" containerName="registry-server" Nov 21 22:21:46 crc kubenswrapper[4727]: I1121 22:21:46.161468 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c8dd6a2-5e3a-448e-bb71-12c3cd50c8cb" containerName="registry-server" Nov 21 22:21:46 crc kubenswrapper[4727]: E1121 22:21:46.161500 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c8dd6a2-5e3a-448e-bb71-12c3cd50c8cb" containerName="extract-content" Nov 21 22:21:46 crc kubenswrapper[4727]: I1121 22:21:46.161505 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c8dd6a2-5e3a-448e-bb71-12c3cd50c8cb" containerName="extract-content" Nov 21 22:21:46 crc 
kubenswrapper[4727]: I1121 22:21:46.161733 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c8dd6a2-5e3a-448e-bb71-12c3cd50c8cb" containerName="registry-server" Nov 21 22:21:46 crc kubenswrapper[4727]: I1121 22:21:46.165433 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lkcrn/must-gather-w9bms" Nov 21 22:21:46 crc kubenswrapper[4727]: I1121 22:21:46.170097 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-lkcrn"/"openshift-service-ca.crt" Nov 21 22:21:46 crc kubenswrapper[4727]: I1121 22:21:46.170097 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-lkcrn"/"default-dockercfg-rcpqs" Nov 21 22:21:46 crc kubenswrapper[4727]: I1121 22:21:46.170540 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-lkcrn"/"kube-root-ca.crt" Nov 21 22:21:46 crc kubenswrapper[4727]: I1121 22:21:46.193935 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-lkcrn/must-gather-w9bms"] Nov 21 22:21:46 crc kubenswrapper[4727]: I1121 22:21:46.260206 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7163d876-0482-4f49-a281-3699fdb0d041-must-gather-output\") pod \"must-gather-w9bms\" (UID: \"7163d876-0482-4f49-a281-3699fdb0d041\") " pod="openshift-must-gather-lkcrn/must-gather-w9bms" Nov 21 22:21:46 crc kubenswrapper[4727]: I1121 22:21:46.260373 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hq7wl\" (UniqueName: \"kubernetes.io/projected/7163d876-0482-4f49-a281-3699fdb0d041-kube-api-access-hq7wl\") pod \"must-gather-w9bms\" (UID: \"7163d876-0482-4f49-a281-3699fdb0d041\") " pod="openshift-must-gather-lkcrn/must-gather-w9bms" Nov 21 22:21:46 crc kubenswrapper[4727]: I1121 22:21:46.362779 
4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hq7wl\" (UniqueName: \"kubernetes.io/projected/7163d876-0482-4f49-a281-3699fdb0d041-kube-api-access-hq7wl\") pod \"must-gather-w9bms\" (UID: \"7163d876-0482-4f49-a281-3699fdb0d041\") " pod="openshift-must-gather-lkcrn/must-gather-w9bms" Nov 21 22:21:46 crc kubenswrapper[4727]: I1121 22:21:46.362914 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7163d876-0482-4f49-a281-3699fdb0d041-must-gather-output\") pod \"must-gather-w9bms\" (UID: \"7163d876-0482-4f49-a281-3699fdb0d041\") " pod="openshift-must-gather-lkcrn/must-gather-w9bms" Nov 21 22:21:46 crc kubenswrapper[4727]: I1121 22:21:46.363379 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7163d876-0482-4f49-a281-3699fdb0d041-must-gather-output\") pod \"must-gather-w9bms\" (UID: \"7163d876-0482-4f49-a281-3699fdb0d041\") " pod="openshift-must-gather-lkcrn/must-gather-w9bms" Nov 21 22:21:46 crc kubenswrapper[4727]: I1121 22:21:46.386066 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hq7wl\" (UniqueName: \"kubernetes.io/projected/7163d876-0482-4f49-a281-3699fdb0d041-kube-api-access-hq7wl\") pod \"must-gather-w9bms\" (UID: \"7163d876-0482-4f49-a281-3699fdb0d041\") " pod="openshift-must-gather-lkcrn/must-gather-w9bms" Nov 21 22:21:46 crc kubenswrapper[4727]: I1121 22:21:46.484663 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lkcrn/must-gather-w9bms" Nov 21 22:21:46 crc kubenswrapper[4727]: I1121 22:21:46.975267 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-lkcrn/must-gather-w9bms"] Nov 21 22:21:46 crc kubenswrapper[4727]: W1121 22:21:46.986128 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7163d876_0482_4f49_a281_3699fdb0d041.slice/crio-35717500d6f47668b5be7c41d493b1f1bfe029928736942b0a9bc1cbc9240618 WatchSource:0}: Error finding container 35717500d6f47668b5be7c41d493b1f1bfe029928736942b0a9bc1cbc9240618: Status 404 returned error can't find the container with id 35717500d6f47668b5be7c41d493b1f1bfe029928736942b0a9bc1cbc9240618 Nov 21 22:21:47 crc kubenswrapper[4727]: I1121 22:21:47.691475 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lkcrn/must-gather-w9bms" event={"ID":"7163d876-0482-4f49-a281-3699fdb0d041","Type":"ContainerStarted","Data":"35717500d6f47668b5be7c41d493b1f1bfe029928736942b0a9bc1cbc9240618"} Nov 21 22:21:48 crc kubenswrapper[4727]: I1121 22:21:48.500432 4727 scope.go:117] "RemoveContainer" containerID="5173c9ead4dd963bd522a07cca486adefc9357dff12dfeac94051c26a76649cc" Nov 21 22:21:49 crc kubenswrapper[4727]: I1121 22:21:49.798025 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" event={"ID":"b58aef8f-f223-47d8-a2e6-4a80aeeeec42","Type":"ContainerStarted","Data":"e632dbef58939d9e72ed84441bc629b2c51b72220ef130e0c351e2c9205f8637"} Nov 21 22:21:56 crc kubenswrapper[4727]: I1121 22:21:56.891900 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lkcrn/must-gather-w9bms" event={"ID":"7163d876-0482-4f49-a281-3699fdb0d041","Type":"ContainerStarted","Data":"89344a399a424edc6b5f4abfc8eae2b8a85409c9837b5dff10371a578b855aa4"} Nov 21 22:21:57 crc kubenswrapper[4727]: I1121 
22:21:57.908944 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lkcrn/must-gather-w9bms" event={"ID":"7163d876-0482-4f49-a281-3699fdb0d041","Type":"ContainerStarted","Data":"dc98478475acd6b437909546c8b39f212a1bfa0b3c8a1e87f37a5b8cf01675c2"} Nov 21 22:21:57 crc kubenswrapper[4727]: I1121 22:21:57.938218 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-lkcrn/must-gather-w9bms" podStartSLOduration=2.429504736 podStartE2EDuration="11.938194865s" podCreationTimestamp="2025-11-21 22:21:46 +0000 UTC" firstStartedPulling="2025-11-21 22:21:46.988468828 +0000 UTC m=+8112.174653872" lastFinishedPulling="2025-11-21 22:21:56.497158917 +0000 UTC m=+8121.683344001" observedRunningTime="2025-11-21 22:21:57.926527814 +0000 UTC m=+8123.112712868" watchObservedRunningTime="2025-11-21 22:21:57.938194865 +0000 UTC m=+8123.124379919" Nov 21 22:22:03 crc kubenswrapper[4727]: I1121 22:22:03.070762 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-lkcrn/crc-debug-sv9r8"] Nov 21 22:22:03 crc kubenswrapper[4727]: I1121 22:22:03.072675 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lkcrn/crc-debug-sv9r8" Nov 21 22:22:03 crc kubenswrapper[4727]: I1121 22:22:03.168448 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/48d77315-e5e3-422f-bb33-881f83339d9b-host\") pod \"crc-debug-sv9r8\" (UID: \"48d77315-e5e3-422f-bb33-881f83339d9b\") " pod="openshift-must-gather-lkcrn/crc-debug-sv9r8" Nov 21 22:22:03 crc kubenswrapper[4727]: I1121 22:22:03.168535 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7svfr\" (UniqueName: \"kubernetes.io/projected/48d77315-e5e3-422f-bb33-881f83339d9b-kube-api-access-7svfr\") pod \"crc-debug-sv9r8\" (UID: \"48d77315-e5e3-422f-bb33-881f83339d9b\") " pod="openshift-must-gather-lkcrn/crc-debug-sv9r8" Nov 21 22:22:03 crc kubenswrapper[4727]: I1121 22:22:03.272702 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/48d77315-e5e3-422f-bb33-881f83339d9b-host\") pod \"crc-debug-sv9r8\" (UID: \"48d77315-e5e3-422f-bb33-881f83339d9b\") " pod="openshift-must-gather-lkcrn/crc-debug-sv9r8" Nov 21 22:22:03 crc kubenswrapper[4727]: I1121 22:22:03.272991 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7svfr\" (UniqueName: \"kubernetes.io/projected/48d77315-e5e3-422f-bb33-881f83339d9b-kube-api-access-7svfr\") pod \"crc-debug-sv9r8\" (UID: \"48d77315-e5e3-422f-bb33-881f83339d9b\") " pod="openshift-must-gather-lkcrn/crc-debug-sv9r8" Nov 21 22:22:03 crc kubenswrapper[4727]: I1121 22:22:03.273420 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/48d77315-e5e3-422f-bb33-881f83339d9b-host\") pod \"crc-debug-sv9r8\" (UID: \"48d77315-e5e3-422f-bb33-881f83339d9b\") " pod="openshift-must-gather-lkcrn/crc-debug-sv9r8" Nov 21 22:22:03 crc 
kubenswrapper[4727]: I1121 22:22:03.297801 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7svfr\" (UniqueName: \"kubernetes.io/projected/48d77315-e5e3-422f-bb33-881f83339d9b-kube-api-access-7svfr\") pod \"crc-debug-sv9r8\" (UID: \"48d77315-e5e3-422f-bb33-881f83339d9b\") " pod="openshift-must-gather-lkcrn/crc-debug-sv9r8" Nov 21 22:22:03 crc kubenswrapper[4727]: I1121 22:22:03.392665 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lkcrn/crc-debug-sv9r8" Nov 21 22:22:03 crc kubenswrapper[4727]: I1121 22:22:03.991785 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lkcrn/crc-debug-sv9r8" event={"ID":"48d77315-e5e3-422f-bb33-881f83339d9b","Type":"ContainerStarted","Data":"155363a3ca10c1335a7b1fcd0c229919f9b262be658abd9c2c345f7dceffed0f"} Nov 21 22:22:15 crc kubenswrapper[4727]: I1121 22:22:15.106330 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lkcrn/crc-debug-sv9r8" event={"ID":"48d77315-e5e3-422f-bb33-881f83339d9b","Type":"ContainerStarted","Data":"fae5efa887bb56726e3d36ed62d4378c174101d86ce21bfc647552d4079e0a96"} Nov 21 22:22:15 crc kubenswrapper[4727]: I1121 22:22:15.122550 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-lkcrn/crc-debug-sv9r8" podStartSLOduration=1.378500516 podStartE2EDuration="12.122534858s" podCreationTimestamp="2025-11-21 22:22:03 +0000 UTC" firstStartedPulling="2025-11-21 22:22:03.426333373 +0000 UTC m=+8128.612518417" lastFinishedPulling="2025-11-21 22:22:14.170367715 +0000 UTC m=+8139.356552759" observedRunningTime="2025-11-21 22:22:15.119547526 +0000 UTC m=+8140.305732570" watchObservedRunningTime="2025-11-21 22:22:15.122534858 +0000 UTC m=+8140.308719902" Nov 21 22:23:06 crc kubenswrapper[4727]: I1121 22:23:06.682924 4727 generic.go:334] "Generic (PLEG): container finished" podID="48d77315-e5e3-422f-bb33-881f83339d9b" 
containerID="fae5efa887bb56726e3d36ed62d4378c174101d86ce21bfc647552d4079e0a96" exitCode=0 Nov 21 22:23:06 crc kubenswrapper[4727]: I1121 22:23:06.683020 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lkcrn/crc-debug-sv9r8" event={"ID":"48d77315-e5e3-422f-bb33-881f83339d9b","Type":"ContainerDied","Data":"fae5efa887bb56726e3d36ed62d4378c174101d86ce21bfc647552d4079e0a96"} Nov 21 22:23:07 crc kubenswrapper[4727]: I1121 22:23:07.828089 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lkcrn/crc-debug-sv9r8" Nov 21 22:23:07 crc kubenswrapper[4727]: I1121 22:23:07.878748 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-lkcrn/crc-debug-sv9r8"] Nov 21 22:23:07 crc kubenswrapper[4727]: I1121 22:23:07.892159 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-lkcrn/crc-debug-sv9r8"] Nov 21 22:23:07 crc kubenswrapper[4727]: I1121 22:23:07.892260 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7svfr\" (UniqueName: \"kubernetes.io/projected/48d77315-e5e3-422f-bb33-881f83339d9b-kube-api-access-7svfr\") pod \"48d77315-e5e3-422f-bb33-881f83339d9b\" (UID: \"48d77315-e5e3-422f-bb33-881f83339d9b\") " Nov 21 22:23:07 crc kubenswrapper[4727]: I1121 22:23:07.892296 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/48d77315-e5e3-422f-bb33-881f83339d9b-host\") pod \"48d77315-e5e3-422f-bb33-881f83339d9b\" (UID: \"48d77315-e5e3-422f-bb33-881f83339d9b\") " Nov 21 22:23:07 crc kubenswrapper[4727]: I1121 22:23:07.892396 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/48d77315-e5e3-422f-bb33-881f83339d9b-host" (OuterVolumeSpecName: "host") pod "48d77315-e5e3-422f-bb33-881f83339d9b" (UID: "48d77315-e5e3-422f-bb33-881f83339d9b"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 22:23:07 crc kubenswrapper[4727]: I1121 22:23:07.892936 4727 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/48d77315-e5e3-422f-bb33-881f83339d9b-host\") on node \"crc\" DevicePath \"\"" Nov 21 22:23:07 crc kubenswrapper[4727]: I1121 22:23:07.900247 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48d77315-e5e3-422f-bb33-881f83339d9b-kube-api-access-7svfr" (OuterVolumeSpecName: "kube-api-access-7svfr") pod "48d77315-e5e3-422f-bb33-881f83339d9b" (UID: "48d77315-e5e3-422f-bb33-881f83339d9b"). InnerVolumeSpecName "kube-api-access-7svfr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 22:23:07 crc kubenswrapper[4727]: I1121 22:23:07.994460 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7svfr\" (UniqueName: \"kubernetes.io/projected/48d77315-e5e3-422f-bb33-881f83339d9b-kube-api-access-7svfr\") on node \"crc\" DevicePath \"\"" Nov 21 22:23:08 crc kubenswrapper[4727]: I1121 22:23:08.708384 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="155363a3ca10c1335a7b1fcd0c229919f9b262be658abd9c2c345f7dceffed0f" Nov 21 22:23:08 crc kubenswrapper[4727]: I1121 22:23:08.708481 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lkcrn/crc-debug-sv9r8" Nov 21 22:23:09 crc kubenswrapper[4727]: I1121 22:23:09.069828 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-lkcrn/crc-debug-dnhr7"] Nov 21 22:23:09 crc kubenswrapper[4727]: E1121 22:23:09.070895 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48d77315-e5e3-422f-bb33-881f83339d9b" containerName="container-00" Nov 21 22:23:09 crc kubenswrapper[4727]: I1121 22:23:09.070931 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="48d77315-e5e3-422f-bb33-881f83339d9b" containerName="container-00" Nov 21 22:23:09 crc kubenswrapper[4727]: I1121 22:23:09.071649 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="48d77315-e5e3-422f-bb33-881f83339d9b" containerName="container-00" Nov 21 22:23:09 crc kubenswrapper[4727]: I1121 22:23:09.079642 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lkcrn/crc-debug-dnhr7" Nov 21 22:23:09 crc kubenswrapper[4727]: I1121 22:23:09.123100 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a902e30f-4bd3-42a8-943a-fbf24117cce0-host\") pod \"crc-debug-dnhr7\" (UID: \"a902e30f-4bd3-42a8-943a-fbf24117cce0\") " pod="openshift-must-gather-lkcrn/crc-debug-dnhr7" Nov 21 22:23:09 crc kubenswrapper[4727]: I1121 22:23:09.123448 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhcmg\" (UniqueName: \"kubernetes.io/projected/a902e30f-4bd3-42a8-943a-fbf24117cce0-kube-api-access-nhcmg\") pod \"crc-debug-dnhr7\" (UID: \"a902e30f-4bd3-42a8-943a-fbf24117cce0\") " pod="openshift-must-gather-lkcrn/crc-debug-dnhr7" Nov 21 22:23:09 crc kubenswrapper[4727]: I1121 22:23:09.226334 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/a902e30f-4bd3-42a8-943a-fbf24117cce0-host\") pod \"crc-debug-dnhr7\" (UID: \"a902e30f-4bd3-42a8-943a-fbf24117cce0\") " pod="openshift-must-gather-lkcrn/crc-debug-dnhr7" Nov 21 22:23:09 crc kubenswrapper[4727]: I1121 22:23:09.226469 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a902e30f-4bd3-42a8-943a-fbf24117cce0-host\") pod \"crc-debug-dnhr7\" (UID: \"a902e30f-4bd3-42a8-943a-fbf24117cce0\") " pod="openshift-must-gather-lkcrn/crc-debug-dnhr7" Nov 21 22:23:09 crc kubenswrapper[4727]: I1121 22:23:09.226477 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhcmg\" (UniqueName: \"kubernetes.io/projected/a902e30f-4bd3-42a8-943a-fbf24117cce0-kube-api-access-nhcmg\") pod \"crc-debug-dnhr7\" (UID: \"a902e30f-4bd3-42a8-943a-fbf24117cce0\") " pod="openshift-must-gather-lkcrn/crc-debug-dnhr7" Nov 21 22:23:09 crc kubenswrapper[4727]: I1121 22:23:09.248629 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhcmg\" (UniqueName: \"kubernetes.io/projected/a902e30f-4bd3-42a8-943a-fbf24117cce0-kube-api-access-nhcmg\") pod \"crc-debug-dnhr7\" (UID: \"a902e30f-4bd3-42a8-943a-fbf24117cce0\") " pod="openshift-must-gather-lkcrn/crc-debug-dnhr7" Nov 21 22:23:09 crc kubenswrapper[4727]: I1121 22:23:09.405975 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lkcrn/crc-debug-dnhr7" Nov 21 22:23:09 crc kubenswrapper[4727]: I1121 22:23:09.515613 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48d77315-e5e3-422f-bb33-881f83339d9b" path="/var/lib/kubelet/pods/48d77315-e5e3-422f-bb33-881f83339d9b/volumes" Nov 21 22:23:09 crc kubenswrapper[4727]: I1121 22:23:09.722810 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lkcrn/crc-debug-dnhr7" event={"ID":"a902e30f-4bd3-42a8-943a-fbf24117cce0","Type":"ContainerStarted","Data":"61995725acb94adc30e73fb6a5eecbe17f74dd1045e05d656b41910a40f6547e"} Nov 21 22:23:09 crc kubenswrapper[4727]: I1121 22:23:09.723134 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lkcrn/crc-debug-dnhr7" event={"ID":"a902e30f-4bd3-42a8-943a-fbf24117cce0","Type":"ContainerStarted","Data":"5e3c6a1b0962e087c72dcf22286f3ea3ed4e77f52199d792779eabb5f118d0de"} Nov 21 22:23:09 crc kubenswrapper[4727]: I1121 22:23:09.747662 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-lkcrn/crc-debug-dnhr7" podStartSLOduration=0.747645175 podStartE2EDuration="747.645175ms" podCreationTimestamp="2025-11-21 22:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 22:23:09.736732582 +0000 UTC m=+8194.922917666" watchObservedRunningTime="2025-11-21 22:23:09.747645175 +0000 UTC m=+8194.933830219" Nov 21 22:23:10 crc kubenswrapper[4727]: I1121 22:23:10.734448 4727 generic.go:334] "Generic (PLEG): container finished" podID="a902e30f-4bd3-42a8-943a-fbf24117cce0" containerID="61995725acb94adc30e73fb6a5eecbe17f74dd1045e05d656b41910a40f6547e" exitCode=0 Nov 21 22:23:10 crc kubenswrapper[4727]: I1121 22:23:10.734537 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lkcrn/crc-debug-dnhr7" 
event={"ID":"a902e30f-4bd3-42a8-943a-fbf24117cce0","Type":"ContainerDied","Data":"61995725acb94adc30e73fb6a5eecbe17f74dd1045e05d656b41910a40f6547e"} Nov 21 22:23:11 crc kubenswrapper[4727]: I1121 22:23:11.850253 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lkcrn/crc-debug-dnhr7" Nov 21 22:23:11 crc kubenswrapper[4727]: I1121 22:23:11.890579 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a902e30f-4bd3-42a8-943a-fbf24117cce0-host\") pod \"a902e30f-4bd3-42a8-943a-fbf24117cce0\" (UID: \"a902e30f-4bd3-42a8-943a-fbf24117cce0\") " Nov 21 22:23:11 crc kubenswrapper[4727]: I1121 22:23:11.890756 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhcmg\" (UniqueName: \"kubernetes.io/projected/a902e30f-4bd3-42a8-943a-fbf24117cce0-kube-api-access-nhcmg\") pod \"a902e30f-4bd3-42a8-943a-fbf24117cce0\" (UID: \"a902e30f-4bd3-42a8-943a-fbf24117cce0\") " Nov 21 22:23:11 crc kubenswrapper[4727]: I1121 22:23:11.894615 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a902e30f-4bd3-42a8-943a-fbf24117cce0-host" (OuterVolumeSpecName: "host") pod "a902e30f-4bd3-42a8-943a-fbf24117cce0" (UID: "a902e30f-4bd3-42a8-943a-fbf24117cce0"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 22:23:11 crc kubenswrapper[4727]: I1121 22:23:11.900158 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a902e30f-4bd3-42a8-943a-fbf24117cce0-kube-api-access-nhcmg" (OuterVolumeSpecName: "kube-api-access-nhcmg") pod "a902e30f-4bd3-42a8-943a-fbf24117cce0" (UID: "a902e30f-4bd3-42a8-943a-fbf24117cce0"). InnerVolumeSpecName "kube-api-access-nhcmg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 22:23:11 crc kubenswrapper[4727]: I1121 22:23:11.992968 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhcmg\" (UniqueName: \"kubernetes.io/projected/a902e30f-4bd3-42a8-943a-fbf24117cce0-kube-api-access-nhcmg\") on node \"crc\" DevicePath \"\"" Nov 21 22:23:11 crc kubenswrapper[4727]: I1121 22:23:11.993271 4727 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a902e30f-4bd3-42a8-943a-fbf24117cce0-host\") on node \"crc\" DevicePath \"\"" Nov 21 22:23:12 crc kubenswrapper[4727]: I1121 22:23:12.170539 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-lkcrn/crc-debug-dnhr7"] Nov 21 22:23:12 crc kubenswrapper[4727]: I1121 22:23:12.179464 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-lkcrn/crc-debug-dnhr7"] Nov 21 22:23:12 crc kubenswrapper[4727]: I1121 22:23:12.757976 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e3c6a1b0962e087c72dcf22286f3ea3ed4e77f52199d792779eabb5f118d0de" Nov 21 22:23:12 crc kubenswrapper[4727]: I1121 22:23:12.758043 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lkcrn/crc-debug-dnhr7" Nov 21 22:23:13 crc kubenswrapper[4727]: I1121 22:23:13.331339 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-lkcrn/crc-debug-jgrzm"] Nov 21 22:23:13 crc kubenswrapper[4727]: E1121 22:23:13.331977 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a902e30f-4bd3-42a8-943a-fbf24117cce0" containerName="container-00" Nov 21 22:23:13 crc kubenswrapper[4727]: I1121 22:23:13.331990 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="a902e30f-4bd3-42a8-943a-fbf24117cce0" containerName="container-00" Nov 21 22:23:13 crc kubenswrapper[4727]: I1121 22:23:13.332252 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="a902e30f-4bd3-42a8-943a-fbf24117cce0" containerName="container-00" Nov 21 22:23:13 crc kubenswrapper[4727]: I1121 22:23:13.333002 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lkcrn/crc-debug-jgrzm" Nov 21 22:23:13 crc kubenswrapper[4727]: I1121 22:23:13.352934 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftr4q\" (UniqueName: \"kubernetes.io/projected/1b692124-5248-4263-9c61-0fc4b0df7fde-kube-api-access-ftr4q\") pod \"crc-debug-jgrzm\" (UID: \"1b692124-5248-4263-9c61-0fc4b0df7fde\") " pod="openshift-must-gather-lkcrn/crc-debug-jgrzm" Nov 21 22:23:13 crc kubenswrapper[4727]: I1121 22:23:13.353094 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1b692124-5248-4263-9c61-0fc4b0df7fde-host\") pod \"crc-debug-jgrzm\" (UID: \"1b692124-5248-4263-9c61-0fc4b0df7fde\") " pod="openshift-must-gather-lkcrn/crc-debug-jgrzm" Nov 21 22:23:13 crc kubenswrapper[4727]: I1121 22:23:13.455780 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftr4q\" (UniqueName: 
\"kubernetes.io/projected/1b692124-5248-4263-9c61-0fc4b0df7fde-kube-api-access-ftr4q\") pod \"crc-debug-jgrzm\" (UID: \"1b692124-5248-4263-9c61-0fc4b0df7fde\") " pod="openshift-must-gather-lkcrn/crc-debug-jgrzm" Nov 21 22:23:13 crc kubenswrapper[4727]: I1121 22:23:13.455872 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1b692124-5248-4263-9c61-0fc4b0df7fde-host\") pod \"crc-debug-jgrzm\" (UID: \"1b692124-5248-4263-9c61-0fc4b0df7fde\") " pod="openshift-must-gather-lkcrn/crc-debug-jgrzm" Nov 21 22:23:13 crc kubenswrapper[4727]: I1121 22:23:13.456111 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1b692124-5248-4263-9c61-0fc4b0df7fde-host\") pod \"crc-debug-jgrzm\" (UID: \"1b692124-5248-4263-9c61-0fc4b0df7fde\") " pod="openshift-must-gather-lkcrn/crc-debug-jgrzm" Nov 21 22:23:13 crc kubenswrapper[4727]: I1121 22:23:13.474317 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftr4q\" (UniqueName: \"kubernetes.io/projected/1b692124-5248-4263-9c61-0fc4b0df7fde-kube-api-access-ftr4q\") pod \"crc-debug-jgrzm\" (UID: \"1b692124-5248-4263-9c61-0fc4b0df7fde\") " pod="openshift-must-gather-lkcrn/crc-debug-jgrzm" Nov 21 22:23:13 crc kubenswrapper[4727]: I1121 22:23:13.528214 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a902e30f-4bd3-42a8-943a-fbf24117cce0" path="/var/lib/kubelet/pods/a902e30f-4bd3-42a8-943a-fbf24117cce0/volumes" Nov 21 22:23:13 crc kubenswrapper[4727]: I1121 22:23:13.648691 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lkcrn/crc-debug-jgrzm" Nov 21 22:23:13 crc kubenswrapper[4727]: I1121 22:23:13.786237 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lkcrn/crc-debug-jgrzm" event={"ID":"1b692124-5248-4263-9c61-0fc4b0df7fde","Type":"ContainerStarted","Data":"c590497dd19e795139fc7a3ebe9bdefc3edf3d2bd483d7cd8c3e484f3f762378"} Nov 21 22:23:14 crc kubenswrapper[4727]: I1121 22:23:14.801580 4727 generic.go:334] "Generic (PLEG): container finished" podID="1b692124-5248-4263-9c61-0fc4b0df7fde" containerID="80b59d2581f9507d3584aa89b0af0f6554820fc86812a420f10bc92bb33410f2" exitCode=0 Nov 21 22:23:14 crc kubenswrapper[4727]: I1121 22:23:14.801672 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lkcrn/crc-debug-jgrzm" event={"ID":"1b692124-5248-4263-9c61-0fc4b0df7fde","Type":"ContainerDied","Data":"80b59d2581f9507d3584aa89b0af0f6554820fc86812a420f10bc92bb33410f2"} Nov 21 22:23:14 crc kubenswrapper[4727]: I1121 22:23:14.857059 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-lkcrn/crc-debug-jgrzm"] Nov 21 22:23:14 crc kubenswrapper[4727]: I1121 22:23:14.873716 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-lkcrn/crc-debug-jgrzm"] Nov 21 22:23:15 crc kubenswrapper[4727]: I1121 22:23:15.973185 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lkcrn/crc-debug-jgrzm" Nov 21 22:23:16 crc kubenswrapper[4727]: I1121 22:23:16.124589 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1b692124-5248-4263-9c61-0fc4b0df7fde-host\") pod \"1b692124-5248-4263-9c61-0fc4b0df7fde\" (UID: \"1b692124-5248-4263-9c61-0fc4b0df7fde\") " Nov 21 22:23:16 crc kubenswrapper[4727]: I1121 22:23:16.125034 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b692124-5248-4263-9c61-0fc4b0df7fde-host" (OuterVolumeSpecName: "host") pod "1b692124-5248-4263-9c61-0fc4b0df7fde" (UID: "1b692124-5248-4263-9c61-0fc4b0df7fde"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 22:23:16 crc kubenswrapper[4727]: I1121 22:23:16.125488 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftr4q\" (UniqueName: \"kubernetes.io/projected/1b692124-5248-4263-9c61-0fc4b0df7fde-kube-api-access-ftr4q\") pod \"1b692124-5248-4263-9c61-0fc4b0df7fde\" (UID: \"1b692124-5248-4263-9c61-0fc4b0df7fde\") " Nov 21 22:23:16 crc kubenswrapper[4727]: I1121 22:23:16.126297 4727 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1b692124-5248-4263-9c61-0fc4b0df7fde-host\") on node \"crc\" DevicePath \"\"" Nov 21 22:23:16 crc kubenswrapper[4727]: I1121 22:23:16.131301 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b692124-5248-4263-9c61-0fc4b0df7fde-kube-api-access-ftr4q" (OuterVolumeSpecName: "kube-api-access-ftr4q") pod "1b692124-5248-4263-9c61-0fc4b0df7fde" (UID: "1b692124-5248-4263-9c61-0fc4b0df7fde"). InnerVolumeSpecName "kube-api-access-ftr4q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 22:23:16 crc kubenswrapper[4727]: I1121 22:23:16.228995 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftr4q\" (UniqueName: \"kubernetes.io/projected/1b692124-5248-4263-9c61-0fc4b0df7fde-kube-api-access-ftr4q\") on node \"crc\" DevicePath \"\"" Nov 21 22:23:16 crc kubenswrapper[4727]: I1121 22:23:16.822361 4727 scope.go:117] "RemoveContainer" containerID="80b59d2581f9507d3584aa89b0af0f6554820fc86812a420f10bc92bb33410f2" Nov 21 22:23:16 crc kubenswrapper[4727]: I1121 22:23:16.822407 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lkcrn/crc-debug-jgrzm" Nov 21 22:23:17 crc kubenswrapper[4727]: I1121 22:23:17.510707 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b692124-5248-4263-9c61-0fc4b0df7fde" path="/var/lib/kubelet/pods/1b692124-5248-4263-9c61-0fc4b0df7fde/volumes" Nov 21 22:23:39 crc kubenswrapper[4727]: I1121 22:23:39.501562 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_5c944475-f104-4512-8a86-1139d57d331f/aodh-api/0.log" Nov 21 22:23:39 crc kubenswrapper[4727]: I1121 22:23:39.674641 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_5c944475-f104-4512-8a86-1139d57d331f/aodh-listener/0.log" Nov 21 22:23:39 crc kubenswrapper[4727]: I1121 22:23:39.676064 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_5c944475-f104-4512-8a86-1139d57d331f/aodh-evaluator/0.log" Nov 21 22:23:39 crc kubenswrapper[4727]: I1121 22:23:39.718856 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_5c944475-f104-4512-8a86-1139d57d331f/aodh-notifier/0.log" Nov 21 22:23:39 crc kubenswrapper[4727]: I1121 22:23:39.835657 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6cc6c58b8b-47npz_c49bf05b-cf2c-4e00-a141-a7249d2eb68f/barbican-api/0.log" Nov 21 
22:23:39 crc kubenswrapper[4727]: I1121 22:23:39.868446 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6cc6c58b8b-47npz_c49bf05b-cf2c-4e00-a141-a7249d2eb68f/barbican-api-log/0.log" Nov 21 22:23:40 crc kubenswrapper[4727]: I1121 22:23:40.028515 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-76cd59449d-bqxf9_c0776376-5bcf-42fb-95fa-537a1b9764e2/barbican-keystone-listener/0.log" Nov 21 22:23:40 crc kubenswrapper[4727]: I1121 22:23:40.126574 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-76cd59449d-bqxf9_c0776376-5bcf-42fb-95fa-537a1b9764e2/barbican-keystone-listener-log/0.log" Nov 21 22:23:40 crc kubenswrapper[4727]: I1121 22:23:40.179765 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-556d8f578c-rnhrx_76a929c8-c9a2-480f-b413-9df259d38d39/barbican-worker/0.log" Nov 21 22:23:40 crc kubenswrapper[4727]: I1121 22:23:40.233764 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-556d8f578c-rnhrx_76a929c8-c9a2-480f-b413-9df259d38d39/barbican-worker-log/0.log" Nov 21 22:23:40 crc kubenswrapper[4727]: I1121 22:23:40.384450 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-xqh5w_766225b4-86ef-4a28-9321-7efd20c20c8b/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Nov 21 22:23:40 crc kubenswrapper[4727]: I1121 22:23:40.460490 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_038d5bff-73d1-4dd6-a5ca-6501e07617a1/ceilometer-central-agent/0.log" Nov 21 22:23:40 crc kubenswrapper[4727]: I1121 22:23:40.530318 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_038d5bff-73d1-4dd6-a5ca-6501e07617a1/ceilometer-notification-agent/0.log" Nov 21 22:23:40 crc kubenswrapper[4727]: I1121 22:23:40.580454 4727 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_038d5bff-73d1-4dd6-a5ca-6501e07617a1/proxy-httpd/0.log" Nov 21 22:23:40 crc kubenswrapper[4727]: I1121 22:23:40.641610 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_038d5bff-73d1-4dd6-a5ca-6501e07617a1/sg-core/0.log" Nov 21 22:23:40 crc kubenswrapper[4727]: I1121 22:23:40.820154 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_515a4f45-bb89-4e9d-87b7-4a4e11f0f8c6/cinder-api/0.log" Nov 21 22:23:40 crc kubenswrapper[4727]: I1121 22:23:40.829354 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_515a4f45-bb89-4e9d-87b7-4a4e11f0f8c6/cinder-api-log/0.log" Nov 21 22:23:40 crc kubenswrapper[4727]: I1121 22:23:40.948284 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_3570e4bf-cf68-4cdf-a37d-f63090685a4c/cinder-scheduler/0.log" Nov 21 22:23:41 crc kubenswrapper[4727]: I1121 22:23:41.040170 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_3570e4bf-cf68-4cdf-a37d-f63090685a4c/probe/0.log" Nov 21 22:23:41 crc kubenswrapper[4727]: I1121 22:23:41.064786 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-l5vgp_cc54f580-9f3c-451d-a6c3-a9d5c12c7915/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 21 22:23:41 crc kubenswrapper[4727]: I1121 22:23:41.213622 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-r8sfv_3e04e3f4-33d0-4dd0-84d9-6ef378cb2434/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 21 22:23:41 crc kubenswrapper[4727]: I1121 22:23:41.248687 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d75f767dc-gz7cn_e8ed0d14-d6da-4c59-8c10-98ce4a0fe54f/init/0.log" Nov 21 22:23:41 crc 
kubenswrapper[4727]: I1121 22:23:41.484952 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d75f767dc-gz7cn_e8ed0d14-d6da-4c59-8c10-98ce4a0fe54f/init/0.log" Nov 21 22:23:41 crc kubenswrapper[4727]: I1121 22:23:41.554608 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-hkzn2_fe06ea2f-15b5-409a-93b8-6c40a629c029/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Nov 21 22:23:41 crc kubenswrapper[4727]: I1121 22:23:41.611860 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d75f767dc-gz7cn_e8ed0d14-d6da-4c59-8c10-98ce4a0fe54f/dnsmasq-dns/0.log" Nov 21 22:23:41 crc kubenswrapper[4727]: I1121 22:23:41.784366 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_bbaf0e5f-5f38-406f-84d8-eb76acf9727d/glance-httpd/0.log" Nov 21 22:23:41 crc kubenswrapper[4727]: I1121 22:23:41.799334 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_bbaf0e5f-5f38-406f-84d8-eb76acf9727d/glance-log/0.log" Nov 21 22:23:41 crc kubenswrapper[4727]: I1121 22:23:41.993062 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_242aec5e-01e3-4559-9031-40a5c69c5f0a/glance-httpd/0.log" Nov 21 22:23:41 crc kubenswrapper[4727]: I1121 22:23:41.997611 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_242aec5e-01e3-4559-9031-40a5c69c5f0a/glance-log/0.log" Nov 21 22:23:42 crc kubenswrapper[4727]: I1121 22:23:42.339084 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-77cbcd7b74-wdstp_3923d481-b8b8-4b23-b8eb-c3c1589cdaf5/heat-api/0.log" Nov 21 22:23:42 crc kubenswrapper[4727]: I1121 22:23:42.383879 4727 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_heat-cfnapi-5dc877fb6-76mc5_67535f0b-486b-4aa4-973e-32c3dd01d514/heat-cfnapi/0.log" Nov 21 22:23:42 crc kubenswrapper[4727]: I1121 22:23:42.490812 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-58f8c4fcbf-cqdrx_df9499f8-a2a6-4d5c-89f7-6119cc985cc2/heat-engine/0.log" Nov 21 22:23:42 crc kubenswrapper[4727]: I1121 22:23:42.528391 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-kffp8_8f6ac347-e773-4986-bf2b-54a1bb00047f/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Nov 21 22:23:42 crc kubenswrapper[4727]: I1121 22:23:42.737949 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-l9spl_27de261d-6864-4d22-8b4a-9523d74fb4fc/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 21 22:23:42 crc kubenswrapper[4727]: I1121 22:23:42.949238 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29395981-6csjp_e9fdb78b-ace5-4ba0-a791-deab9b88bc05/keystone-cron/0.log" Nov 21 22:23:43 crc kubenswrapper[4727]: I1121 22:23:43.055746 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29396041-g92qj_bb7ab91c-96fc-4420-bd6d-e6840b74e1fc/keystone-cron/0.log" Nov 21 22:23:43 crc kubenswrapper[4727]: I1121 22:23:43.167267 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_e14d03ba-964e-4f12-9c54-0aff9e874f1d/kube-state-metrics/0.log" Nov 21 22:23:43 crc kubenswrapper[4727]: I1121 22:23:43.327777 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-42g5g_b1fa34fd-2887-416b-a02a-79424f936670/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Nov 21 22:23:43 crc kubenswrapper[4727]: I1121 22:23:43.427186 4727 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_keystone-cc454d9c9-g5hjz_ac8be1ea-fec5-4c56-9602-9c9cdec8e812/keystone-api/0.log" Nov 21 22:23:43 crc kubenswrapper[4727]: I1121 22:23:43.442399 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_logging-edpm-deployment-openstack-edpm-ipam-g6mt8_06a62eb8-65b3-4e96-8103-e9386bbca277/logging-edpm-deployment-openstack-edpm-ipam/0.log" Nov 21 22:23:43 crc kubenswrapper[4727]: I1121 22:23:43.649399 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mysqld-exporter-0_f73e967f-ae43-4ef4-8648-768d1602b07b/mysqld-exporter/0.log" Nov 21 22:23:44 crc kubenswrapper[4727]: I1121 22:23:44.058264 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-rk54k_945a8b1a-2ae7-449a-b7b5-206b435b2d19/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Nov 21 22:23:44 crc kubenswrapper[4727]: I1121 22:23:44.129630 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7d66895777-vztk9_0a2d9ca1-f8a7-4dbf-be6d-de4ed17dd984/neutron-httpd/0.log" Nov 21 22:23:44 crc kubenswrapper[4727]: I1121 22:23:44.145215 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7d66895777-vztk9_0a2d9ca1-f8a7-4dbf-be6d-de4ed17dd984/neutron-api/0.log" Nov 21 22:23:44 crc kubenswrapper[4727]: I1121 22:23:44.586660 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_5d4eb186-2b59-4b16-bd67-0b9f64c233a6/nova-cell0-conductor-conductor/0.log" Nov 21 22:23:44 crc kubenswrapper[4727]: I1121 22:23:44.924628 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_d634e409-018c-4c23-a01b-8abbfc218164/nova-cell1-conductor-conductor/0.log" Nov 21 22:23:45 crc kubenswrapper[4727]: I1121 22:23:45.209095 4727 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell1-novncproxy-0_4a519050-c5ec-4a64-9280-65e6b3f299b8/nova-cell1-novncproxy-novncproxy/0.log" Nov 21 22:23:45 crc kubenswrapper[4727]: I1121 22:23:45.436627 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_7918d86d-0dcf-4ec6-a875-6e195db4e361/nova-api-log/0.log" Nov 21 22:23:45 crc kubenswrapper[4727]: I1121 22:23:45.483847 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-6k6pn_1265cd44-fdf4-434d-855e-375dcbb70601/nova-edpm-deployment-openstack-edpm-ipam/0.log" Nov 21 22:23:45 crc kubenswrapper[4727]: I1121 22:23:45.724059 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_6b40e144-3a25-45f2-9ea2-f81e211510e2/nova-metadata-log/0.log" Nov 21 22:23:45 crc kubenswrapper[4727]: I1121 22:23:45.905161 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_7918d86d-0dcf-4ec6-a875-6e195db4e361/nova-api-api/0.log" Nov 21 22:23:46 crc kubenswrapper[4727]: I1121 22:23:46.073624 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_1b25ac5c-f457-49cb-9c54-15115c3a6108/nova-scheduler-scheduler/0.log" Nov 21 22:23:46 crc kubenswrapper[4727]: I1121 22:23:46.078949 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_3d83275c-cf9b-425e-8e63-6130e2866a49/mysql-bootstrap/0.log" Nov 21 22:23:46 crc kubenswrapper[4727]: I1121 22:23:46.268526 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_3d83275c-cf9b-425e-8e63-6130e2866a49/galera/0.log" Nov 21 22:23:46 crc kubenswrapper[4727]: I1121 22:23:46.298681 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_3d83275c-cf9b-425e-8e63-6130e2866a49/mysql-bootstrap/0.log" Nov 21 22:23:46 crc kubenswrapper[4727]: I1121 22:23:46.471626 4727 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_openstack-galera-0_0901b5dd-6fbf-4a40-8d26-ab792ea7f110/mysql-bootstrap/0.log" Nov 21 22:23:46 crc kubenswrapper[4727]: I1121 22:23:46.646864 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_0901b5dd-6fbf-4a40-8d26-ab792ea7f110/mysql-bootstrap/0.log" Nov 21 22:23:46 crc kubenswrapper[4727]: I1121 22:23:46.654135 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_0901b5dd-6fbf-4a40-8d26-ab792ea7f110/galera/0.log" Nov 21 22:23:46 crc kubenswrapper[4727]: I1121 22:23:46.821401 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_a4349594-5d5b-4a77-8571-88be061ab039/openstackclient/0.log" Nov 21 22:23:46 crc kubenswrapper[4727]: I1121 22:23:46.962870 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-k8fk5_5668e228-8946-468d-94e0-fa77489e46b3/ovn-controller/0.log" Nov 21 22:23:47 crc kubenswrapper[4727]: I1121 22:23:47.099829 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-kl4qp_2e37b08f-0bcc-4c28-8113-65a424a49717/openstack-network-exporter/0.log" Nov 21 22:23:47 crc kubenswrapper[4727]: I1121 22:23:47.292755 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-w77gs_f3f53875-1801-45d3-aa31-4c307c620eec/ovsdb-server-init/0.log" Nov 21 22:23:47 crc kubenswrapper[4727]: I1121 22:23:47.458436 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-w77gs_f3f53875-1801-45d3-aa31-4c307c620eec/ovs-vswitchd/0.log" Nov 21 22:23:47 crc kubenswrapper[4727]: I1121 22:23:47.485243 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-w77gs_f3f53875-1801-45d3-aa31-4c307c620eec/ovsdb-server/0.log" Nov 21 22:23:47 crc kubenswrapper[4727]: I1121 22:23:47.515182 4727 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-ovs-w77gs_f3f53875-1801-45d3-aa31-4c307c620eec/ovsdb-server-init/0.log" Nov 21 22:23:47 crc kubenswrapper[4727]: I1121 22:23:47.705202 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-7667c_06a92e2a-daad-494e-93df-d9c943c574d3/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Nov 21 22:23:47 crc kubenswrapper[4727]: I1121 22:23:47.895450 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_57623363-7d60-4219-93b9-c8678ed13f8f/openstack-network-exporter/0.log" Nov 21 22:23:47 crc kubenswrapper[4727]: I1121 22:23:47.898495 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_57623363-7d60-4219-93b9-c8678ed13f8f/ovn-northd/0.log" Nov 21 22:23:48 crc kubenswrapper[4727]: I1121 22:23:48.088597 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_40732896-dabf-47ad-bc39-236c51d78ef2/openstack-network-exporter/0.log" Nov 21 22:23:48 crc kubenswrapper[4727]: I1121 22:23:48.092040 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_40732896-dabf-47ad-bc39-236c51d78ef2/ovsdbserver-nb/0.log" Nov 21 22:23:48 crc kubenswrapper[4727]: I1121 22:23:48.267558 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_5adec12e-c3fd-4d6a-bf0b-c38ac063f06c/openstack-network-exporter/0.log" Nov 21 22:23:48 crc kubenswrapper[4727]: I1121 22:23:48.272562 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_5adec12e-c3fd-4d6a-bf0b-c38ac063f06c/ovsdbserver-sb/0.log" Nov 21 22:23:48 crc kubenswrapper[4727]: I1121 22:23:48.681238 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-59895c4888-ffr5c_65240278-1e42-497b-938b-0eca28db9756/placement-api/0.log" Nov 21 22:23:48 crc kubenswrapper[4727]: I1121 22:23:48.722786 4727 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_placement-59895c4888-ffr5c_65240278-1e42-497b-938b-0eca28db9756/placement-log/0.log" Nov 21 22:23:48 crc kubenswrapper[4727]: I1121 22:23:48.874360 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_6b40e144-3a25-45f2-9ea2-f81e211510e2/nova-metadata-metadata/0.log" Nov 21 22:23:48 crc kubenswrapper[4727]: I1121 22:23:48.896032 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_ff5d12dc-e65b-41f0-b29b-5c4eea0fada2/init-config-reloader/0.log" Nov 21 22:23:49 crc kubenswrapper[4727]: I1121 22:23:49.016757 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_ff5d12dc-e65b-41f0-b29b-5c4eea0fada2/init-config-reloader/0.log" Nov 21 22:23:49 crc kubenswrapper[4727]: I1121 22:23:49.026416 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_ff5d12dc-e65b-41f0-b29b-5c4eea0fada2/config-reloader/0.log" Nov 21 22:23:49 crc kubenswrapper[4727]: I1121 22:23:49.150709 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_ff5d12dc-e65b-41f0-b29b-5c4eea0fada2/thanos-sidecar/0.log" Nov 21 22:23:49 crc kubenswrapper[4727]: I1121 22:23:49.153997 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_ff5d12dc-e65b-41f0-b29b-5c4eea0fada2/prometheus/0.log" Nov 21 22:23:49 crc kubenswrapper[4727]: I1121 22:23:49.296467 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_cf901ecd-b37c-4f57-9c80-863e2d949f5f/setup-container/0.log" Nov 21 22:23:49 crc kubenswrapper[4727]: I1121 22:23:49.424214 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_cf901ecd-b37c-4f57-9c80-863e2d949f5f/setup-container/0.log" Nov 21 22:23:49 crc kubenswrapper[4727]: I1121 22:23:49.504904 4727 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_cf901ecd-b37c-4f57-9c80-863e2d949f5f/rabbitmq/0.log" Nov 21 22:23:49 crc kubenswrapper[4727]: I1121 22:23:49.537249 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_13e6ebe1-eaee-49f1-9b47-6ec82055a8b6/setup-container/0.log" Nov 21 22:23:49 crc kubenswrapper[4727]: I1121 22:23:49.712213 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_13e6ebe1-eaee-49f1-9b47-6ec82055a8b6/setup-container/0.log" Nov 21 22:23:49 crc kubenswrapper[4727]: I1121 22:23:49.747658 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-trxzs_3dd3247d-dccc-453a-8bf5-d967039f82e5/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 21 22:23:49 crc kubenswrapper[4727]: I1121 22:23:49.779636 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_13e6ebe1-eaee-49f1-9b47-6ec82055a8b6/rabbitmq/0.log" Nov 21 22:23:49 crc kubenswrapper[4727]: I1121 22:23:49.989467 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-9ll44_88e243c4-68b1-4319-8736-a5c650da03a0/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Nov 21 22:23:49 crc kubenswrapper[4727]: I1121 22:23:49.992060 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-85dv7_2f6befa2-b5d0-4390-885e-d332d9e51444/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Nov 21 22:23:50 crc kubenswrapper[4727]: I1121 22:23:50.202444 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-79n78_7ce3b44d-00b3-4cf9-bc3f-325ffb4df768/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 21 22:23:50 crc kubenswrapper[4727]: I1121 22:23:50.219583 4727 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-fwp4w_0aa83d35-00bc-4acd-ab0f-1c153bd7130b/ssh-known-hosts-edpm-deployment/0.log" Nov 21 22:23:50 crc kubenswrapper[4727]: I1121 22:23:50.474860 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-9c89d96c5-h6lxw_d2ef4264-0e20-4b0d-ac0b-6107eeb60fc8/proxy-server/0.log" Nov 21 22:23:50 crc kubenswrapper[4727]: I1121 22:23:50.743248 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-vrqcf_bf989668-3fb7-491d-abf0-e2991e327690/swift-ring-rebalance/0.log" Nov 21 22:23:50 crc kubenswrapper[4727]: I1121 22:23:50.828640 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-9c89d96c5-h6lxw_d2ef4264-0e20-4b0d-ac0b-6107eeb60fc8/proxy-httpd/0.log" Nov 21 22:23:50 crc kubenswrapper[4727]: I1121 22:23:50.882278 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_29faf340-95f4-4bd3-bd87-f2e971a0e494/account-auditor/0.log" Nov 21 22:23:50 crc kubenswrapper[4727]: I1121 22:23:50.960085 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_29faf340-95f4-4bd3-bd87-f2e971a0e494/account-reaper/0.log" Nov 21 22:23:51 crc kubenswrapper[4727]: I1121 22:23:51.095152 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_29faf340-95f4-4bd3-bd87-f2e971a0e494/account-server/0.log" Nov 21 22:23:51 crc kubenswrapper[4727]: I1121 22:23:51.101195 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_29faf340-95f4-4bd3-bd87-f2e971a0e494/container-auditor/0.log" Nov 21 22:23:51 crc kubenswrapper[4727]: I1121 22:23:51.112922 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_29faf340-95f4-4bd3-bd87-f2e971a0e494/account-replicator/0.log" Nov 21 22:23:51 crc kubenswrapper[4727]: I1121 22:23:51.226420 4727 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_29faf340-95f4-4bd3-bd87-f2e971a0e494/container-replicator/0.log" Nov 21 22:23:51 crc kubenswrapper[4727]: I1121 22:23:51.295884 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_29faf340-95f4-4bd3-bd87-f2e971a0e494/container-updater/0.log" Nov 21 22:23:51 crc kubenswrapper[4727]: I1121 22:23:51.364469 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_29faf340-95f4-4bd3-bd87-f2e971a0e494/container-server/0.log" Nov 21 22:23:51 crc kubenswrapper[4727]: I1121 22:23:51.381725 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_29faf340-95f4-4bd3-bd87-f2e971a0e494/object-auditor/0.log" Nov 21 22:23:51 crc kubenswrapper[4727]: I1121 22:23:51.432267 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_29faf340-95f4-4bd3-bd87-f2e971a0e494/object-expirer/0.log" Nov 21 22:23:51 crc kubenswrapper[4727]: I1121 22:23:51.542798 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_29faf340-95f4-4bd3-bd87-f2e971a0e494/object-server/0.log" Nov 21 22:23:51 crc kubenswrapper[4727]: I1121 22:23:51.559684 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_29faf340-95f4-4bd3-bd87-f2e971a0e494/object-replicator/0.log" Nov 21 22:23:51 crc kubenswrapper[4727]: I1121 22:23:51.624985 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_29faf340-95f4-4bd3-bd87-f2e971a0e494/rsync/0.log" Nov 21 22:23:51 crc kubenswrapper[4727]: I1121 22:23:51.636449 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_29faf340-95f4-4bd3-bd87-f2e971a0e494/object-updater/0.log" Nov 21 22:23:51 crc kubenswrapper[4727]: I1121 22:23:51.703586 4727 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_29faf340-95f4-4bd3-bd87-f2e971a0e494/swift-recon-cron/0.log" Nov 21 22:23:51 crc kubenswrapper[4727]: I1121 22:23:51.893123 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-9kf4v_43b699c5-7ed9-4603-aa3c-a8d1092571ff/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Nov 21 22:23:51 crc kubenswrapper[4727]: I1121 22:23:51.953775 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-power-monitoring-edpm-deployment-openstack-edpm-ks88m_98365c22-40a3-40cd-95e7-8b7ff8e27c2f/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam/0.log" Nov 21 22:23:52 crc kubenswrapper[4727]: I1121 22:23:52.144204 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_b32af989-be86-463d-84c6-55a06a44b77d/test-operator-logs-container/0.log" Nov 21 22:23:52 crc kubenswrapper[4727]: I1121 22:23:52.333998 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-f8g4k_e6316777-a5fe-46c7-9c41-5a0bb55f38d7/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 21 22:23:54 crc kubenswrapper[4727]: I1121 22:23:54.309207 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_9523e234-9365-483e-8548-43a9c312692e/tempest-tests-tempest-tests-runner/0.log" Nov 21 22:24:09 crc kubenswrapper[4727]: I1121 22:24:09.161824 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_b2cd143a-46bd-4409-b5e9-91c1cb00e378/memcached/0.log" Nov 21 22:24:13 crc kubenswrapper[4727]: I1121 22:24:13.335583 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 22:24:13 crc kubenswrapper[4727]: I1121 22:24:13.336412 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 22:24:21 crc kubenswrapper[4727]: I1121 22:24:21.045435 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b39651f518b4f0a06d1361a60d9a88984523a98ab203a6e21ce78843e1296mw_b1736fb1-f2c3-482b-ad32-1c7a3be3fbac/util/0.log" Nov 21 22:24:21 crc kubenswrapper[4727]: I1121 22:24:21.206372 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b39651f518b4f0a06d1361a60d9a88984523a98ab203a6e21ce78843e1296mw_b1736fb1-f2c3-482b-ad32-1c7a3be3fbac/util/0.log" Nov 21 22:24:21 crc kubenswrapper[4727]: I1121 22:24:21.218144 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b39651f518b4f0a06d1361a60d9a88984523a98ab203a6e21ce78843e1296mw_b1736fb1-f2c3-482b-ad32-1c7a3be3fbac/pull/0.log" Nov 21 22:24:21 crc kubenswrapper[4727]: I1121 22:24:21.224850 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b39651f518b4f0a06d1361a60d9a88984523a98ab203a6e21ce78843e1296mw_b1736fb1-f2c3-482b-ad32-1c7a3be3fbac/pull/0.log" Nov 21 22:24:21 crc kubenswrapper[4727]: I1121 22:24:21.392768 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b39651f518b4f0a06d1361a60d9a88984523a98ab203a6e21ce78843e1296mw_b1736fb1-f2c3-482b-ad32-1c7a3be3fbac/util/0.log" Nov 21 22:24:21 crc kubenswrapper[4727]: I1121 22:24:21.423699 4727 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_b39651f518b4f0a06d1361a60d9a88984523a98ab203a6e21ce78843e1296mw_b1736fb1-f2c3-482b-ad32-1c7a3be3fbac/pull/0.log" Nov 21 22:24:21 crc kubenswrapper[4727]: I1121 22:24:21.460227 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b39651f518b4f0a06d1361a60d9a88984523a98ab203a6e21ce78843e1296mw_b1736fb1-f2c3-482b-ad32-1c7a3be3fbac/extract/0.log" Nov 21 22:24:21 crc kubenswrapper[4727]: I1121 22:24:21.556369 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-75fb479bcc-5lhbb_2413dc73-9b93-4670-ae64-86d116672e3c/kube-rbac-proxy/0.log" Nov 21 22:24:21 crc kubenswrapper[4727]: I1121 22:24:21.658223 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-75fb479bcc-5lhbb_2413dc73-9b93-4670-ae64-86d116672e3c/manager/0.log" Nov 21 22:24:21 crc kubenswrapper[4727]: I1121 22:24:21.691579 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6498cbf48f-bcf5p_3a00eb31-9565-4aad-bcea-bb52bc5bebdc/kube-rbac-proxy/0.log" Nov 21 22:24:21 crc kubenswrapper[4727]: I1121 22:24:21.812851 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6498cbf48f-bcf5p_3a00eb31-9565-4aad-bcea-bb52bc5bebdc/manager/0.log" Nov 21 22:24:21 crc kubenswrapper[4727]: I1121 22:24:21.885985 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-767ccfd65f-gwm4n_644c8e80-e81c-45f1-b5f8-d561cccc89cb/manager/0.log" Nov 21 22:24:21 crc kubenswrapper[4727]: I1121 22:24:21.939119 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-767ccfd65f-gwm4n_644c8e80-e81c-45f1-b5f8-d561cccc89cb/kube-rbac-proxy/0.log" Nov 21 22:24:22 crc kubenswrapper[4727]: 
I1121 22:24:22.048324 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-7969689c84-4dtbh_61cc6991-6299-4ec0-b51c-6130428804ac/kube-rbac-proxy/0.log" Nov 21 22:24:22 crc kubenswrapper[4727]: I1121 22:24:22.181323 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-7969689c84-4dtbh_61cc6991-6299-4ec0-b51c-6130428804ac/manager/0.log" Nov 21 22:24:22 crc kubenswrapper[4727]: I1121 22:24:22.250504 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-56f54d6746-9g7x9_c1898484-e606-4222-9cd0-221a437d815c/kube-rbac-proxy/0.log" Nov 21 22:24:22 crc kubenswrapper[4727]: I1121 22:24:22.306269 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-56f54d6746-9g7x9_c1898484-e606-4222-9cd0-221a437d815c/manager/0.log" Nov 21 22:24:22 crc kubenswrapper[4727]: I1121 22:24:22.392887 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-598f69df5d-rk7gg_c5f9fd34-f218-4a3e-9456-fcd46dafb4ad/kube-rbac-proxy/0.log" Nov 21 22:24:22 crc kubenswrapper[4727]: I1121 22:24:22.509026 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-598f69df5d-rk7gg_c5f9fd34-f218-4a3e-9456-fcd46dafb4ad/manager/0.log" Nov 21 22:24:22 crc kubenswrapper[4727]: I1121 22:24:22.551409 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-6dd8864d7c-w9lxm_bb18a2fd-7fdc-4bc7-94b4-7da1dd327817/kube-rbac-proxy/0.log" Nov 21 22:24:22 crc kubenswrapper[4727]: I1121 22:24:22.918813 4727 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-99b499f4-xm2x9_ff8b3ca8-d78f-448b-ace0-37bcc5408daf/kube-rbac-proxy/0.log" Nov 21 22:24:22 crc kubenswrapper[4727]: I1121 22:24:22.938215 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-99b499f4-xm2x9_ff8b3ca8-d78f-448b-ace0-37bcc5408daf/manager/0.log" Nov 21 22:24:22 crc kubenswrapper[4727]: I1121 22:24:22.949843 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-6dd8864d7c-w9lxm_bb18a2fd-7fdc-4bc7-94b4-7da1dd327817/manager/0.log" Nov 21 22:24:23 crc kubenswrapper[4727]: I1121 22:24:23.112921 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7454b96578-tlj2c_70eeb4c1-c8b3-4f4d-b4d6-4e8ddd50f2a8/kube-rbac-proxy/0.log" Nov 21 22:24:23 crc kubenswrapper[4727]: I1121 22:24:23.249307 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7454b96578-tlj2c_70eeb4c1-c8b3-4f4d-b4d6-4e8ddd50f2a8/manager/0.log" Nov 21 22:24:23 crc kubenswrapper[4727]: I1121 22:24:23.309763 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58f887965d-r4gvw_4ac64528-2d7a-466d-a0f9-5433cc0a482d/manager/0.log" Nov 21 22:24:23 crc kubenswrapper[4727]: I1121 22:24:23.331377 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58f887965d-r4gvw_4ac64528-2d7a-466d-a0f9-5433cc0a482d/kube-rbac-proxy/0.log" Nov 21 22:24:23 crc kubenswrapper[4727]: I1121 22:24:23.437648 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-54b5986bb8-jxb65_4ad6d8f3-0b4b-48bd-8cd7-c3fd216509bf/kube-rbac-proxy/0.log" Nov 21 22:24:23 crc kubenswrapper[4727]: I1121 22:24:23.534837 
4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-54b5986bb8-jxb65_4ad6d8f3-0b4b-48bd-8cd7-c3fd216509bf/manager/0.log" Nov 21 22:24:23 crc kubenswrapper[4727]: I1121 22:24:23.659406 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78bd47f458-frkcb_77869999-156b-4be3-a845-46914c095836/kube-rbac-proxy/0.log" Nov 21 22:24:23 crc kubenswrapper[4727]: I1121 22:24:23.672915 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78bd47f458-frkcb_77869999-156b-4be3-a845-46914c095836/manager/0.log" Nov 21 22:24:23 crc kubenswrapper[4727]: I1121 22:24:23.753867 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-cfbb9c588-h792g_e89e3523-4eb5-4ea3-997f-132e08f6ef4d/kube-rbac-proxy/0.log" Nov 21 22:24:23 crc kubenswrapper[4727]: I1121 22:24:23.905894 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-cfbb9c588-h792g_e89e3523-4eb5-4ea3-997f-132e08f6ef4d/manager/0.log" Nov 21 22:24:23 crc kubenswrapper[4727]: I1121 22:24:23.952532 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-54cfbf4c7d-649jq_8a5a299a-95be-4bfa-b3f4-c5a8bf71667a/kube-rbac-proxy/0.log" Nov 21 22:24:23 crc kubenswrapper[4727]: I1121 22:24:23.994993 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-54cfbf4c7d-649jq_8a5a299a-95be-4bfa-b3f4-c5a8bf71667a/manager/0.log" Nov 21 22:24:24 crc kubenswrapper[4727]: I1121 22:24:24.111291 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-8c7444f48-57mrl_8cbf6c6b-39fa-42bd-a722-0145c725d4cf/kube-rbac-proxy/0.log" Nov 21 
22:24:24 crc kubenswrapper[4727]: I1121 22:24:24.119864 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-8c7444f48-57mrl_8cbf6c6b-39fa-42bd-a722-0145c725d4cf/manager/0.log" Nov 21 22:24:24 crc kubenswrapper[4727]: I1121 22:24:24.269337 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-85c6dbf684-4l2bg_bf6e47a4-6e9a-4c9c-b14c-72a8e7f9f5bf/kube-rbac-proxy/0.log" Nov 21 22:24:24 crc kubenswrapper[4727]: I1121 22:24:24.398976 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-698b9558cf-j7t2w_69fc4f58-1b30-4ab8-911d-3b79b0fc149f/kube-rbac-proxy/0.log" Nov 21 22:24:24 crc kubenswrapper[4727]: I1121 22:24:24.644939 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-fs7lr_f726d023-7d18-4472-88ac-f9d99c7e1279/registry-server/0.log" Nov 21 22:24:24 crc kubenswrapper[4727]: I1121 22:24:24.750984 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-698b9558cf-j7t2w_69fc4f58-1b30-4ab8-911d-3b79b0fc149f/operator/0.log" Nov 21 22:24:24 crc kubenswrapper[4727]: I1121 22:24:24.901411 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-54fc5f65b7-7j4bn_ac4c617b-d373-46ed-97e7-a8b1b8dae48b/kube-rbac-proxy/0.log" Nov 21 22:24:25 crc kubenswrapper[4727]: I1121 22:24:25.047116 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b797b8dff-dhl6q_9af4ba92-9c28-4836-94ab-bf82bbf14047/kube-rbac-proxy/0.log" Nov 21 22:24:25 crc kubenswrapper[4727]: I1121 22:24:25.056552 4727 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-54fc5f65b7-7j4bn_ac4c617b-d373-46ed-97e7-a8b1b8dae48b/manager/0.log" Nov 21 22:24:25 crc kubenswrapper[4727]: I1121 22:24:25.187946 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b797b8dff-dhl6q_9af4ba92-9c28-4836-94ab-bf82bbf14047/manager/0.log" Nov 21 22:24:25 crc kubenswrapper[4727]: I1121 22:24:25.274527 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-5f97d8c699-z2t47_afbeb30f-0a67-4547-85e5-76cba72ee577/operator/0.log" Nov 21 22:24:25 crc kubenswrapper[4727]: I1121 22:24:25.418357 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-d656998f4-d7cz4_6e0f080f-5dac-4a2b-bad9-dd3d9d8ebefa/kube-rbac-proxy/0.log" Nov 21 22:24:25 crc kubenswrapper[4727]: I1121 22:24:25.462731 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-d656998f4-d7cz4_6e0f080f-5dac-4a2b-bad9-dd3d9d8ebefa/manager/0.log" Nov 21 22:24:25 crc kubenswrapper[4727]: I1121 22:24:25.535300 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-776668cd95-zhcl9_2968b604-6efc-421c-8435-12cc7303a604/kube-rbac-proxy/0.log" Nov 21 22:24:25 crc kubenswrapper[4727]: I1121 22:24:25.587011 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-85c6dbf684-4l2bg_bf6e47a4-6e9a-4c9c-b14c-72a8e7f9f5bf/manager/0.log" Nov 21 22:24:25 crc kubenswrapper[4727]: I1121 22:24:25.766870 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-b4c496f69-bn475_7090ae5b-e03d-470b-b87e-31623d9916b7/manager/0.log" Nov 21 22:24:25 crc kubenswrapper[4727]: I1121 22:24:25.783494 4727 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-b4c496f69-bn475_7090ae5b-e03d-470b-b87e-31623d9916b7/kube-rbac-proxy/0.log" Nov 21 22:24:25 crc kubenswrapper[4727]: I1121 22:24:25.863881 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-776668cd95-zhcl9_2968b604-6efc-421c-8435-12cc7303a604/manager/0.log" Nov 21 22:24:25 crc kubenswrapper[4727]: I1121 22:24:25.937922 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-8c6448b9f-gkz28_db9d7c73-bf27-4b19-8aa6-d0c006a74309/kube-rbac-proxy/0.log" Nov 21 22:24:25 crc kubenswrapper[4727]: I1121 22:24:25.945653 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-8c6448b9f-gkz28_db9d7c73-bf27-4b19-8aa6-d0c006a74309/manager/0.log" Nov 21 22:24:43 crc kubenswrapper[4727]: I1121 22:24:43.335164 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 22:24:43 crc kubenswrapper[4727]: I1121 22:24:43.335599 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 22:24:43 crc kubenswrapper[4727]: I1121 22:24:43.470286 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-vhwbd_3ed0ff62-2542-410b-ac29-904eb08bef16/control-plane-machine-set-operator/0.log" Nov 21 22:24:43 crc 
kubenswrapper[4727]: I1121 22:24:43.686029 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-fwqq7_58dc7f0b-2626-478d-a541-511adc47db56/kube-rbac-proxy/0.log" Nov 21 22:24:43 crc kubenswrapper[4727]: I1121 22:24:43.694420 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-fwqq7_58dc7f0b-2626-478d-a541-511adc47db56/machine-api-operator/0.log" Nov 21 22:24:49 crc kubenswrapper[4727]: I1121 22:24:49.173337 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ldckj"] Nov 21 22:24:49 crc kubenswrapper[4727]: E1121 22:24:49.174667 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b692124-5248-4263-9c61-0fc4b0df7fde" containerName="container-00" Nov 21 22:24:49 crc kubenswrapper[4727]: I1121 22:24:49.174684 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b692124-5248-4263-9c61-0fc4b0df7fde" containerName="container-00" Nov 21 22:24:49 crc kubenswrapper[4727]: I1121 22:24:49.175087 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b692124-5248-4263-9c61-0fc4b0df7fde" containerName="container-00" Nov 21 22:24:49 crc kubenswrapper[4727]: I1121 22:24:49.179430 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ldckj" Nov 21 22:24:49 crc kubenswrapper[4727]: I1121 22:24:49.191446 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ldckj"] Nov 21 22:24:49 crc kubenswrapper[4727]: I1121 22:24:49.277757 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5bc5ae1-0334-4334-bc07-a4d45949d160-utilities\") pod \"community-operators-ldckj\" (UID: \"a5bc5ae1-0334-4334-bc07-a4d45949d160\") " pod="openshift-marketplace/community-operators-ldckj" Nov 21 22:24:49 crc kubenswrapper[4727]: I1121 22:24:49.278049 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5bc5ae1-0334-4334-bc07-a4d45949d160-catalog-content\") pod \"community-operators-ldckj\" (UID: \"a5bc5ae1-0334-4334-bc07-a4d45949d160\") " pod="openshift-marketplace/community-operators-ldckj" Nov 21 22:24:49 crc kubenswrapper[4727]: I1121 22:24:49.278102 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxfnm\" (UniqueName: \"kubernetes.io/projected/a5bc5ae1-0334-4334-bc07-a4d45949d160-kube-api-access-dxfnm\") pod \"community-operators-ldckj\" (UID: \"a5bc5ae1-0334-4334-bc07-a4d45949d160\") " pod="openshift-marketplace/community-operators-ldckj" Nov 21 22:24:49 crc kubenswrapper[4727]: I1121 22:24:49.380512 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5bc5ae1-0334-4334-bc07-a4d45949d160-utilities\") pod \"community-operators-ldckj\" (UID: \"a5bc5ae1-0334-4334-bc07-a4d45949d160\") " pod="openshift-marketplace/community-operators-ldckj" Nov 21 22:24:49 crc kubenswrapper[4727]: I1121 22:24:49.380563 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5bc5ae1-0334-4334-bc07-a4d45949d160-catalog-content\") pod \"community-operators-ldckj\" (UID: \"a5bc5ae1-0334-4334-bc07-a4d45949d160\") " pod="openshift-marketplace/community-operators-ldckj" Nov 21 22:24:49 crc kubenswrapper[4727]: I1121 22:24:49.380604 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxfnm\" (UniqueName: \"kubernetes.io/projected/a5bc5ae1-0334-4334-bc07-a4d45949d160-kube-api-access-dxfnm\") pod \"community-operators-ldckj\" (UID: \"a5bc5ae1-0334-4334-bc07-a4d45949d160\") " pod="openshift-marketplace/community-operators-ldckj" Nov 21 22:24:49 crc kubenswrapper[4727]: I1121 22:24:49.381054 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5bc5ae1-0334-4334-bc07-a4d45949d160-utilities\") pod \"community-operators-ldckj\" (UID: \"a5bc5ae1-0334-4334-bc07-a4d45949d160\") " pod="openshift-marketplace/community-operators-ldckj" Nov 21 22:24:49 crc kubenswrapper[4727]: I1121 22:24:49.381144 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5bc5ae1-0334-4334-bc07-a4d45949d160-catalog-content\") pod \"community-operators-ldckj\" (UID: \"a5bc5ae1-0334-4334-bc07-a4d45949d160\") " pod="openshift-marketplace/community-operators-ldckj" Nov 21 22:24:49 crc kubenswrapper[4727]: I1121 22:24:49.403148 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxfnm\" (UniqueName: \"kubernetes.io/projected/a5bc5ae1-0334-4334-bc07-a4d45949d160-kube-api-access-dxfnm\") pod \"community-operators-ldckj\" (UID: \"a5bc5ae1-0334-4334-bc07-a4d45949d160\") " pod="openshift-marketplace/community-operators-ldckj" Nov 21 22:24:49 crc kubenswrapper[4727]: I1121 22:24:49.509572 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ldckj" Nov 21 22:24:50 crc kubenswrapper[4727]: I1121 22:24:50.139990 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ldckj"] Nov 21 22:24:50 crc kubenswrapper[4727]: W1121 22:24:50.143487 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5bc5ae1_0334_4334_bc07_a4d45949d160.slice/crio-0309fc1c8066066b886299c4942493ce6b6dd04af986630fdeafb9d2ab938cd7 WatchSource:0}: Error finding container 0309fc1c8066066b886299c4942493ce6b6dd04af986630fdeafb9d2ab938cd7: Status 404 returned error can't find the container with id 0309fc1c8066066b886299c4942493ce6b6dd04af986630fdeafb9d2ab938cd7 Nov 21 22:24:50 crc kubenswrapper[4727]: I1121 22:24:50.910987 4727 generic.go:334] "Generic (PLEG): container finished" podID="a5bc5ae1-0334-4334-bc07-a4d45949d160" containerID="360a0f038cbed3a3d6b6a3f0561ad7344674c04456b6be5bf7c8b03ff8fef053" exitCode=0 Nov 21 22:24:50 crc kubenswrapper[4727]: I1121 22:24:50.911327 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ldckj" event={"ID":"a5bc5ae1-0334-4334-bc07-a4d45949d160","Type":"ContainerDied","Data":"360a0f038cbed3a3d6b6a3f0561ad7344674c04456b6be5bf7c8b03ff8fef053"} Nov 21 22:24:50 crc kubenswrapper[4727]: I1121 22:24:50.911358 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ldckj" event={"ID":"a5bc5ae1-0334-4334-bc07-a4d45949d160","Type":"ContainerStarted","Data":"0309fc1c8066066b886299c4942493ce6b6dd04af986630fdeafb9d2ab938cd7"} Nov 21 22:24:51 crc kubenswrapper[4727]: I1121 22:24:51.934809 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ldckj" 
event={"ID":"a5bc5ae1-0334-4334-bc07-a4d45949d160","Type":"ContainerStarted","Data":"36388867ce434eed33858087013b47b3fd315a3d8033f424da785964314353d2"} Nov 21 22:24:53 crc kubenswrapper[4727]: I1121 22:24:53.963166 4727 generic.go:334] "Generic (PLEG): container finished" podID="a5bc5ae1-0334-4334-bc07-a4d45949d160" containerID="36388867ce434eed33858087013b47b3fd315a3d8033f424da785964314353d2" exitCode=0 Nov 21 22:24:53 crc kubenswrapper[4727]: I1121 22:24:53.963246 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ldckj" event={"ID":"a5bc5ae1-0334-4334-bc07-a4d45949d160","Type":"ContainerDied","Data":"36388867ce434eed33858087013b47b3fd315a3d8033f424da785964314353d2"} Nov 21 22:24:54 crc kubenswrapper[4727]: I1121 22:24:54.982623 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ldckj" event={"ID":"a5bc5ae1-0334-4334-bc07-a4d45949d160","Type":"ContainerStarted","Data":"bb5ceaee2b7d05534582803a05ad2a31c0474a33b9b1634dbfd6deeaa399695b"} Nov 21 22:24:55 crc kubenswrapper[4727]: I1121 22:24:55.008006 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ldckj" podStartSLOduration=2.499515602 podStartE2EDuration="6.007988948s" podCreationTimestamp="2025-11-21 22:24:49 +0000 UTC" firstStartedPulling="2025-11-21 22:24:50.914374836 +0000 UTC m=+8296.100559890" lastFinishedPulling="2025-11-21 22:24:54.422848182 +0000 UTC m=+8299.609033236" observedRunningTime="2025-11-21 22:24:55.005568459 +0000 UTC m=+8300.191753503" watchObservedRunningTime="2025-11-21 22:24:55.007988948 +0000 UTC m=+8300.194173992" Nov 21 22:24:57 crc kubenswrapper[4727]: I1121 22:24:57.250725 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-dbtqb_73762fdf-4a73-497f-b183-fb15b1c7e8b5/cert-manager-controller/0.log" Nov 21 22:24:57 crc kubenswrapper[4727]: I1121 22:24:57.343208 4727 
log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-7mdx6_b2bd8576-daa6-4408-a4e1-e9b4824db5ff/cert-manager-cainjector/0.log" Nov 21 22:24:57 crc kubenswrapper[4727]: I1121 22:24:57.407285 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-rttbv_5ec4f5f1-9157-49cf-bbea-d8d215df5440/cert-manager-webhook/0.log" Nov 21 22:24:59 crc kubenswrapper[4727]: I1121 22:24:59.521578 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ldckj" Nov 21 22:24:59 crc kubenswrapper[4727]: I1121 22:24:59.522244 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ldckj" Nov 21 22:24:59 crc kubenswrapper[4727]: I1121 22:24:59.588290 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ldckj" Nov 21 22:25:00 crc kubenswrapper[4727]: I1121 22:25:00.108245 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ldckj" Nov 21 22:25:00 crc kubenswrapper[4727]: I1121 22:25:00.159447 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ldckj"] Nov 21 22:25:02 crc kubenswrapper[4727]: I1121 22:25:02.073629 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ldckj" podUID="a5bc5ae1-0334-4334-bc07-a4d45949d160" containerName="registry-server" containerID="cri-o://bb5ceaee2b7d05534582803a05ad2a31c0474a33b9b1634dbfd6deeaa399695b" gracePeriod=2 Nov 21 22:25:02 crc kubenswrapper[4727]: I1121 22:25:02.638166 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ldckj" Nov 21 22:25:02 crc kubenswrapper[4727]: I1121 22:25:02.696484 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5bc5ae1-0334-4334-bc07-a4d45949d160-catalog-content\") pod \"a5bc5ae1-0334-4334-bc07-a4d45949d160\" (UID: \"a5bc5ae1-0334-4334-bc07-a4d45949d160\") " Nov 21 22:25:02 crc kubenswrapper[4727]: I1121 22:25:02.696632 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5bc5ae1-0334-4334-bc07-a4d45949d160-utilities\") pod \"a5bc5ae1-0334-4334-bc07-a4d45949d160\" (UID: \"a5bc5ae1-0334-4334-bc07-a4d45949d160\") " Nov 21 22:25:02 crc kubenswrapper[4727]: I1121 22:25:02.696699 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxfnm\" (UniqueName: \"kubernetes.io/projected/a5bc5ae1-0334-4334-bc07-a4d45949d160-kube-api-access-dxfnm\") pod \"a5bc5ae1-0334-4334-bc07-a4d45949d160\" (UID: \"a5bc5ae1-0334-4334-bc07-a4d45949d160\") " Nov 21 22:25:02 crc kubenswrapper[4727]: I1121 22:25:02.697702 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5bc5ae1-0334-4334-bc07-a4d45949d160-utilities" (OuterVolumeSpecName: "utilities") pod "a5bc5ae1-0334-4334-bc07-a4d45949d160" (UID: "a5bc5ae1-0334-4334-bc07-a4d45949d160"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 22:25:02 crc kubenswrapper[4727]: I1121 22:25:02.708141 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5bc5ae1-0334-4334-bc07-a4d45949d160-kube-api-access-dxfnm" (OuterVolumeSpecName: "kube-api-access-dxfnm") pod "a5bc5ae1-0334-4334-bc07-a4d45949d160" (UID: "a5bc5ae1-0334-4334-bc07-a4d45949d160"). InnerVolumeSpecName "kube-api-access-dxfnm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 22:25:02 crc kubenswrapper[4727]: I1121 22:25:02.770182 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5bc5ae1-0334-4334-bc07-a4d45949d160-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a5bc5ae1-0334-4334-bc07-a4d45949d160" (UID: "a5bc5ae1-0334-4334-bc07-a4d45949d160"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 22:25:02 crc kubenswrapper[4727]: I1121 22:25:02.799510 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5bc5ae1-0334-4334-bc07-a4d45949d160-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 22:25:02 crc kubenswrapper[4727]: I1121 22:25:02.799555 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5bc5ae1-0334-4334-bc07-a4d45949d160-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 22:25:02 crc kubenswrapper[4727]: I1121 22:25:02.799570 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxfnm\" (UniqueName: \"kubernetes.io/projected/a5bc5ae1-0334-4334-bc07-a4d45949d160-kube-api-access-dxfnm\") on node \"crc\" DevicePath \"\"" Nov 21 22:25:03 crc kubenswrapper[4727]: I1121 22:25:03.085045 4727 generic.go:334] "Generic (PLEG): container finished" podID="a5bc5ae1-0334-4334-bc07-a4d45949d160" containerID="bb5ceaee2b7d05534582803a05ad2a31c0474a33b9b1634dbfd6deeaa399695b" exitCode=0 Nov 21 22:25:03 crc kubenswrapper[4727]: I1121 22:25:03.085121 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ldckj" Nov 21 22:25:03 crc kubenswrapper[4727]: I1121 22:25:03.085127 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ldckj" event={"ID":"a5bc5ae1-0334-4334-bc07-a4d45949d160","Type":"ContainerDied","Data":"bb5ceaee2b7d05534582803a05ad2a31c0474a33b9b1634dbfd6deeaa399695b"} Nov 21 22:25:03 crc kubenswrapper[4727]: I1121 22:25:03.086312 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ldckj" event={"ID":"a5bc5ae1-0334-4334-bc07-a4d45949d160","Type":"ContainerDied","Data":"0309fc1c8066066b886299c4942493ce6b6dd04af986630fdeafb9d2ab938cd7"} Nov 21 22:25:03 crc kubenswrapper[4727]: I1121 22:25:03.086351 4727 scope.go:117] "RemoveContainer" containerID="bb5ceaee2b7d05534582803a05ad2a31c0474a33b9b1634dbfd6deeaa399695b" Nov 21 22:25:03 crc kubenswrapper[4727]: I1121 22:25:03.108816 4727 scope.go:117] "RemoveContainer" containerID="36388867ce434eed33858087013b47b3fd315a3d8033f424da785964314353d2" Nov 21 22:25:03 crc kubenswrapper[4727]: I1121 22:25:03.125512 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ldckj"] Nov 21 22:25:03 crc kubenswrapper[4727]: I1121 22:25:03.134918 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ldckj"] Nov 21 22:25:03 crc kubenswrapper[4727]: I1121 22:25:03.139359 4727 scope.go:117] "RemoveContainer" containerID="360a0f038cbed3a3d6b6a3f0561ad7344674c04456b6be5bf7c8b03ff8fef053" Nov 21 22:25:03 crc kubenswrapper[4727]: I1121 22:25:03.195648 4727 scope.go:117] "RemoveContainer" containerID="bb5ceaee2b7d05534582803a05ad2a31c0474a33b9b1634dbfd6deeaa399695b" Nov 21 22:25:03 crc kubenswrapper[4727]: E1121 22:25:03.196071 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"bb5ceaee2b7d05534582803a05ad2a31c0474a33b9b1634dbfd6deeaa399695b\": container with ID starting with bb5ceaee2b7d05534582803a05ad2a31c0474a33b9b1634dbfd6deeaa399695b not found: ID does not exist" containerID="bb5ceaee2b7d05534582803a05ad2a31c0474a33b9b1634dbfd6deeaa399695b" Nov 21 22:25:03 crc kubenswrapper[4727]: I1121 22:25:03.196116 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb5ceaee2b7d05534582803a05ad2a31c0474a33b9b1634dbfd6deeaa399695b"} err="failed to get container status \"bb5ceaee2b7d05534582803a05ad2a31c0474a33b9b1634dbfd6deeaa399695b\": rpc error: code = NotFound desc = could not find container \"bb5ceaee2b7d05534582803a05ad2a31c0474a33b9b1634dbfd6deeaa399695b\": container with ID starting with bb5ceaee2b7d05534582803a05ad2a31c0474a33b9b1634dbfd6deeaa399695b not found: ID does not exist" Nov 21 22:25:03 crc kubenswrapper[4727]: I1121 22:25:03.196144 4727 scope.go:117] "RemoveContainer" containerID="36388867ce434eed33858087013b47b3fd315a3d8033f424da785964314353d2" Nov 21 22:25:03 crc kubenswrapper[4727]: E1121 22:25:03.196518 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36388867ce434eed33858087013b47b3fd315a3d8033f424da785964314353d2\": container with ID starting with 36388867ce434eed33858087013b47b3fd315a3d8033f424da785964314353d2 not found: ID does not exist" containerID="36388867ce434eed33858087013b47b3fd315a3d8033f424da785964314353d2" Nov 21 22:25:03 crc kubenswrapper[4727]: I1121 22:25:03.196566 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36388867ce434eed33858087013b47b3fd315a3d8033f424da785964314353d2"} err="failed to get container status \"36388867ce434eed33858087013b47b3fd315a3d8033f424da785964314353d2\": rpc error: code = NotFound desc = could not find container \"36388867ce434eed33858087013b47b3fd315a3d8033f424da785964314353d2\": container with ID 
starting with 36388867ce434eed33858087013b47b3fd315a3d8033f424da785964314353d2 not found: ID does not exist" Nov 21 22:25:03 crc kubenswrapper[4727]: I1121 22:25:03.196595 4727 scope.go:117] "RemoveContainer" containerID="360a0f038cbed3a3d6b6a3f0561ad7344674c04456b6be5bf7c8b03ff8fef053" Nov 21 22:25:03 crc kubenswrapper[4727]: E1121 22:25:03.196882 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"360a0f038cbed3a3d6b6a3f0561ad7344674c04456b6be5bf7c8b03ff8fef053\": container with ID starting with 360a0f038cbed3a3d6b6a3f0561ad7344674c04456b6be5bf7c8b03ff8fef053 not found: ID does not exist" containerID="360a0f038cbed3a3d6b6a3f0561ad7344674c04456b6be5bf7c8b03ff8fef053" Nov 21 22:25:03 crc kubenswrapper[4727]: I1121 22:25:03.197003 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"360a0f038cbed3a3d6b6a3f0561ad7344674c04456b6be5bf7c8b03ff8fef053"} err="failed to get container status \"360a0f038cbed3a3d6b6a3f0561ad7344674c04456b6be5bf7c8b03ff8fef053\": rpc error: code = NotFound desc = could not find container \"360a0f038cbed3a3d6b6a3f0561ad7344674c04456b6be5bf7c8b03ff8fef053\": container with ID starting with 360a0f038cbed3a3d6b6a3f0561ad7344674c04456b6be5bf7c8b03ff8fef053 not found: ID does not exist" Nov 21 22:25:03 crc kubenswrapper[4727]: I1121 22:25:03.515105 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5bc5ae1-0334-4334-bc07-a4d45949d160" path="/var/lib/kubelet/pods/a5bc5ae1-0334-4334-bc07-a4d45949d160/volumes" Nov 21 22:25:10 crc kubenswrapper[4727]: I1121 22:25:10.578513 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5874bd7bc5-8x69n_12ebd943-fc8a-44a8-b99d-4629bbf01d9f/nmstate-console-plugin/0.log" Nov 21 22:25:10 crc kubenswrapper[4727]: I1121 22:25:10.760057 4727 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-handler-b5lmb_19904852-2792-4d5c-92e0-b304589c1eb8/nmstate-handler/0.log" Nov 21 22:25:10 crc kubenswrapper[4727]: I1121 22:25:10.808646 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-blwbm_80103952-99f2-44aa-b4e0-17e3329d39b5/kube-rbac-proxy/0.log" Nov 21 22:25:10 crc kubenswrapper[4727]: I1121 22:25:10.838760 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-blwbm_80103952-99f2-44aa-b4e0-17e3329d39b5/nmstate-metrics/0.log" Nov 21 22:25:10 crc kubenswrapper[4727]: I1121 22:25:10.961082 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-557fdffb88-4djhq_7be0d584-a880-410f-9592-8020cd27eb60/nmstate-operator/0.log" Nov 21 22:25:11 crc kubenswrapper[4727]: I1121 22:25:11.022245 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6b89b748d8-xbm77_28847e06-d0f5-423c-9b74-73ad939413ec/nmstate-webhook/0.log" Nov 21 22:25:13 crc kubenswrapper[4727]: I1121 22:25:13.336077 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 22:25:13 crc kubenswrapper[4727]: I1121 22:25:13.336474 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 22:25:13 crc kubenswrapper[4727]: I1121 22:25:13.336575 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" Nov 21 22:25:13 crc kubenswrapper[4727]: I1121 22:25:13.337665 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e632dbef58939d9e72ed84441bc629b2c51b72220ef130e0c351e2c9205f8637"} pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 22:25:13 crc kubenswrapper[4727]: I1121 22:25:13.337756 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" containerID="cri-o://e632dbef58939d9e72ed84441bc629b2c51b72220ef130e0c351e2c9205f8637" gracePeriod=600 Nov 21 22:25:14 crc kubenswrapper[4727]: I1121 22:25:14.225817 4727 generic.go:334] "Generic (PLEG): container finished" podID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerID="e632dbef58939d9e72ed84441bc629b2c51b72220ef130e0c351e2c9205f8637" exitCode=0 Nov 21 22:25:14 crc kubenswrapper[4727]: I1121 22:25:14.226710 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" event={"ID":"b58aef8f-f223-47d8-a2e6-4a80aeeeec42","Type":"ContainerDied","Data":"e632dbef58939d9e72ed84441bc629b2c51b72220ef130e0c351e2c9205f8637"} Nov 21 22:25:14 crc kubenswrapper[4727]: I1121 22:25:14.226779 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" event={"ID":"b58aef8f-f223-47d8-a2e6-4a80aeeeec42","Type":"ContainerStarted","Data":"e5d7807170a8939e14e364896b9ac79e86b93a2c36f5e0710022b8ae4800da7a"} Nov 21 22:25:14 crc kubenswrapper[4727]: I1121 22:25:14.227020 4727 scope.go:117] "RemoveContainer" 
containerID="5173c9ead4dd963bd522a07cca486adefc9357dff12dfeac94051c26a76649cc" Nov 21 22:25:24 crc kubenswrapper[4727]: I1121 22:25:24.634133 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-74d98576bd-k5q2r_16b45601-b011-407e-bbc5-3f92b770b3a2/kube-rbac-proxy/0.log" Nov 21 22:25:24 crc kubenswrapper[4727]: I1121 22:25:24.668589 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-74d98576bd-k5q2r_16b45601-b011-407e-bbc5-3f92b770b3a2/manager/0.log" Nov 21 22:25:39 crc kubenswrapper[4727]: I1121 22:25:39.708610 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_cluster-logging-operator-ff9846bd-nlpsm_f920c0a3-f99c-4def-bc43-b4734872bba2/cluster-logging-operator/0.log" Nov 21 22:25:39 crc kubenswrapper[4727]: I1121 22:25:39.805096 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_collector-f4pkc_99320c86-cce9-4dee-988e-adea1021bdbf/collector/0.log" Nov 21 22:25:39 crc kubenswrapper[4727]: I1121 22:25:39.903139 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-compactor-0_f5e392ee-a6ac-435f-92dc-87a6e27bf293/loki-compactor/0.log" Nov 21 22:25:39 crc kubenswrapper[4727]: I1121 22:25:39.990687 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-distributor-76cc67bf56-slmrz_0ca7d843-5fcf-4fcb-b111-9c657f58b54f/loki-distributor/0.log" Nov 21 22:25:40 crc kubenswrapper[4727]: I1121 22:25:40.073986 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-59cfccf4c6-j2284_441ae22e-7af1-4013-90ef-880b7ba0ce0e/gateway/0.log" Nov 21 22:25:40 crc kubenswrapper[4727]: I1121 22:25:40.104668 4727 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-logging_logging-loki-gateway-59cfccf4c6-j2284_441ae22e-7af1-4013-90ef-880b7ba0ce0e/opa/0.log" Nov 21 22:25:40 crc kubenswrapper[4727]: I1121 22:25:40.231084 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-59cfccf4c6-mzg5j_0b5a9db0-f734-4201-87a8-60f0bcbb14ec/gateway/0.log" Nov 21 22:25:40 crc kubenswrapper[4727]: I1121 22:25:40.260511 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-59cfccf4c6-mzg5j_0b5a9db0-f734-4201-87a8-60f0bcbb14ec/opa/0.log" Nov 21 22:25:40 crc kubenswrapper[4727]: I1121 22:25:40.402554 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-index-gateway-0_315b3e23-db88-454e-a80c-66f53fbe1c5b/loki-index-gateway/0.log" Nov 21 22:25:40 crc kubenswrapper[4727]: I1121 22:25:40.536557 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-ingester-0_fc313c47-e815-4c94-b46b-51876e49ec0a/loki-ingester/0.log" Nov 21 22:25:40 crc kubenswrapper[4727]: I1121 22:25:40.626409 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-querier-5895d59bb8-n85sz_2bfd5755-ad8a-47da-86b7-020881abeeec/loki-querier/0.log" Nov 21 22:25:40 crc kubenswrapper[4727]: I1121 22:25:40.724316 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-query-frontend-84558f7c9f-spcl5_a36d2c90-8f52-48b2-a6ea-b774e0e7d0a7/loki-query-frontend/0.log" Nov 21 22:25:55 crc kubenswrapper[4727]: I1121 22:25:55.590173 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-zjjlc_4fb283b2-30a9-4708-85e5-3e062e8d3ac5/kube-rbac-proxy/0.log" Nov 21 22:25:55 crc kubenswrapper[4727]: I1121 22:25:55.769206 4727 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-6c7b4b5f48-zjjlc_4fb283b2-30a9-4708-85e5-3e062e8d3ac5/controller/0.log" Nov 21 22:25:55 crc kubenswrapper[4727]: I1121 22:25:55.862377 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pv4hs_81eccf53-ca42-4d0f-b967-1da15a5d817d/cp-frr-files/0.log" Nov 21 22:25:56 crc kubenswrapper[4727]: I1121 22:25:56.018539 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pv4hs_81eccf53-ca42-4d0f-b967-1da15a5d817d/cp-frr-files/0.log" Nov 21 22:25:56 crc kubenswrapper[4727]: I1121 22:25:56.023697 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pv4hs_81eccf53-ca42-4d0f-b967-1da15a5d817d/cp-reloader/0.log" Nov 21 22:25:56 crc kubenswrapper[4727]: I1121 22:25:56.048354 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pv4hs_81eccf53-ca42-4d0f-b967-1da15a5d817d/cp-reloader/0.log" Nov 21 22:25:56 crc kubenswrapper[4727]: I1121 22:25:56.062569 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pv4hs_81eccf53-ca42-4d0f-b967-1da15a5d817d/cp-metrics/0.log" Nov 21 22:25:56 crc kubenswrapper[4727]: I1121 22:25:56.207579 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pv4hs_81eccf53-ca42-4d0f-b967-1da15a5d817d/cp-frr-files/0.log" Nov 21 22:25:56 crc kubenswrapper[4727]: I1121 22:25:56.268191 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pv4hs_81eccf53-ca42-4d0f-b967-1da15a5d817d/cp-reloader/0.log" Nov 21 22:25:56 crc kubenswrapper[4727]: I1121 22:25:56.283257 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pv4hs_81eccf53-ca42-4d0f-b967-1da15a5d817d/cp-metrics/0.log" Nov 21 22:25:56 crc kubenswrapper[4727]: I1121 22:25:56.290582 4727 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-pv4hs_81eccf53-ca42-4d0f-b967-1da15a5d817d/cp-metrics/0.log" Nov 21 22:25:56 crc kubenswrapper[4727]: I1121 22:25:56.439566 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pv4hs_81eccf53-ca42-4d0f-b967-1da15a5d817d/cp-frr-files/0.log" Nov 21 22:25:56 crc kubenswrapper[4727]: I1121 22:25:56.447749 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pv4hs_81eccf53-ca42-4d0f-b967-1da15a5d817d/cp-reloader/0.log" Nov 21 22:25:56 crc kubenswrapper[4727]: I1121 22:25:56.468451 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pv4hs_81eccf53-ca42-4d0f-b967-1da15a5d817d/cp-metrics/0.log" Nov 21 22:25:56 crc kubenswrapper[4727]: I1121 22:25:56.480540 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pv4hs_81eccf53-ca42-4d0f-b967-1da15a5d817d/controller/0.log" Nov 21 22:25:56 crc kubenswrapper[4727]: I1121 22:25:56.660804 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pv4hs_81eccf53-ca42-4d0f-b967-1da15a5d817d/kube-rbac-proxy/0.log" Nov 21 22:25:56 crc kubenswrapper[4727]: I1121 22:25:56.664131 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pv4hs_81eccf53-ca42-4d0f-b967-1da15a5d817d/frr-metrics/0.log" Nov 21 22:25:56 crc kubenswrapper[4727]: I1121 22:25:56.664541 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pv4hs_81eccf53-ca42-4d0f-b967-1da15a5d817d/kube-rbac-proxy-frr/0.log" Nov 21 22:25:56 crc kubenswrapper[4727]: I1121 22:25:56.859404 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pv4hs_81eccf53-ca42-4d0f-b967-1da15a5d817d/reloader/0.log" Nov 21 22:25:56 crc kubenswrapper[4727]: I1121 22:25:56.894313 4727 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-webhook-server-6998585d5-2nz5j_330841f1-7983-496c-8fd8-b1f2aa8f286f/frr-k8s-webhook-server/0.log" Nov 21 22:25:57 crc kubenswrapper[4727]: I1121 22:25:57.112921 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7d55dd5c97-pxmw2_6764728f-f2fe-4017-99bf-6278910f9fc8/manager/0.log" Nov 21 22:25:57 crc kubenswrapper[4727]: I1121 22:25:57.323421 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-795fdb5c8f-4ffhh_aee0f3a1-6283-4b18-ad30-00cae510da18/webhook-server/0.log" Nov 21 22:25:57 crc kubenswrapper[4727]: I1121 22:25:57.373055 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-pf24d_bbf29db0-6bed-47d7-af92-d4ab37fb4909/kube-rbac-proxy/0.log" Nov 21 22:25:58 crc kubenswrapper[4727]: I1121 22:25:58.133236 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-pf24d_bbf29db0-6bed-47d7-af92-d4ab37fb4909/speaker/0.log" Nov 21 22:25:58 crc kubenswrapper[4727]: I1121 22:25:58.482697 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pv4hs_81eccf53-ca42-4d0f-b967-1da15a5d817d/frr/0.log" Nov 21 22:26:11 crc kubenswrapper[4727]: I1121 22:26:11.624827 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8h5fhr_4944f157-e2ee-453c-bac8-aee27615a833/util/0.log" Nov 21 22:26:11 crc kubenswrapper[4727]: I1121 22:26:11.809218 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8h5fhr_4944f157-e2ee-453c-bac8-aee27615a833/util/0.log" Nov 21 22:26:11 crc kubenswrapper[4727]: I1121 22:26:11.841331 4727 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8h5fhr_4944f157-e2ee-453c-bac8-aee27615a833/pull/0.log" Nov 21 22:26:11 crc kubenswrapper[4727]: I1121 22:26:11.841494 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8h5fhr_4944f157-e2ee-453c-bac8-aee27615a833/pull/0.log" Nov 21 22:26:11 crc kubenswrapper[4727]: I1121 22:26:11.975795 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8h5fhr_4944f157-e2ee-453c-bac8-aee27615a833/util/0.log" Nov 21 22:26:12 crc kubenswrapper[4727]: I1121 22:26:12.023495 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8h5fhr_4944f157-e2ee-453c-bac8-aee27615a833/pull/0.log" Nov 21 22:26:12 crc kubenswrapper[4727]: I1121 22:26:12.029039 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8h5fhr_4944f157-e2ee-453c-bac8-aee27615a833/extract/0.log" Nov 21 22:26:12 crc kubenswrapper[4727]: I1121 22:26:12.153598 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edfp6c_a5e3b453-04ec-438a-afb8-e162baa32e8d/util/0.log" Nov 21 22:26:12 crc kubenswrapper[4727]: I1121 22:26:12.348412 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edfp6c_a5e3b453-04ec-438a-afb8-e162baa32e8d/util/0.log" Nov 21 22:26:12 crc kubenswrapper[4727]: I1121 22:26:12.373041 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edfp6c_a5e3b453-04ec-438a-afb8-e162baa32e8d/pull/0.log" Nov 21 
22:26:12 crc kubenswrapper[4727]: I1121 22:26:12.374373 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edfp6c_a5e3b453-04ec-438a-afb8-e162baa32e8d/pull/0.log" Nov 21 22:26:12 crc kubenswrapper[4727]: I1121 22:26:12.545564 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edfp6c_a5e3b453-04ec-438a-afb8-e162baa32e8d/pull/0.log" Nov 21 22:26:12 crc kubenswrapper[4727]: I1121 22:26:12.559848 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edfp6c_a5e3b453-04ec-438a-afb8-e162baa32e8d/util/0.log" Nov 21 22:26:12 crc kubenswrapper[4727]: I1121 22:26:12.569780 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edfp6c_a5e3b453-04ec-438a-afb8-e162baa32e8d/extract/0.log" Nov 21 22:26:12 crc kubenswrapper[4727]: I1121 22:26:12.697053 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b5lxh_8ab1bf80-b740-45ee-b703-4190578fcf3e/util/0.log" Nov 21 22:26:12 crc kubenswrapper[4727]: I1121 22:26:12.883274 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b5lxh_8ab1bf80-b740-45ee-b703-4190578fcf3e/pull/0.log" Nov 21 22:26:12 crc kubenswrapper[4727]: I1121 22:26:12.898239 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b5lxh_8ab1bf80-b740-45ee-b703-4190578fcf3e/util/0.log" Nov 21 22:26:12 crc kubenswrapper[4727]: I1121 22:26:12.927693 4727 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b5lxh_8ab1bf80-b740-45ee-b703-4190578fcf3e/pull/0.log" Nov 21 22:26:13 crc kubenswrapper[4727]: I1121 22:26:13.065821 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b5lxh_8ab1bf80-b740-45ee-b703-4190578fcf3e/util/0.log" Nov 21 22:26:13 crc kubenswrapper[4727]: I1121 22:26:13.072749 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b5lxh_8ab1bf80-b740-45ee-b703-4190578fcf3e/pull/0.log" Nov 21 22:26:13 crc kubenswrapper[4727]: I1121 22:26:13.075214 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210b5lxh_8ab1bf80-b740-45ee-b703-4190578fcf3e/extract/0.log" Nov 21 22:26:13 crc kubenswrapper[4727]: I1121 22:26:13.219117 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fqpfxz_d9e94391-8dbd-4d4b-b330-dd6fc13f91c4/util/0.log" Nov 21 22:26:13 crc kubenswrapper[4727]: I1121 22:26:13.429790 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fqpfxz_d9e94391-8dbd-4d4b-b330-dd6fc13f91c4/util/0.log" Nov 21 22:26:13 crc kubenswrapper[4727]: I1121 22:26:13.441534 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fqpfxz_d9e94391-8dbd-4d4b-b330-dd6fc13f91c4/pull/0.log" Nov 21 22:26:13 crc kubenswrapper[4727]: I1121 22:26:13.485237 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fqpfxz_d9e94391-8dbd-4d4b-b330-dd6fc13f91c4/pull/0.log" Nov 21 
22:26:13 crc kubenswrapper[4727]: I1121 22:26:13.655395 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fqpfxz_d9e94391-8dbd-4d4b-b330-dd6fc13f91c4/util/0.log" Nov 21 22:26:13 crc kubenswrapper[4727]: I1121 22:26:13.656825 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fqpfxz_d9e94391-8dbd-4d4b-b330-dd6fc13f91c4/extract/0.log" Nov 21 22:26:13 crc kubenswrapper[4727]: I1121 22:26:13.701449 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fqpfxz_d9e94391-8dbd-4d4b-b330-dd6fc13f91c4/pull/0.log" Nov 21 22:26:13 crc kubenswrapper[4727]: I1121 22:26:13.841766 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-n8vql_7cbf0484-db14-4c19-8944-5da1652fe052/extract-utilities/0.log" Nov 21 22:26:14 crc kubenswrapper[4727]: I1121 22:26:14.031090 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-n8vql_7cbf0484-db14-4c19-8944-5da1652fe052/extract-content/0.log" Nov 21 22:26:14 crc kubenswrapper[4727]: I1121 22:26:14.048114 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-n8vql_7cbf0484-db14-4c19-8944-5da1652fe052/extract-utilities/0.log" Nov 21 22:26:14 crc kubenswrapper[4727]: I1121 22:26:14.081831 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-n8vql_7cbf0484-db14-4c19-8944-5da1652fe052/extract-content/0.log" Nov 21 22:26:14 crc kubenswrapper[4727]: I1121 22:26:14.216261 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-n8vql_7cbf0484-db14-4c19-8944-5da1652fe052/extract-content/0.log" Nov 21 22:26:14 crc 
kubenswrapper[4727]: I1121 22:26:14.256756 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-n8vql_7cbf0484-db14-4c19-8944-5da1652fe052/extract-utilities/0.log" Nov 21 22:26:14 crc kubenswrapper[4727]: I1121 22:26:14.531549 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-mqhv2_552d0b06-98da-4bb3-b86d-4a3ac341ad99/extract-utilities/0.log" Nov 21 22:26:14 crc kubenswrapper[4727]: I1121 22:26:14.781111 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-mqhv2_552d0b06-98da-4bb3-b86d-4a3ac341ad99/extract-content/0.log" Nov 21 22:26:14 crc kubenswrapper[4727]: I1121 22:26:14.789457 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-mqhv2_552d0b06-98da-4bb3-b86d-4a3ac341ad99/extract-utilities/0.log" Nov 21 22:26:14 crc kubenswrapper[4727]: I1121 22:26:14.807503 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-mqhv2_552d0b06-98da-4bb3-b86d-4a3ac341ad99/extract-content/0.log" Nov 21 22:26:14 crc kubenswrapper[4727]: I1121 22:26:14.999475 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-mqhv2_552d0b06-98da-4bb3-b86d-4a3ac341ad99/extract-utilities/0.log" Nov 21 22:26:15 crc kubenswrapper[4727]: I1121 22:26:15.081461 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-mqhv2_552d0b06-98da-4bb3-b86d-4a3ac341ad99/extract-content/0.log" Nov 21 22:26:15 crc kubenswrapper[4727]: I1121 22:26:15.200731 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-n8vql_7cbf0484-db14-4c19-8944-5da1652fe052/registry-server/0.log" Nov 21 22:26:15 crc kubenswrapper[4727]: I1121 22:26:15.212048 4727 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6wrhmx_1afc1382-b90a-439e-bd7f-3ee0d42a547f/util/0.log" Nov 21 22:26:15 crc kubenswrapper[4727]: I1121 22:26:15.351542 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6wrhmx_1afc1382-b90a-439e-bd7f-3ee0d42a547f/pull/0.log" Nov 21 22:26:15 crc kubenswrapper[4727]: I1121 22:26:15.371719 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6wrhmx_1afc1382-b90a-439e-bd7f-3ee0d42a547f/util/0.log" Nov 21 22:26:15 crc kubenswrapper[4727]: I1121 22:26:15.401356 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6wrhmx_1afc1382-b90a-439e-bd7f-3ee0d42a547f/pull/0.log" Nov 21 22:26:15 crc kubenswrapper[4727]: I1121 22:26:15.600058 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6wrhmx_1afc1382-b90a-439e-bd7f-3ee0d42a547f/pull/0.log" Nov 21 22:26:15 crc kubenswrapper[4727]: I1121 22:26:15.643950 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6wrhmx_1afc1382-b90a-439e-bd7f-3ee0d42a547f/extract/0.log" Nov 21 22:26:15 crc kubenswrapper[4727]: I1121 22:26:15.650395 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6wrhmx_1afc1382-b90a-439e-bd7f-3ee0d42a547f/util/0.log" Nov 21 22:26:15 crc kubenswrapper[4727]: I1121 22:26:15.862687 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nzkrs_8370af5d-5665-4627-98b2-f0df95797a4f/extract-utilities/0.log" Nov 21 22:26:15 crc 
kubenswrapper[4727]: I1121 22:26:15.870341 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-lkvmp_ba6217c1-bde3-455b-a45d-bcf8001b7a16/marketplace-operator/0.log" Nov 21 22:26:16 crc kubenswrapper[4727]: I1121 22:26:16.099567 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-mqhv2_552d0b06-98da-4bb3-b86d-4a3ac341ad99/registry-server/0.log" Nov 21 22:26:16 crc kubenswrapper[4727]: I1121 22:26:16.109784 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nzkrs_8370af5d-5665-4627-98b2-f0df95797a4f/extract-utilities/0.log" Nov 21 22:26:16 crc kubenswrapper[4727]: I1121 22:26:16.134688 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nzkrs_8370af5d-5665-4627-98b2-f0df95797a4f/extract-content/0.log" Nov 21 22:26:16 crc kubenswrapper[4727]: I1121 22:26:16.152378 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nzkrs_8370af5d-5665-4627-98b2-f0df95797a4f/extract-content/0.log" Nov 21 22:26:16 crc kubenswrapper[4727]: I1121 22:26:16.252817 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nzkrs_8370af5d-5665-4627-98b2-f0df95797a4f/extract-content/0.log" Nov 21 22:26:16 crc kubenswrapper[4727]: I1121 22:26:16.266777 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nzkrs_8370af5d-5665-4627-98b2-f0df95797a4f/extract-utilities/0.log" Nov 21 22:26:16 crc kubenswrapper[4727]: I1121 22:26:16.361526 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-x9vd7_285954a8-1cba-4390-bf20-4fdf85ba2b48/extract-utilities/0.log" Nov 21 22:26:16 crc kubenswrapper[4727]: I1121 22:26:16.557706 4727 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-x9vd7_285954a8-1cba-4390-bf20-4fdf85ba2b48/extract-utilities/0.log" Nov 21 22:26:16 crc kubenswrapper[4727]: I1121 22:26:16.585694 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-x9vd7_285954a8-1cba-4390-bf20-4fdf85ba2b48/extract-content/0.log" Nov 21 22:26:16 crc kubenswrapper[4727]: I1121 22:26:16.588039 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-x9vd7_285954a8-1cba-4390-bf20-4fdf85ba2b48/extract-content/0.log" Nov 21 22:26:16 crc kubenswrapper[4727]: I1121 22:26:16.597701 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nzkrs_8370af5d-5665-4627-98b2-f0df95797a4f/registry-server/0.log" Nov 21 22:26:16 crc kubenswrapper[4727]: I1121 22:26:16.783685 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-x9vd7_285954a8-1cba-4390-bf20-4fdf85ba2b48/extract-content/0.log" Nov 21 22:26:16 crc kubenswrapper[4727]: I1121 22:26:16.788301 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-x9vd7_285954a8-1cba-4390-bf20-4fdf85ba2b48/extract-utilities/0.log" Nov 21 22:26:17 crc kubenswrapper[4727]: I1121 22:26:17.521145 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-x9vd7_285954a8-1cba-4390-bf20-4fdf85ba2b48/registry-server/0.log" Nov 21 22:26:31 crc kubenswrapper[4727]: I1121 22:26:31.010192 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-668cf9dfbb-rxhl2_b7c4b477-dfcc-4cf5-ac76-eef0917d866c/prometheus-operator/0.log" Nov 21 22:26:31 crc kubenswrapper[4727]: I1121 22:26:31.126814 4727 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-77844f949f-d9l9z_a9691dad-91e9-4701-bc56-f92b96693c15/prometheus-operator-admission-webhook/0.log" Nov 21 22:26:31 crc kubenswrapper[4727]: I1121 22:26:31.176363 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-77844f949f-jx5h8_f2258805-8f2d-44e0-adc5-18e68c485378/prometheus-operator-admission-webhook/0.log" Nov 21 22:26:31 crc kubenswrapper[4727]: I1121 22:26:31.306257 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-d8bb48f5d-8tk4t_2092c64d-e6a4-4d8a-9e92-65ea330e7ef0/operator/0.log" Nov 21 22:26:31 crc kubenswrapper[4727]: I1121 22:26:31.331939 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-7d5fb4cbfb-pcbt2_a2240b49-b00a-45c4-94fa-3acd3cb0e953/observability-ui-dashboards/0.log" Nov 21 22:26:31 crc kubenswrapper[4727]: I1121 22:26:31.484798 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5446b9c989-c9l74_30b54f9a-8377-4b99-92fe-ccbef59d7c7a/perses-operator/0.log" Nov 21 22:26:44 crc kubenswrapper[4727]: I1121 22:26:44.957763 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-74d98576bd-k5q2r_16b45601-b011-407e-bbc5-3f92b770b3a2/kube-rbac-proxy/0.log" Nov 21 22:26:44 crc kubenswrapper[4727]: I1121 22:26:44.992842 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-74d98576bd-k5q2r_16b45601-b011-407e-bbc5-3f92b770b3a2/manager/0.log" Nov 21 22:27:13 crc kubenswrapper[4727]: I1121 22:27:13.335799 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 22:27:13 crc kubenswrapper[4727]: I1121 22:27:13.336501 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 22:27:43 crc kubenswrapper[4727]: I1121 22:27:43.335411 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 22:27:43 crc kubenswrapper[4727]: I1121 22:27:43.336334 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 22:28:13 crc kubenswrapper[4727]: I1121 22:28:13.336081 4727 patch_prober.go:28] interesting pod/machine-config-daemon-5k2kk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 22:28:13 crc kubenswrapper[4727]: I1121 22:28:13.336708 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 
22:28:13 crc kubenswrapper[4727]: I1121 22:28:13.336761 4727 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" Nov 21 22:28:13 crc kubenswrapper[4727]: I1121 22:28:13.337855 4727 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e5d7807170a8939e14e364896b9ac79e86b93a2c36f5e0710022b8ae4800da7a"} pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 22:28:13 crc kubenswrapper[4727]: I1121 22:28:13.337929 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerName="machine-config-daemon" containerID="cri-o://e5d7807170a8939e14e364896b9ac79e86b93a2c36f5e0710022b8ae4800da7a" gracePeriod=600 Nov 21 22:28:13 crc kubenswrapper[4727]: E1121 22:28:13.461798 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 22:28:14 crc kubenswrapper[4727]: I1121 22:28:14.212692 4727 generic.go:334] "Generic (PLEG): container finished" podID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" containerID="e5d7807170a8939e14e364896b9ac79e86b93a2c36f5e0710022b8ae4800da7a" exitCode=0 Nov 21 22:28:14 crc kubenswrapper[4727]: I1121 22:28:14.212743 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" 
event={"ID":"b58aef8f-f223-47d8-a2e6-4a80aeeeec42","Type":"ContainerDied","Data":"e5d7807170a8939e14e364896b9ac79e86b93a2c36f5e0710022b8ae4800da7a"} Nov 21 22:28:14 crc kubenswrapper[4727]: I1121 22:28:14.212845 4727 scope.go:117] "RemoveContainer" containerID="e632dbef58939d9e72ed84441bc629b2c51b72220ef130e0c351e2c9205f8637" Nov 21 22:28:14 crc kubenswrapper[4727]: I1121 22:28:14.214011 4727 scope.go:117] "RemoveContainer" containerID="e5d7807170a8939e14e364896b9ac79e86b93a2c36f5e0710022b8ae4800da7a" Nov 21 22:28:14 crc kubenswrapper[4727]: E1121 22:28:14.214758 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 22:28:27 crc kubenswrapper[4727]: I1121 22:28:27.500066 4727 scope.go:117] "RemoveContainer" containerID="e5d7807170a8939e14e364896b9ac79e86b93a2c36f5e0710022b8ae4800da7a" Nov 21 22:28:27 crc kubenswrapper[4727]: E1121 22:28:27.500897 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 22:28:29 crc kubenswrapper[4727]: I1121 22:28:29.966012 4727 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-24hpt container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": net/http: 
request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 21 22:28:29 crc kubenswrapper[4727]: I1121 22:28:29.966495 4727 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-24hpt" podUID="0dbfc629-6fb6-49a0-a834-28ab82b07c75" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 21 22:28:29 crc kubenswrapper[4727]: I1121 22:28:29.978933 4727 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-24hpt container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 21 22:28:29 crc kubenswrapper[4727]: I1121 22:28:29.979064 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-24hpt" podUID="0dbfc629-6fb6-49a0-a834-28ab82b07c75" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 21 22:28:29 crc kubenswrapper[4727]: I1121 22:28:29.979334 4727 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-bcf5p" podUID="3a00eb31-9565-4aad-bcea-bb52bc5bebdc" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 21 22:28:31 crc kubenswrapper[4727]: I1121 22:28:31.033220 4727 
generic.go:334] "Generic (PLEG): container finished" podID="7163d876-0482-4f49-a281-3699fdb0d041" containerID="89344a399a424edc6b5f4abfc8eae2b8a85409c9837b5dff10371a578b855aa4" exitCode=0 Nov 21 22:28:31 crc kubenswrapper[4727]: I1121 22:28:31.033343 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lkcrn/must-gather-w9bms" event={"ID":"7163d876-0482-4f49-a281-3699fdb0d041","Type":"ContainerDied","Data":"89344a399a424edc6b5f4abfc8eae2b8a85409c9837b5dff10371a578b855aa4"} Nov 21 22:28:31 crc kubenswrapper[4727]: I1121 22:28:31.034298 4727 scope.go:117] "RemoveContainer" containerID="89344a399a424edc6b5f4abfc8eae2b8a85409c9837b5dff10371a578b855aa4" Nov 21 22:28:31 crc kubenswrapper[4727]: I1121 22:28:31.189004 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-lkcrn_must-gather-w9bms_7163d876-0482-4f49-a281-3699fdb0d041/gather/0.log" Nov 21 22:28:39 crc kubenswrapper[4727]: I1121 22:28:39.032356 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-lkcrn/must-gather-w9bms"] Nov 21 22:28:39 crc kubenswrapper[4727]: I1121 22:28:39.033304 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-lkcrn/must-gather-w9bms" podUID="7163d876-0482-4f49-a281-3699fdb0d041" containerName="copy" containerID="cri-o://dc98478475acd6b437909546c8b39f212a1bfa0b3c8a1e87f37a5b8cf01675c2" gracePeriod=2 Nov 21 22:28:39 crc kubenswrapper[4727]: I1121 22:28:39.046517 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-lkcrn/must-gather-w9bms"] Nov 21 22:28:39 crc kubenswrapper[4727]: I1121 22:28:39.500026 4727 scope.go:117] "RemoveContainer" containerID="e5d7807170a8939e14e364896b9ac79e86b93a2c36f5e0710022b8ae4800da7a" Nov 21 22:28:39 crc kubenswrapper[4727]: E1121 22:28:39.500539 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 22:28:40 crc kubenswrapper[4727]: I1121 22:28:40.139765 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-lkcrn_must-gather-w9bms_7163d876-0482-4f49-a281-3699fdb0d041/copy/0.log" Nov 21 22:28:40 crc kubenswrapper[4727]: I1121 22:28:40.140734 4727 generic.go:334] "Generic (PLEG): container finished" podID="7163d876-0482-4f49-a281-3699fdb0d041" containerID="dc98478475acd6b437909546c8b39f212a1bfa0b3c8a1e87f37a5b8cf01675c2" exitCode=143 Nov 21 22:28:40 crc kubenswrapper[4727]: I1121 22:28:40.140797 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35717500d6f47668b5be7c41d493b1f1bfe029928736942b0a9bc1cbc9240618" Nov 21 22:28:40 crc kubenswrapper[4727]: I1121 22:28:40.183998 4727 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-lkcrn_must-gather-w9bms_7163d876-0482-4f49-a281-3699fdb0d041/copy/0.log" Nov 21 22:28:40 crc kubenswrapper[4727]: I1121 22:28:40.184498 4727 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lkcrn/must-gather-w9bms" Nov 21 22:28:40 crc kubenswrapper[4727]: I1121 22:28:40.221437 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7163d876-0482-4f49-a281-3699fdb0d041-must-gather-output\") pod \"7163d876-0482-4f49-a281-3699fdb0d041\" (UID: \"7163d876-0482-4f49-a281-3699fdb0d041\") " Nov 21 22:28:40 crc kubenswrapper[4727]: I1121 22:28:40.221624 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hq7wl\" (UniqueName: \"kubernetes.io/projected/7163d876-0482-4f49-a281-3699fdb0d041-kube-api-access-hq7wl\") pod \"7163d876-0482-4f49-a281-3699fdb0d041\" (UID: \"7163d876-0482-4f49-a281-3699fdb0d041\") " Nov 21 22:28:40 crc kubenswrapper[4727]: I1121 22:28:40.230332 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7163d876-0482-4f49-a281-3699fdb0d041-kube-api-access-hq7wl" (OuterVolumeSpecName: "kube-api-access-hq7wl") pod "7163d876-0482-4f49-a281-3699fdb0d041" (UID: "7163d876-0482-4f49-a281-3699fdb0d041"). InnerVolumeSpecName "kube-api-access-hq7wl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 22:28:40 crc kubenswrapper[4727]: I1121 22:28:40.324172 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hq7wl\" (UniqueName: \"kubernetes.io/projected/7163d876-0482-4f49-a281-3699fdb0d041-kube-api-access-hq7wl\") on node \"crc\" DevicePath \"\"" Nov 21 22:28:40 crc kubenswrapper[4727]: I1121 22:28:40.394935 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7163d876-0482-4f49-a281-3699fdb0d041-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "7163d876-0482-4f49-a281-3699fdb0d041" (UID: "7163d876-0482-4f49-a281-3699fdb0d041"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 21 22:28:40 crc kubenswrapper[4727]: I1121 22:28:40.426707 4727 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7163d876-0482-4f49-a281-3699fdb0d041-must-gather-output\") on node \"crc\" DevicePath \"\""
Nov 21 22:28:41 crc kubenswrapper[4727]: I1121 22:28:41.150759 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lkcrn/must-gather-w9bms"
Nov 21 22:28:41 crc kubenswrapper[4727]: I1121 22:28:41.511927 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7163d876-0482-4f49-a281-3699fdb0d041" path="/var/lib/kubelet/pods/7163d876-0482-4f49-a281-3699fdb0d041/volumes"
Nov 21 22:28:50 crc kubenswrapper[4727]: I1121 22:28:50.504322 4727 scope.go:117] "RemoveContainer" containerID="e5d7807170a8939e14e364896b9ac79e86b93a2c36f5e0710022b8ae4800da7a"
Nov 21 22:28:50 crc kubenswrapper[4727]: E1121 22:28:50.506094 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42"
Nov 21 22:29:00 crc kubenswrapper[4727]: I1121 22:29:00.838844 4727 scope.go:117] "RemoveContainer" containerID="fae5efa887bb56726e3d36ed62d4378c174101d86ce21bfc647552d4079e0a96"
Nov 21 22:29:00 crc kubenswrapper[4727]: I1121 22:29:00.879750 4727 scope.go:117] "RemoveContainer" containerID="dc98478475acd6b437909546c8b39f212a1bfa0b3c8a1e87f37a5b8cf01675c2"
Nov 21 22:29:00 crc kubenswrapper[4727]: I1121 22:29:00.928913 4727 scope.go:117] "RemoveContainer" containerID="89344a399a424edc6b5f4abfc8eae2b8a85409c9837b5dff10371a578b855aa4"
Nov 21 22:29:05 crc kubenswrapper[4727]: I1121 22:29:05.515226 4727 scope.go:117] "RemoveContainer" containerID="e5d7807170a8939e14e364896b9ac79e86b93a2c36f5e0710022b8ae4800da7a"
Nov 21 22:29:05 crc kubenswrapper[4727]: E1121 22:29:05.516121 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42"
Nov 21 22:29:19 crc kubenswrapper[4727]: I1121 22:29:19.500328 4727 scope.go:117] "RemoveContainer" containerID="e5d7807170a8939e14e364896b9ac79e86b93a2c36f5e0710022b8ae4800da7a"
Nov 21 22:29:19 crc kubenswrapper[4727]: E1121 22:29:19.501119 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42"
Nov 21 22:29:30 crc kubenswrapper[4727]: I1121 22:29:30.500368 4727 scope.go:117] "RemoveContainer" containerID="e5d7807170a8939e14e364896b9ac79e86b93a2c36f5e0710022b8ae4800da7a"
Nov 21 22:29:30 crc kubenswrapper[4727]: E1121 22:29:30.501564 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42"
Nov 21 22:29:41 crc kubenswrapper[4727]: I1121 22:29:41.499806 4727 scope.go:117] "RemoveContainer" containerID="e5d7807170a8939e14e364896b9ac79e86b93a2c36f5e0710022b8ae4800da7a"
Nov 21 22:29:41 crc kubenswrapper[4727]: E1121 22:29:41.500722 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42"
Nov 21 22:29:52 crc kubenswrapper[4727]: I1121 22:29:52.500921 4727 scope.go:117] "RemoveContainer" containerID="e5d7807170a8939e14e364896b9ac79e86b93a2c36f5e0710022b8ae4800da7a"
Nov 21 22:29:52 crc kubenswrapper[4727]: E1121 22:29:52.502352 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42"
Nov 21 22:30:00 crc kubenswrapper[4727]: I1121 22:30:00.168172 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396070-rlzxd"]
Nov 21 22:30:00 crc kubenswrapper[4727]: E1121 22:30:00.169206 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7163d876-0482-4f49-a281-3699fdb0d041" containerName="gather"
Nov 21 22:30:00 crc kubenswrapper[4727]: I1121 22:30:00.169220 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="7163d876-0482-4f49-a281-3699fdb0d041" containerName="gather"
Nov 21 22:30:00 crc kubenswrapper[4727]: E1121 22:30:00.169242 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5bc5ae1-0334-4334-bc07-a4d45949d160" containerName="extract-content"
Nov 21 22:30:00 crc kubenswrapper[4727]: I1121 22:30:00.169248 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5bc5ae1-0334-4334-bc07-a4d45949d160" containerName="extract-content"
Nov 21 22:30:00 crc kubenswrapper[4727]: E1121 22:30:00.169263 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5bc5ae1-0334-4334-bc07-a4d45949d160" containerName="registry-server"
Nov 21 22:30:00 crc kubenswrapper[4727]: I1121 22:30:00.169271 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5bc5ae1-0334-4334-bc07-a4d45949d160" containerName="registry-server"
Nov 21 22:30:00 crc kubenswrapper[4727]: E1121 22:30:00.169297 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5bc5ae1-0334-4334-bc07-a4d45949d160" containerName="extract-utilities"
Nov 21 22:30:00 crc kubenswrapper[4727]: I1121 22:30:00.169303 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5bc5ae1-0334-4334-bc07-a4d45949d160" containerName="extract-utilities"
Nov 21 22:30:00 crc kubenswrapper[4727]: E1121 22:30:00.169336 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7163d876-0482-4f49-a281-3699fdb0d041" containerName="copy"
Nov 21 22:30:00 crc kubenswrapper[4727]: I1121 22:30:00.169342 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="7163d876-0482-4f49-a281-3699fdb0d041" containerName="copy"
Nov 21 22:30:00 crc kubenswrapper[4727]: I1121 22:30:00.169550 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5bc5ae1-0334-4334-bc07-a4d45949d160" containerName="registry-server"
Nov 21 22:30:00 crc kubenswrapper[4727]: I1121 22:30:00.169568 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="7163d876-0482-4f49-a281-3699fdb0d041" containerName="gather"
Nov 21 22:30:00 crc kubenswrapper[4727]: I1121 22:30:00.169583 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="7163d876-0482-4f49-a281-3699fdb0d041" containerName="copy"
Nov 21 22:30:00 crc kubenswrapper[4727]: I1121 22:30:00.170477 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396070-rlzxd"
Nov 21 22:30:00 crc kubenswrapper[4727]: I1121 22:30:00.174051 4727 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Nov 21 22:30:00 crc kubenswrapper[4727]: I1121 22:30:00.174660 4727 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Nov 21 22:30:00 crc kubenswrapper[4727]: I1121 22:30:00.180283 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396070-rlzxd"]
Nov 21 22:30:00 crc kubenswrapper[4727]: I1121 22:30:00.260579 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7c5d\" (UniqueName: \"kubernetes.io/projected/10743313-aad3-47ab-9738-4a36dff338d9-kube-api-access-g7c5d\") pod \"collect-profiles-29396070-rlzxd\" (UID: \"10743313-aad3-47ab-9738-4a36dff338d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396070-rlzxd"
Nov 21 22:30:00 crc kubenswrapper[4727]: I1121 22:30:00.261067 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/10743313-aad3-47ab-9738-4a36dff338d9-config-volume\") pod \"collect-profiles-29396070-rlzxd\" (UID: \"10743313-aad3-47ab-9738-4a36dff338d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396070-rlzxd"
Nov 21 22:30:00 crc kubenswrapper[4727]: I1121 22:30:00.261139 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/10743313-aad3-47ab-9738-4a36dff338d9-secret-volume\") pod \"collect-profiles-29396070-rlzxd\" (UID: \"10743313-aad3-47ab-9738-4a36dff338d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396070-rlzxd"
Nov 21 22:30:00 crc kubenswrapper[4727]: I1121 22:30:00.363983 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7c5d\" (UniqueName: \"kubernetes.io/projected/10743313-aad3-47ab-9738-4a36dff338d9-kube-api-access-g7c5d\") pod \"collect-profiles-29396070-rlzxd\" (UID: \"10743313-aad3-47ab-9738-4a36dff338d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396070-rlzxd"
Nov 21 22:30:00 crc kubenswrapper[4727]: I1121 22:30:00.364259 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/10743313-aad3-47ab-9738-4a36dff338d9-config-volume\") pod \"collect-profiles-29396070-rlzxd\" (UID: \"10743313-aad3-47ab-9738-4a36dff338d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396070-rlzxd"
Nov 21 22:30:00 crc kubenswrapper[4727]: I1121 22:30:00.364326 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/10743313-aad3-47ab-9738-4a36dff338d9-secret-volume\") pod \"collect-profiles-29396070-rlzxd\" (UID: \"10743313-aad3-47ab-9738-4a36dff338d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396070-rlzxd"
Nov 21 22:30:00 crc kubenswrapper[4727]: I1121 22:30:00.365646 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/10743313-aad3-47ab-9738-4a36dff338d9-config-volume\") pod \"collect-profiles-29396070-rlzxd\" (UID: \"10743313-aad3-47ab-9738-4a36dff338d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396070-rlzxd"
Nov 21 22:30:00 crc kubenswrapper[4727]: I1121 22:30:00.373256 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/10743313-aad3-47ab-9738-4a36dff338d9-secret-volume\") pod \"collect-profiles-29396070-rlzxd\" (UID: \"10743313-aad3-47ab-9738-4a36dff338d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396070-rlzxd"
Nov 21 22:30:00 crc kubenswrapper[4727]: I1121 22:30:00.385115 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7c5d\" (UniqueName: \"kubernetes.io/projected/10743313-aad3-47ab-9738-4a36dff338d9-kube-api-access-g7c5d\") pod \"collect-profiles-29396070-rlzxd\" (UID: \"10743313-aad3-47ab-9738-4a36dff338d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396070-rlzxd"
Nov 21 22:30:00 crc kubenswrapper[4727]: I1121 22:30:00.502264 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396070-rlzxd"
Nov 21 22:30:01 crc kubenswrapper[4727]: I1121 22:30:01.024033 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396070-rlzxd"]
Nov 21 22:30:01 crc kubenswrapper[4727]: I1121 22:30:01.082903 4727 scope.go:117] "RemoveContainer" containerID="61995725acb94adc30e73fb6a5eecbe17f74dd1045e05d656b41910a40f6547e"
Nov 21 22:30:01 crc kubenswrapper[4727]: I1121 22:30:01.269481 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396070-rlzxd" event={"ID":"10743313-aad3-47ab-9738-4a36dff338d9","Type":"ContainerStarted","Data":"0d1e9b3738076b513b4514c79546495e5216f45a14079539404166a2a51f03f2"}
Nov 21 22:30:01 crc kubenswrapper[4727]: I1121 22:30:01.269761 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396070-rlzxd" event={"ID":"10743313-aad3-47ab-9738-4a36dff338d9","Type":"ContainerStarted","Data":"aa43b4df3055f8f0b3735eb24842990df21a302f9fd164a5aa0151815b7c14a4"}
Nov 21 22:30:01 crc kubenswrapper[4727]: I1121 22:30:01.293341 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29396070-rlzxd" podStartSLOduration=1.293323996 podStartE2EDuration="1.293323996s" podCreationTimestamp="2025-11-21 22:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 22:30:01.284701209 +0000 UTC m=+8606.470886253" watchObservedRunningTime="2025-11-21 22:30:01.293323996 +0000 UTC m=+8606.479509040"
Nov 21 22:30:02 crc kubenswrapper[4727]: I1121 22:30:02.284863 4727 generic.go:334] "Generic (PLEG): container finished" podID="10743313-aad3-47ab-9738-4a36dff338d9" containerID="0d1e9b3738076b513b4514c79546495e5216f45a14079539404166a2a51f03f2" exitCode=0
Nov 21 22:30:02 crc kubenswrapper[4727]: I1121 22:30:02.284930 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396070-rlzxd" event={"ID":"10743313-aad3-47ab-9738-4a36dff338d9","Type":"ContainerDied","Data":"0d1e9b3738076b513b4514c79546495e5216f45a14079539404166a2a51f03f2"}
Nov 21 22:30:03 crc kubenswrapper[4727]: I1121 22:30:03.723534 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396070-rlzxd"
Nov 21 22:30:03 crc kubenswrapper[4727]: I1121 22:30:03.860943 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/10743313-aad3-47ab-9738-4a36dff338d9-config-volume\") pod \"10743313-aad3-47ab-9738-4a36dff338d9\" (UID: \"10743313-aad3-47ab-9738-4a36dff338d9\") "
Nov 21 22:30:03 crc kubenswrapper[4727]: I1121 22:30:03.861086 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/10743313-aad3-47ab-9738-4a36dff338d9-secret-volume\") pod \"10743313-aad3-47ab-9738-4a36dff338d9\" (UID: \"10743313-aad3-47ab-9738-4a36dff338d9\") "
Nov 21 22:30:03 crc kubenswrapper[4727]: I1121 22:30:03.861332 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7c5d\" (UniqueName: \"kubernetes.io/projected/10743313-aad3-47ab-9738-4a36dff338d9-kube-api-access-g7c5d\") pod \"10743313-aad3-47ab-9738-4a36dff338d9\" (UID: \"10743313-aad3-47ab-9738-4a36dff338d9\") "
Nov 21 22:30:03 crc kubenswrapper[4727]: I1121 22:30:03.861656 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10743313-aad3-47ab-9738-4a36dff338d9-config-volume" (OuterVolumeSpecName: "config-volume") pod "10743313-aad3-47ab-9738-4a36dff338d9" (UID: "10743313-aad3-47ab-9738-4a36dff338d9"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 21 22:30:03 crc kubenswrapper[4727]: I1121 22:30:03.861883 4727 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/10743313-aad3-47ab-9738-4a36dff338d9-config-volume\") on node \"crc\" DevicePath \"\""
Nov 21 22:30:03 crc kubenswrapper[4727]: I1121 22:30:03.867467 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10743313-aad3-47ab-9738-4a36dff338d9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "10743313-aad3-47ab-9738-4a36dff338d9" (UID: "10743313-aad3-47ab-9738-4a36dff338d9"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 22:30:03 crc kubenswrapper[4727]: I1121 22:30:03.870422 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10743313-aad3-47ab-9738-4a36dff338d9-kube-api-access-g7c5d" (OuterVolumeSpecName: "kube-api-access-g7c5d") pod "10743313-aad3-47ab-9738-4a36dff338d9" (UID: "10743313-aad3-47ab-9738-4a36dff338d9"). InnerVolumeSpecName "kube-api-access-g7c5d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 22:30:03 crc kubenswrapper[4727]: I1121 22:30:03.963529 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7c5d\" (UniqueName: \"kubernetes.io/projected/10743313-aad3-47ab-9738-4a36dff338d9-kube-api-access-g7c5d\") on node \"crc\" DevicePath \"\""
Nov 21 22:30:03 crc kubenswrapper[4727]: I1121 22:30:03.963558 4727 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/10743313-aad3-47ab-9738-4a36dff338d9-secret-volume\") on node \"crc\" DevicePath \"\""
Nov 21 22:30:04 crc kubenswrapper[4727]: I1121 22:30:04.317435 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396070-rlzxd" event={"ID":"10743313-aad3-47ab-9738-4a36dff338d9","Type":"ContainerDied","Data":"aa43b4df3055f8f0b3735eb24842990df21a302f9fd164a5aa0151815b7c14a4"}
Nov 21 22:30:04 crc kubenswrapper[4727]: I1121 22:30:04.317481 4727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa43b4df3055f8f0b3735eb24842990df21a302f9fd164a5aa0151815b7c14a4"
Nov 21 22:30:04 crc kubenswrapper[4727]: I1121 22:30:04.317507 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396070-rlzxd"
Nov 21 22:30:04 crc kubenswrapper[4727]: I1121 22:30:04.371448 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396025-ndr7w"]
Nov 21 22:30:04 crc kubenswrapper[4727]: I1121 22:30:04.381016 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396025-ndr7w"]
Nov 21 22:30:05 crc kubenswrapper[4727]: I1121 22:30:05.511251 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e00909b-88bc-4f3b-8028-55c190b740bd" path="/var/lib/kubelet/pods/6e00909b-88bc-4f3b-8028-55c190b740bd/volumes"
Nov 21 22:30:07 crc kubenswrapper[4727]: I1121 22:30:07.499310 4727 scope.go:117] "RemoveContainer" containerID="e5d7807170a8939e14e364896b9ac79e86b93a2c36f5e0710022b8ae4800da7a"
Nov 21 22:30:07 crc kubenswrapper[4727]: E1121 22:30:07.500038 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42"
Nov 21 22:30:20 crc kubenswrapper[4727]: I1121 22:30:20.500331 4727 scope.go:117] "RemoveContainer" containerID="e5d7807170a8939e14e364896b9ac79e86b93a2c36f5e0710022b8ae4800da7a"
Nov 21 22:30:20 crc kubenswrapper[4727]: E1121 22:30:20.501653 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42"
Nov 21 22:30:31 crc kubenswrapper[4727]: I1121 22:30:31.498953 4727 scope.go:117] "RemoveContainer" containerID="e5d7807170a8939e14e364896b9ac79e86b93a2c36f5e0710022b8ae4800da7a"
Nov 21 22:30:31 crc kubenswrapper[4727]: E1121 22:30:31.499797 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42"
Nov 21 22:30:45 crc kubenswrapper[4727]: I1121 22:30:45.498949 4727 scope.go:117] "RemoveContainer" containerID="e5d7807170a8939e14e364896b9ac79e86b93a2c36f5e0710022b8ae4800da7a"
Nov 21 22:30:45 crc kubenswrapper[4727]: E1121 22:30:45.500188 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42"
Nov 21 22:30:58 crc kubenswrapper[4727]: I1121 22:30:58.500167 4727 scope.go:117] "RemoveContainer" containerID="e5d7807170a8939e14e364896b9ac79e86b93a2c36f5e0710022b8ae4800da7a"
Nov 21 22:30:58 crc kubenswrapper[4727]: E1121 22:30:58.501283 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42"
Nov 21 22:31:01 crc kubenswrapper[4727]: I1121 22:31:01.183635 4727 scope.go:117] "RemoveContainer" containerID="9a8909f1097418d9de1086b35de7658a234aad962699abcb221978098370194f"
Nov 21 22:31:11 crc kubenswrapper[4727]: I1121 22:31:11.499546 4727 scope.go:117] "RemoveContainer" containerID="e5d7807170a8939e14e364896b9ac79e86b93a2c36f5e0710022b8ae4800da7a"
Nov 21 22:31:11 crc kubenswrapper[4727]: E1121 22:31:11.500743 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42"
Nov 21 22:31:23 crc kubenswrapper[4727]: I1121 22:31:23.499128 4727 scope.go:117] "RemoveContainer" containerID="e5d7807170a8939e14e364896b9ac79e86b93a2c36f5e0710022b8ae4800da7a"
Nov 21 22:31:23 crc kubenswrapper[4727]: E1121 22:31:23.499902 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42"
Nov 21 22:31:37 crc kubenswrapper[4727]: I1121 22:31:37.500421 4727 scope.go:117] "RemoveContainer" containerID="e5d7807170a8939e14e364896b9ac79e86b93a2c36f5e0710022b8ae4800da7a"
Nov 21 22:31:37 crc kubenswrapper[4727]: E1121 22:31:37.501863 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42"
Nov 21 22:31:46 crc kubenswrapper[4727]: I1121 22:31:46.516122 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tzv94"]
Nov 21 22:31:46 crc kubenswrapper[4727]: E1121 22:31:46.517221 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10743313-aad3-47ab-9738-4a36dff338d9" containerName="collect-profiles"
Nov 21 22:31:46 crc kubenswrapper[4727]: I1121 22:31:46.517266 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="10743313-aad3-47ab-9738-4a36dff338d9" containerName="collect-profiles"
Nov 21 22:31:46 crc kubenswrapper[4727]: I1121 22:31:46.517598 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="10743313-aad3-47ab-9738-4a36dff338d9" containerName="collect-profiles"
Nov 21 22:31:46 crc kubenswrapper[4727]: I1121 22:31:46.519703 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tzv94"
Nov 21 22:31:46 crc kubenswrapper[4727]: I1121 22:31:46.529015 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tzv94"]
Nov 21 22:31:46 crc kubenswrapper[4727]: I1121 22:31:46.619588 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70794828-8567-49e3-b42c-d90ef60a1386-utilities\") pod \"redhat-operators-tzv94\" (UID: \"70794828-8567-49e3-b42c-d90ef60a1386\") " pod="openshift-marketplace/redhat-operators-tzv94"
Nov 21 22:31:46 crc kubenswrapper[4727]: I1121 22:31:46.620935 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70794828-8567-49e3-b42c-d90ef60a1386-catalog-content\") pod \"redhat-operators-tzv94\" (UID: \"70794828-8567-49e3-b42c-d90ef60a1386\") " pod="openshift-marketplace/redhat-operators-tzv94"
Nov 21 22:31:46 crc kubenswrapper[4727]: I1121 22:31:46.621098 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfnlm\" (UniqueName: \"kubernetes.io/projected/70794828-8567-49e3-b42c-d90ef60a1386-kube-api-access-zfnlm\") pod \"redhat-operators-tzv94\" (UID: \"70794828-8567-49e3-b42c-d90ef60a1386\") " pod="openshift-marketplace/redhat-operators-tzv94"
Nov 21 22:31:46 crc kubenswrapper[4727]: I1121 22:31:46.723668 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70794828-8567-49e3-b42c-d90ef60a1386-utilities\") pod \"redhat-operators-tzv94\" (UID: \"70794828-8567-49e3-b42c-d90ef60a1386\") " pod="openshift-marketplace/redhat-operators-tzv94"
Nov 21 22:31:46 crc kubenswrapper[4727]: I1121 22:31:46.724023 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70794828-8567-49e3-b42c-d90ef60a1386-catalog-content\") pod \"redhat-operators-tzv94\" (UID: \"70794828-8567-49e3-b42c-d90ef60a1386\") " pod="openshift-marketplace/redhat-operators-tzv94"
Nov 21 22:31:46 crc kubenswrapper[4727]: I1121 22:31:46.724060 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfnlm\" (UniqueName: \"kubernetes.io/projected/70794828-8567-49e3-b42c-d90ef60a1386-kube-api-access-zfnlm\") pod \"redhat-operators-tzv94\" (UID: \"70794828-8567-49e3-b42c-d90ef60a1386\") " pod="openshift-marketplace/redhat-operators-tzv94"
Nov 21 22:31:46 crc kubenswrapper[4727]: I1121 22:31:46.724178 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70794828-8567-49e3-b42c-d90ef60a1386-utilities\") pod \"redhat-operators-tzv94\" (UID: \"70794828-8567-49e3-b42c-d90ef60a1386\") " pod="openshift-marketplace/redhat-operators-tzv94"
Nov 21 22:31:46 crc kubenswrapper[4727]: I1121 22:31:46.724560 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70794828-8567-49e3-b42c-d90ef60a1386-catalog-content\") pod \"redhat-operators-tzv94\" (UID: \"70794828-8567-49e3-b42c-d90ef60a1386\") " pod="openshift-marketplace/redhat-operators-tzv94"
Nov 21 22:31:46 crc kubenswrapper[4727]: I1121 22:31:46.751373 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfnlm\" (UniqueName: \"kubernetes.io/projected/70794828-8567-49e3-b42c-d90ef60a1386-kube-api-access-zfnlm\") pod \"redhat-operators-tzv94\" (UID: \"70794828-8567-49e3-b42c-d90ef60a1386\") " pod="openshift-marketplace/redhat-operators-tzv94"
Nov 21 22:31:46 crc kubenswrapper[4727]: I1121 22:31:46.854525 4727 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tzv94"
Nov 21 22:31:47 crc kubenswrapper[4727]: I1121 22:31:47.334522 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tzv94"]
Nov 21 22:31:47 crc kubenswrapper[4727]: I1121 22:31:47.511941 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tzv94" event={"ID":"70794828-8567-49e3-b42c-d90ef60a1386","Type":"ContainerStarted","Data":"56cf5b3b51788ff64ca2889813be251a4fbadc2aecd0cc495f87af017f18bcf9"}
Nov 21 22:31:48 crc kubenswrapper[4727]: I1121 22:31:48.530693 4727 generic.go:334] "Generic (PLEG): container finished" podID="70794828-8567-49e3-b42c-d90ef60a1386" containerID="bfcc0d0bc8ce3e50b2603e61bd3bdc5c9aaf8d7aa328b722ae7a458ec761f4ea" exitCode=0
Nov 21 22:31:48 crc kubenswrapper[4727]: I1121 22:31:48.530792 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tzv94" event={"ID":"70794828-8567-49e3-b42c-d90ef60a1386","Type":"ContainerDied","Data":"bfcc0d0bc8ce3e50b2603e61bd3bdc5c9aaf8d7aa328b722ae7a458ec761f4ea"}
Nov 21 22:31:48 crc kubenswrapper[4727]: I1121 22:31:48.538162 4727 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 21 22:31:50 crc kubenswrapper[4727]: I1121 22:31:50.558893 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tzv94" event={"ID":"70794828-8567-49e3-b42c-d90ef60a1386","Type":"ContainerStarted","Data":"3c9ef4b2127dcdb0aba316dc4cfdd4e0b63db892d6d92d902709828d0935cc30"}
Nov 21 22:31:52 crc kubenswrapper[4727]: I1121 22:31:52.499851 4727 scope.go:117] "RemoveContainer" containerID="e5d7807170a8939e14e364896b9ac79e86b93a2c36f5e0710022b8ae4800da7a"
Nov 21 22:31:52 crc kubenswrapper[4727]: E1121 22:31:52.500682 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42"
Nov 21 22:31:54 crc kubenswrapper[4727]: I1121 22:31:54.618833 4727 generic.go:334] "Generic (PLEG): container finished" podID="70794828-8567-49e3-b42c-d90ef60a1386" containerID="3c9ef4b2127dcdb0aba316dc4cfdd4e0b63db892d6d92d902709828d0935cc30" exitCode=0
Nov 21 22:31:54 crc kubenswrapper[4727]: I1121 22:31:54.619097 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tzv94" event={"ID":"70794828-8567-49e3-b42c-d90ef60a1386","Type":"ContainerDied","Data":"3c9ef4b2127dcdb0aba316dc4cfdd4e0b63db892d6d92d902709828d0935cc30"}
Nov 21 22:31:55 crc kubenswrapper[4727]: I1121 22:31:55.637680 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tzv94" event={"ID":"70794828-8567-49e3-b42c-d90ef60a1386","Type":"ContainerStarted","Data":"d6035cddcf8384a3164fac38c635cdb0dd29bb6b5b64f88feb5f27114107187b"}
Nov 21 22:31:55 crc kubenswrapper[4727]: I1121 22:31:55.668613 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tzv94" podStartSLOduration=3.146068407 podStartE2EDuration="9.66858821s" podCreationTimestamp="2025-11-21 22:31:46 +0000 UTC" firstStartedPulling="2025-11-21 22:31:48.536888711 +0000 UTC m=+8713.723073775" lastFinishedPulling="2025-11-21 22:31:55.059408494 +0000 UTC m=+8720.245593578" observedRunningTime="2025-11-21 22:31:55.663044746 +0000 UTC m=+8720.849229810" watchObservedRunningTime="2025-11-21 22:31:55.66858821 +0000 UTC m=+8720.854773254"
Nov 21 22:31:56 crc kubenswrapper[4727]: I1121 22:31:56.855580 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tzv94"
Nov 21 22:31:56 crc kubenswrapper[4727]: I1121 22:31:56.856676 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tzv94"
Nov 21 22:31:57 crc kubenswrapper[4727]: I1121 22:31:57.914649 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tzv94" podUID="70794828-8567-49e3-b42c-d90ef60a1386" containerName="registry-server" probeResult="failure" output=<
Nov 21 22:31:57 crc kubenswrapper[4727]: timeout: failed to connect service ":50051" within 1s
Nov 21 22:31:57 crc kubenswrapper[4727]: >
Nov 21 22:32:05 crc kubenswrapper[4727]: I1121 22:32:05.515052 4727 scope.go:117] "RemoveContainer" containerID="e5d7807170a8939e14e364896b9ac79e86b93a2c36f5e0710022b8ae4800da7a"
Nov 21 22:32:05 crc kubenswrapper[4727]: E1121 22:32:05.516440 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42"
Nov 21 22:32:07 crc kubenswrapper[4727]: I1121 22:32:07.926810 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tzv94" podUID="70794828-8567-49e3-b42c-d90ef60a1386" containerName="registry-server" probeResult="failure" output=<
Nov 21 22:32:07 crc kubenswrapper[4727]: timeout: failed to connect service ":50051" within 1s
Nov 21 22:32:07 crc kubenswrapper[4727]: >
Nov 21 22:32:16 crc kubenswrapper[4727]: I1121 22:32:16.955593 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tzv94"
Nov 21 22:32:17 crc kubenswrapper[4727]: I1121 22:32:17.026688 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tzv94"
Nov 21 22:32:17 crc kubenswrapper[4727]: I1121 22:32:17.499368 4727 scope.go:117] "RemoveContainer" containerID="e5d7807170a8939e14e364896b9ac79e86b93a2c36f5e0710022b8ae4800da7a"
Nov 21 22:32:17 crc kubenswrapper[4727]: E1121 22:32:17.500220 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42"
Nov 21 22:32:17 crc kubenswrapper[4727]: I1121 22:32:17.718466 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tzv94"]
Nov 21 22:32:18 crc kubenswrapper[4727]: I1121 22:32:18.979290 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tzv94" podUID="70794828-8567-49e3-b42c-d90ef60a1386" containerName="registry-server" containerID="cri-o://d6035cddcf8384a3164fac38c635cdb0dd29bb6b5b64f88feb5f27114107187b" gracePeriod=2
Nov 21 22:32:19 crc kubenswrapper[4727]: I1121 22:32:19.550338 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tzv94"
Nov 21 22:32:19 crc kubenswrapper[4727]: I1121 22:32:19.661810 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70794828-8567-49e3-b42c-d90ef60a1386-catalog-content\") pod \"70794828-8567-49e3-b42c-d90ef60a1386\" (UID: \"70794828-8567-49e3-b42c-d90ef60a1386\") "
Nov 21 22:32:19 crc kubenswrapper[4727]: I1121 22:32:19.661942 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zfnlm\" (UniqueName: \"kubernetes.io/projected/70794828-8567-49e3-b42c-d90ef60a1386-kube-api-access-zfnlm\") pod \"70794828-8567-49e3-b42c-d90ef60a1386\" (UID: \"70794828-8567-49e3-b42c-d90ef60a1386\") "
Nov 21 22:32:19 crc kubenswrapper[4727]: I1121 22:32:19.662050 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70794828-8567-49e3-b42c-d90ef60a1386-utilities\") pod \"70794828-8567-49e3-b42c-d90ef60a1386\" (UID: \"70794828-8567-49e3-b42c-d90ef60a1386\") "
Nov 21 22:32:19 crc kubenswrapper[4727]: I1121 22:32:19.671481 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70794828-8567-49e3-b42c-d90ef60a1386-utilities" (OuterVolumeSpecName: "utilities") pod "70794828-8567-49e3-b42c-d90ef60a1386" (UID: "70794828-8567-49e3-b42c-d90ef60a1386"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 21 22:32:19 crc kubenswrapper[4727]: I1121 22:32:19.706217 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70794828-8567-49e3-b42c-d90ef60a1386-kube-api-access-zfnlm" (OuterVolumeSpecName: "kube-api-access-zfnlm") pod "70794828-8567-49e3-b42c-d90ef60a1386" (UID: "70794828-8567-49e3-b42c-d90ef60a1386"). InnerVolumeSpecName "kube-api-access-zfnlm".
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 22:32:19 crc kubenswrapper[4727]: I1121 22:32:19.764836 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zfnlm\" (UniqueName: \"kubernetes.io/projected/70794828-8567-49e3-b42c-d90ef60a1386-kube-api-access-zfnlm\") on node \"crc\" DevicePath \"\"" Nov 21 22:32:19 crc kubenswrapper[4727]: I1121 22:32:19.764878 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70794828-8567-49e3-b42c-d90ef60a1386-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 22:32:19 crc kubenswrapper[4727]: I1121 22:32:19.777137 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70794828-8567-49e3-b42c-d90ef60a1386-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "70794828-8567-49e3-b42c-d90ef60a1386" (UID: "70794828-8567-49e3-b42c-d90ef60a1386"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 22:32:19 crc kubenswrapper[4727]: I1121 22:32:19.867018 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70794828-8567-49e3-b42c-d90ef60a1386-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 22:32:19 crc kubenswrapper[4727]: I1121 22:32:19.995724 4727 generic.go:334] "Generic (PLEG): container finished" podID="70794828-8567-49e3-b42c-d90ef60a1386" containerID="d6035cddcf8384a3164fac38c635cdb0dd29bb6b5b64f88feb5f27114107187b" exitCode=0 Nov 21 22:32:19 crc kubenswrapper[4727]: I1121 22:32:19.995783 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tzv94" event={"ID":"70794828-8567-49e3-b42c-d90ef60a1386","Type":"ContainerDied","Data":"d6035cddcf8384a3164fac38c635cdb0dd29bb6b5b64f88feb5f27114107187b"} Nov 21 22:32:19 crc kubenswrapper[4727]: I1121 22:32:19.995835 4727 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-tzv94" event={"ID":"70794828-8567-49e3-b42c-d90ef60a1386","Type":"ContainerDied","Data":"56cf5b3b51788ff64ca2889813be251a4fbadc2aecd0cc495f87af017f18bcf9"} Nov 21 22:32:19 crc kubenswrapper[4727]: I1121 22:32:19.995865 4727 scope.go:117] "RemoveContainer" containerID="d6035cddcf8384a3164fac38c635cdb0dd29bb6b5b64f88feb5f27114107187b" Nov 21 22:32:19 crc kubenswrapper[4727]: I1121 22:32:19.995790 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tzv94" Nov 21 22:32:20 crc kubenswrapper[4727]: I1121 22:32:20.030984 4727 scope.go:117] "RemoveContainer" containerID="3c9ef4b2127dcdb0aba316dc4cfdd4e0b63db892d6d92d902709828d0935cc30" Nov 21 22:32:20 crc kubenswrapper[4727]: I1121 22:32:20.041707 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tzv94"] Nov 21 22:32:20 crc kubenswrapper[4727]: I1121 22:32:20.049978 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tzv94"] Nov 21 22:32:20 crc kubenswrapper[4727]: I1121 22:32:20.072012 4727 scope.go:117] "RemoveContainer" containerID="bfcc0d0bc8ce3e50b2603e61bd3bdc5c9aaf8d7aa328b722ae7a458ec761f4ea" Nov 21 22:32:20 crc kubenswrapper[4727]: I1121 22:32:20.137573 4727 scope.go:117] "RemoveContainer" containerID="d6035cddcf8384a3164fac38c635cdb0dd29bb6b5b64f88feb5f27114107187b" Nov 21 22:32:20 crc kubenswrapper[4727]: E1121 22:32:20.138065 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6035cddcf8384a3164fac38c635cdb0dd29bb6b5b64f88feb5f27114107187b\": container with ID starting with d6035cddcf8384a3164fac38c635cdb0dd29bb6b5b64f88feb5f27114107187b not found: ID does not exist" containerID="d6035cddcf8384a3164fac38c635cdb0dd29bb6b5b64f88feb5f27114107187b" Nov 21 22:32:20 crc kubenswrapper[4727]: I1121 22:32:20.138130 4727 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6035cddcf8384a3164fac38c635cdb0dd29bb6b5b64f88feb5f27114107187b"} err="failed to get container status \"d6035cddcf8384a3164fac38c635cdb0dd29bb6b5b64f88feb5f27114107187b\": rpc error: code = NotFound desc = could not find container \"d6035cddcf8384a3164fac38c635cdb0dd29bb6b5b64f88feb5f27114107187b\": container with ID starting with d6035cddcf8384a3164fac38c635cdb0dd29bb6b5b64f88feb5f27114107187b not found: ID does not exist" Nov 21 22:32:20 crc kubenswrapper[4727]: I1121 22:32:20.138170 4727 scope.go:117] "RemoveContainer" containerID="3c9ef4b2127dcdb0aba316dc4cfdd4e0b63db892d6d92d902709828d0935cc30" Nov 21 22:32:20 crc kubenswrapper[4727]: E1121 22:32:20.138597 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c9ef4b2127dcdb0aba316dc4cfdd4e0b63db892d6d92d902709828d0935cc30\": container with ID starting with 3c9ef4b2127dcdb0aba316dc4cfdd4e0b63db892d6d92d902709828d0935cc30 not found: ID does not exist" containerID="3c9ef4b2127dcdb0aba316dc4cfdd4e0b63db892d6d92d902709828d0935cc30" Nov 21 22:32:20 crc kubenswrapper[4727]: I1121 22:32:20.138638 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c9ef4b2127dcdb0aba316dc4cfdd4e0b63db892d6d92d902709828d0935cc30"} err="failed to get container status \"3c9ef4b2127dcdb0aba316dc4cfdd4e0b63db892d6d92d902709828d0935cc30\": rpc error: code = NotFound desc = could not find container \"3c9ef4b2127dcdb0aba316dc4cfdd4e0b63db892d6d92d902709828d0935cc30\": container with ID starting with 3c9ef4b2127dcdb0aba316dc4cfdd4e0b63db892d6d92d902709828d0935cc30 not found: ID does not exist" Nov 21 22:32:20 crc kubenswrapper[4727]: I1121 22:32:20.138667 4727 scope.go:117] "RemoveContainer" containerID="bfcc0d0bc8ce3e50b2603e61bd3bdc5c9aaf8d7aa328b722ae7a458ec761f4ea" Nov 21 22:32:20 crc kubenswrapper[4727]: E1121 
22:32:20.138953 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfcc0d0bc8ce3e50b2603e61bd3bdc5c9aaf8d7aa328b722ae7a458ec761f4ea\": container with ID starting with bfcc0d0bc8ce3e50b2603e61bd3bdc5c9aaf8d7aa328b722ae7a458ec761f4ea not found: ID does not exist" containerID="bfcc0d0bc8ce3e50b2603e61bd3bdc5c9aaf8d7aa328b722ae7a458ec761f4ea" Nov 21 22:32:20 crc kubenswrapper[4727]: I1121 22:32:20.139017 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfcc0d0bc8ce3e50b2603e61bd3bdc5c9aaf8d7aa328b722ae7a458ec761f4ea"} err="failed to get container status \"bfcc0d0bc8ce3e50b2603e61bd3bdc5c9aaf8d7aa328b722ae7a458ec761f4ea\": rpc error: code = NotFound desc = could not find container \"bfcc0d0bc8ce3e50b2603e61bd3bdc5c9aaf8d7aa328b722ae7a458ec761f4ea\": container with ID starting with bfcc0d0bc8ce3e50b2603e61bd3bdc5c9aaf8d7aa328b722ae7a458ec761f4ea not found: ID does not exist" Nov 21 22:32:21 crc kubenswrapper[4727]: I1121 22:32:21.521829 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70794828-8567-49e3-b42c-d90ef60a1386" path="/var/lib/kubelet/pods/70794828-8567-49e3-b42c-d90ef60a1386/volumes" Nov 21 22:32:32 crc kubenswrapper[4727]: I1121 22:32:32.499573 4727 scope.go:117] "RemoveContainer" containerID="e5d7807170a8939e14e364896b9ac79e86b93a2c36f5e0710022b8ae4800da7a" Nov 21 22:32:32 crc kubenswrapper[4727]: E1121 22:32:32.500433 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 22:32:47 crc kubenswrapper[4727]: I1121 22:32:47.500082 
4727 scope.go:117] "RemoveContainer" containerID="e5d7807170a8939e14e364896b9ac79e86b93a2c36f5e0710022b8ae4800da7a" Nov 21 22:32:47 crc kubenswrapper[4727]: E1121 22:32:47.500938 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 22:33:02 crc kubenswrapper[4727]: I1121 22:33:02.499708 4727 scope.go:117] "RemoveContainer" containerID="e5d7807170a8939e14e364896b9ac79e86b93a2c36f5e0710022b8ae4800da7a" Nov 21 22:33:02 crc kubenswrapper[4727]: E1121 22:33:02.500480 4727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5k2kk_openshift-machine-config-operator(b58aef8f-f223-47d8-a2e6-4a80aeeeec42)\"" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" podUID="b58aef8f-f223-47d8-a2e6-4a80aeeeec42" Nov 21 22:33:15 crc kubenswrapper[4727]: I1121 22:33:15.509760 4727 scope.go:117] "RemoveContainer" containerID="e5d7807170a8939e14e364896b9ac79e86b93a2c36f5e0710022b8ae4800da7a" Nov 21 22:33:16 crc kubenswrapper[4727]: I1121 22:33:16.766929 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5k2kk" event={"ID":"b58aef8f-f223-47d8-a2e6-4a80aeeeec42","Type":"ContainerStarted","Data":"f1420e7640a5d3e51dcbb46d9631504d51526dc50673d566fbfce49003138cee"} Nov 21 22:33:35 crc kubenswrapper[4727]: I1121 22:33:35.257843 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-552f6"] Nov 21 22:33:35 crc 
kubenswrapper[4727]: E1121 22:33:35.259248 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70794828-8567-49e3-b42c-d90ef60a1386" containerName="registry-server" Nov 21 22:33:35 crc kubenswrapper[4727]: I1121 22:33:35.259269 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="70794828-8567-49e3-b42c-d90ef60a1386" containerName="registry-server" Nov 21 22:33:35 crc kubenswrapper[4727]: E1121 22:33:35.259313 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70794828-8567-49e3-b42c-d90ef60a1386" containerName="extract-utilities" Nov 21 22:33:35 crc kubenswrapper[4727]: I1121 22:33:35.259323 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="70794828-8567-49e3-b42c-d90ef60a1386" containerName="extract-utilities" Nov 21 22:33:35 crc kubenswrapper[4727]: E1121 22:33:35.259362 4727 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70794828-8567-49e3-b42c-d90ef60a1386" containerName="extract-content" Nov 21 22:33:35 crc kubenswrapper[4727]: I1121 22:33:35.259372 4727 state_mem.go:107] "Deleted CPUSet assignment" podUID="70794828-8567-49e3-b42c-d90ef60a1386" containerName="extract-content" Nov 21 22:33:35 crc kubenswrapper[4727]: I1121 22:33:35.259684 4727 memory_manager.go:354] "RemoveStaleState removing state" podUID="70794828-8567-49e3-b42c-d90ef60a1386" containerName="registry-server" Nov 21 22:33:35 crc kubenswrapper[4727]: I1121 22:33:35.262467 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-552f6" Nov 21 22:33:35 crc kubenswrapper[4727]: I1121 22:33:35.273983 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-552f6"] Nov 21 22:33:35 crc kubenswrapper[4727]: I1121 22:33:35.405350 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9a7578b-101d-4890-b08c-99579eca249c-catalog-content\") pod \"certified-operators-552f6\" (UID: \"b9a7578b-101d-4890-b08c-99579eca249c\") " pod="openshift-marketplace/certified-operators-552f6" Nov 21 22:33:35 crc kubenswrapper[4727]: I1121 22:33:35.405837 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvkl6\" (UniqueName: \"kubernetes.io/projected/b9a7578b-101d-4890-b08c-99579eca249c-kube-api-access-pvkl6\") pod \"certified-operators-552f6\" (UID: \"b9a7578b-101d-4890-b08c-99579eca249c\") " pod="openshift-marketplace/certified-operators-552f6" Nov 21 22:33:35 crc kubenswrapper[4727]: I1121 22:33:35.405881 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9a7578b-101d-4890-b08c-99579eca249c-utilities\") pod \"certified-operators-552f6\" (UID: \"b9a7578b-101d-4890-b08c-99579eca249c\") " pod="openshift-marketplace/certified-operators-552f6" Nov 21 22:33:35 crc kubenswrapper[4727]: I1121 22:33:35.508246 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9a7578b-101d-4890-b08c-99579eca249c-catalog-content\") pod \"certified-operators-552f6\" (UID: \"b9a7578b-101d-4890-b08c-99579eca249c\") " pod="openshift-marketplace/certified-operators-552f6" Nov 21 22:33:35 crc kubenswrapper[4727]: I1121 22:33:35.508299 4727 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-pvkl6\" (UniqueName: \"kubernetes.io/projected/b9a7578b-101d-4890-b08c-99579eca249c-kube-api-access-pvkl6\") pod \"certified-operators-552f6\" (UID: \"b9a7578b-101d-4890-b08c-99579eca249c\") " pod="openshift-marketplace/certified-operators-552f6" Nov 21 22:33:35 crc kubenswrapper[4727]: I1121 22:33:35.508340 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9a7578b-101d-4890-b08c-99579eca249c-utilities\") pod \"certified-operators-552f6\" (UID: \"b9a7578b-101d-4890-b08c-99579eca249c\") " pod="openshift-marketplace/certified-operators-552f6" Nov 21 22:33:35 crc kubenswrapper[4727]: I1121 22:33:35.509191 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9a7578b-101d-4890-b08c-99579eca249c-utilities\") pod \"certified-operators-552f6\" (UID: \"b9a7578b-101d-4890-b08c-99579eca249c\") " pod="openshift-marketplace/certified-operators-552f6" Nov 21 22:33:35 crc kubenswrapper[4727]: I1121 22:33:35.509187 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9a7578b-101d-4890-b08c-99579eca249c-catalog-content\") pod \"certified-operators-552f6\" (UID: \"b9a7578b-101d-4890-b08c-99579eca249c\") " pod="openshift-marketplace/certified-operators-552f6" Nov 21 22:33:35 crc kubenswrapper[4727]: I1121 22:33:35.531015 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvkl6\" (UniqueName: \"kubernetes.io/projected/b9a7578b-101d-4890-b08c-99579eca249c-kube-api-access-pvkl6\") pod \"certified-operators-552f6\" (UID: \"b9a7578b-101d-4890-b08c-99579eca249c\") " pod="openshift-marketplace/certified-operators-552f6" Nov 21 22:33:35 crc kubenswrapper[4727]: I1121 22:33:35.596535 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-552f6" Nov 21 22:33:36 crc kubenswrapper[4727]: I1121 22:33:36.131483 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-552f6"] Nov 21 22:33:36 crc kubenswrapper[4727]: W1121 22:33:36.134803 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9a7578b_101d_4890_b08c_99579eca249c.slice/crio-2f5e26dc81df60dd433e34e9fb9666a4f8e5f2d058a2fd9789bc882729df1708 WatchSource:0}: Error finding container 2f5e26dc81df60dd433e34e9fb9666a4f8e5f2d058a2fd9789bc882729df1708: Status 404 returned error can't find the container with id 2f5e26dc81df60dd433e34e9fb9666a4f8e5f2d058a2fd9789bc882729df1708 Nov 21 22:33:37 crc kubenswrapper[4727]: I1121 22:33:37.054566 4727 generic.go:334] "Generic (PLEG): container finished" podID="b9a7578b-101d-4890-b08c-99579eca249c" containerID="496ef1f019d49db02331d0072d9bdf58b7d0229cdb714af50f3c75792aef32e4" exitCode=0 Nov 21 22:33:37 crc kubenswrapper[4727]: I1121 22:33:37.055244 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-552f6" event={"ID":"b9a7578b-101d-4890-b08c-99579eca249c","Type":"ContainerDied","Data":"496ef1f019d49db02331d0072d9bdf58b7d0229cdb714af50f3c75792aef32e4"} Nov 21 22:33:37 crc kubenswrapper[4727]: I1121 22:33:37.055279 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-552f6" event={"ID":"b9a7578b-101d-4890-b08c-99579eca249c","Type":"ContainerStarted","Data":"2f5e26dc81df60dd433e34e9fb9666a4f8e5f2d058a2fd9789bc882729df1708"} Nov 21 22:33:37 crc kubenswrapper[4727]: I1121 22:33:37.072013 4727 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-grs9h"] Nov 21 22:33:37 crc kubenswrapper[4727]: I1121 22:33:37.077875 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-grs9h" Nov 21 22:33:37 crc kubenswrapper[4727]: I1121 22:33:37.110205 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-grs9h"] Nov 21 22:33:37 crc kubenswrapper[4727]: I1121 22:33:37.152187 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w5kv\" (UniqueName: \"kubernetes.io/projected/273e7746-129e-4071-9331-dacf3c8f53dd-kube-api-access-9w5kv\") pod \"redhat-marketplace-grs9h\" (UID: \"273e7746-129e-4071-9331-dacf3c8f53dd\") " pod="openshift-marketplace/redhat-marketplace-grs9h" Nov 21 22:33:37 crc kubenswrapper[4727]: I1121 22:33:37.152843 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/273e7746-129e-4071-9331-dacf3c8f53dd-catalog-content\") pod \"redhat-marketplace-grs9h\" (UID: \"273e7746-129e-4071-9331-dacf3c8f53dd\") " pod="openshift-marketplace/redhat-marketplace-grs9h" Nov 21 22:33:37 crc kubenswrapper[4727]: I1121 22:33:37.153101 4727 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/273e7746-129e-4071-9331-dacf3c8f53dd-utilities\") pod \"redhat-marketplace-grs9h\" (UID: \"273e7746-129e-4071-9331-dacf3c8f53dd\") " pod="openshift-marketplace/redhat-marketplace-grs9h" Nov 21 22:33:37 crc kubenswrapper[4727]: I1121 22:33:37.255681 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/273e7746-129e-4071-9331-dacf3c8f53dd-utilities\") pod \"redhat-marketplace-grs9h\" (UID: \"273e7746-129e-4071-9331-dacf3c8f53dd\") " pod="openshift-marketplace/redhat-marketplace-grs9h" Nov 21 22:33:37 crc kubenswrapper[4727]: I1121 22:33:37.256103 4727 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-9w5kv\" (UniqueName: \"kubernetes.io/projected/273e7746-129e-4071-9331-dacf3c8f53dd-kube-api-access-9w5kv\") pod \"redhat-marketplace-grs9h\" (UID: \"273e7746-129e-4071-9331-dacf3c8f53dd\") " pod="openshift-marketplace/redhat-marketplace-grs9h" Nov 21 22:33:37 crc kubenswrapper[4727]: I1121 22:33:37.256222 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/273e7746-129e-4071-9331-dacf3c8f53dd-utilities\") pod \"redhat-marketplace-grs9h\" (UID: \"273e7746-129e-4071-9331-dacf3c8f53dd\") " pod="openshift-marketplace/redhat-marketplace-grs9h" Nov 21 22:33:37 crc kubenswrapper[4727]: I1121 22:33:37.256436 4727 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/273e7746-129e-4071-9331-dacf3c8f53dd-catalog-content\") pod \"redhat-marketplace-grs9h\" (UID: \"273e7746-129e-4071-9331-dacf3c8f53dd\") " pod="openshift-marketplace/redhat-marketplace-grs9h" Nov 21 22:33:37 crc kubenswrapper[4727]: I1121 22:33:37.256714 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/273e7746-129e-4071-9331-dacf3c8f53dd-catalog-content\") pod \"redhat-marketplace-grs9h\" (UID: \"273e7746-129e-4071-9331-dacf3c8f53dd\") " pod="openshift-marketplace/redhat-marketplace-grs9h" Nov 21 22:33:37 crc kubenswrapper[4727]: I1121 22:33:37.284122 4727 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9w5kv\" (UniqueName: \"kubernetes.io/projected/273e7746-129e-4071-9331-dacf3c8f53dd-kube-api-access-9w5kv\") pod \"redhat-marketplace-grs9h\" (UID: \"273e7746-129e-4071-9331-dacf3c8f53dd\") " pod="openshift-marketplace/redhat-marketplace-grs9h" Nov 21 22:33:37 crc kubenswrapper[4727]: I1121 22:33:37.401457 4727 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-grs9h" Nov 21 22:33:37 crc kubenswrapper[4727]: I1121 22:33:37.886238 4727 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-grs9h"] Nov 21 22:33:37 crc kubenswrapper[4727]: W1121 22:33:37.893187 4727 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod273e7746_129e_4071_9331_dacf3c8f53dd.slice/crio-410f58654c4f8191fc437001c71a51dbd9ab8229f833ce7f9a3818db90035631 WatchSource:0}: Error finding container 410f58654c4f8191fc437001c71a51dbd9ab8229f833ce7f9a3818db90035631: Status 404 returned error can't find the container with id 410f58654c4f8191fc437001c71a51dbd9ab8229f833ce7f9a3818db90035631 Nov 21 22:33:38 crc kubenswrapper[4727]: I1121 22:33:38.070911 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-grs9h" event={"ID":"273e7746-129e-4071-9331-dacf3c8f53dd","Type":"ContainerStarted","Data":"081ef5b1bd3174b4f556596becaea7acd197dedda28fd1d5c388802e236aed91"} Nov 21 22:33:38 crc kubenswrapper[4727]: I1121 22:33:38.070994 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-grs9h" event={"ID":"273e7746-129e-4071-9331-dacf3c8f53dd","Type":"ContainerStarted","Data":"410f58654c4f8191fc437001c71a51dbd9ab8229f833ce7f9a3818db90035631"} Nov 21 22:33:38 crc kubenswrapper[4727]: I1121 22:33:38.074352 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-552f6" event={"ID":"b9a7578b-101d-4890-b08c-99579eca249c","Type":"ContainerStarted","Data":"a89095d2e175c329cf8bedc95cacd55de3b18ace639e09a0e0f4ab3ca2267749"} Nov 21 22:33:39 crc kubenswrapper[4727]: I1121 22:33:39.096231 4727 generic.go:334] "Generic (PLEG): container finished" podID="273e7746-129e-4071-9331-dacf3c8f53dd" 
containerID="081ef5b1bd3174b4f556596becaea7acd197dedda28fd1d5c388802e236aed91" exitCode=0 Nov 21 22:33:39 crc kubenswrapper[4727]: I1121 22:33:39.096565 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-grs9h" event={"ID":"273e7746-129e-4071-9331-dacf3c8f53dd","Type":"ContainerDied","Data":"081ef5b1bd3174b4f556596becaea7acd197dedda28fd1d5c388802e236aed91"} Nov 21 22:33:40 crc kubenswrapper[4727]: I1121 22:33:40.121619 4727 generic.go:334] "Generic (PLEG): container finished" podID="b9a7578b-101d-4890-b08c-99579eca249c" containerID="a89095d2e175c329cf8bedc95cacd55de3b18ace639e09a0e0f4ab3ca2267749" exitCode=0 Nov 21 22:33:40 crc kubenswrapper[4727]: I1121 22:33:40.121610 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-552f6" event={"ID":"b9a7578b-101d-4890-b08c-99579eca249c","Type":"ContainerDied","Data":"a89095d2e175c329cf8bedc95cacd55de3b18ace639e09a0e0f4ab3ca2267749"} Nov 21 22:33:41 crc kubenswrapper[4727]: I1121 22:33:41.137828 4727 generic.go:334] "Generic (PLEG): container finished" podID="273e7746-129e-4071-9331-dacf3c8f53dd" containerID="9c1353853a9ae7d7eda1dadd73f13ce8577dfefe9fc136c076d7d55b2b17ec32" exitCode=0 Nov 21 22:33:41 crc kubenswrapper[4727]: I1121 22:33:41.137924 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-grs9h" event={"ID":"273e7746-129e-4071-9331-dacf3c8f53dd","Type":"ContainerDied","Data":"9c1353853a9ae7d7eda1dadd73f13ce8577dfefe9fc136c076d7d55b2b17ec32"} Nov 21 22:33:42 crc kubenswrapper[4727]: I1121 22:33:42.152326 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-grs9h" event={"ID":"273e7746-129e-4071-9331-dacf3c8f53dd","Type":"ContainerStarted","Data":"18159b0c67624190725003292086e489e032f5c0685ae7fd5f94d86520566835"} Nov 21 22:33:42 crc kubenswrapper[4727]: I1121 22:33:42.154781 4727 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/certified-operators-552f6" event={"ID":"b9a7578b-101d-4890-b08c-99579eca249c","Type":"ContainerStarted","Data":"72537c1b8dbfd515215a8d15f0ef038450916e86021097c5fea6fe2f0222757f"}
Nov 21 22:33:42 crc kubenswrapper[4727]: I1121 22:33:42.182997 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-grs9h" podStartSLOduration=2.7583686050000003 podStartE2EDuration="5.182979556s" podCreationTimestamp="2025-11-21 22:33:37 +0000 UTC" firstStartedPulling="2025-11-21 22:33:39.09994241 +0000 UTC m=+8824.286127454" lastFinishedPulling="2025-11-21 22:33:41.524553351 +0000 UTC m=+8826.710738405" observedRunningTime="2025-11-21 22:33:42.169037671 +0000 UTC m=+8827.355222715" watchObservedRunningTime="2025-11-21 22:33:42.182979556 +0000 UTC m=+8827.369164600"
Nov 21 22:33:42 crc kubenswrapper[4727]: I1121 22:33:42.193026 4727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-552f6" podStartSLOduration=3.287255429 podStartE2EDuration="7.193008257s" podCreationTimestamp="2025-11-21 22:33:35 +0000 UTC" firstStartedPulling="2025-11-21 22:33:37.057450702 +0000 UTC m=+8822.243635746" lastFinishedPulling="2025-11-21 22:33:40.96320351 +0000 UTC m=+8826.149388574" observedRunningTime="2025-11-21 22:33:42.187291419 +0000 UTC m=+8827.373476473" watchObservedRunningTime="2025-11-21 22:33:42.193008257 +0000 UTC m=+8827.379193311"
Nov 21 22:33:45 crc kubenswrapper[4727]: I1121 22:33:45.596702 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-552f6"
Nov 21 22:33:45 crc kubenswrapper[4727]: I1121 22:33:45.597223 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-552f6"
Nov 21 22:33:46 crc kubenswrapper[4727]: I1121 22:33:46.691373 4727 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-552f6" podUID="b9a7578b-101d-4890-b08c-99579eca249c" containerName="registry-server" probeResult="failure" output=<
Nov 21 22:33:46 crc kubenswrapper[4727]: timeout: failed to connect service ":50051" within 1s
Nov 21 22:33:46 crc kubenswrapper[4727]: >
Nov 21 22:33:47 crc kubenswrapper[4727]: I1121 22:33:47.402179 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-grs9h"
Nov 21 22:33:47 crc kubenswrapper[4727]: I1121 22:33:47.402227 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-grs9h"
Nov 21 22:33:47 crc kubenswrapper[4727]: I1121 22:33:47.472200 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-grs9h"
Nov 21 22:33:48 crc kubenswrapper[4727]: I1121 22:33:48.320723 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-grs9h"
Nov 21 22:33:48 crc kubenswrapper[4727]: I1121 22:33:48.381579 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-grs9h"]
Nov 21 22:33:50 crc kubenswrapper[4727]: I1121 22:33:50.262705 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-grs9h" podUID="273e7746-129e-4071-9331-dacf3c8f53dd" containerName="registry-server" containerID="cri-o://18159b0c67624190725003292086e489e032f5c0685ae7fd5f94d86520566835" gracePeriod=2
Nov 21 22:33:50 crc kubenswrapper[4727]: I1121 22:33:50.862713 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-grs9h"
Nov 21 22:33:50 crc kubenswrapper[4727]: I1121 22:33:50.926180 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9w5kv\" (UniqueName: \"kubernetes.io/projected/273e7746-129e-4071-9331-dacf3c8f53dd-kube-api-access-9w5kv\") pod \"273e7746-129e-4071-9331-dacf3c8f53dd\" (UID: \"273e7746-129e-4071-9331-dacf3c8f53dd\") "
Nov 21 22:33:50 crc kubenswrapper[4727]: I1121 22:33:50.926274 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/273e7746-129e-4071-9331-dacf3c8f53dd-catalog-content\") pod \"273e7746-129e-4071-9331-dacf3c8f53dd\" (UID: \"273e7746-129e-4071-9331-dacf3c8f53dd\") "
Nov 21 22:33:50 crc kubenswrapper[4727]: I1121 22:33:50.926429 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/273e7746-129e-4071-9331-dacf3c8f53dd-utilities\") pod \"273e7746-129e-4071-9331-dacf3c8f53dd\" (UID: \"273e7746-129e-4071-9331-dacf3c8f53dd\") "
Nov 21 22:33:50 crc kubenswrapper[4727]: I1121 22:33:50.927881 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/273e7746-129e-4071-9331-dacf3c8f53dd-utilities" (OuterVolumeSpecName: "utilities") pod "273e7746-129e-4071-9331-dacf3c8f53dd" (UID: "273e7746-129e-4071-9331-dacf3c8f53dd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 21 22:33:50 crc kubenswrapper[4727]: I1121 22:33:50.935336 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/273e7746-129e-4071-9331-dacf3c8f53dd-kube-api-access-9w5kv" (OuterVolumeSpecName: "kube-api-access-9w5kv") pod "273e7746-129e-4071-9331-dacf3c8f53dd" (UID: "273e7746-129e-4071-9331-dacf3c8f53dd"). InnerVolumeSpecName "kube-api-access-9w5kv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 22:33:50 crc kubenswrapper[4727]: I1121 22:33:50.950673 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/273e7746-129e-4071-9331-dacf3c8f53dd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "273e7746-129e-4071-9331-dacf3c8f53dd" (UID: "273e7746-129e-4071-9331-dacf3c8f53dd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 21 22:33:51 crc kubenswrapper[4727]: I1121 22:33:51.029491 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/273e7746-129e-4071-9331-dacf3c8f53dd-utilities\") on node \"crc\" DevicePath \"\""
Nov 21 22:33:51 crc kubenswrapper[4727]: I1121 22:33:51.029812 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9w5kv\" (UniqueName: \"kubernetes.io/projected/273e7746-129e-4071-9331-dacf3c8f53dd-kube-api-access-9w5kv\") on node \"crc\" DevicePath \"\""
Nov 21 22:33:51 crc kubenswrapper[4727]: I1121 22:33:51.029828 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/273e7746-129e-4071-9331-dacf3c8f53dd-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 21 22:33:51 crc kubenswrapper[4727]: I1121 22:33:51.286462 4727 generic.go:334] "Generic (PLEG): container finished" podID="273e7746-129e-4071-9331-dacf3c8f53dd" containerID="18159b0c67624190725003292086e489e032f5c0685ae7fd5f94d86520566835" exitCode=0
Nov 21 22:33:51 crc kubenswrapper[4727]: I1121 22:33:51.286552 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-grs9h" event={"ID":"273e7746-129e-4071-9331-dacf3c8f53dd","Type":"ContainerDied","Data":"18159b0c67624190725003292086e489e032f5c0685ae7fd5f94d86520566835"}
Nov 21 22:33:51 crc kubenswrapper[4727]: I1121 22:33:51.286605 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-grs9h" event={"ID":"273e7746-129e-4071-9331-dacf3c8f53dd","Type":"ContainerDied","Data":"410f58654c4f8191fc437001c71a51dbd9ab8229f833ce7f9a3818db90035631"}
Nov 21 22:33:51 crc kubenswrapper[4727]: I1121 22:33:51.286649 4727 scope.go:117] "RemoveContainer" containerID="18159b0c67624190725003292086e489e032f5c0685ae7fd5f94d86520566835"
Nov 21 22:33:51 crc kubenswrapper[4727]: I1121 22:33:51.286692 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-grs9h"
Nov 21 22:33:51 crc kubenswrapper[4727]: I1121 22:33:51.339171 4727 scope.go:117] "RemoveContainer" containerID="9c1353853a9ae7d7eda1dadd73f13ce8577dfefe9fc136c076d7d55b2b17ec32"
Nov 21 22:33:51 crc kubenswrapper[4727]: I1121 22:33:51.356868 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-grs9h"]
Nov 21 22:33:51 crc kubenswrapper[4727]: I1121 22:33:51.363861 4727 scope.go:117] "RemoveContainer" containerID="081ef5b1bd3174b4f556596becaea7acd197dedda28fd1d5c388802e236aed91"
Nov 21 22:33:51 crc kubenswrapper[4727]: I1121 22:33:51.370330 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-grs9h"]
Nov 21 22:33:51 crc kubenswrapper[4727]: I1121 22:33:51.447993 4727 scope.go:117] "RemoveContainer" containerID="18159b0c67624190725003292086e489e032f5c0685ae7fd5f94d86520566835"
Nov 21 22:33:51 crc kubenswrapper[4727]: E1121 22:33:51.448609 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18159b0c67624190725003292086e489e032f5c0685ae7fd5f94d86520566835\": container with ID starting with 18159b0c67624190725003292086e489e032f5c0685ae7fd5f94d86520566835 not found: ID does not exist" containerID="18159b0c67624190725003292086e489e032f5c0685ae7fd5f94d86520566835"
Nov 21 22:33:51 crc kubenswrapper[4727]: I1121 22:33:51.448650 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18159b0c67624190725003292086e489e032f5c0685ae7fd5f94d86520566835"} err="failed to get container status \"18159b0c67624190725003292086e489e032f5c0685ae7fd5f94d86520566835\": rpc error: code = NotFound desc = could not find container \"18159b0c67624190725003292086e489e032f5c0685ae7fd5f94d86520566835\": container with ID starting with 18159b0c67624190725003292086e489e032f5c0685ae7fd5f94d86520566835 not found: ID does not exist"
Nov 21 22:33:51 crc kubenswrapper[4727]: I1121 22:33:51.448675 4727 scope.go:117] "RemoveContainer" containerID="9c1353853a9ae7d7eda1dadd73f13ce8577dfefe9fc136c076d7d55b2b17ec32"
Nov 21 22:33:51 crc kubenswrapper[4727]: E1121 22:33:51.449153 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c1353853a9ae7d7eda1dadd73f13ce8577dfefe9fc136c076d7d55b2b17ec32\": container with ID starting with 9c1353853a9ae7d7eda1dadd73f13ce8577dfefe9fc136c076d7d55b2b17ec32 not found: ID does not exist" containerID="9c1353853a9ae7d7eda1dadd73f13ce8577dfefe9fc136c076d7d55b2b17ec32"
Nov 21 22:33:51 crc kubenswrapper[4727]: I1121 22:33:51.449199 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c1353853a9ae7d7eda1dadd73f13ce8577dfefe9fc136c076d7d55b2b17ec32"} err="failed to get container status \"9c1353853a9ae7d7eda1dadd73f13ce8577dfefe9fc136c076d7d55b2b17ec32\": rpc error: code = NotFound desc = could not find container \"9c1353853a9ae7d7eda1dadd73f13ce8577dfefe9fc136c076d7d55b2b17ec32\": container with ID starting with 9c1353853a9ae7d7eda1dadd73f13ce8577dfefe9fc136c076d7d55b2b17ec32 not found: ID does not exist"
Nov 21 22:33:51 crc kubenswrapper[4727]: I1121 22:33:51.449225 4727 scope.go:117] "RemoveContainer" containerID="081ef5b1bd3174b4f556596becaea7acd197dedda28fd1d5c388802e236aed91"
Nov 21 22:33:51 crc kubenswrapper[4727]: E1121 22:33:51.449619 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"081ef5b1bd3174b4f556596becaea7acd197dedda28fd1d5c388802e236aed91\": container with ID starting with 081ef5b1bd3174b4f556596becaea7acd197dedda28fd1d5c388802e236aed91 not found: ID does not exist" containerID="081ef5b1bd3174b4f556596becaea7acd197dedda28fd1d5c388802e236aed91"
Nov 21 22:33:51 crc kubenswrapper[4727]: I1121 22:33:51.449647 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"081ef5b1bd3174b4f556596becaea7acd197dedda28fd1d5c388802e236aed91"} err="failed to get container status \"081ef5b1bd3174b4f556596becaea7acd197dedda28fd1d5c388802e236aed91\": rpc error: code = NotFound desc = could not find container \"081ef5b1bd3174b4f556596becaea7acd197dedda28fd1d5c388802e236aed91\": container with ID starting with 081ef5b1bd3174b4f556596becaea7acd197dedda28fd1d5c388802e236aed91 not found: ID does not exist"
Nov 21 22:33:51 crc kubenswrapper[4727]: I1121 22:33:51.521206 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="273e7746-129e-4071-9331-dacf3c8f53dd" path="/var/lib/kubelet/pods/273e7746-129e-4071-9331-dacf3c8f53dd/volumes"
Nov 21 22:33:55 crc kubenswrapper[4727]: I1121 22:33:55.691460 4727 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-552f6"
Nov 21 22:33:55 crc kubenswrapper[4727]: I1121 22:33:55.786201 4727 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-552f6"
Nov 21 22:33:55 crc kubenswrapper[4727]: I1121 22:33:55.964405 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-552f6"]
Nov 21 22:33:57 crc kubenswrapper[4727]: I1121 22:33:57.377919 4727 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-552f6" podUID="b9a7578b-101d-4890-b08c-99579eca249c" containerName="registry-server" containerID="cri-o://72537c1b8dbfd515215a8d15f0ef038450916e86021097c5fea6fe2f0222757f" gracePeriod=2
Nov 21 22:33:58 crc kubenswrapper[4727]: I1121 22:33:58.005849 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-552f6"
Nov 21 22:33:58 crc kubenswrapper[4727]: I1121 22:33:58.020263 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9a7578b-101d-4890-b08c-99579eca249c-catalog-content\") pod \"b9a7578b-101d-4890-b08c-99579eca249c\" (UID: \"b9a7578b-101d-4890-b08c-99579eca249c\") "
Nov 21 22:33:58 crc kubenswrapper[4727]: I1121 22:33:58.020337 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9a7578b-101d-4890-b08c-99579eca249c-utilities\") pod \"b9a7578b-101d-4890-b08c-99579eca249c\" (UID: \"b9a7578b-101d-4890-b08c-99579eca249c\") "
Nov 21 22:33:58 crc kubenswrapper[4727]: I1121 22:33:58.021119 4727 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvkl6\" (UniqueName: \"kubernetes.io/projected/b9a7578b-101d-4890-b08c-99579eca249c-kube-api-access-pvkl6\") pod \"b9a7578b-101d-4890-b08c-99579eca249c\" (UID: \"b9a7578b-101d-4890-b08c-99579eca249c\") "
Nov 21 22:33:58 crc kubenswrapper[4727]: I1121 22:33:58.021055 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9a7578b-101d-4890-b08c-99579eca249c-utilities" (OuterVolumeSpecName: "utilities") pod "b9a7578b-101d-4890-b08c-99579eca249c" (UID: "b9a7578b-101d-4890-b08c-99579eca249c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 21 22:33:58 crc kubenswrapper[4727]: I1121 22:33:58.022752 4727 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9a7578b-101d-4890-b08c-99579eca249c-utilities\") on node \"crc\" DevicePath \"\""
Nov 21 22:33:58 crc kubenswrapper[4727]: I1121 22:33:58.028493 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9a7578b-101d-4890-b08c-99579eca249c-kube-api-access-pvkl6" (OuterVolumeSpecName: "kube-api-access-pvkl6") pod "b9a7578b-101d-4890-b08c-99579eca249c" (UID: "b9a7578b-101d-4890-b08c-99579eca249c"). InnerVolumeSpecName "kube-api-access-pvkl6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 22:33:58 crc kubenswrapper[4727]: I1121 22:33:58.063783 4727 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9a7578b-101d-4890-b08c-99579eca249c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b9a7578b-101d-4890-b08c-99579eca249c" (UID: "b9a7578b-101d-4890-b08c-99579eca249c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 21 22:33:58 crc kubenswrapper[4727]: I1121 22:33:58.124581 4727 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9a7578b-101d-4890-b08c-99579eca249c-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 21 22:33:58 crc kubenswrapper[4727]: I1121 22:33:58.124614 4727 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvkl6\" (UniqueName: \"kubernetes.io/projected/b9a7578b-101d-4890-b08c-99579eca249c-kube-api-access-pvkl6\") on node \"crc\" DevicePath \"\""
Nov 21 22:33:58 crc kubenswrapper[4727]: I1121 22:33:58.395034 4727 generic.go:334] "Generic (PLEG): container finished" podID="b9a7578b-101d-4890-b08c-99579eca249c" containerID="72537c1b8dbfd515215a8d15f0ef038450916e86021097c5fea6fe2f0222757f" exitCode=0
Nov 21 22:33:58 crc kubenswrapper[4727]: I1121 22:33:58.395110 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-552f6" event={"ID":"b9a7578b-101d-4890-b08c-99579eca249c","Type":"ContainerDied","Data":"72537c1b8dbfd515215a8d15f0ef038450916e86021097c5fea6fe2f0222757f"}
Nov 21 22:33:58 crc kubenswrapper[4727]: I1121 22:33:58.395150 4727 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-552f6" event={"ID":"b9a7578b-101d-4890-b08c-99579eca249c","Type":"ContainerDied","Data":"2f5e26dc81df60dd433e34e9fb9666a4f8e5f2d058a2fd9789bc882729df1708"}
Nov 21 22:33:58 crc kubenswrapper[4727]: I1121 22:33:58.395148 4727 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-552f6"
Nov 21 22:33:58 crc kubenswrapper[4727]: I1121 22:33:58.395177 4727 scope.go:117] "RemoveContainer" containerID="72537c1b8dbfd515215a8d15f0ef038450916e86021097c5fea6fe2f0222757f"
Nov 21 22:33:58 crc kubenswrapper[4727]: I1121 22:33:58.426260 4727 scope.go:117] "RemoveContainer" containerID="a89095d2e175c329cf8bedc95cacd55de3b18ace639e09a0e0f4ab3ca2267749"
Nov 21 22:33:58 crc kubenswrapper[4727]: I1121 22:33:58.459696 4727 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-552f6"]
Nov 21 22:33:58 crc kubenswrapper[4727]: I1121 22:33:58.472079 4727 scope.go:117] "RemoveContainer" containerID="496ef1f019d49db02331d0072d9bdf58b7d0229cdb714af50f3c75792aef32e4"
Nov 21 22:33:58 crc kubenswrapper[4727]: I1121 22:33:58.481859 4727 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-552f6"]
Nov 21 22:33:58 crc kubenswrapper[4727]: I1121 22:33:58.544842 4727 scope.go:117] "RemoveContainer" containerID="72537c1b8dbfd515215a8d15f0ef038450916e86021097c5fea6fe2f0222757f"
Nov 21 22:33:58 crc kubenswrapper[4727]: E1121 22:33:58.545374 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72537c1b8dbfd515215a8d15f0ef038450916e86021097c5fea6fe2f0222757f\": container with ID starting with 72537c1b8dbfd515215a8d15f0ef038450916e86021097c5fea6fe2f0222757f not found: ID does not exist" containerID="72537c1b8dbfd515215a8d15f0ef038450916e86021097c5fea6fe2f0222757f"
Nov 21 22:33:58 crc kubenswrapper[4727]: I1121 22:33:58.545445 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72537c1b8dbfd515215a8d15f0ef038450916e86021097c5fea6fe2f0222757f"} err="failed to get container status \"72537c1b8dbfd515215a8d15f0ef038450916e86021097c5fea6fe2f0222757f\": rpc error: code = NotFound desc = could not find container \"72537c1b8dbfd515215a8d15f0ef038450916e86021097c5fea6fe2f0222757f\": container with ID starting with 72537c1b8dbfd515215a8d15f0ef038450916e86021097c5fea6fe2f0222757f not found: ID does not exist"
Nov 21 22:33:58 crc kubenswrapper[4727]: I1121 22:33:58.545474 4727 scope.go:117] "RemoveContainer" containerID="a89095d2e175c329cf8bedc95cacd55de3b18ace639e09a0e0f4ab3ca2267749"
Nov 21 22:33:58 crc kubenswrapper[4727]: E1121 22:33:58.545914 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a89095d2e175c329cf8bedc95cacd55de3b18ace639e09a0e0f4ab3ca2267749\": container with ID starting with a89095d2e175c329cf8bedc95cacd55de3b18ace639e09a0e0f4ab3ca2267749 not found: ID does not exist" containerID="a89095d2e175c329cf8bedc95cacd55de3b18ace639e09a0e0f4ab3ca2267749"
Nov 21 22:33:58 crc kubenswrapper[4727]: I1121 22:33:58.546099 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a89095d2e175c329cf8bedc95cacd55de3b18ace639e09a0e0f4ab3ca2267749"} err="failed to get container status \"a89095d2e175c329cf8bedc95cacd55de3b18ace639e09a0e0f4ab3ca2267749\": rpc error: code = NotFound desc = could not find container \"a89095d2e175c329cf8bedc95cacd55de3b18ace639e09a0e0f4ab3ca2267749\": container with ID starting with a89095d2e175c329cf8bedc95cacd55de3b18ace639e09a0e0f4ab3ca2267749 not found: ID does not exist"
Nov 21 22:33:58 crc kubenswrapper[4727]: I1121 22:33:58.546225 4727 scope.go:117] "RemoveContainer" containerID="496ef1f019d49db02331d0072d9bdf58b7d0229cdb714af50f3c75792aef32e4"
Nov 21 22:33:58 crc kubenswrapper[4727]: E1121 22:33:58.546718 4727 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"496ef1f019d49db02331d0072d9bdf58b7d0229cdb714af50f3c75792aef32e4\": container with ID starting with 496ef1f019d49db02331d0072d9bdf58b7d0229cdb714af50f3c75792aef32e4 not found: ID does not exist" containerID="496ef1f019d49db02331d0072d9bdf58b7d0229cdb714af50f3c75792aef32e4"
Nov 21 22:33:58 crc kubenswrapper[4727]: I1121 22:33:58.546780 4727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"496ef1f019d49db02331d0072d9bdf58b7d0229cdb714af50f3c75792aef32e4"} err="failed to get container status \"496ef1f019d49db02331d0072d9bdf58b7d0229cdb714af50f3c75792aef32e4\": rpc error: code = NotFound desc = could not find container \"496ef1f019d49db02331d0072d9bdf58b7d0229cdb714af50f3c75792aef32e4\": container with ID starting with 496ef1f019d49db02331d0072d9bdf58b7d0229cdb714af50f3c75792aef32e4 not found: ID does not exist"
Nov 21 22:33:59 crc kubenswrapper[4727]: I1121 22:33:59.515419 4727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9a7578b-101d-4890-b08c-99579eca249c" path="/var/lib/kubelet/pods/b9a7578b-101d-4890-b08c-99579eca249c/volumes"